Inside the Agent Readiness Score: 5 sub-metrics that matter
The Agent Readiness Score is a number from 0 to 100 that summarizes how well a site can be read, verified, and transacted with by an autonomous AI. It’s not arbitrary — it’s the weighted combination of five sub-metrics, each scored independently. This post is the methodology behind the number.
How the score is built
Each of the five sub-metrics scores the site from 0 to 100 along a single axis, and the overall score is their weighted average. Scores fall into four bands:
- Excellent (80–100) — Site is fully prepared. Agents can finish tasks in one round-trip.
- Good (65–79) — Most things work. A few specific gaps to close.
- Average (50–64) — Agents will partially succeed and partially scrape. Conversion losses likely.
- Poor (0–49) — Site is invisible to agents in any meaningful way.
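Mechanically, the roll-up can be sketched as a weighted average plus a band lookup. The sub-metric names and band cut-offs come from this post; the equal weights are an assumption for illustration, since the actual weighting is not published here.

```python
# Sketch of the score roll-up. Sub-metric names and bands follow the post;
# the equal weights are ILLUSTRATIVE -- the real weighting is not published.

WEIGHTS = {
    "identifiability": 0.2,
    "intent_surface": 0.2,
    "adaptive_latency": 0.2,
    "verification": 0.2,
    "action_endpoints": 0.2,
}

# (floor, label) pairs for the bands described above, highest first.
BANDS = [(80, "Excellent"), (65, "Good"), (50, "Average"), (0, "Poor")]

def overall_score(sub_scores: dict[str, float]) -> float:
    """Weighted average of the five 0-100 sub-metric scores."""
    return sum(sub_scores[name] * w for name, w in WEIGHTS.items())

def band(score: float) -> str:
    """Map an overall score onto its band."""
    for floor, label in BANDS:
        if score >= floor:
            return label
    return "Poor"
```

With equal weights, a site scoring 60 on every sub-metric lands squarely in the Average band; lifting one weak sub-metric moves the overall number by a fifth of that gain.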
1. Identifiability
Does the site declare itself in a way an agent can find and trust? We check for:
- A published /.well-known/agent.json describing capabilities, supported actions, and contact endpoints.
- Schema.org JSON-LD on product, organization, and content pages.
- OpenGraph and Twitter card metadata for social-grade summaries.
- A clean robots.txt and sitemap.xml with explicit AI-crawler directives.
Why it matters: Without identifiability, agents fall back to scraping rendered HTML — slow, brittle, and easily wrong. With it, you appear in agent search results as a first-class peer.
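As a purely hypothetical illustration, a /.well-known/agent.json covering the capabilities, actions, and contact endpoints mentioned above might look like this; the field names are a sketch, not a finalized standard:

```json
{
  "name": "Example Store",
  "capabilities": ["search", "browse", "checkout"],
  "actions": [
    {"name": "search", "method": "GET", "endpoint": "/api/search"},
    {"name": "checkout", "method": "POST", "endpoint": "/api/checkout"}
  ],
  "contact": "mailto:agents@example.com"
}
```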
2. Intent Surface
Can the agent see what tasks the site supports — and call them directly? We check whether the obvious user actions (search, browse, buy, sign up, request demo) are exposed as machine-callable surfaces, not just hidden inside JS-only forms.
Why it matters: Tasks become contracts. An agent that can call POST /api/checkout with a structured body finishes the task. An agent that has to fill in a form, wait for a JS validator, and click a button often doesn’t.
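The "task as contract" idea can be made concrete with a minimal sketch: the agent assembles a structured checkout request rather than driving a form. The /api/checkout path echoes the example above; the body fields are illustrative assumptions, not a published schema.

```python
# Hypothetical sketch of a machine-callable checkout action. The path comes
# from the example in the post; field names (sku, quantity, payment_token)
# are ASSUMPTIONS for illustration.
import json

def build_checkout_request(sku: str, qty: int, payment_token: str) -> dict:
    """Assemble the structured request an agent would POST to /api/checkout."""
    return {
        "method": "POST",
        "path": "/api/checkout",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(
            {"sku": sku, "quantity": qty, "payment_token": payment_token}
        ),
    }
```

One round-trip, no JS validator, no click simulation: the request either succeeds or returns a structured error the agent can act on.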
3. Adaptive Latency
How quickly does the site respond when an agent asks for structured data instead of HTML? We send a request with Accept: application/agent+json (and a few common variants) and measure the time-to-first-byte for the structured response.
Why it matters: Agents are budget-aware. A site that responds to a structured request in 200ms wins recommendations against a site that takes 1.5s to render full HTML. Speed isn’t a vanity metric — it’s a ranking factor in the agentic web.
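The latency probe described above can be sketched in a few lines: send a request with the structured Accept header and time the first byte. The Accept value and the 200ms threshold come from this post; the host, path, and the simple fast/slow banding are illustrative.

```python
# Sketch of the structured-latency probe. Accept value and 200ms threshold
# follow the post; the fast/slow banding is an ILLUSTRATIVE simplification.
import http.client
import time

def structured_ttfb_ms(host: str, path: str = "/") -> float:
    """Time-to-first-byte, in ms, for a structured-content request."""
    conn = http.client.HTTPSConnection(host, timeout=10)
    start = time.monotonic()
    conn.request("GET", path, headers={"Accept": "application/agent+json"})
    resp = conn.getresponse()  # blocks until status line and headers arrive
    resp.read(1)               # pull the first body byte
    conn.close()
    return (time.monotonic() - start) * 1000.0

def latency_grade(ttfb_ms: float) -> str:
    """Crude banding: fast structured responses win agent recommendations."""
    return "fast" if ttfb_ms <= 200 else "slow"
```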
4. Verification
Does the site accept signed agent credentials and on-behalf-of headers? We probe for support of the emerging standards — signed JWT-style intent tokens, X-On-Behalf-Of headers, and similar — that let an agent prove it’s acting for a verified human.
Why it matters: Verifiable identity unlocks transactions. Without it, your fraud team blocks every agent — including the legitimate ones acting on behalf of real shoppers. With it, you let qualified agent traffic through to checkout.
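A hedged sketch of what those credentials might look like on the wire: a signed intent token plus the on-behalf-of header. The X-On-Behalf-Of name follows the post; the HMAC "token" below is a stand-in for a real JWT-style signature, and the claim fields and secret are illustrative.

```python
# Sketch of agent credential headers. X-On-Behalf-Of follows the post; the
# HMAC token is a STAND-IN for a real signed JWT-style intent token.
import base64
import hashlib
import hmac
import json

def agent_headers(user_id: str, intent: str, secret: bytes) -> dict:
    """Build headers letting an agent assert it acts for a verified human."""
    claims = json.dumps({"sub": user_id, "intent": intent}).encode()
    sig = hmac.new(secret, claims, hashlib.sha256).digest()
    token = base64.urlsafe_b64encode(claims + b"." + sig).decode()
    return {
        "Authorization": f"Bearer {token}",
        "X-On-Behalf-Of": user_id,
    }
```

The point is not this particular scheme but the handshake it enables: the site can verify the signature, attribute the request to a real customer, and wave the agent through instead of blocking it.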
5. Action Endpoints
Are critical user actions — checkout, login, form submit, search — addressable as URLs or APIs that an agent can call directly?
Why it matters: A page that cannot be transacted with via URL is invisible to the agentic web. Pages that can be hit, parsed, and called become the new entry points to your business.
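The check itself reduces to an inventory: for each critical action, is there a directly addressable URL or API? The action list follows this section; the example paths and the percentage roll-up are illustrative assumptions.

```python
# Sketch of the action-endpoint inventory. Action names follow the post;
# the paths and percentage scoring are ILLUSTRATIVE, not a required layout.
CRITICAL_ACTIONS = {
    "search": "GET /api/search",
    "login": "POST /api/login",
    "checkout": "POST /api/checkout",
    "form_submit": "POST /api/forms/contact",
}

def action_coverage(exposed: set[str]) -> float:
    """0-100 sub-score: share of critical actions that are addressable."""
    hit = sum(1 for action in CRITICAL_ACTIONS if action in exposed)
    return 100.0 * hit / len(CRITICAL_ACTIONS)
```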
How to use the score
The number itself is less interesting than the breakdown. Most sites we score land in the 50–70 range with one or two weak sub-metrics anchoring the result. Closing those specific gaps usually moves the overall score 15–25 points and produces visible changes in agent task completion within weeks.
Try the score on your site. It takes thirty seconds and the breakdown will tell you exactly where to start.