SERP (Rank Tracking)
Pull a live Google SERP for a keyword — organic listings, featured snippets, knowledge panels, AI overviews, and "people also ask" — scoped to a location, language, and device. The same primitive you use for one-off rank checks and scheduled rank tracking.
The SERP endpoint is Ray9's most-used surface and the underlying primitive for rank tracking. Send a keyword and a location; get back a structured ranking with all the modern SERP features your agent needs to reason about visibility — organic results, featured snippets, AI overviews, knowledge panels, and "people also ask" entries. Call it once for a position check, or call it on a schedule to build a rank-tracking time series (see Track rankings over time).
Quick example
The same call from MCP and REST:
MCP (any client)
Tool: serp.search
Arguments:
keyword: "best crm 2026"
location: "United States"
device: "desktop"
depth: 10

The MCP tool is registered as serp.search. Pass the targetDomain argument to filter results to a domain you care about (e.g. targetDomain: "example.com").
REST
curl https://api.ray9.ai/v1/serp/search \
-H "Authorization: Bearer rk_..." \
-H "Content-Type: application/json" \
-d '{
"keyword": "best crm 2026",
"location": "United States",
"device": "desktop",
"depth": 10
}'

Both calls debit 5 credits (5 mils, $0.005) on success and return the same response shape.
Parameters
| Field | Type | Required | Default | Notes |
|---|---|---|---|---|
| keyword | string | yes | — | The search query, exactly as a user would type it. 1–700 characters. |
| engine | enum | no | "google" | Only "google" is supported in v1. Any other value returns 501 not_implemented. |
| location | string | no | "United States" | Human-readable. Examples: "San Francisco, California, United States", "United Kingdom", "Berlin, Germany". |
| language | string | no | "English" | Human-readable language name. Examples: "English", "Spanish", "German", "Japanese". |
| device | enum | no | "desktop" | "desktop" or "mobile". Mobile SERPs differ in layout, AI-overview frequency, and feature mix. |
| depth | integer | no | 10 | Number of organic results to return. 1–100. Larger values do not cost more, but most analyses only need the top 10–20. |
| targetDomain | string | no | — | Filter results to entries whose domain or url contains this substring. Useful for "is example.com on page one for X?" queries. |
Engines
engine: "google" is the only value that returns results. Any other value returns 501 not_implemented. Treat the engine field as forward-compatible: send it explicitly if you want to be defensive about future defaults, or omit it and let the server fill in "google".
Result types
Each entry in the results array has a type field that tells you what surface it represents on the SERP. Possible values:
| type | What it is |
|---|---|
| organic | A standard organic listing — the bread and butter of the page. |
| featured_snippet | The boxed answer Google sometimes shows above organic results. |
| knowledge_graph | The right-rail entity panel for things, people, places. |
| people_also_ask | The expandable Q&A block — each entry includes question text and a candidate answer. |
| local_pack | The map / 3-pack of local businesses for queries with local intent. |
| ai_overview | Google's AI-generated summary above the fold. |
| answer_box | Direct-answer boxes for unit conversions, definitions, calculator queries, etc. |
rank is 1-based and covers the entire SERP, not just organic — so a featured snippet at rank 1 will push the first organic listing to rank 2.
The type field is intentionally open. New result types may appear in future versions; treat unknown values as forward-compat hints, not parse errors.
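Both of those rules — whole-SERP ranks and open-ended types — can be honoured with a small client-side helper. A minimal sketch in Python (the function name and sample entries are illustrative, not part of the API):

```python
def first_organic_rank(results):
    """Rank of the first organic entry on the whole-SERP rank scale, or None.

    Entries with unrecognised `type` values are simply skipped, which makes
    the helper forward-compatible with result types added in later versions.
    """
    for entry in results:
        if entry.get("type") == "organic":
            return entry.get("rank")
    return None

# A featured snippet holding rank 1 pushes the first organic listing to rank 2.
sample = [
    {"type": "featured_snippet", "rank": 1, "domain": "example.com"},
    {"type": "organic", "rank": 2, "domain": "example.com"},
    {"type": "some_future_type", "rank": 3},
]
print(first_organic_rank(sample))  # 2
```

Because the loop only pattern-matches on the types it cares about, a new `type` value arriving in a future API version degrades gracefully instead of raising.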
Response shape
{
"requestId": "req_4f3a2c1b9e8d",
"query": {
"keyword": "best crm 2026",
"engine": "google",
"location": "United States",
"language": "English",
"device": "desktop",
"depth": 10
},
"creditsCharged": 5,
"creditsRemaining": 995,
"totalResultsCount": 142000000,
"results": [
{
"type": "organic",
"rank": 1,
"title": "Best CRM Software in 2026",
"url": "https://example.com/best-crm",
"domain": "example.com",
"breadcrumb": "example.com › blog › crm",
"description": "Compare the top customer-relationship-management platforms..."
}
]
}

- requestId mirrors the x-request-id response header — log it; we'll need it if you open a ticket.
- query echoes back the resolved query (defaults filled in) so your client can audit what actually ran without re-deriving it from the request body.
- query.targetDomain is omitted from the echo (not present as null) when the request didn't include it — treat the field as optional on the client.
- creditsCharged is always 5 in v1.
- creditsRemaining is your post-debit balance in mils (1/1000 USD). Useful for client-side balance UIs without an extra /v1/usage round-trip. The example above shows a fresh free-tier user (1000 mil grant − 5 mil debit = 995 remaining).
- totalResultsCount is the engine's reported total (often hundreds of millions); results.length is bounded by depth.
Per-item fields beyond the table above (breadcrumb, description, richSnippet, siteLinks, etc.) are populated when the engine returns them and null otherwise — they're additive, so feel free to read them defensively.
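"Read them defensively" in practice means never indexing an optional field directly. A sketch of that pattern in Python (the helper name and the choice of fields to keep are illustrative):

```python
def summarize(entry):
    """Project a SERP entry down to the fields a tracker typically keeps,
    treating optional per-item fields as absent-or-null."""
    return {
        "rank": entry["rank"],                      # always present
        "url": entry.get("url"),                    # may be missing on some types
        "breadcrumb": entry.get("breadcrumb"),      # missing or null when unset
        "siteLinks": entry.get("siteLinks") or [],  # normalize missing/null -> []
    }

row = summarize({"rank": 1, "url": "https://example.com/best-crm"})
print(row["siteLinks"])  # []
```

Using `.get()` plus a normalizing default means the same code handles entries where the engine returned the field, returned it as null, or omitted it entirely.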
Result-set size, not pagination
There's no cursor/page model. Pass a depth of 1–100 to get up to that many organic results in a single call. The cost is the same regardless: 5 credits buys you whatever you ask for, capped at 100. If you genuinely need more than 100 results for a query, that's almost always a sign the query is too broad — narrow it with location, language, or a more specific keyword.
targetDomain filter
When targetDomain is set, results are filtered to those whose domain or url contains the substring. The match is a naive substring match — "example.com" matches both example.com and mail.example.com, and also example.com.fake.com (so prefer specific values). The total credit cost is unchanged by filtering — you're still paying for the full SERP fetch; you're just narrowing what you get back.
If the filter narrows the result set to zero, you get a 200 with results: [] — not a 404 no_results. The 404 path only fires when the engine itself reports zero hits before any client-side filter.
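The substring semantics (and their false-positive edge case) can be reproduced client-side, which is useful for pre-flight sanity checks on the value you're about to send. A sketch, assuming the server applies plain substring containment as described above:

```python
def matches_target(entry, target):
    """Replicates the naive substring match the filter applies server-side."""
    return target in entry.get("domain", "") or target in entry.get("url", "")

entries = [
    {"domain": "example.com", "url": "https://example.com/a"},
    {"domain": "mail.example.com", "url": "https://mail.example.com/b"},
    {"domain": "example.com.fake.com", "url": "https://example.com.fake.com/c"},
    {"domain": "other.org", "url": "https://other.org/d"},
]
hits = [e["domain"] for e in entries if matches_target(e, "example.com")]
print(hits)  # all three example.com variants match; other.org does not
```

Note that the lookalike `example.com.fake.com` passes the filter too — if that matters for your analysis, do a stricter suffix check on `domain` after the results come back.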
Errors
The full error envelope and code reference is in Errors. The codes you'll see specifically from /v1/serp/search:
| HTTP | code | When |
|---|---|---|
| 400 | bad_request | Body fails JSON-schema validation (missing keyword, depth out of range, etc.). |
| 400 | query_rejected | The engine refused the query. |
| 401 | unauthenticated | Bad / missing / expired key. See Authentication. |
| 402 | plan_required | Your org's plan doesn't include SERP. (Both Free and PayG include SERP today, so this is rare on this endpoint.) |
| 402 | out_of_credits | Balance < 5 mils. Top up. |
| 404 | no_results | The query ran but returned zero results. Treat as valid empty, not an error. |
| 429 | rate_limited | Per-org limiter tripped. See Rate limits. |
| 501 | not_implemented | engine set to a value other than "google". |
| 502 | service_unavailable | Service dependency failed after retries. Retry with exponential backoff. |
| 503 | service_busy | Outbound throttle tripped. Honour Retry-After (also in details.retryAfterMs). |
| 504 | service_timeout | The call exceeded our budget (130s end-to-end). Almost always means the query is too heavy — narrow it. |
You don't pay for failed calls — see Errors → Credits and errors.
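The table above implies a simple retry policy: honour Retry-After on 503, back off exponentially on 429/502, and treat everything else as non-retryable (4xx needs a fixed request, 501 a supported engine, 504 a narrower query). A sketch of that decision logic, with illustrative base/cap values:

```python
import random

def retry_delay_ms(status, attempt, retry_after_ms=None,
                   base_ms=500, cap_ms=30_000):
    """Delay before the next attempt in ms, or None for non-retryable statuses.

    - 503: honour the server's Retry-After (also in details.retryAfterMs).
    - 429 / 502 (or 503 without a hint): capped exponential backoff + jitter.
    - Anything else: don't retry.
    """
    if status == 503 and retry_after_ms is not None:
        return retry_after_ms
    if status in (429, 502, 503):
        return min(cap_ms, base_ms * 2 ** attempt) + random.randint(0, 250)
    return None

print(retry_delay_ms(503, 0, retry_after_ms=1200))  # 1200
print(retry_delay_ms(400, 0))                       # None
```

The jitter term keeps a fleet of workers from retrying in lockstep after a shared throttle trips.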
Pricing
5 credits per call (5 mils = $0.005), independent of depth. The free $1 grant on signup buys you 200 SERP calls before you need to top up. See Billing for the full meter table and plan tiers.
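Since the meter is flat, budgeting is integer division. A trivial sketch (helper name is illustrative):

```python
COST_MILS = 5  # 5 credits = 5 mils = $0.005 per successful call

def serp_calls_remaining(balance_mils):
    """How many more SERP calls a balance covers (failed calls are free)."""
    return balance_mils // COST_MILS

print(serp_calls_remaining(1000))  # 200 -- the free $1 signup grant
print(serp_calls_remaining(995))   # 199
```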
Performance
Typical latency is 4–8 seconds for a Google SERP at the default depth of 10, and longer for depth near the 100 ceiling. Calls are synchronous — there's no async / job-queue model in v1; if you need long-running searches, wrap the call in your own background-job runner.
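For batches of keywords, the simplest such wrapper is a thread pool that runs the blocking calls concurrently. A sketch — `fetch_serp` is a hypothetical stand-in for your synchronous HTTP call to /v1/serp/search:

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_serp(keyword):
    # Placeholder for the blocking POST to /v1/serp/search; in real use
    # this is where the 4-8s (up to the 130s budget) is spent.
    return {"keyword": keyword, "results": []}

keywords = ["best crm 2026", "crm pricing", "crm for startups"]
with ThreadPoolExecutor(max_workers=3) as pool:
    serps = list(pool.map(fetch_serp, keywords))
print([s["keyword"] for s in serps])
```

Keep `max_workers` below your org's rate limit; the per-org limiter (429) applies regardless of how many threads you run.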
Rank tracking
Rank tracking is just SERP called on a schedule with the result history stored — pass the same (keyword, location, language, device) tuple on the cadence you want and persist results[*].rank for the entries you care about.
The minimal pattern, callable from any cron / job runner:
# Once per day, per (keyword, target) pair you track.
curl https://api.ray9.ai/v1/serp/search \
-H "Authorization: Bearer rk_…" \
-H "Content-Type: application/json" \
-d '{ "keyword": "best crm 2026", "targetDomain": "example.com", "depth": 100 }'

Use targetDomain on every tracked call so you only store the entries that mention your domain. Then persist (checkedAt, keyword, rank, url) per call, and diff against the previous row to compute movement. See the Track rankings over time guide for a more complete recipe with backoff, dedup, and alerting.
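The diff step can be sketched in a few lines. Assuming rows of (checkedAt, keyword, rank, url) as above, with rank None on days the domain didn't appear:

```python
def rank_movement(history):
    """Turn consecutive daily checks into movement deltas.

    `history` is a list of (checkedAt, keyword, rank, url) rows, oldest
    first. Positive delta = moved up the SERP; None = the domain entered
    or dropped out of the tracked window that day.
    """
    moves = []
    for prev, curr in zip(history, history[1:]):
        prev_rank, curr_rank = prev[2], curr[2]
        if prev_rank is None or curr_rank is None:
            delta = None
        else:
            delta = prev_rank - curr_rank
        moves.append((curr[0], delta))
    return moves

history = [
    ("2026-01-01", "best crm 2026", 8, "https://example.com/best-crm"),
    ("2026-01-02", "best crm 2026", 5, "https://example.com/best-crm"),
    ("2026-01-03", "best crm 2026", None, None),
]
print(rank_movement(history))  # [('2026-01-02', 3), ('2026-01-03', None)]
```

From here, alerting is a filter on the deltas — e.g. notify when `delta` is None or beyond some threshold.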