Track rankings over time

Build a rank-tracking time series by calling SERP on a schedule. The minimal recipe: storage, cadence, backoff, and movement detection.

Rank tracking is just SERP called on a schedule with results stored. This guide is the minimal end-to-end pattern: which calls to make, what to store, how to compute movement, and which failures to handle.

When to use this

  • You want a daily / weekly position digest for a portfolio of (keyword, country, device) tuples.
  • You want alerts when a tracked keyword crosses a threshold (top-10 → page-2, etc.).
  • You want history depth so you can answer "what changed this week?" without re-querying.

If you're checking rankings ad-hoc — a one-off "where am I for X?" — just call SERP directly and skip everything below.

The minimal loop

For each tracked tuple (keyword, country, language, device, targetDomain), call SERP on the cadence you want and persist the result.

curl https://api.ray9.ai/v1/serp/search \
  -H "Authorization: Bearer rk_…" \
  -H "Content-Type: application/json" \
  -d '{
    "keyword": "best crm 2026",
    "location": "United States",
    "language": "English",
    "device": "desktop",
    "targetDomain": "example.com",
    "depth": 100
  }'

Always pass targetDomain if you're tracking a single domain — Ray9 still performs (and bills) the full SERP fetch either way, but the response only includes the entries that match your domain. Less storage, less post-processing.
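The same call from a scheduler, sketched in Python with only the standard library. The endpoint and body fields come from the curl example above; build_payload and check_rank are illustrative names, not part of the API:

```python
import json
import urllib.request

API_URL = "https://api.ray9.ai/v1/serp/search"

def build_payload(keyword, country, language, device, target_domain, depth=100):
    """Assemble the request body for one tracked tuple (fields from the curl example)."""
    return {
        "keyword": keyword,
        "location": country,
        "language": language,
        "device": device,
        "targetDomain": target_domain,
        "depth": depth,
    }

def check_rank(api_key, tracked_tuple):
    """One check: POST the tuple to SERP and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(*tracked_tuple)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Loop check_rank over your tracked tuples on whatever cadence you pick below, and persist each response as one row.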

What to store per call

The minimum useful row, derived from the response:

| Column | Source | Notes |
| --- | --- | --- |
| checked_at | now() at call time | UTC, second precision is fine. |
| keyword | echo from request | Exact string you sent. |
| country / language / device | echo from request | So you can group time series by locale. |
| target_domain | echo from request | The domain you're tracking. |
| rank | results[].rank for the matching entry | null if domain doesn't appear. |
| url | results[].url for the matching entry | The exact URL ranking. |
| total_results | totalResultsCount | Engine's reported total — useful for sanity checks. |
| request_id | requestId | Carry it for support / audit. |

A simple primary key: (target_domain, keyword, country, language, device, checked_at). That lets you query history with one row per check and compute deltas with LAG(rank) OVER (PARTITION BY ...).
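A sketch of that schema and delta query in SQLite (window functions need SQLite 3.25+). Table and column names follow the storage table above; everything else is an assumption:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # swap for a real file or warehouse in production
conn.execute("""
    CREATE TABLE rank_checks (
        checked_at     TEXT NOT NULL,   -- UTC, second precision
        keyword        TEXT NOT NULL,
        country        TEXT NOT NULL,
        language       TEXT NOT NULL,
        device         TEXT NOT NULL,
        target_domain  TEXT NOT NULL,
        rank           INTEGER,         -- NULL = domain doesn't appear
        url            TEXT,
        total_results  INTEGER,
        request_id     TEXT,
        PRIMARY KEY (target_domain, keyword, country, language, device, checked_at)
    )
""")

# Delta vs the previous check for each tracked tuple.
DELTA_SQL = """
    SELECT checked_at, keyword, rank,
           rank - LAG(rank) OVER (
               PARTITION BY target_domain, keyword, country, language, device
               ORDER BY checked_at
           ) AS delta
    FROM rank_checks
"""
```

The composite primary key doubles as dedup: re-running a check for the same tuple and timestamp can't create a second row.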

Cadence

Sane defaults:

  • Daily — most rank tracking is daily. SERP results don't move minute-to-minute, and daily granularity catches anything you'd care about.
  • Hourly — only if you're watching the immediate aftermath of a launch or a Google update. Burns credits fast.
  • Weekly — fine for low-priority keywords or stable rankings. Run all your trackers in one batch on the same day.

Cost math

Each tracked tuple costs 5 mils ($0.005) per check. For a 1000-keyword portfolio checked daily:

  • 1000 keywords × 30 days = 30,000 calls / month
  • 30,000 × 5 mils = 150,000 mils = $150 / month

If you're tracking the same keyword across multiple (country, device) combos, multiply accordingly. Drop daily → weekly for the long tail to cut spend without losing the signal where it matters.
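The same arithmetic as a helper for budget checks; monthly_cost_usd is a hypothetical name, and the 5-mil price comes from the pricing above:

```python
MILS_PER_CHECK = 5  # $0.005 per tracked-tuple check; 1000 mils = $1

def monthly_cost_usd(n_tuples, checks_per_month):
    """Portfolio cost in dollars: tuples x checks x 5 mils."""
    return n_tuples * checks_per_month * MILS_PER_CHECK / 1000

# 1000 keywords daily:  monthly_cost_usd(1000, 30) -> $150
# Same tail weekly:     monthly_cost_usd(1000, 4)  -> $20
```

Count each (keyword, country, device) combination as its own tuple when you size n_tuples.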

Failure handling

The two failures you actually need to handle in a tracking loop:

  • 429 rate_limited — another caller is sharing your per-org bucket, or your tracker alone is denser than 60 calls/minute. Honour Retry-After (seconds) or details.retryAfterMs (ms, takes precedence). For a tracker, the right pattern is a client-side token bucket, not retry-on-error.
  • 5xx — exponential backoff with jitter. Keep retries bounded (3 attempts is usually right for a tracking loop — better to skip a check and retry next cycle than to pile up retry traffic).

Don't retry 4xx other than 429 — fix the request first. 404 no_results means the engine returned zero entries (treat as "not ranking", not as failure).
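Those rules condensed into a bounded retry wrapper, as a Python sketch. The status codes and details.retryAfterMs field come from this doc; the wrapper shape (do_check returning a status code and parsed body) is an assumption:

```python
import random
import time

MAX_ATTEMPTS = 3  # skip the check and catch up next cycle rather than pile up retries

def check_with_retry(do_check, sleep=time.sleep):
    """Run one tracking check with bounded retries.

    do_check() returns (status_code, parsed_body). 429 honours the
    server-supplied delay, 5xx gets exponential backoff with jitter,
    and any other 4xx raises so the request gets fixed first.
    """
    for attempt in range(MAX_ATTEMPTS):
        status, body = do_check()
        if status in (200, 404):
            return body  # 404 no_results: caller records rank = NULL, not a failure
        if status == 429:
            # details.retryAfterMs (ms) takes precedence over Retry-After (s)
            delay = body.get("details", {}).get("retryAfterMs", 1000) / 1000
        elif status >= 500:
            delay = 2 ** attempt + random.random()  # ~1-2s, 2-3s, 4-5s
        else:
            raise RuntimeError(f"non-retryable status {status}")
        sleep(delay)
    return None  # give up; the next scheduled cycle covers the gap
```

Injecting sleep makes the wrapper testable without real waits; in production the default time.sleep applies.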

Movement detection

Once you have history, the queries that actually drive a digest:

  • Position change vs yesterday: rank - LAG(rank, 1) per (keyword, target_domain).
  • Top mover (positive): order by prev_rank - current_rank DESC over the day's checks.
  • Newly ranking: rank IS NOT NULL AND LAG(rank) IS NULL — domain showed up where it wasn't yesterday.
  • Newly missing: rank IS NULL AND LAG(rank) IS NOT NULL — domain dropped out.
  • Threshold cross: (prev_rank > 10 AND rank <= 10) — moved to page 1, etc.

Wire those queries into Slack / email and you have a working daily digest.
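The bullet rules above fold into one window-function query. A SQLite sketch over a reduced rank_checks table (column names assumed from the storage section; note that a tuple's first-ever check also shows as "newly ranking", since LAG has nothing to look back at):

```python
import sqlite3

DIGEST_SQL = """
WITH seq AS (
    SELECT checked_at, keyword, target_domain, rank,
           LAG(rank) OVER (
               PARTITION BY target_domain, keyword ORDER BY checked_at
           ) AS prev_rank
    FROM rank_checks
)
SELECT keyword, checked_at, prev_rank, rank,
       CASE
           WHEN rank IS NOT NULL AND prev_rank IS NULL THEN 'newly ranking'
           WHEN rank IS NULL AND prev_rank IS NOT NULL THEN 'newly missing'
           WHEN prev_rank > 10 AND rank <= 10 THEN 'entered top 10'
           WHEN prev_rank > rank THEN 'up'
           WHEN prev_rank < rank THEN 'down'
           ELSE 'flat'
       END AS movement
FROM seq
WHERE rank IS NOT NULL OR prev_rank IS NOT NULL
"""

def daily_digest(conn: sqlite3.Connection):
    """Classify each check against the previous one; feed the rows to Slack / email."""
    return conn.execute(DIGEST_SQL).fetchall()
```

Order the 'up' rows by prev_rank - rank descending to get the top-movers list.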
