The ArcSurf Score.
A transparent, reproducible 0–100 visibility index for AI search. Published formula. No hidden weights.
ArcSurf Score = (0.40 × CHR) + (0.30 × TSR) + (0.20 × PC) + (0.10 × FS)
Each component is normalized to a 0–100 scale.
The four components.
Each measures a different dimension of AI citation performance.
Citation Hit Rate (CHR)
How often you’re cited at all. % of matrix queries where your domain appears in the citations.
Top Source Rate (TSR)
How often you’re the primary source. % of queries where you’re the #1 citation.
Platform Coverage (PC)
How many AI engines cite you. % of tested platforms where you appear on ≥1 query.
Freshness Score (FS)
How recent your cited content is. Weighted by modification date of cited pages.
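The published formula above can be sketched directly in code. This is an illustrative implementation, not ArcSurf's own tooling: the function name and the range check are assumptions; the weights and the 0–100 normalization are taken from the formula as published.

```python
def arcsurf_score(chr_: float, tsr: float, pc: float, fs: float) -> float:
    """Combine the four components, each already normalized to 0-100,
    using the published weights: 0.40 CHR, 0.30 TSR, 0.20 PC, 0.10 FS."""
    # Guard: every component must already be on the 0-100 scale.
    for name, value in (("CHR", chr_), ("TSR", tsr), ("PC", pc), ("FS", fs)):
        if not 0.0 <= value <= 100.0:
            raise ValueError(f"{name} must be on a 0-100 scale, got {value}")
    return 0.40 * chr_ + 0.30 * tsr + 0.20 * pc + 0.10 * fs
```

Because the weights sum to 1.0, a domain scoring 100 on every component scores exactly 100 overall, and the result always stays inside the 0–100 band.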
Score interpretation.
Five descriptive bands from 0 to 100.
Rarely or never cited. Structural content and authority gaps.
Occasional citations. Content exists but isn’t yet competitive.
Consistent presence across most queries. Typical post-Sprint outcome.
Cited on a majority of queries, often as the #1 citation, across 2–3 platforms.
Near-universal citation. Treated as authoritative in the category.
Worked example.
An established B2B SaaS client, tested at Tier 1 across 25 queries on 3 platforms.
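The client's raw figures aren't reproduced here, so the sketch below uses hypothetical component values for a 25-query, 3-platform test to show how the composite would be derived. All numbers are invented for illustration.

```python
# Hypothetical inputs - not the client's real data.
# CHR: cited on 16 of 25 queries
# TSR: #1 citation on 7 of 25 queries
# PC:  appears on 3 of 3 tested platforms
# FS:  assumed freshness weighting of cited pages
weights = {"CHR": 0.40, "TSR": 0.30, "PC": 0.20, "FS": 0.10}
components = {
    "CHR": 16 / 25 * 100,  # 64.0
    "TSR": 7 / 25 * 100,   # 28.0
    "PC": 3 / 3 * 100,     # 100.0
    "FS": 70.0,
}

score = sum(weights[k] * components[k] for k in weights)
print(round(score, 1))  # 0.4*64 + 0.3*28 + 0.2*100 + 0.1*70 = 61.0
```

With these inputs the domain would land in the middle band: consistently cited, sometimes first, with full platform coverage lifting an otherwise modest hit rate.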
Why we publish the formula.
Most GEO agencies use proprietary scores with hidden weights. That’s marketing, not measurement.
The ArcSurf Score is reproducible — anyone with the same Tier 1 API test data will compute the same number we do. The formula is published. The weights are published. The inputs are auditable.
This is consistent with how we work: the methodology is open, the measurement is falsifiable, and clients get the data whether or not they hire us.
15 queries · 3 platforms · your real citation data · no commitment