// HOW IT WORKS

The methodology
behind the diagnostic.

We test the four AI engines clients actually use against the prompts they actually type. Score from 0 to 16. The methodology, the engines, the prompts, and the limits — fully documented.

Engines tested.

Four engines, in priority order: ChatGPT (gpt-4o), Perplexity (sonar-large), Gemini (2.0-pro), and Google AI Overviews. Together these four cover roughly 99% of "find me a lawyer" AI queries in the United States. Other models — Claude, Grok, You.com — we monitor internally but do not yet include in the diagnostic.

Prompts.

The diagnostic queries each engine with a curated set of consumer-facing prompts, parameterized by the firm name and metro you provide. Each engine receives between three and five prompt variations representative of how a real injury victim would phrase the question.

The full audit (run by LawShift directly) expands this to 40+ prompts across practice areas and demographic segments. The free diagnostic uses the abbreviated cross-engine set.
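As a rough illustration of how the parameterization works, here is a minimal sketch. The template wordings below are hypothetical; the actual prompt set is LawShift-internal.

```python
# Hypothetical prompt templates -- illustrative only.
# The diagnostic's real prompt set is not published.
PROMPT_TEMPLATES = [
    "best personal injury lawyer in {metro}",
    "who should I call after a car accident in {metro}",
    "is {firm} a good injury law firm",
]

def build_prompts(firm: str, metro: str) -> list[str]:
    """Fill each template with the firm name and metro the user provided.

    Unused placeholders are fine: str.format ignores extra keyword args,
    so a template may use {firm}, {metro}, or both.
    """
    return [t.format(firm=firm, metro=metro) for t in PROMPT_TEMPLATES]
```

Each engine would then be queried with every string the builder returns, giving the three-to-five variations described above.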

Scoring scale.

Each engine returns a result classified as one of three states.

The maximum score across the four engines is 16. Scores below 4 are classified as "AI Invisible," scores 4-8 as "AI Thin," scores 9-12 as "AI Present," and scores 13-16 as "AI Visible."
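The banding can be sketched in a few lines. This is a minimal illustration using the thresholds stated above; the function name is ours, not LawShift's.

```python
def classify(score: int) -> str:
    """Map a 0-16 diagnostic score to its visibility tier."""
    if not 0 <= score <= 16:
        raise ValueError("score must be between 0 and 16")
    if score < 4:
        return "AI Invisible"
    if score <= 8:
        return "AI Thin"
    if score <= 12:
        return "AI Present"
    return "AI Visible"
```

For example, a firm invisible on three engines but strongly present on one could still land in "AI Thin," which is why the tier, not any single engine's result, is the headline verdict.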

What the score cannot tell you.

The diagnostic gives you a binary read: is the firm appearing or not? It does not, in its free form, tell you why. The "why" — entity ambiguity, missing schema, citation thinness, generic content, weak reputation surround — is what the full audit reveals. The diagnostic tells you the patient is sick. The audit identifies which organs are failing.

Refresh cadence.

AI engines update their retrieval indexes on different cycles. Perplexity is essentially live. Gemini and Google AIO refresh continuously but weight content age. ChatGPT's retrieval window varies. We re-test our internal benchmark firms monthly and update the diagnostic prompt set quarterly to track shifts in consumer query patterns.

Why we don't gate it.

Most "AI visibility" services require an email and a sales call before showing you anything. We don't. The diagnostic returns a verdict on the spot because the verdict is the point — and because gated diagnostics have terrible accuracy data (you only test the firms motivated enough to give you an email). Open diagnostic = better dataset = better insight = better service downstream.

// ON HONESTY

If the diagnostic shows your firm is visible, the page tells you that. We do not invert results to fabricate panic. The product LawShift sells is the fix; we have no reason to fake the diagnosis.

Sources & data provenance.

The statistics cited across this site come from three sources, in this order of weight:

  1. LawShift internal research (2025–2026). Visibility scoring data across roughly 200 personal injury firm diagnostics LawShift has run during 2025 and the first half of 2026. The 0.076% correlation between Google rank and AI visibility, and the <5% AI-visibility rate across personal injury firms, are derived from this dataset.
  2. Published industry research. The 61% organic CTR drop when AI Overviews appear is consistent with published research from Search Engine Land, Authoritas, Ahrefs, and Sistrix during 2024–2025. Internal client data reflects the same order of magnitude.
  3. Engine-vendor disclosures. User-volume statistics (ChatGPT ~400M weekly active users, Perplexity ~30M monthly, etc.) are drawn from OpenAI, Perplexity AI, and Google product disclosures as of early 2026.

Visibility figures evolve as the underlying engines change retrieval strategies. LawShift re-tests benchmark firms monthly and updates these figures quarterly. For raw data on any specific claim, contact press@lawshift.ai.

// READY

Run the diagnostic.

60 seconds. Four engines. No email.

Start the Test →