We test the four AI engines clients actually use against the prompts they actually type. Scores run from 0 to 16. The methodology, the engines, the prompts, and the limits are all documented below.
Four engines, in priority order: ChatGPT (gpt-4o), Perplexity (sonar-large), Gemini (2.0-pro), and Google AI Overviews. Together these four cover roughly 99% of "find me a lawyer" AI queries in the United States. We monitor other models (Claude, Grok, You.com) internally but do not yet include them in the diagnostic.
The diagnostic queries each engine with a curated set of consumer-facing prompts, parameterized by the firm name and metro you provide. Each engine receives between three and five prompt variations representative of how a real injury victim would phrase the question. Sample prompt patterns:
The full audit (run by LawShift directly) expands this to 40+ prompts across practice areas and demographic segments. The free diagnostic uses the abbreviated cross-engine set.
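The parameterization described above can be sketched as template filling. The templates below are hypothetical illustrations of the pattern, not LawShift's actual prompt set, and `build_prompts` is an assumed helper name:

```python
# Hypothetical prompt templates, parameterized by firm name and metro.
# Illustrative only -- NOT the actual LawShift prompt set.
TEMPLATES = [
    "Who is the best {practice_area} lawyer in {metro}?",
    "I was hurt in a car accident in {metro}. Which law firm should I call?",
    "Is {firm} a good {practice_area} firm?",
]

def build_prompts(firm: str, metro: str,
                  practice_area: str = "personal injury") -> list[str]:
    """Fill each template with the firm and metro under test."""
    return [t.format(firm=firm, metro=metro, practice_area=practice_area)
            for t in TEMPLATES]

prompts = build_prompts("Smith & Jones LLP", "Phoenix")
```

Each engine would then be queried with its own subset of three to five such variations.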
Each engine returns a result classified as one of three states:
The maximum score across the four engines is 16 (up to 4 points per engine). Scores below 4 are classified as "AI Invisible." Scores 4-8 are "AI Thin." Scores 9-12 are "AI Present." Scores 13+ are "AI Visible."
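The band thresholds can be expressed as a small function. The thresholds come directly from the text; how per-engine points are assigned from the three result states is not shown here, so this is only a sketch of the final classification step:

```python
def classify(total_score: int) -> str:
    """Map a 0-16 cross-engine total to its visibility band."""
    if not 0 <= total_score <= 16:
        raise ValueError("score must be between 0 and 16")
    if total_score < 4:
        return "AI Invisible"
    if total_score <= 8:
        return "AI Thin"
    if total_score <= 12:
        return "AI Present"
    return "AI Visible"
```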
The diagnostic gives you a binary read: is the firm appearing or not? It does not, in its free form, tell you why. The "why" (entity ambiguity, missing schema, citation thinness, generic content, weak reputation surround) is what the full audit reveals. The diagnostic tells you the patient is sick. The audit identifies which organs are failing.
AI engines update their retrieval indexes on different cycles. Perplexity is essentially live. Gemini and Google AIO refresh continuously but weight content age. ChatGPT's retrieval window varies. We re-test our internal benchmark firms monthly and update the diagnostic prompt set quarterly to track shifts in consumer query patterns.
Most "AI visibility" services require an email and a sales call before showing you anything. We don't. The diagnostic returns a verdict on the spot because the verdict is the point, and because gated diagnostics produce biased data: you only test the firms motivated enough to hand over an email. Open diagnostic = better dataset = better insight = better service downstream.
If the diagnostic shows your firm is visible, the page tells you that. We do not invert results to fabricate panic. The product LawShift sells is the fix; we have no reason to fake the diagnosis.
The statistics cited across this site come from three sources, in this order of weight:
Visibility figures evolve as the underlying engines change retrieval strategies. LawShift re-tests benchmark firms monthly and updates these figures quarterly. For raw data on any specific claim, contact press@lawshift.ai.