// CASE STUDY · 2025 ENGAGEMENT

Zero to 43% AI visibility in six weeks.

A mid-sized regional personal injury firm — anonymized — went from being named in zero AI engine answers to appearing in 43% of tested prompts across ChatGPT, Perplexity, Gemini, and Google AI Overviews. Six-week signal stack rebuild. Here is what the work actually looked like.

// STARTING POSITION
0/16

AI visibility score across all four engines before the engagement. Not cited anywhere.

// SIX WEEKS LATER
7/16

Visibility score after the signal stack rebuild. 43% prompt coverage. Cited by name in ChatGPT and Perplexity.

// WORK
42d

Days of focused work. Five workstreams running in parallel. Two specialists embedded.

The starting position.

The firm was a 12-attorney plaintiff shop based in a top-25 US metro. Strong brand locally. Strong reputation in the legal community. Consistent six-figure monthly ad spend on Google and Facebook. Decent SEO — top-three rankings for two of their three priority keywords. By every traditional measure, they were doing well.

Then they ran our diagnostic. Score: 0/16. Not mentioned in ChatGPT. Not mentioned in Perplexity. Not in Gemini. Not in Google AI Overviews. For "best [practice area] lawyer in [their metro]," the AI engines returned three firms — none of which were them.
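
For context on the scale: the score is a per-prompt tally, one point for each tested prompt whose answer names the firm. A minimal TypeScript sketch, assuming four engines with four tracked prompts each; the real prompt set and rubric belong to the diagnostic and are not published here.

  // Assumed shape of the 16-point rubric: four engines, four tracked prompts
  // per engine, one point per answer that names the firm.
  type Engine = "ChatGPT" | "Perplexity" | "Gemini" | "Google AI Overviews";

  interface PromptResult {
    engine: Engine;
    prompt: string;     // e.g. "best [practice area] lawyer in [their metro]"
    firmNamed: boolean; // did the engine's answer cite the firm by name?
  }

  // Returns "0/16" for the starting position, "7/16" after the rebuild.
  function visibilityScore(results: PromptResult[]): string {
    const points = results.filter((r) => r.firmNamed).length;
    return `${points}/${results.length}`;
  }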

What was actually wrong.

The full audit identified five gaps. Most were two-week fixes that had sat unaddressed for years.

  1. Entity ambiguity. The firm's legal name appeared in four different forms across the web — full name with "P.A.," short name without, an old DBA, an individual attorney's name. The models couldn't tell which one was canonical.
  2. No schema markup. The website was a beautifully designed flat HTML site with zero structured data. No Attorney schema, no LegalService, no Review, no FAQPage. To AI engines, it read as undifferentiated marketing text.
  3. Generic practice pages. Twelve practice-area pages, every one of them a 1,500-word "we handle car accidents, truck accidents, motorcycle accidents..." block. None of them answered narrow consumer questions.
  4. Thin citation surface. Listed on three legal directories. Cited zero times in legal publications. Two press mentions, both from 2019. No expert-source placements anywhere.
  5. Reviews without substance. 340 five-star Google reviews. Almost all of them said "great firm, would recommend." Nothing AI could pull a quote from.

The six-week plan.

Week 1 — Foundation.

Picked a canonical firm name. Audited and reconciled every existing citation. Updated the Google Business Profile, the state bar profile, every legal directory, every social bio. Same name, same address, same phone, same description. The boring work that turns out to matter the most.
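
What reconciling looked like mechanically: normalize every listing, then flag anything that drifts from the canonical record. A hypothetical TypeScript sketch; the sources and field names are illustrative, not the tooling used in the engagement.

  // Hypothetical name/address/phone (NAP) consistency check: flag every
  // listing that drifts from the canonical record after light normalization.
  interface Listing {
    source: string; // e.g. "Google Business Profile", "state bar", a directory
    name: string;
    address: string;
    phone: string;
  }

  const normalize = (s: string) =>
    s.toLowerCase().replace(/[.,]/g, "").replace(/\s+/g, " ").trim();

  const digits = (s: string) => s.replace(/\D/g, "");

  function findDrift(canonical: Listing, listings: Listing[]): Listing[] {
    return listings.filter(
      (l) =>
        normalize(l.name) !== normalize(canonical.name) ||
        normalize(l.address) !== normalize(canonical.address) ||
        digits(l.phone) !== digits(canonical.phone)
    );
  }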

Week 2 — Schema.

Implemented Attorney, LegalService, Organization, Review, and FAQPage schema across every page of the site. Validated with Google's Rich Results Test. Resubmitted the sitemap. By the end of the week, AI crawlers parsing the site had a clear entity to work with.
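
For concreteness, here is roughly the shape of the core entity markup, written as a TypeScript object and serialized into a JSON-LD script tag at build time. Every firm detail below is a placeholder, not the client's data.

  // Sketch of the canonical entity markup (schema.org Attorney / LegalService).
  const firmEntity = {
    "@context": "https://schema.org",
    "@type": ["Attorney", "LegalService"],
    name: "Example Firm, P.A.", // the one canonical name, everywhere
    url: "https://www.example-firm.com/",
    telephone: "+1-555-555-0100",
    address: {
      "@type": "PostalAddress",
      streetAddress: "100 Example Ave, Suite 400",
      addressLocality: "Metro City",
      addressRegion: "ST",
      postalCode: "00000",
    },
    areaServed: "Metro City, ST",
    sameAs: [
      // state bar profile, legal directories, social profiles
    ],
  };

  // Emitted into the <head> of every page.
  const tag = `<script type="application/ld+json">${JSON.stringify(firmEntity)}</script>`;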

Week 3 — Practice-area depth.

Rewrote the four highest-volume practice-area pages into narrow, question-answering content. Each page covered a specific scenario in the firm's metro: "uninsured motorist claim in [state]," "rideshare accident liability," "delayed-onset injury statute of limitations." Each page got its own FAQPage schema with the questions consumers actually search.
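
Each page's visible Q&A block was mirrored in FAQPage markup. A minimal sketch in the same style; the question text is illustrative, and the bracketed placeholder stays a placeholder.

  // Sketch of per-page FAQPage markup: every visible Q&A pair on the page
  // is mirrored as a Question/Answer node.
  const faqPage = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: [
      {
        "@type": "Question",
        name: "What happens if the at-fault driver is uninsured in [state]?",
        acceptedAnswer: {
          "@type": "Answer",
          text: "You may be able to recover through your own uninsured motorist coverage...",
        },
      },
      // one Question node per consumer question the page answers
    ],
  };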

Week 4 — Citation engineering.

Landed placements in five high-authority legal publications: three attorney-source quotes in legal news articles and two contributed pieces. Submitted the firm to four additional authority directories. The citation surface roughly tripled in seven days.

Week 5 — Reputation substance.

Reset the review-acquisition flow. Instead of "leave us a review," clients were asked specific questions about the practice area, the outcome, and the attorney they worked with. New reviews started coming in with the kind of specific language AI engines pull quotes from.
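
The new intake flow is small enough to sketch. A hypothetical version of the question set; the real prompts were tailored to each practice area.

  // Hypothetical structured review prompts, replacing the generic
  // "leave us a review" ask. Specific answers give engines quotable language.
  const reviewPrompts: string[] = [
    "What kind of case did we handle for you?",
    "What was the outcome, in your own words?",
    "Which attorney did you work with, and what stood out about them?",
  ];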

Week 6 — Validation.

Re-ran the diagnostic. The visibility score moved from 0/16 to 7/16. ChatGPT now named the firm in two of three priority queries. Perplexity cited them in its answers to two consumer prompts. Gemini showed partial coverage. Google AI Overviews was still not citing the firm — that engine has the longest retrieval lag, and we expected it to take another four to six weeks to catch up.

// AT 12 WEEKS

Follow-up diagnostic at week 12 (six weeks after engagement end) showed visibility at 11/16. Google AIO had caught up. The signal stack was now compounding — citations continued growing, reviews continued accumulating, and the firm was being named more frequently across all four engines.

What this case study is not.

Not a template. Every firm starts in a different position, in a different metro, with different existing assets. The six-week timeline above is what was achievable in this specific engagement; some firms move faster, some move slower. Saturated metros (NYC, LA, Houston) typically take three to four times longer because the existing visible firms have larger citation surfaces to displace.

It is also not a prediction. AI engines change retrieval strategies. What worked in 2025 may need adjustment in 2026. The audit framework is durable; the specific tactics will evolve.

What this engagement is.

Proof that AEO is engineering, not magic. Five gaps. Five workstreams. Six weeks. A measurable score change. The same approach applied to other firms produces different timelines but the same shape — a steep ramp from invisible to present, then compounding visibility as the signal stack gets denser.

To see where your firm currently stands, run the diagnostic. To talk about what a six-week build would look like for your specific situation, start at LawShift.

// WHERE DO YOU STAND

Run the diagnostic.

60 seconds. Four engines. See your starting position.

Run the Diagnostic →