// THE MECHANISM

How AI engines
choose which lawyers
to recommend.

AI engines do not rank lawyers the way Google ranks pages. They reconstruct an answer from training data, retrieved sources, and structured signals. Five factors decide which firms get named in the result — and most personal injury firms have engineered for none of them.

The short version.

When a user asks ChatGPT "best personal injury lawyer in Tampa," the model does not return a ranked list of search results. It generates prose. To generate prose, it has to decide which entities (firms, attorneys, brand names) to include. That decision is made by weighing five signals that the model has either seen during training or retrieved at query time.

Below are those five signals, in approximate order of weight for the consumer "find me a lawyer" query class.

01 //

Citation density.

How often the firm is named, by name, across third-party sources the model trusts. Legal directories (Avvo, Justia, FindLaw, Super Lawyers, Martindale-Hubbell), state and county bar association rosters, news outlets, peer review platforms, awards announcements.

The weighting here is not "how many backlinks." It is "how many independent, authoritative mentions of the entity exist on sources the model considers credible." A firm with five mentions on top-tier legal publications can outperform a firm with five hundred mentions on low-authority directories.

02 //

Structured data signals.

Whether the firm's website emits clean machine-readable schema markup — Attorney, LegalService, Review, Organization, FAQPage. AI engines parse schema directly when retrieving real-time information. Schema makes the firm legible to the model in a way that flat text does not.

Schema alone is one of the highest-leverage AEO moves available. Most law firm websites have either no schema or broken schema. Fixing it correctly can move visibility scores noticeably within a single retrieval cycle.
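As a concrete sketch, the markup at issue is a JSON-LD payload using the schema.org types named above. The snippet below builds one in Python; the firm name, URL, service area, and review text are hypothetical placeholders, not a real firm.

```python
import json

# Minimal JSON-LD sketch for a law firm page. The @type values
# (Attorney, Review, Rating) are schema.org types; every concrete
# value below is a hypothetical example.
firm_schema = {
    "@context": "https://schema.org",
    "@type": "Attorney",
    "name": "Example Injury Law, PLLC",        # hypothetical firm
    "url": "https://www.example.com",           # hypothetical URL
    "areaServed": "Tampa, FL",
    "knowsAbout": ["Personal injury", "Uninsured motorist claims"],
    "review": {
        "@type": "Review",
        "reviewRating": {"@type": "Rating", "ratingValue": "5"},
        "reviewBody": "She explained my UM coverage clearly.",
    },
}

# Serialized, this is the payload that goes inside a
# <script type="application/ld+json"> tag in the page <head>.
print(json.dumps(firm_schema, indent=2))
```

The point of the structure is legibility: each field maps an entity attribute (name, practice areas, review substance) to a vocabulary the retrieval layer can parse directly, rather than leaving it to be inferred from flat text.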

03 //

Practice-area specificity.

Pages that answer narrow questions — "uninsured motorist claim process in Florida," "delayed-onset whiplash settlement value," "rideshare accident liability Texas" — earn more semantic weight than generic "personal injury" pages that say everything and surface nothing.

The specificity functions as topical proof. When the model retrieves content related to the consumer's actual question, the firm whose pages most directly answer that question gets surfaced. Generic pages get filtered out before they reach the synthesis stage.

04 //

Reputation context.

Volume of reviews is a hygiene factor — necessary but not differentiating. The differentiation is the substance of reviews: recency, sentiment, and specific content. Reviews that include phrases like "she explained my UM coverage clearly" get pulled into AI answers as evidentiary substance. Reviews that say only "great firm, would recommend" contribute almost nothing.

The implication for firms: review acquisition strategy needs to encourage specificity, not just volume. A review that mentions a practice area and an outcome is worth more than ten generic five-star ratings.

05 //

Topical authority surround.

The legal web around the firm. A firm cited on five high-authority legal sites with detailed context — practice descriptions, case results, attorney profiles, expert quotes — will beat a firm with twice the SEO traffic and no surrounding signal.

This is where most "we have great SEO" firms lose. SEO traffic is a measure of one site's performance. Topical authority is a measure of how the entire web treats the firm as an entity. AI engines pull from the latter.

What this means in practice.

The five factors are not weighted equally for every prompt. A long-form research-mode prompt ("how does a UM claim work in Florida") leans heavily on practice-area specificity. A short purchase-mode prompt ("best PI lawyer Tampa") leans on citation density and reputation context. The full audit tests all factors against representative prompt clusters.

One pattern is consistent across all engines and all prompt types: firms that score zero on three or more of the five factors are invisible. There is no single tactic that compensates for missing inputs. AEO works as a stack of interdependent signals, not a set of independent levers.

// IMPLICATION

Buying more reviews while ignoring schema and citation work is a common waste of budget. Volume on one factor cannot offset zeros on the others. The signal stack is multiplicative, not additive.
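The multiplicative framing can be sketched as a toy scoring model. The factor names come from the five sections above; the weights and scores are hypothetical illustrations, not real engine values.

```python
from math import prod

# The five factors described above. Scores are hypothetical
# values in [0, 1]; no real engine exposes numbers like these.
FACTORS = [
    "citation_density",
    "structured_data",
    "specificity",
    "reputation",
    "topical_authority",
]

def visibility(scores: dict) -> float:
    """Multiplicative stack: a zero on any factor zeroes the result."""
    return prod(scores[f] for f in FACTORS)

solid = {f: 0.7 for f in FACTORS}          # decent on all five factors
lopsided = dict(solid,
                reputation=1.0,            # heavy review spend...
                structured_data=0.0)       # ...but no schema at all

print(visibility(solid))     # 0.7 ** 5 — every factor contributes
print(visibility(lopsided))  # 0.0 — one missing input erases the rest
```

An additive model would reward the lopsided firm for its review spend; the multiplicative one shows why a zero on schema or citations cannot be bought back on another factor.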

What the engines don't use.

Three signals move SEO and do not move AEO: PageRank-style backlink-graph weight, on-page keyword density, and paid advertising spend in Google Ads. None of these inputs reach the AI synthesis layer. A firm spending forty thousand a month on Google Ads is paying for SEO-channel visibility while remaining invisible in AI. The two budgets are not interchangeable.

// CHECK YOUR STACK

Where does your firm
score on the five factors?

The diagnostic returns a fast read on whether your signal stack is producing visibility.

Run the Diagnostic →