The AI governance question every law firm needs to answer in 2026.
Privilege, conflicts, citations, ethics rules — none of it gets a pass because the workflow involves AI. The five-layer framework that holds, and the questions a state bar will eventually ask.
Walk into a law firm in 2026 and you'll find AI being used in places nobody at the firm has formally approved. The intake coordinator is using ChatGPT to summarize voicemails. A paralegal is using a free LLM to draft initial demand letters. An associate is using a research tool that no one has vetted for citation accuracy. The managing partner finds out about all of this the day a client asks pointedly whether their case file went through any third-party AI service. The answer the firm has at that moment is, almost universally, "I'd have to check."
I am not going to argue that law firms shouldn't use AI. I'm a co-founder of an agency that installs AI-driven operations for law firms. I am going to argue — strenuously — that the firms using AI without a written governance posture are running a risk that's larger than they realize, that's growing every quarter, and that becomes existential the first time a state bar issues a real opinion or a malpractice case touches an AI-generated artifact.
What governance actually has to cover.
A working AI governance posture for a law firm has to answer five questions, in writing, with the answers known and followed by everyone in the firm. Most firms have answered zero of them.
1. Privilege: where does client data go?
Every AI tool the firm uses has a data flow. Some tools process the data and forget it. Some retain it for training. Some retain it for support purposes. Some are explicit about being zero-retention; others are vague. The firm's posture needs to specify, for every approved tool, exactly what happens to the data: where it's stored, who can access it, how long it persists, whether it's used to train models, and what the contractual remedies are if the vendor changes those terms. If a client asks "did my case data leave your firm?" — the firm needs an answer.
The practical mechanism: a vendor whitelist. Tools on the list are approved for client data, with the data-handling terms above documented for each; anything not on the list is, by default, not approved. The list is reviewed quarterly.
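For illustration, here is a minimal sketch of what one whitelist entry and the default-deny check might look like; the record fields (retention_days, used_for_training, and so on) are hypothetical stand-ins for whatever the firm's vendor contracts actually specify.

```python
from dataclasses import dataclass

@dataclass
class ApprovedTool:
    # One whitelist entry per tool, capturing the data-flow answers above.
    name: str
    data_location: str       # where client data is stored
    retention_days: int      # 0 means zero-retention
    used_for_training: bool  # does the vendor train models on firm data?
    vendor_access: str       # who at the vendor can see the data, and under what conditions
    contract_remedy: str     # what the contract provides if the vendor changes terms
    last_reviewed: str       # date of the most recent quarterly review

# Default-deny: a tool is approved for client data only if it appears here.
WHITELIST: dict[str, ApprovedTool] = {
    "example-legal-ai": ApprovedTool(
        name="example-legal-ai",
        data_location="vendor US region, firm-dedicated tenant",
        retention_days=0,
        used_for_training=False,
        vendor_access="vendor support staff, only with firm approval",
        contract_remedy="termination plus certified deletion",
        last_reviewed="2026-01-15",
    ),
}

def approved_for_client_data(tool_name: str) -> bool:
    """Anything not on the list is not approved; there is no exceptions path."""
    return tool_name in WHITELIST
```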
2. Conflicts: are we using AI on adversarial matters?
A general-purpose AI tool can help draft a brief in matter A and an opposing brief in matter B if the firm represents both sides. The tool itself doesn't trigger a conflict, but the lack of segmentation can. Firm-grade governance means AI use is segmented per matter, per client, with the same rigor as document management — not pooled into a single firm-wide assistant that's seen everything.
In practice this means tool selection that supports matter-level workspaces (most enterprise legal AI does), and a policy that bars use of consumer general-purpose tools for any work product that touches client matters.
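As a sketch of what matter-level segmentation means mechanically, assuming a hypothetical MatterWorkspace abstraction rather than any particular product's API: documents tagged to one matter simply cannot enter another matter's AI context.

```python
class MatterWorkspace:
    """One AI workspace per matter; context never pools across matters."""

    def __init__(self, matter_id: str, client_id: str):
        self.matter_id = matter_id
        self.client_id = client_id
        self.documents: list[str] = []

    def add_document(self, doc_matter_id: str, text: str) -> None:
        # Hard stop: a document tagged to a different matter never enters this context.
        if doc_matter_id != self.matter_id:
            raise PermissionError(
                f"Document belongs to matter {doc_matter_id}, not {self.matter_id}; "
                "cross-matter context is barred."
            )
        self.documents.append(text)
```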
3. Citation accuracy: who is responsible for what the AI says?
The well-publicized cases of attorneys submitting AI-generated briefs containing fabricated citations are not the last cases of their kind. They're the first wave. The governance answer is straightforward: every fact, citation, and statutory reference in any work product that touches an AI tool must be verified by a human against the actual source before it leaves the firm. Without exception. The firm's policy needs to name this responsibility, name the verification process, and require attestation in the matter file that verification happened.
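To make the attestation concrete, here is a minimal sketch of the record the matter file could hold for each verified item; the field names are hypothetical, and in practice this lives in the firm's document management system rather than in code.

```python
from dataclasses import dataclass

@dataclass
class CitationVerification:
    # One record per fact, citation, or statutory reference in AI-touched work product.
    matter_id: str
    artifact: str        # e.g. "motion to dismiss, draft 3"
    citation: str        # the cite exactly as it appears in the work product
    source_checked: str  # the actual reporter, statute, or record consulted
    verified_by: str     # the attorney who did the verification
    verified_on: str     # date verification happened
    accurate: bool       # False means the cite was corrected or removed before filing
```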
This sounds like a slowdown. It is. It's also the only defensible posture. The firms that resist the slowdown are the firms that will eventually be in the news for the wrong reasons.
4. Escalation: when does a human override the AI?
An AI intake agent classifies an inquiry as a non-fit and sends a polite decline. The inquiry was actually a high-value matter the agent misread. The decline went out automatically. There's no malice in this; the AI did exactly what it was designed to do. But the firm needs a defined escalation path that catches edge cases before they become declines that lose business or, worse, declines that overlook something the firm could later be held responsible for missing.
The mechanism: every AI-driven action that affects a prospect, client, or matter has a defined escalation trigger. High-value-signaling inquiries get human review before any outbound communication. Anything ambiguous gets queued for human handling rather than auto-resolved. Anything tied to a deadline or conflict implication gets human-verified before action.
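A minimal sketch of those triggers, assuming the intake agent reports an estimated matter value, a classification confidence, and deadline and conflict flags; the thresholds are placeholders each firm would set for itself.

```python
def needs_human_review(
    estimated_value: float,     # intake agent's estimate of matter value, in dollars
    confidence: float,          # agent's confidence in its classification, 0 to 1
    deadline_implicated: bool,  # any statute of limitations or filing deadline in play
    conflict_flag: bool,        # any hit, even tentative, in the conflict check
) -> bool:
    """Return True if a human must act before any outbound communication goes out."""
    if estimated_value >= 50_000:  # high-value signal: never auto-decline
        return True
    if confidence < 0.85:          # ambiguous: queue for a human, do not auto-resolve
        return True
    if deadline_implicated or conflict_flag:
        return True                # deadlines and conflicts are always human-verified
    return False
```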
5. Audit: can we prove what the AI did, when, and why?
The single most important and most-skipped layer. Every AI-driven decision the firm relies on needs to produce a log: input, output, timestamp, model version, who reviewed it, what they decided. Without that log, the firm cannot reconstruct what happened in any particular matter, cannot defend its work to a regulator, and cannot improve its own systems because there's nothing to improve from.
The good news: most enterprise AI tools produce these logs by default. The bad news: the logs are usually scattered across vendors, not consolidated, and not retained for the duration that legal ethics rules might require. The fix is a centralized log retention policy and a quarterly audit that picks 10 random matters and walks through every AI-touched decision.
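A minimal sketch of what a consolidated record and the quarterly sampling step could look like; the field names and the quarterly_audit helper are hypothetical, not any vendor's actual log format.

```python
import random
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class AIDecisionLog:
    # One record per AI-driven decision the firm relies on.
    matter_id: str
    tool: str           # which approved vendor produced the output
    model_version: str  # exact model identifier reported by the vendor
    timestamp: str      # when the decision happened, e.g. "2026-02-10T14:32:00Z"
    input_ref: str      # pointer to the stored prompt or source material
    output_ref: str     # pointer to the stored output
    reviewed_by: str    # the human who reviewed it
    decision: str       # "accepted", "edited", "rejected", or "escalated"

def quarterly_audit(logs: list[AIDecisionLog], sample_size: int = 10) -> dict[str, list[AIDecisionLog]]:
    """Pick random matters from the centralized logs and return every
    AI-touched decision in each, for a human walkthrough."""
    by_matter: dict[str, list[AIDecisionLog]] = defaultdict(list)
    for entry in logs:
        by_matter[entry.matter_id].append(entry)
    sampled = random.sample(sorted(by_matter), k=min(sample_size, len(by_matter)))
    return {matter_id: by_matter[matter_id] for matter_id in sampled}
```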
Governance is not a constraint on AI use. It is the only thing that makes AI use defensible.
The questions a state bar will eventually ask.
Several state bars have already published preliminary guidance on AI use in legal practice, and the trajectory of those opinions points in a clear direction: state bars will hold attorneys responsible for AI-generated work product as if the attorney produced it themselves, and will expect documented diligence on data handling, conflict checks, and verification. Within 24 months, I'd expect at least a third of state bars to have formalized this into rules with teeth.
The questions a regulator is most likely to ask, in roughly the order they're likely to land:
- "Show us your firm's written AI policy."
- "Show us your approved-vendor list and the data handling provisions."
- "For matter X, identify every AI-assisted artifact and the human verification associated with it."
- "Explain how you ensure AI tools are not creating conflicts across matters."
- "Show us your audit log for AI-driven intake decisions in the past 12 months."
A firm with the five-layer framework above can answer all five in under an hour. A firm without it cannot answer any of them, and the absence is itself the finding.
The cost of doing this right.
In firms I've worked with, building a real governance posture takes 4–6 weeks: drafting the written policy, building the vendor whitelist, configuring tools to produce centralized logs, training the team, and installing the audit cadence. None of that is technically difficult. It just requires someone to own it end to end and finish what they start.
The cost of not doing it isn't theoretical anymore. Malpractice insurers are starting to ask about AI policies on renewal. Sophisticated clients (the ones with $50K+ matters) are starting to ask before engaging. State bars are starting to draft rules. Investors looking at firms for acquisition are starting to ask. Six weeks of work, finished in 2026, closes off a category of risk in 2027 and 2028 that otherwise has no upper bound. It's the kind of thing that's obvious in retrospect and overlooked in the moment.
Don't be a 2027 cautionary tale. Write the policy in 2026.
If your firm is using AI tools without a written governance posture, that's exactly the kind of gap the diagnostic finds and prioritizes. Start there →