
Using AI in Court? How New Cases and Big Law Deals Are Redrawing the Rules for Legal AI Assistants


Courts Start to Signal How They View AI in Litigation

A recent decision in Warner v. Gilbarco, Inc. offers some of the clearest early guidance on AI in litigation. The court refused a request to compel production of all materials related to the plaintiff’s use of a generative AI tool, treating the legal AI assistant as a modern extension of research and drafting software rather than a third party. Crucially, prompts and outputs tied to case preparation were viewed as classic work product created in anticipation of litigation, protected under Federal Rule of Civil Procedure 26(b)(3)(A). The judge also found these AI materials irrelevant and disproportionate under Rule 26(b)(1), resisting an attempt to turn AI use into a discovery fishing expedition. Yet the opinion is not binding precedent, and it leaves open whether other courts will adopt the “tool, not a person” framing or draw sharper lines where different platforms, data policies or sharing practices are involved.

Protecting Privilege and Accuracy When Using Law Firm AI Tools

For firms and in-house teams, Warner underscores that AI use tied closely to litigation strategy may qualify as protected work product, but that protection is fragile in practice. To preserve client data confidentiality and privilege, legal teams should limit legal AI assistant use to case-specific tasks, avoid mixing routine business content with litigation analysis, and carefully control who can see AI prompts and outputs. Input prompts can expose mental impressions, so they should be treated like internal memoranda and stored in secure, access-controlled systems. Accuracy demands human supervision: lawyers remain responsible for fact-checking citations, validating legal positions, and ensuring that AI-generated drafts reflect jurisdiction-specific law. Documenting review steps, retaining underlying sources, and embedding AI use in existing quality controls can help show courts and regulators that AI in litigation is supervised, not delegated. The human lawyer remains the decision-maker and must be able to explain each AI-assisted judgment.
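One way to operationalise that documentation habit is to record every AI-assisted drafting step as a structured audit entry kept inside the firm’s own systems. The sketch below is a minimal illustration only: the record fields, class name and storage approach are invented for the example and are not taken from any specific product or the Warner opinion.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json
import os
import stat


@dataclass
class AIWorkProductRecord:
    """Audit entry for one AI-assisted drafting step (hypothetical schema)."""
    matter_id: str           # litigation matter the work relates to
    author: str              # lawyer who ran the prompt
    prompt: str              # input prompt (may reveal mental impressions)
    output_summary: str      # short description of the AI output
    sources_checked: list[str] = field(default_factory=list)  # citations a human verified
    reviewed_by: str = ""    # supervising lawyer who signed off
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )


def save_record(record: AIWorkProductRecord, log_dir: str = "ai_work_product") -> str:
    """Append the record to a matter-specific log readable only by its owner."""
    os.makedirs(log_dir, exist_ok=True)
    path = os.path.join(log_dir, f"{record.matter_id}.jsonl")
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")
    # File permissions here stand in for the firm's real access controls.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    return path
```

A record like this gives a firm something concrete to point to when asked whether AI-assisted work was supervised: who prompted, who reviewed, and which sources were checked, all stored alongside the matter rather than inside the vendor’s platform.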

Freshfields–Anthropic and the New Question of AI Liability in Law

The Freshfields–Anthropic collaboration highlights how AI liability in law is shifting as firms move from passive users to active co-developers of law firm AI tools. Freshfields will deploy Anthropic’s Claude across its global operations while contributing legal expertise to build tools for drafting, contract review and due diligence. That deeper involvement blurs the line between tool and professional judgment: once AI outputs influence legal work, courts will ask whether the firm met its duties of supervision, verification and client protection. If an AI-generated contract omits a key clause or subtly misstates risk, liability does not fall on the software. Regulators are unlikely to accept “the AI got it wrong” as a defence; they will examine whether the firm understood the tool’s limitations and implemented safeguards. Because such tools may be commercialised beyond one firm, their flaws can scale, creating systemic risk if multiple organisations rely on the same underlying models and workflows.

Confidentiality, Data Security and Training Risks in Third-Party AI Platforms

Confidentiality remains a central concern when feeding client materials into third-party AI platforms. In the Freshfields–Anthropic deal, the vendor has stated that the firm’s data will not be used to train its models, an important boundary but only part of the risk picture. Client data confidentiality can be compromised not just by training, but by how information is processed, where it is stored, and how outputs are reused across matters. Legal teams must scrutinise whether prompts and results are logged, who can access them, and how long they persist. They should insist on contract terms that restrict data reuse, clarify hosting arrangements, and require robust security controls aligned with professional obligations. Integrating external AI into existing document management and review workflows also increases complexity: misconfigurations or informal workarounds can unintentionally expose sensitive material. Treat every AI interaction as a potential disclosure event, and align policies, technical settings and user training around that assumption.
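For teams that want to enforce the “every interaction is a disclosure event” assumption in software, one guardrail is to redact known client identifiers and log the interaction before anything leaves the firm’s environment. The sketch below is illustrative only: the redaction list, logging setup and send_to_vendor function are placeholders, not features of Claude or any other specific platform.

```python
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_disclosure")

# Hypothetical list of client identifiers that must never leave the firm.
CLIENT_TERMS = ["Acme Holdings", "Project Falcon"]


def redact(text: str, terms: list[str] = CLIENT_TERMS) -> str:
    """Replace known client identifiers with neutral placeholders."""
    for i, term in enumerate(terms):
        text = re.sub(re.escape(term), f"[CLIENT_{i}]", text, flags=re.IGNORECASE)
    return text


def send_prompt(prompt: str, user: str) -> str:
    """Redact and record the prompt, then pass it to the external tool."""
    safe_prompt = redact(prompt)
    log.info("AI disclosure event: user=%s chars=%d", user, len(safe_prompt))
    return send_to_vendor(safe_prompt)


def send_to_vendor(prompt: str) -> str:
    """Placeholder for the real vendor API call; echoes the prompt here."""
    return prompt
```

The point is not the specific regex but the order of operations: redaction and logging happen before the vendor call, so the disclosure record exists even if the platform’s own logs are outside the firm’s control.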

A Practical Checklist for Evaluating Legal AI Assistants

In-house counsel and law firms adopting legal AI assistants should apply a structured evaluation checklist. First, governance: define where AI is permitted in litigation and transactions, and require documented human review of all substantive outputs. Second, disclosure and supervision: decide when to inform courts or clients about AI in litigation filings, and ensure partners or senior in-house lawyers remain accountable for final content. Third, client data confidentiality and security: confirm that prompts and outputs are encrypted, access-controlled, and excluded from model training unless explicitly agreed. Fourth, AI liability in law and contracts: negotiate vendor terms covering error handling, audit rights, model change notices and clear allocations of responsibility, while recognising that professional liability ultimately rests with the lawyer. Finally, training and monitoring: educate users on tool limits, track incidents of AI error or “subtle distortion,” and adjust policies as case law and technology evolve. Treat AI as powerful but fallible infrastructure, not an autonomous decision-maker.
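Teams that want to apply this checklist consistently across vendors can encode it as simple structured data and score each candidate the same way. The sketch below is one possible encoding; the criterion names are invented for illustration and do not reflect any firm’s actual policy.

```python
# Hypothetical encoding of the evaluation checklist; item names are illustrative.
CHECKLIST = {
    "governance": [
        "permitted_use_cases_defined",
        "human_review_of_substantive_outputs",
    ],
    "disclosure_and_supervision": [
        "court_and_client_disclosure_policy",
        "senior_lawyer_accountable_for_final_content",
    ],
    "confidentiality_and_security": [
        "prompts_and_outputs_encrypted",
        "access_controls_in_place",
        "excluded_from_model_training",
    ],
    "liability_and_contracts": [
        "error_handling_terms",
        "audit_rights",
        "model_change_notices",
    ],
    "training_and_monitoring": [
        "user_training_on_tool_limits",
        "incident_tracking",
    ],
}


def score_vendor(answers: dict[str, bool]) -> float:
    """Return the fraction of checklist items the vendor satisfies."""
    items = [item for group in CHECKLIST.values() for item in group]
    met = sum(1 for item in items if answers.get(item, False))
    return met / len(items)


if __name__ == "__main__":
    # Example: a vendor meeting only the confidentiality and security items.
    sample = {item: True for item in CHECKLIST["confidentiality_and_security"]}
    print(f"Vendor score: {score_vendor(sample):.0%}")
```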
