AI Legal Assistants Put the Billable Hour Under Pressure
As generative AI in law moves from pilot to daily use, it is attacking the economic foundation of many practices: time-based billing. Surveys show that nearly 90 per cent of lawyers in one major market say their firm is piloting or integrating AI tools, and tasks that once took hours are now completed in a fraction of the time. At one contract-focused firm, an AI legal assistant helps send emails, schedule meetings and complete government forms that used to consume three to eight hours, enabling the firm to offer flat rates for services such as prenuptial agreements instead of traditional hourly fees. Meanwhile, legal strategy leaders at large firms report cutting research and drafting work from around 10 hours to a single hour of human review. The result is mounting pressure to adopt billable hour alternatives and to monetise outcomes, access and responsiveness rather than raw time spent.

From Headcount Leverage to AI-First Leverage
Behind the pricing debate is a deeper shift in how law firm business models create leverage. Historically, partners scaled profits by stacking teams of associates under them. Now, consultancies built as “AI-native” legal advisors report that using AI has not reduced revenue but increased it, because they can take on more matters and broaden their service scope. This points to a new model where an AI legal assistant becomes the first layer of leverage, with humans focused on higher-value judgment and advocacy. In-house teams and firms are asking how staffing pyramids, support roles and partner–associate ratios should evolve when routine analysis, drafting and document-heavy work can be largely automated. Clients, for their part, are starting to question which tasks they are willing to pay premium rates for, and which should be bundled into subscription or fixed-fee offerings powered by generative AI in law.

Optimists, Skeptics and the Fear of Devalued Craft
The profession is sharply divided over what this AI-driven efficiency means. Enthusiasts argue that AI legal assistants reduce administrative drag and legal research time, letting lawyers focus on strategy and advocacy while making services more affordable. One flat-fee firm points to more accessible pricing on standard agreements once AI automates time-consuming back-office work. Large business firms report using multiple AI systems to generate research, witness questions and drafts, with human lawyers refining the output. Yet many practitioners remain wary. They worry that if generative AI in law performs more of the “thinking,” core legal craft will be devalued, junior lawyers will lose formative training opportunities, and the profession will shoulder new categories of legal AI risk. For them, the question is not just efficiency but who bears responsibility when AI goes wrong and whether clients will still recognise and pay for expert human judgment.

The Sullivan & Cromwell Hallucination and the Cost of Control
Those risks became highly visible when Sullivan & Cromwell, a top-tier firm, recently admitted that AI hallucinations had slipped into a court filing in major fraud-related proceedings. The motion misquoted key statutory provisions, misdescribed authorities and even cited a case that did not exist. Opposing counsel alerted the court, prompting the firm to apologise and acknowledge that its own AI usage policies—which require rigorous verification of AI-generated material—had not been followed. The firm re-reviewed all filings and submitted a corrected motion, taking what it called immediate remedial measures. The incident underscores that legal AI risk is not abstract; it translates directly into reputational damage, remediation costs and potential judicial sanctions. It also highlights a paradox for the law firm business model: the more work is accelerated by AI, the more firms must invest in slower, methodical verification workflows, partner sign-off rules and professional responsibility safeguards.

Governance, Transparency and New Client Value Propositions
To turn AI from a liability into a competitive edge, firms are starting to redesign both governance and value propositions. On governance, many are forming cross-functional AI committees, drafting usage policies that specify approved tools and prohibited use cases, and mandating documented quality assurance processes for any AI-assisted work product. The Sullivan & Cromwell episode will likely push more firms to enforce mandatory human verification and clearer sign-off responsibilities. On the client side, firms experimenting with subscriptions, fixed fees and hybrid pricing are discovering that transparency about when and how an AI legal assistant is used can become a selling point rather than a confession. By explaining where generative AI in law accelerates work—and where human expertise still controls outcomes—firms can justify billable hour alternatives, share some efficiency gains with clients and still capture new revenue by handling a greater volume and diversity of matters.
