A Manhattan federal judge stoked debate among legal professionals by suggesting that failing to adopt artificial intelligence tools could someday itself constitute malpractice, inverting the traditional concern that over-reliance on AI creates ethical and professional liability problems. Judge Jesse Furman, chairman of the U.S. judiciary’s Advisory Committee on Evidence Rules, told a New York State Bar Association audience that while some lawyers risk malpractice by relying too heavily on AI, there may come a point when not using it is unreasonable or improper practice, particularly if AI can perform tasks far more efficiently than humans; that shift could affect fee disputes and professional standards. He noted that as AI evolves, courts and bar authorities will need to grapple with the technology’s reliability, with risks such as hallucinations and bias, and with how to set rules that balance innovation against professional responsibility. The remarks highlight a broader conversation as federal courts consider draft rules on AI-generated evidence and as the legal community voices skepticism about prescriptive regulations for emerging technologies.
Sources:
https://www.semafor.com/article/01/14/2026/new-york-judge-flips-the-ai-malpractice-debate
https://news.bloomberglaw.com/new-york-brief/ny-federal-judge-questions-if-avoiding-ai-could-be-malpractice
https://www.reuters.com/legal/government/lawyers-doubtful-about-us-judiciarys-draft-rule-ai-generated-evidence-2026-01-15/
Key Takeaways
- Role of AI in Legal Duty: A federal judge suggested that not using AI in legal practice could, in the future, constitute malpractice, reversing the prevailing anxiety that AI use itself is negligent.
- Professional Standards Evolving: The legal community is debating draft federal rules on AI-generated evidence, with many lawyers questioning whether formal regulations are premature.
- Ethics and Efficiency: Judge Furman raised the idea that billing clients for tasks that AI could do much faster could become untenable or ethically questionable as AI improves.
In-Depth
In a shift that is already rippling through the legal profession, a New York federal judge publicly questioned whether the next frontier of legal ethics will involve claims not of too much reliance on artificial intelligence, but of too little. At a recent New York State Bar Association event, Judge Jesse Furman, who chairs the U.S. judiciary’s Advisory Committee on Evidence Rules, indicated that the legal community could eventually face a landscape in which adopting artificial intelligence is not merely an advantage but a requirement of competent practice.
Furman’s comments, drawn from a Bloomberg Law report, suggest that the conversation about AI in law has moved beyond cautionary tales of hallucinations, biased algorithms, and confidentiality risks to considering professional liability for attorneys who fail to adopt powerful technology. According to Furman, as generative AI becomes more accurate and efficient, resisting its integration could be seen as falling short of prevailing professional norms, particularly if a client can demonstrate that tasks a lawyer performed manually could have been done more quickly and cheaply with AI.
This perspective emerges against the backdrop of ongoing debates over how to regulate AI in litigation. Federal courts have been working on draft rules to govern the admissibility and reliability of AI-generated evidence, but many attorneys, particularly in corporate and class-action practice, are skeptical that the judiciary should rush into formal restrictions. Lawyers have expressed concern that the draft proposals address problems that are not yet clearly defined, arguing that existing rules for expert testimony and evidence suffice for current needs.
Despite these reservations, Furman’s remarks signal a recognition that AI’s role in legal proceedings and preparation is expanding rapidly, and that the judiciary might eventually view competent representation as inseparable from competent use of available tools — including artificial intelligence. For cautious practitioners, the implications are significant: lawyers must not only stay informed about AI’s evolving capabilities but also consider how ethical standards and malpractice exposure will adapt to technological advances, shifting the traditional risk calculus from avoiding innovation to potentially embracing it as a professional duty.

