The Monitor’s Catherine McLaughlin has a long piece in today’s paper about a lawsuit that featured fake A.I.-generated citations, “perhaps the first of its kind in New Hampshire,” and what that means.
It was a garden-variety lawsuit in the state of New Hampshire. A Windham couple hired a contractor to renovate their house, giving him a hefty down payment for the work. After they scaled back their plans, they claim, he ran off with the money. So they sued.
But the July 15 order in the case from Superior Court Judge Lisa English was far from typical, perhaps the first of its kind in New Hampshire.
Numerous citations in legal arguments filed by the couple’s lawyer were “mistaken and misleading,” English wrote. The cases that were referenced came from a different state, or “from a different year, with a different citation, addressing a totally different area of law.” Some of the cases, English wrote, don’t exist.
During a hearing in August, the couple’s lawyer, Nicole Bluefort, tried to explain.
Another lawyer at her firm who was working on the case used artificial intelligence instead of traditional tools to draft the legal briefs and didn’t tell anyone, she said. After the lawyer for the contractor raised questions, an “amended” brief from Bluefort, created with the help of the same associate, contained additional errors. The A.I. platform had mashed up case law and Bluefort didn’t catch it.
“I should have done my due diligence,” she said at the hearing. “It is on me to have verified because my name goes on it.”
Bluefort said she removed the other attorney from this case, began workshopping an A.I. policy for her law firm, which spans three offices in New Hampshire and Massachusetts, and paid the opposing lawyer, at his request, for the additional time he spent checking and re-checking her work, a sum totaling just over $5,000.
The judge in Rockingham County Superior Court that day, Mark Howard, thanked Bluefort for her honesty and proactive steps toward remediation. As a result, he wrote in an order, “the court considers the matter resolved and no further action is necessary.”
Bluefort declined to be interviewed for this story.
Similar situations are emerging in courtrooms across the country.
In California, a state judge in September issued a $10,000 fine to an attorney who’d submitted false information as a result of A.I. use, as reported by CalMatters. In May, a federal judge in the Golden State required law firms to pay more than $30,000 in fees to opposing counsel and to the court for their time.
In states from Arizona to Utah to New Jersey to Colorado, attorneys who made errors because they used A.I. have been removed from cases, ordered to pay attorney fees, required to refund their clients, fined outright and, in some cases, formally sanctioned or suspended.
Online databases have tracked hundreds of cases where lawyers or judicial officials have made errors by using A.I.
The Windham case, however, appears to be the first reported instance in New Hampshire.
‘A basic duty’
As in so many other fields, artificial intelligence poses major risks and opportunities for lawyers, and the stakes for error are high. Courts, law firms and law schools nationwide are grappling with how to encourage responsible use.
Ultimately, the ethical standards for truthfulness set by state bar associations and courts are unchanged, whether or not A.I. is used.
“That obligation has always existed,” said Bob Lucic, who chairs the New Hampshire Bar Association’s Special Committee on Artificial Intelligence. “You’re not supposed to cite cases to judges that don’t exist. That’s a bad thing.”
Many lawyers are overestimating just how smart or reliable A.I. tools are, he said.
An online database compiled by Jenny Wondracek, the director of the Law Library at Capital University in Ohio, contains nearly 500 cases where A.I. was used and errors were found. The earliest instances are from 2023, and the number of cases is accelerating, whether because people are using A.I. more, because they’re getting better at checking for hallucinations or, most likely, both.
Courts are handling the misuse of A.I. in a range of ways, Wondracek said, as they search for the most effective deterrent. Fines and suspensions are one tactic, but some courts are going in a different direction by requiring lawyers to take new training.
“We have one judge that waived monetary fees if the attorneys will go talk to law students and explain what they did wrong,” Wondracek said. “So, a little bit more creative.”
There are more aggressive punishments, too. The judge in the California case wrote an opinion that slammed the attorney for having “violated a basic duty … owed to his client and the court.” In addition to the large fine, the judge ordered the attorney to personally serve a copy of her opinion on his client and on the California Bar.
On the flip side, Wondracek said, those who take steps to prevent future A.I. misuse are often rewarded by judges with a lesser penalty.
In a Wyoming case involving the personal injury firm Morgan & Morgan, both the associate attorneys who used A.I. and the other lawyers who signed their names to briefs with fake cases were fined by a judge. But their penalties were lighter because, like Bluefort, they were forthcoming about the error. The firm itself, Wondracek said, wasn’t sanctioned because it quickly implemented new guidelines and training requirements.
When Lucic, also a partner at Sheehan Phinney Bass & Green, graduated from law school, the most groundbreaking technology of the era was the Post-it note, he said.
Decades into a career of litigating cases involving technology, that era seems quaint to him now.
Lawyers have always been obligated to stay on top of technological advances, he explained, and to know how they should and shouldn’t be used.
The strength of A.I. technology, for now, lies in time savings.
“I am not one of those people who really thinks that A.I. is going to completely take over the practice of law,” Lucic said. “But if you look at your desk and find the things that you really find the most annoying about your job… A.I. is really, really good at, for lawyers generally, doing that sort of non-legal stuff.”
At the same time, Lucic continued, lawyers have to know what A.I. is capable of in the courtroom in order to watch out for it.