Order! How AI is causing havoc in the courts

By Jaine Green, AI Aware

I’ve just read two interesting articles, from the Guardian and the New York Times, about the misuse of AI by law firms – an area we at AI Aware are increasingly contacted about.

In recent years, the legal industry has thrown itself into using AI tools to streamline its many time-intensive, repetitive or complex tasks, such as preparing reports, drafting legal briefs and conducting legal research. Traditionally, producing the mountain of paperwork required in any legal case fell to paralegals and new lawyers and could take days, weeks, even months; using AI, the same can be achieved in a matter of hours. Surely this is a game-changing win-win scenario? Honest law firms pass the savings on to clients (making legal help more accessible), young lawyers can get stuck into the nitty-gritty of law, overworked paralegals can leave the office at a decent hour, legal teams can spend more time on higher-value work and so provide clients with a better service, and the legal system becomes more efficient, moving more cases through its doors at a faster pace. It seems a rare case of ‘everyone is happy’ – except it’s not.

The convenience of using AI comes with significant risks, as law firms are beginning to discover. Perhaps the greatest is that AI will often generate inaccurate or fabricated (‘hallucinated’) content which, if not properly checked and challenged, can easily make its way into the courts, wasting time and undermining the credibility of our legal system.

The Guardian article cites two huge lawsuits that collapsed in part because of the misuse of AI. The first was an £89m claim against the Qatar National Bank, so no small fry. The claimants submitted 45 case-law citations; slam dunk, you might think, until you discover that 18 were completely fictitious, many of the others contained wrong or made-up quotes, and both witness statements supplied by the client were fictitious too. As the judge threw the case out of court, he was quick to criticise the solicitors for their lack of caution and care.

The second came to light when Haringey Law Centre challenged the London borough of Haringey over its alleged failure to provide its client with temporary housing. The lawyer cited five AI-‘hallucinated’ cases; the defending lawyer could find no reference to any of them, and the red flag was raised. Legal action was taken against the Law Centre for wasted legal costs, and the centre, its lawyers and pupil lawyers were found negligent by the court.

In swift response, the court’s ruling on these incidents warned there were ‘serious implications for the administration of justice and public confidence if artificial intelligence is misused’ – a sentiment we share and know can be applied to almost every sector. Dame Victoria Sharp, the president of the King’s Bench Division, did not stop there: she called on the Bar Council and the Law Society to clean up their act on AI ‘as a matter of urgency’, making sure that all lawyers know their professional and ethical duties when using it. Hurrah to that!

In the same vein, the New York Times reported on a similar case on its side of the Atlantic: Mata v. Avianca, in which a passenger sued the airline claiming he had been struck on the knee by a metal drinks trolley. Unfortunately for the passenger, his lawyers submitted a huge document citing more than half a dozen relevant cases; when the defending counsel and the judge checked, it turned out none of them existed outside the mind of the AI. The lawyer, Mr Schwartz, new to ChatGPT, claims he asked the program to confirm the cases were genuine and, doubling down on its lie, ChatGPT said ‘yes’. Redder faces than the poor passenger’s knee all round. It’s a stark reminder that, despite AI’s fluency and persuasive tone, it does not possess a conscience or a concept of truth: it produces plausible-sounding language, not verified legal fact.

The problem, as we see time and again, is that these ‘hallucinations’ occur because large language models are trained to predict the next word in a sequence from statistical patterns, not to retrieve facts from a verified database. They have no concept of the truth-value of the content they generate, which becomes particularly problematic in legal contexts, where precision, precedent and factual integrity are non-negotiable. If an AI tool produces a citation that does not exist and a legal professional fails to catch the error, the implications can be disastrous for the case in question, while eroding trust in legal practitioners and the system as a whole.
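To make the point concrete, here is a minimal sketch in Python of the kind of automated first-pass check a firm might run before anything reaches a court: extract what looks like a neutral citation from an AI draft and flag anything that cannot be matched against a verified source. Everything here is hypothetical and illustrative only – the hard-coded ‘database’ stands in for a real law-reports lookup service, and real citation formats are far messier – and no automated check replaces a human reading the cases themselves.

    import re

    # Stand-in for a verified citation source (e.g. an official law
    # reports database); a real check would query a live service,
    # not a hard-coded set.
    KNOWN_CITATIONS = {
        "[2023] EWHC 123 (KB)",
        "[2019] UKSC 41",
    }

    # Rough pattern for neutral citations such as "[2023] EWHC 123 (KB)";
    # genuine citation formats vary far more widely than this.
    CITATION_PATTERN = re.compile(r"\[\d{4}\]\s+\w+\s+\d+(?:\s+\([A-Za-z]+\))?")

    def flag_unverified_citations(draft_text: str) -> list[str]:
        """Return every citation found in an AI-drafted text that cannot
        be matched against the verified source, for human review."""
        found = CITATION_PATTERN.findall(draft_text)
        return [c for c in found if c not in KNOWN_CITATIONS]

    draft = (
        "As held in [2023] EWHC 123 (KB) and followed in "
        "[2024] EWHC 999 (Ch), the duty is strict."
    )
    for citation in flag_unverified_citations(draft):
        print(f"UNVERIFIED - check by hand: {citation}")

Run against the sample draft, this flags ‘[2024] EWHC 999 (Ch)’ for manual checking – exactly the step the lawyers in the cases above skipped.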

Reliance on AI without appropriate oversight may breach ethical obligations of competence and candour. Lawyers are required to provide competent representation and to refrain from making false statements of fact or law; ignoring the limitations of AI can easily lead to violations of these standards, exposing law firms and individual lawyers to disciplinary action, malpractice claims, or worse. Many might argue that the burden on legal professionals to scrutinise every AI-assisted document negates the time-saving benefits AI promises in the first place.

AI has the potential to be a game-changer for legal workflows, but for now it must be treated with the same caution as a speedy but highly fallible legal assistant. Tools like ChatGPT and other language models can assist with drafting or brainstorming, but they cannot replace the judgment, verification and ethical responsibility that underpin legal practice. The legal profession thrives on trust, accuracy and accountability – qualities that no AI system currently possesses.

Another concern, raised by a legal client this week, is that far from relieving the pressure on junior lawyers, these changes are arguably diminishing their prospects, because they miss out on essential parts of their training.

In the past, junior lawyers honed their critical thinking and legal reasoning by sifting through case law to identify relevant precedents, and it is the hours spent reviewing documents that teach them to spot inconsistencies – such as the ‘hallucinated’ cases spat out by AI – and to formulate legal arguments of their own. Some fear that missing out on this fundamental groundwork is storing up problems for the future when it comes to the competency of senior legal professionals.

The allure of faster document preparation and lower research costs is understandable, particularly for overburdened legal teams. However, the potential consequences of using AI tools irresponsibly, from career-damaging sanctions to miscarriages of justice, should compel every legal practitioner to proceed with caution. AI can aid, but it must never be allowed to lead without human oversight. Until these technologies can consistently verify the accuracy of their output, a milestone still far from reality, the burden of truth must remain squarely on human shoulders.

Get in touch