Should lawyers stop using generative AI to prepare their legal arguments?

· Thomas Wood

Senior lawyers should stop using generative AI to prepare their legal arguments!

Or should they?

A High Court judge in the UK has told senior lawyers off for their use of ChatGPT, because it invents citations to cases and laws that don’t exist!

In a case in the High Court, Dame Victoria Sharp referred in her judgment to:

actual or suspected use by lawyers of generative artificial intelligence tools to produce written legal arguments or witness statements which are not then checked, so that false information (typically a fake citation or quotation) is put before the court.

— Frederick Ayinde, R (on the application of) v The London Borough of Haringey [2025] EWHC 1040 (Admin) [2]

This is fascinating because we all know about AI hallucinations. Generative AI is notorious for making things up on the spot where data is sparse. But what it is good at is reasoning over large amounts of text, provided that enough information is given to it.

But all this is harmless, right?

In the case R (Ayinde) v Haringey, because of the wasted costs associated with the fake citations, the Court reduced the claimant’s costs from £20,000 to £6,500. Although the lawyer in question denied using AI, the judge ordered the transcript of the case to be sent to the Bar Standards Board and to the Solicitors Regulation Authority. So there is a tangible financial cost to the overuse of generative AI by lawyers for legal work, not to mention the accompanying disruption to a publicly funded and overstretched legal system.

What about litigants in person?

In England and Wales, a litigant in person (LIP) is an individual, company or organisation who goes to court without legal representation from a solicitor or barrister. Although litigants in person don’t have legal representation, they may have received legal advice.

In 2025, litigant in person Dr Mustapha Soufian used ChatGPT to help draft his submissions in an appeal before the Intellectual Property Office, which resulted in fake cases appearing in his filing.[4] The appeal ruling stated that although Dr Soufian was a litigant in person, “an unrepresented person is still under a duty not to mislead the court”.

At a time when the availability of legal aid and conditional fee agreements have been restricted, some litigants may have little option but to represent themselves. Their lack of representation will often justify making allowances in making case management decisions and in conducting hearings. But it will not usually justify applying to litigants in person a lower standard of compliance with rules or orders of the court.

— Lord Sumption in Barton v Wright Hassall LLP [2018] UKSC 12, [2018] 1 WLR 1119 [5]

The appeal ruling referred to the earlier case of Ayinde v Haringey, saying “fabrication of citations can involve making up a case entirely, making up quotes and attributing them to a real case, and also making up a legal proposition and attributing it to a real case even though the case is not relevant to the legal proposition being made”.

This case illustrates how easily non-experts can fall into the trap of using ChatGPT unchecked, resulting in harm to their case. If Dr Soufian’s case had been in the insolvency domain and he had used the Insolvency Bot, there would have been no fake cases in his submission.

How can we stop generative AI making fake citations?


Eugenio Vaccari, Marton Ribary, Miklos Orban, Paul Krause and I investigated this when we experimented with using generative AI to answer questions about insolvency in English and Welsh law. We found that off-the-shelf large language models made up cases and statutes that sounded genuine, until we discovered that they didn’t exist!

Fortunately, a combination of a large language model and a database of law helps us mitigate this. Eugenio Vaccari collected a dataset of cases and statutes in English insolvency law, and we combined a lookup (finding the right statute using keyword matching and text similarity) with a large language model, to get the best of both worlds. We feed the citations (cases and statutes) to GPT as part of the prompt, so we get a response that we know uses real citations.
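As a rough illustration of this approach, here is a minimal sketch in Python. It is not the Insolvency Bot’s actual code: the statute snippets, the TF-IDF similarity method and the prompt wording are all placeholder assumptions, but it shows the shape of a lookup combined with an LLM prompt.

```python
# A minimal retrieval-augmented sketch: rank legal sources by similarity
# to the question, then feed only those real citations to the LLM.
# The statute snippets below are illustrative placeholders, not a real dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy "database" of real legal sources (title, text).
sources = [
    ("Insolvency Act 1986, s. 122", "A company may be wound up by the court if ..."),
    ("Insolvency Act 1986, s. 123", "A company is deemed unable to pay its debts if ..."),
    ("Companies Act 2006, s. 994", "A member may petition the court on unfair prejudice grounds ..."),
]

def retrieve(question: str, k: int = 2) -> list[tuple[str, str]]:
    """Rank sources by TF-IDF cosine similarity to the question."""
    texts = [f"{title} {body}" for title, body in sources]
    matrix = TfidfVectorizer().fit_transform(texts + [question])
    scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
    ranked = sorted(zip(scores, sources), key=lambda pair: -pair[0])
    return [source for _, source in ranked[:k]]

def build_prompt(question: str) -> str:
    """Prepend the retrieved, real citations to the prompt sent to the LLM."""
    context = "\n".join(f"[{title}] {body}" for title, body in retrieve(question))
    return (
        "Answer using ONLY the sources below; cite them by name "
        "and do not invent any other cases or statutes.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )

print(build_prompt("When can a creditor wind up a company that cannot pay its debts?"))
```

In the real system, the retrieved passages come from a curated database of English insolvency cases and statutes, so every citation placed in the prompt is one we know exists.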

The responses were significantly better than those from GPT without the extra citations in the prompt. This approach is called Retrieval-Augmented Generation (“RAG”). The performance boost that we found using RAG increased when we used more advanced LLMs.

So there is a way to use generative AI and avoid the problem of fake citations - but lawyers shouldn’t be using ChatGPT as-is, without a RAG system or an alternative to keep it on track.
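A complementary safeguard (my own illustrative sketch here, not something described in the judgment or in our paper) is to verify every citation in a model’s output against a database of known sources before it reaches the user, for example by extracting neutral citations with a regular expression:

```python
import re

# Hypothetical allow-list: citations known to exist in the legal database.
KNOWN_CITATIONS = {
    "[2018] UKSC 12",
    "[2025] EWHC 1040 (Admin)",
}

# Matches neutral citations such as "[2019] EWHC 1873 (Admin)" or "[2018] UKSC 12".
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(?:UKSC|UKHL|EWCA|EWHC)\s+\d+(?:\s+\([A-Za-z]+\))?"
)

def unverified_citations(llm_output: str) -> list[str]:
    """Return any citations in the LLM's answer that are not in the database."""
    found = NEUTRAL_CITATION.findall(llm_output)
    return [citation for citation in found if citation not in KNOWN_CITATIONS]

answer = "See R (Ibrahim) v Waltham Forest LBC [2019] EWHC 1873 (Admin) and Barton [2018] UKSC 12."
print(unverified_citations(answer))  # ['[2019] EWHC 1873 (Admin)'] - flagged for human review
```

Anything flagged this way would still need a human to check it, but it would at least have caught the non-existent cases in Ayinde before they reached the court.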

Link to our Insolvency Bot: https://fastdatascience.com/insolvency


Is this a problem with using LLMs trained on American English for UK law?

We know that US content dominates the training sets of LLMs like ChatGPT, and LLMs output text almost exclusively in American spelling and vocabulary. I have noticed that UK legal questions often result in US-centric responses. However, the problem of generative AI making fake citations isn’t confined to the UK. Dame Victoria Sharp also mentions the US case Mata v Avianca Inc., where:

Two lawyers… used ChatGPT – a large language model AI – to identify relevant caselaw. One prompted the tool to draft a court submission, which they submitted verbatim on behalf of their client. However, unbeknownst to them, the AI-generated legal analysis was faulty and contained fictional citations… …the AI output was entirely fabricated, falsely attributing nonsensical opinions to real judges and embellished with further false citations and docket numbers held by actual cases irrelevant to the matter at hand….

What did the fake citations look like?

Dame Victoria Sharp reproduced a passage from one of the counsel’s submissions to the Court:

Moreover, in R (on the application of Ibrahim) v Waltham Forest LBC [2019] EWHC 1873 (Admin), the court quashed a local authority decision due to its failure to properly consider the applicant’s medical needs, underscoring the necessity for careful evaluation of such evidence in homelessness determinations. The Respondent’s failure to consider the Appellant’s medical conditions in their entirety, despite being presented with comprehensive medical documentation, renders their decision procedurally improper and irrational.

The Appellant’s situation mirrors the facts in R (on the application of H) v Ealing LBC [2021] EWHC 939 (Admin), where the court found the local authority’s failure to provide interim accommodation irrational in light of the applicant’s vulnerability and the potential consequences of homelessness. The Respondent’s conduct in this case similarly lacks rational basis and demonstrates a failure to properly exercise its discretion.

The Respondent’s failure to provide a timely response and its refusal to offer interim accommodation have denied the Appellant a fair opportunity to secure his rights under the homelessness legislation. This breach is further highlighted in R (on the application of KN) v Barnet LBC [2020] EWHC 1066 (Admin), where the court held that procedural fairness includes timely decision-making and the provision of necessary accommodations during the review process. The Respondent’s failure to adhere to these principles constitutes a breach of the duty to act fairly.

The Appellant’s case further aligns with the principles set out in R (on the application of Balogun) v LB Lambeth [2020] EWCA Civ 1442, where the Court of Appeal emphasized that local authorities must ensure fair treatment of applicants in the homelessness review process. The Respondent’s conduct in failing to provide interim accommodation or a timely decision breaches this standard of fairness.

The four cases listed above do not exist. But they look completely genuine: I had no way of knowing that they were not real until I searched for them online.

What are we working on now?

We are now working on expanding the Insolvency Bot to cover more UK case law and several other European jurisdictions, in an ambitious international project.

You can read our original paper here:

  • Marton Ribary, Paul Krause, Miklos Orban, Eugenio Vaccari, Thomas Wood, Prompt Engineering and Provision of Context in Domain Specific Use of GPT, Frontiers in Artificial Intelligence and Applications 379: Legal Knowledge and Information Systems, 2023. https://doi.org/10.3233/FAIA230979

And you can try the Insolvency Bot here: https://fastdatascience.com/insolvency

References

  1. Robert Booth, “High court tells UK lawyers to stop misuse of AI after fake case-law citations”, The Guardian, 2025
  2. Dame Victoria Sharp’s judgment: Frederick Ayinde, R (on the application of) v The London Borough of Haringey [2025] EWHC 1040 (Admin)
  3. Marton Ribary, Paul Krause, Miklos Orban, Eugenio Vaccari and Thomas Wood, “Prompt Engineering and Provision of Context in Domain Specific Use of GPT”, Legal Knowledge and Information Systems, IOS Press, 2023, pp. 305–310
  4. Pro Health Solutions Ltd Trade Mark Application, BL O/0559/25
  5. Barton v Wright Hassall LLP [2018] UKSC 12, [2018] 1 WLR 1119, UKSC/2016/0136
