Can I use AI in court?

Thomas Wood

When can lawyers, litigants in person, and expert witnesses use AI in court documents?

In the last few years, cases have been reported in the UK, the USA, Canada, Ireland and other jurisdictions where documents submitted to a court were drafted with generative AI tools such as ChatGPT. This has wasted court time, led to submissions being rejected, and even changed costs awards.

In the UK, this problem has been most acute for litigants in person (LIPs). In the past, legal aid allowed many people to obtain legal representation paid for by the state, but eligibility has tightened and legal aid is now out of reach for many litigants. For example, a person with a household disposable income of £22,325 is excluded from legal aid in the magistrates' courts. It is not surprising that a person entering court proceedings, who cannot afford a lawyer and perhaps feels intimidated by legal jargon, would resort to ChatGPT.

Many cases have also been reported where solicitors, barristers or expert witnesses used AI to prepare material for a court, such as expert witness reports, and that material contained obvious hallucinations. Often the story is of a judge picking up on fake citations.

AI hallucinations, semantic leakage, and bias

Large language models are notorious for hallucinations. For example, they will very confidently cite non-existent laws in a way that seems perfectly plausible.

If you ask GPT 4o to complete the sentence, “Her favourite colour is yellow. She presented at the emergency room complaining of a high temperature, headache, being sick, and aches and pains”, it will output “jaundice” as the most likely diagnosis: the favourite-colour information has “leaked” into the clinical assessment. This problem, which has been termed “semantic leakage”, is pervasive and hard to eliminate.[7, 8]
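You can try this kind of probe yourself. Here is a minimal sketch using the OpenAI Python client; the prompt wording and choice of model are my own assumptions rather than a prescribed experimental setup:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

prompt = (
    "Complete the sentence with the most likely diagnosis. "
    "Her favourite colour is yellow. She presented at the emergency room "
    "complaining of a high temperature, headache, being sick, and aches "
    "and pains. The most likely diagnosis is"
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model; record the exact version you use
    messages=[{"role": "user", "content": prompt}],
    temperature=0,   # reduce randomness so runs are comparable
)
print(response.choices[0].message.content)

# Re-run with the favourite-colour sentence deleted: if the diagnosis
# changes, the irrelevant detail was leaking into the model's answer.
```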

You can also see the biases inherent in an AI model by giving it sentences to complete and switching “he” for “she”. For example, GPT 3.5 is 87% likely to complete the sentence “She works at the hospital as a” with the word “nurse”, while the equivalent sentence with “he” yields “doctor” as the most likely completion.

[Figure: next-token probability distributions for the inputs 'She works at the hospital as a' and 'He works at the hospital as a' (GPT 3)]
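The figure above was produced by inspecting the model's next-token probabilities. Here is a minimal sketch of the same kind of probe, assuming a completion-style model that exposes log-probabilities (the choice of gpt-3.5-turbo-instruct is an assumption; the original experiment used GPT 3):

```python
import math

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def next_token_probs(prefix: str, model: str = "gpt-3.5-turbo-instruct") -> dict:
    """Return the probabilities of the most likely next tokens after `prefix`."""
    response = client.completions.create(
        model=model,
        prompt=prefix,
        max_tokens=1,  # we only care about the single next token
        logprobs=5,    # ask for the top 5 candidates with log-probabilities
    )
    top = response.choices[0].logprobs.top_logprobs[0]
    return {token: math.exp(logprob) for token, logprob in top.items()}

for prefix in ("She works at the hospital as a",
               "He works at the hospital as a"):
    print(prefix, "->", next_token_probs(prefix))
```

If the two distributions differ markedly, with "nurse" dominating one and "doctor" the other, you have measured the bias directly.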

Anyone using a large language model for a legal purpose must be aware of its tendency towards hidden bias, semantic leakage, and hallucinations. These phenomena are often not easy to spot.

Is there any official guidance on using AI for court submissions?

It is only recently that we have seen the appearance of guidance on the use of AI for legal proceedings. In the UK we have the Artificial Intelligence (AI) Guidance for Judicial Office Holders, published in October 2025, which attempts to lay some ground rules for when and how AI may be used.[1]

The guidance makes interesting reading, although it doesn’t cover expert witnesses specifically. Expert witnesses have their own guidance (Civil Procedure Rule 35 and Practice Direction 35). CPR 35 has not yet been updated to address generative AI, but it does make clear that an expert witness is responsible for the content of their report and must provide an objective, unbiased opinion. I can see how these conditions would be hard to meet if you generated a report using GPT.

I found the final page of the Judicial Guidance very interesting reading, giving examples of tasks AI can and cannot sensibly be used for, and indicators that a text was written with AI. In summary:

Tasks AI is potentially suitable for:

  • Summarising large bodies of text. As with any summary, care needs to be taken to ensure the summary is accurate.
  • Writing presentations, e.g. providing suggestions for topics to cover.
  • Administrative tasks, including composing, summarising and prioritising emails, transcribing and summarising meetings, and composing memoranda.

Tasks not recommended:

  • Legal research: AI tools are a poor way of conducting research to find new information you cannot verify independently. They may be useful as a way to be reminded of material you would recognise as correct, although final material should always be checked against maintained authoritative legal sources.
  • Legal analysis: the current public AI chatbots do not produce convincing analysis or reasoning.

Indications that work may have been produced by AI:

  • References to cases that do not sound familiar, or have unfamiliar citations (sometimes from the US),
  • Parties citing different bodies of case law in relation to the same legal issues,
  • Submissions that do not accord with your general understanding of the law in the area,
  • Submissions that use American spelling or refer to overseas cases,
  • Content that (superficially at least) appears to be highly persuasive and well written, but on closer inspection contains obvious substantive errors, and
  • The accidental inclusion of an AI prompt, or the retention of a ‘prompt rejection’, such as “as an AI language model, I can’t …”

(Source: Courts and Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders[1])

Interestingly, some of the symptoms of AI-generated text listed here tally with Wikipedia’s very comprehensive list of the “signs of AI writing”.

In Canada, the Federal Court published a notice in 2024 stating that use of AI should be declared with a statement in English or French such as “Artificial intelligence (AI) was used to generate content in this document at paragraphs 20-30.”[2]

England and Wales

  • Ayinde v Haringey [2025] EWHC 1383 (Admin): The claimant’s barrister made submissions to the court containing false citations.[6] Because of the time and costs wasted on the hallucinations in the documents, the court reduced the claimant’s costs award from £20,000 to £6,500. The case attracted a lot of attention as one of the first times that AI caused a problem of this magnitude in an English case.
  • Barton v Wright Hassall LLP [2018] UKSC 12, [2018] 1 WLR 1119: A litigant in person used ChatGPT to draft submissions in an Intellectual Property Office appeal, and cited fake cases. The decision, citing Barton, stated that “an unrepresented person is still under a duty not to mislead the court”.[5]
  • A solicitor in the Technology and Construction Court insisted that an expert witness use an AI-generated report in a case. Mr Justice Waksman, the judge in charge of the Technology and Construction Court, said: ‘That to my mind is a gross breach of duty on the part of the solicitor.’

Ireland

  • Reddan v An Bord Pleanála (High Court, 2025): A litigant in person made a submission to the court containing a legal term that is not used in Ireland, but looked Scottish or American. The judge said, “This sounds like something that derived from an artificial intelligence source. It has all the hallmarks of ChatGPT, or some similar AI tool.”[3]

USA

  • Kohls et al v. Ellison et al, No. 0:2024cv03754 (D. Minn. 2025): An expert witness filed a declaration containing AI-generated citations to non-existent academic sources. The court excluded the declaration, finding that the fabricated citations undermined the expert’s credibility.[4]

Germany

This problem is not confined to the English-speaking world.

  • July 2, 2025 - 312 F 130/25: In Cologne, Germany, a lawyer submitted a legal document to the family court that was clearly written with AI. The Cologne District Court called this a Berufsrechtsverstoß (a breach of professional conduct rules), going on to add:

“Die weiteren von dem Antragsgegnervertreter im Schriftsatz vom 30.06.2025 genannten Voraussetzungen stammen nicht aus der zitierten Entscheidung und sind offenbar mittels künstlicher Intelligenz generiert und frei erfunden.”

(“The further conditions mentioned by the respondent’s representative in the brief of 30 June 2025 do not originate from the cited decision and are apparently generated by artificial intelligence and completely fabricated.”)

Internationally

Matthew Lee, a barrister from Doughty Street Chambers in London, is currently working on a list collecting AI-generated hallucinations in court documents from around the world.

Given all the pitfalls detailed above, as well as the emerging official guidance, we can start to put together some ideas for how to work with AI productively in a legal context.

First of all, ask yourself if you really need to use AI. Whether it’s an expert witness report or a legal submission, every word must be accurate and you cannot allow hallucinations. I would personally find it easier to write a sentence myself and ensure that every word is accurate and correct rather than to generate the sentence with AI and then have to check it after the fact. I would be worried that I would have missed an important detail.

You need to ensure that you have permission from the court, your instructing solicitor, or any other relevant parties before proceeding to use AI. It would be unfortunate to use AI and then find out after the fact that it is not allowed.

You should check that the task you’re using AI for is an appropriate one. The UK’s Judicial Guidance[1] from October 2025 suggests that AI is suitable for clerical tasks such as document summarisation, but not for tasks that call on professional expertise.

Finally, any use of AI should be declared. You should record the prompts, the outputs, and the model version.
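A simple way to do this is to append every interaction to an audit log as you work. Below is a minimal sketch; the file name and record fields are my own choices rather than any mandated format:

```python
import datetime
import json
import pathlib

LOG_FILE = pathlib.Path("ai_usage_log.jsonl")  # hypothetical file name

def log_ai_use(prompt: str, output: str, model: str, purpose: str) -> None:
    """Append one AI interaction to a JSON Lines audit log, so that prompts,
    outputs and the model version can be declared and reproduced if asked."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,      # e.g. "gpt-4o", including the exact version
        "purpose": purpose,  # e.g. "summarise bundle of correspondence"
        "prompt": prompt,
        "output": output,
    }
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record, ensure_ascii=False) + "\n")
```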

Some guidance that I found online said that you should also make sure you understand how an AI tool works before you use it in any legal context. I disagree. We don’t require everybody who has a driving licence to understand the inner workings of a combustion engine, and if we needed everybody who used ChatGPT in a professional setting to understand how it works, then very few people would be able to use it. Moreover, a lot of large language models are closed source and the precise details of how they work are not public.

There are technical strategies to reduce the frequency of AI hallucinations, such as prepending relevant legal citations and context to the prompt. This is the approach we have taken in the Insolvency Bot, a question-answering tool that aims to provide correct citations from English and Welsh statute and case law around insolvency.[9] If you have a domain-specific AI tool like this, you could use it, but you will still need to check the output for hallucinations.
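To illustrate the general technique (not the Insolvency Bot’s actual implementation), here is a minimal sketch in which retrieved legal passages are prepended to the prompt and the model is told to answer only from them; `retrieve_passages` is a hypothetical stand-in for a real retrieval step:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def retrieve_passages(question: str) -> list[str]:
    """Hypothetical retrieval step; a real system would search a maintained
    database of statute and case law for passages relevant to the question."""
    return [
        "Insolvency Act 1986, s. 123(1)(e): a company is deemed unable to pay "
        "its debts if it is proved to the satisfaction of the court that the "
        "company is unable to pay its debts as they fall due.",
    ]

def answer_with_context(question: str) -> str:
    # Prepend the retrieved sources to the question and constrain the model.
    context = "\n\n".join(retrieve_passages(question))
    prompt = (
        "Answer the question using ONLY the legal sources below. "
        "Cite a source for every proposition, and say you do not know "
        "if the sources do not cover the question.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",  # assumed model
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return response.choices[0].message.content

print(answer_with_context("When is a company deemed unable to pay its debts?"))
```

Grounding the model in verifiable sources reduces, but does not eliminate, hallucinations, which is why the final check remains a human one.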

Checklist for using AI in court submissions

Here is a quick checklist for how you can use AI in a legal context:

☑ You are using AI for an administrative task, such as summarisation, rather than analysis that requires your expertise (this point comes from the UK Judicial Guidance[1]).

☑ No sensitive data is being sent out of the jurisdiction.

☑ Data is not being stored by a third party company.

☑ You are complying with all relevant privacy laws e.g. GDPR, HIPAA.

☑ You have saved all prompts and responses as well as the model version (e.g. GPT 4o) so you can reproduce them if asked.

☑ You have written permission from the instructing solicitor, as well as the court if applicable.

☑ You have verified all AI output and checked for hallucinations, fictitious citations, references to legal concepts from US jurisdictions, etc.
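For the last point, it can help to mechanically extract every citation from a draft so that each one can be verified by hand against an authoritative source such as Find Case Law (caselaw.nationalarchives.gov.uk). A rough sketch, using a deliberately simplified pattern for neutral citations in England and Wales:

```python
import re

# Simplified pattern for neutral citations such as
# "[2025] EWHC 1383 (Admin)" or "[2018] UKSC 12".
NEUTRAL_CITATION = re.compile(
    r"\[\d{4}\]\s+(?:UKSC|UKPC|EWCA|EWHC)\s+\d+(?:\s+\([A-Za-z]+\))?"
)

def extract_citations(text: str) -> list[str]:
    """List every neutral citation in a draft so each can be checked by hand."""
    return NEUTRAL_CITATION.findall(text)

draft = "As held in Ayinde v Haringey [2025] EWHC 1383 (Admin), ..."
print(extract_citations(draft))  # ['[2025] EWHC 1383 (Admin)']
```

A script like this only finds the citations; whether each case exists and says what the draft claims still has to be checked manually.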

In May 2026, I will be presenting at Ireland’s Expert Witness conference in Dublin, on The Role of Artificial Intelligence in Expert Investigations and the Preparation of Reports. The talk will cover:

  • Where AI Can Safely Assist in Expert Work
  • Risks of Hallucinations and Inaccuracies in AI Outputs
  • Maintaining Independence and Expert Duty to the Court
  • Confidentiality, Data Protection & Appropriate Use of Tools
  • Transparency and Disclosure of AI Use in Reports

References

  1. Courts and Tribunals Judiciary, Artificial Intelligence (AI) Guidance for Judicial Office Holders, https://www.judiciary.uk/wp-content/uploads/2025/10/Artificial-Intelligence-AI-Guidance-for-Judicial-Office-Holders-2.pdf
  2. Canadian Federal Court, NOTICE TO THE PARTIES AND THE PROFESSION, The Use of Artificial Intelligence in Court Proceedings, May 7, 2024, https://www.fct-cf.ca/Content/assets/pdf/base/FC-Updated-AI-Notice-EN.pdf
  3. Reddan v An Bord Pleanála (High Court of Ireland, 2025).
  4. Kohls et al v. Ellison et al, No. 0:2024cv03754 (D. Minn. 2025).
  5. Barton v Wright Hassall LLP [2018] UKSC 12, [2018] 1 WLR 1119.
  6. Ayinde v Haringey [2025] EWHC 1383 (Admin)
  7. Gonen, Hila, et al. “Does Liking Yellow Imply Driving a School Bus? Semantic Leakage in Language Models.” Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). 2025.
  8. Omar, Mahmud, Reem Agbareia, Alon Gorenshtein, Alexander W. Charney, Girish N. Nadkarni, Benjamin S. Glicksberg, and Eyal Klang. “Large Language Models Chase Zebras: Salient Cues Overrule Base Rates in Clinical Diagnosis.” Available at SSRN: https://ssrn.com/abstract=5988435 or http://dx.doi.org/10.2139/ssrn.5988435
  9. Marton Ribary, Thomas Wood, Miklos Orban, Eugenio Vaccari, Paul Krause, A generative AI-based legal advice tool for small businesses in distress, Journal of International and Comparative Law, Vol 12.2, 2025.
