
Author: Eimear McCann
We spend a lot of time discussing the impact of AI on lawyers, in-house legal teams, and law firms, and yet, how often do we stop to consider the impact of AI on the judiciary? Are we already starting to see subtle shifts in both the input and output elements of the judicial role?
To a degree, this may feel like just another technological transition. Judges have adapted before; they’ve had to grapple with electronic bundles for the first time and learn to swiftly manage the complexity of remote hearings. The challenge that AI presents is a little different. It introduces an inherent tension that places the judiciary in a uniquely complex position.
Judges must now be alert to the risks of AI-generated submissions, such as hallucinated cases and fabricated citations, while simultaneously being encouraged to explore how AI tools (such as Copilot Chat) could assist them in their own work. In other words, they must act as both gatekeepers and users of AI, a role that demands a new kind of digital literacy, alongside the traditional skills of legal analysis and judgement.
The recent case of Ayinde v London Borough of Haringey [2025] EWHC 1040 (Admin) highlights a new risk for practitioners, and a new challenge for the judiciary. In this judicial review case, the council sought wasted costs against the claimant’s solicitors and counsel, on the basis that fictitious legal case citations had been included in the claimant’s submissions. The inclusion of these cases was found to be improper and unreasonable, and although the use of AI was not initially proven in court, it was indisputable that “fake cases” had appeared in the pleadings.
Not only was it made clear that both counsel and solicitors should take responsibility for the factual accuracy of any documents submitted to the court, but it was further emphasised that misleading the court in such a way amounted to negligence.
Subsequent to this, the President of the King’s Bench Division ordered a hearing to be listed to consider any steps the Court should take, including the initiation of proceedings for contempt of Court pursuant to CPR 81.6. Contempt is, of course, a serious matter, attracting fines and even imprisonment. This case is not unique, with a plethora of cases emerging in jurisdictions across the globe, including the US, Canada and Australia.
We must ask ourselves why this keeps happening, and why lessons aren't being learned. Perhaps part of the problem with AI is that all conversations tend to fall into a binary (or circular) trap. We have decided that either AI will soon take over the legal profession, or that it will quietly fade away in due course. The reality, of course, is much more nuanced. The evolution of AI in litigation is likely to look very different from what we expect. Judges may be much more heavily impacted than we anticipate, and lawyers may embrace AI more willingly than anyone could have predicted.
As things stand, an incredible sense of confidence around the capacity and veracity of AI pervades small segments of the profession. As we become more and more comfortable with both the concept of AI and our engagement with it, we risk permitting ourselves to be persuaded by a tool that is now very familiar. You may have heard this phenomenon described as "verification drift": our misplaced trust may flow from GenAI's tone of authority and its clear articulation of facts. This appears to run contrary to the usual assumption that lawyers are risk averse and infinitely cautious. Perhaps, though, this has nothing to do with professionalism and competence, and more to do with the fact that we are immersed in significant behavioural change, increasingly accustomed to outsourcing our thinking and our to-do lists to new technologies.
An evident solution for lawyers is to avoid publicly available GenAI tools, which scrape content from all corners of the internet. Instead, the focus should be on legal-specific AI tools, which work from domain-specific databases of trusted data, supported by retrieval-augmented generation. This removes the bulk of the risk; however, review by the human eye will still be needed. As illustrated by the Ayinde case (and by others), if this human review is missed by both solicitor and barrister, responsibility falls to the court to scrutinise, an additional and tedious burden to place on judges. No doubt clearer guidance will emerge as more of these cases inevitably find their way to the courtroom.
On a wider scale though, we also have real opportunities for AI to make fragmented systems more cohesive. At present, the litigation landscape is patchy and uneven, with disparate bundling rules across various courts, differing electronic systems, and procedural inconsistencies which create friction for all involved (the Greener Litigation Pledge is currently working on a project to tackle elements of this*). AI has the potential to help standardise and streamline these processes, providing tools that assist judges in navigating complex procedural rules, drafting judgments, or analysing large volumes of material quickly and accurately.
The simplicity of the refreshed judicial guidance on AI reflects this to a degree. While it cautions judges to critically evaluate AI-generated outputs and to avoid relying on AI for legal reasoning, it also introduces tools like Copilot Chat, designed to help streamline information securely within the eJudiciary system. This same guidance is also reflective of the status quo: we are all learning, and we cannot commit just yet to anything beyond skeleton guidelines. To do so would ignore the nuances we face.
Ultimately, what will unite everyone within the legal profession is education and training. The risks are real, but the benefits cannot be denied. We need to feel empowered to engage with AI, but we need the education and experience to fully scrutinise its outputs.
The real question then is who is responsible for this huge education piece? Perhaps we need to ask ChatGPT…
*If you would like to find out more about the court bundling project, as part of the Greener Litigation Pledge, please get in touch, eimearmccann@trialview.com or visit the Greener Litigation website.
Eimear is a former lawyer, and Commercial Director of TrialView, an AI litigation platform. She also lectures in legal tech and innovation – with a specific focus on AI – at BPP University and is a member of the Law Society Technology & Law Committee.