Artificial Intelligence (AI) – Guidance for Judicial Office Holders

New guidance has been outlined for judges to help them navigate the use of AI in their roles.

3 January 2024

Artificial intelligence

New guidance was published on 12 December 2023 by the Courts and Tribunals Judiciary: Artificial Intelligence (AI) – Guidance for Judicial Office Holders. It is intended to assist and instruct judges in their use of AI.

The purpose of the guidance is to highlight the risks of AI and to temper reliance on it in the work that judges and their staff undertake. It suggests ways to avoid unreliable and potentially fundamentally flawed judgments.

How can judges use AI?

When employing AI tools to assist with research for judgments, judges are required to observe principles of confidentiality and prudence. As always, they remain personally responsible for the rulings they deliver.

Judges are also specifically instructed to supervise work undertaken by their clerks or assistants, who might be using AI tools themselves in the production of judgments.

The guidance envisages AI as a secondary tool: a backup to the paid services provided by more readily available, traditional legal research platforms and databases.

Additionally, the guidance prohibits judges from entering confidential or private data into public AI platforms, both to protect the integrity of the judicial process and to mitigate the risk of data protection breaches.

Judges have also been warned against using AI to verify facts: its answers may be incomplete, inaccurate, or lack proper authority.

The guidance therefore emphasises judicial caution in an environment of rapidly evolving AI technology, underscoring that AI can never relieve judges of their obligation to validate their own decision-making.

The guidance lists particular instances of AI application that judges should be wary of:

  • Summaries of large bodies of text produced by AI tools must be checked for accuracy of content;
  • AI tools can help select topics for presentation within a judgment, but judges should give independent thought to what is included or excluded;
  • AI tools might assist with legal research, judicial analysis, and reasoning, but given the present limitations of the technology, no judge should rely on them for these purposes; and
  • Judges should understand and be alert to concerning phenomena such as deepfakes, forgeries, and hallucinated legal authorities. They should also appreciate that certain classes of litigants, such as those without qualified and experienced legal representation, may have had nothing but AI chatbots to refer to in drafting their cases and submissions.

In summary, the guidance establishes a touchstone for judges in an ever-changing technological arena that many of them will not be especially familiar with. They are now instructed to approach AI with caution, rather than using it in a potentially reckless manner that could give rise to unforeseen and undesirable outcomes as they shape the law that we must all adhere to.