The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already affecting the document review and production process, legal research, and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes. However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk with AI tools based on large language models, referred to as “hallucination”).[1] While, in this particular case, no further consequences appeared to follow (in light of the fact that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]

Globally, countries are at varying stages in enacting rules governing AI, signalling a range of approaches to the technology’s regulation across jurisdictions.[5] In the UK, the government published a white paper which proposes leaving it to individual regulators to lay down sector-specific rules or guidance regarding AI, subject to certain overarching guiding principles.[6] For example, the CMA published an initial report on AI foundation models in September 2023.[7] In October 2023, the Bank of England and the Financial Conduct Authority (“FCA”) released a Feedback Statement[8] concerning AI and machine learning, which summarises responses received to a previous Discussion Paper,[9] but the regulators did not make any policy proposals at this stage, nor did they signal how they were considering clarifying, designing, or implementing future regulatory proposals on this topic. Another development was the publication of the G7 voluntary code of conduct for developers of advanced AI systems.[10]

Aside from regulatory developments, AI raises a number of difficult legal questions which will need to be explored over the forthcoming years.

So far, a significant proportion of cases involving AI have centred around privacy, data protection and intellectual property issues. For example, in October 2023, the First-tier Tribunal decided the appeal of Clearview AI, a US provider of facial recognition software, against a fine issued by the Information Commissioner for breaches under the EU and UK data protection regimes.[11] While the tribunal overturned the fine on the basis of a narrow exception that meant that the Commissioner did not have jurisdiction, a number of its findings illustrated the broad scope of the data protection regimes and that the maintenance and searching of databases could amount to (potentially unlawful) processing of data. In December 2023, the High Court refused to grant reverse summary judgment against Getty in respect of its claim against Stability AI, an open-source generative AI company, in connection with various IP infringement claims based on the allegation that, amongst other things, Stability “scraped” images from Getty Images’ websites, without Getty’s consent, and used those images unlawfully as input to train and develop Stable Diffusion.[12] Another question that has garnered significant attention relates to the patenting of an invention created by an AI system (and, similarly, the copyright in content created by AI). In a recent judgment, the UK Supreme Court held that an ‘inventor’ within the meaning of the Patents Act 1977 must be a natural person (i.e., that an AI system cannot qualify as an ‘inventor’ for these purposes), and that ownership of an AI system does not entitle a person to apply for and obtain a patent for any technical development made by such an AI system acting autonomously.[13]

More generally, AI raises a number of other complicated issues regarding the assignment of responsibility and liability and, from a claimant’s perspective, the establishment of the elements of various causes of action. In the EU, legislators are seeking to address these issues through the proposed AI Liability Directive,[14] which, among other things, seeks to clarify how to prove fault on the part of certain AI providers and/or users, and introduces a rebuttable presumption of a causal link in the case of fault, as well as certain disclosure rules. With regard to the UK, however, the government’s white paper suggests that there will be no immediate legislative intervention to address accountability across the AI life cycle, so the development of adequate rules might be left to the courts. The same might be true of issues of jurisdiction and evidentiary considerations that arise in the context of AI-related claims.


The following post was originally included as part of our recently published memorandum “Looking Forward Into 2024 and Beyond: Seven Trends in UK Disputes”.

[1] Harber v Commissioners for His Majesty’s Revenue and Customs [2023] UKFTT 1007 (TC).

[2] Ibid, at para. 3.

[3] Ibid, at para. 5; see also paras. 23 and 24.

[4] Courts and Tribunals Judiciary, Artificial Intelligence (AI): Guidance for Judicial Office Holders (12 December 2023).

[5] For example, in August 2023, China formally enacted the world’s first legislation targeting generative AI (see an open-source translation of the law: Interim Measures for the Management of Generative Artificial Intelligence Services); the EU’s AI Act was adopted by the European Parliament in June 2023, and its precise wording is currently being negotiated between the Parliament, Council and EU Member States (EU AI Act: First Regulation on Artificial Intelligence (8 June 2023)); and on 30 October 2023, the US Administration issued a landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, directing the establishment of new standards for AI safety and security (see Cleary Gottlieb, White House Unveils Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (15 November 2023)).

[6] UK Government Policy Paper, AI regulation: a pro-innovation approach (29 March 2023).

[7] See Cleary Gottlieb, CMA Publishes Initial Report on AI Foundation Models and Guiding Principles for Firms (20 September 2023).

[8] FS2/23 – Artificial Intelligence and Machine Learning (26 October 2023). See also Cleary Gottlieb, Artificial Intelligence in the Financial Services Sector: UK Regulators Publish Feedback Statement (30 October 2023).

[9] DP5/22 – Artificial Intelligence and Machine Learning (11 October 2022).

[10] See Cleary Gottlieb, G7 Leaders Publish AI Code of Conduct: A Common Thread in the Patchwork of Emerging AI Regulations Globally? (1 November 2023).

[11] Clearview AI Inc v The Information Commissioner [2023] UKFTT 819.

[12] Getty Images (US) Inc and Ors v Stability AI Ltd [2023] EWHC 3090 (Ch).

[13] Thaler v Comptroller-General of Patents, Designs and Trade Marks [2023] UKSC 49.

[14] See the European Commission’s proposal: Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive). See also Cleary Gottlieb, Modernising Liability Rules for Products and AI in the Digital Age (25 April 2023).