As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025 by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.

Continue Reading: New York’s RAISE Act vs. California’s TFAIA: What Companies Need to Know

For more insights and analysis from Cleary lawyers on policy and regulatory developments from a legal perspective, visit What to Expect From a Second Trump Administration.

On December 11, 2025, President Donald Trump signed an executive order titled Establishing A National Policy Framework For Artificial Intelligence (the “Order”).[1] The Order’s policy objective is to “enhance the United States’ global AI dominance through a minimally burdensome national policy framework for AI”[2] and comes after Congress, earlier this year, considered but did not advance federal legislation that would have preempted state AI regulation. The Order justifies federal intervention on three grounds:

Continue Reading: President Trump Signs Executive Order Seeking to Preempt State AI Regulation

This article was authored by Daniel Ilan, Rahul Mukhi, Prudence Buckland, and Melissa Faragasso from Cleary Gottlieb, and Brian Lichter and Elijah Seymour from Stroz Friedberg, a LevelBlue company.

Recent disclosures by Anthropic and OpenAI highlight a pivotal shift in the cyber threat landscape: AI is no longer merely a tool that aids attackers; in some cases, it has become the attacker itself. Together, these incidents carry immediate implications for corporate governance, contracting and security programs as companies integrate AI with their business systems. Below, we explain how these attacks were orchestrated and what steps businesses should consider given the rising cyber risks associated with the adoption of AI.

Continue Reading: AI-Enabled Cyber Intrusions: What Two Recent Incidents Reveal for Corporate Counsel

On September 29, 2025, Governor Gavin Newsom signed the Transparency in Frontier Artificial Intelligence Act (TFAIA, SB 53 or the Act),[1] establishing a comprehensive framework for transparency, safety and accountability in the development and deployment of the most advanced artificial intelligence models. Building upon existing California laws targeting AI, such as AB 2013,[2] the Act, which takes effect January 1, 2026 and imposes penalties of up to $1 million per violation, creates immediate compliance obligations for developers of the most powerful frontier models.

Continue Reading: California Enacts Landmark AI Safety Law But With Very Narrow Applicability

Last week, the New York Department of Financial Services (“DFS”) issued guidance addressed to executives and information security personnel of entities regulated by DFS to assist them in understanding and assessing the cybersecurity risks associated with the use of artificial intelligence (“AI”) and in implementing appropriate controls to mitigate those risks (the “Guidance”).[1] In particular, and in response to inquiries received by DFS regarding AI’s impact on cyber risk, the Guidance is intended to explain how the framework set forth in DFS’ Cybersecurity Regulation (23 NYCRR Part 500) should be used to assess and address such risks.

Continue Reading: New York Department of Financial Services Issues Guidance on Cybersecurity Risks Arising from Artificial Intelligence

The rapid development of AI is introducing new opportunities and challenges to dispute resolution. AI is already impacting the document review and production process, legal research, and the drafting of court submissions, and its use is expected to expand into other areas, including predicting case outcomes and adjudicating disputes.

However, the use of AI in litigation also carries risks, as highlighted by a recent First-tier Tribunal (Tax) decision in which an appellant had sought to rely on precedent authorities that were, in fact, fabricated by AI (a known risk with AI tools based on large language models, referred to as hallucination).[1] While, in this particular case, no further consequences seemed to follow (in light of the fact that the appellant, a litigant in person, “had been unaware that the AI cases were not genuine and that she did not know how to check their validity”[2]), the Tribunal did highlight that “providing authorities which are not genuine and asking a court or tribunal to rely on them is a serious and important issue”,[3] suggesting that litigants may incur certain risks by relying on authorities suggested by AI unless these are independently verified. On 12 December 2023, a group of senior judges, including the Master of the Rolls and the Lady Chief Justice, issued guidance on AI for judicial office holders, which, amongst other things, discourages the use of AI for legal research and analysis and highlights the risk of AI being relied on by litigants to provide legal advice and/or to produce evidence.[4]

Continue Reading: Nexus of AI, AI Regulation and Dispute Resolution

On July 26, 2023, the Securities and Exchange Commission (“SEC”) proposed new rules targeting the use of predictive data analytics and artificial intelligence (“AI”) by registered investment advisers (“RIAs”) and broker-dealers.[1] The proposed rules focus on the potential for conflicts of interest and the possibility that newer, more complex analytics models (including those using AI) might optimize decision making for RIAs and broker-dealers by placing those firms’ interests above the interests of their clients.[2] The proposed rules would require RIAs and broker-dealers to: (i) evaluate whether their use of technologies “that optimize for, predict, forecast or direct investment-related behaviors or outcomes” creates such a conflict of interest, and (ii) either stop using or address the effects of tools that place a firm’s interests before the interests of clients. RIAs and broker-dealers would also be required to adopt policies to ensure compliance with the proposed rules.[3]

Continue Reading: SEC Proposes Rules Limiting the Use of Artificial Intelligence by Registered Investment Advisers and Broker-Dealers