As states continue to grapple with establishing regulatory frameworks for the most powerful artificial intelligence (“AI”) systems, New York has joined California in targeting frontier AI models with the Responsible AI Safety and Education Act (the “RAISE Act” or the “Act”).[1] Signed into law on December 19, 2025, by Governor Hochul, the Act creates a comprehensive regulatory framework for developers of the most advanced AI systems, marking New York’s entry into the vanguard of state AI safety regulation.
The final version of the Act[2] is narrower than the version passed by the legislature in June, reflecting negotiations that more closely align the Act with California’s SB 53 (the “TFAIA”), which took effect January 1, 2026. However, while the Act shares California’s focus on transparency and safety, it diverges in critical ways, particularly regarding enforcement mechanisms and reporting timelines. Additional chapter amendments (expected to be finalized in early 2026) will further align New York with California by substituting a $500 million revenue threshold for the compute-cost triggers and adjusting reporting timelines, penalties and oversight mechanisms. Below, we discuss the RAISE Act’s requirements at a high level, flagging key distinctions from the TFAIA and anticipated revisions before the law takes effect on January 1, 2027.
Applicability Thresholds and Scope
As enacted, the RAISE Act applies to (1) frontier models meeting certain compute-intensity and compute-cost thresholds and (2) large developers meeting an aggregate compute-spend threshold.
Specifically, the RAISE Act currently defines “frontier model” as an AI model trained using greater than 10^26 computational operations with a compute cost exceeding $100 million, or a model produced through “knowledge distillation”[3], and applies to “large developers,” meaning persons that have trained at least one frontier model (the compute cost of which exceeds $5 million) and have spent over $100 million in aggregate compute costs training frontier models.[4]
However, significant changes are expected that will bring the RAISE Act in line with the applicability thresholds set forth under the TFAIA. While California’s TFAIA is likewise limited to “frontier models” trained using computing power greater than 10^26 operations, the TFAIA distinguishes “large frontier developers” using a revenue threshold: developers that (together with affiliates) had annual gross revenues above $500 million in the preceding year face heightened obligations. The California regime thus layers a compute-based model definition with a revenue-based developer trigger, creating a narrower class of entities subject to more stringent transparency and governance documentation requirements.
Although the RAISE Act, as signed, uses compute-cost thresholds to define covered entities, public reporting suggests that Governor Hochul has secured legislative agreement to replace those provisions with a revenue-based trigger that mirrors California’s approach. Specifically, New York policymakers have publicly signaled an intent to align the “large developer” trigger with California’s $500 million revenue threshold, materially harmonizing coverage with the TFAIA and simplifying compliance for companies operating in both jurisdictions. The revisions would narrow applicability: many emerging AI developers (particularly those attracting substantial venture capital to fund compute-intensive model development) may quickly exceed compute-cost thresholds while generating little or no revenue, and international competitors operating at lower revenue levels could otherwise face disproportionate regulatory burdens under a compute-only framework.
Key Operative Requirements
The RAISE Act imposes three core obligations on large developers:
- Safety and Security Protocols. Before deploying a frontier model, developers must implement a written safety and security protocol similar in nature to the frontier AI framework required under the TFAIA. Specifically, the protocol must consist of documented technical and organizational protocols that (a) specify reasonable protections to reduce the risk of “critical harm”[5], (b) describe reasonable cybersecurity protections against unauthorized access to or misuse of frontier models that could lead to “critical harm”, (c) outline detailed testing procedures and assessment measures to evaluate unreasonable risk of “critical harm” (including how the frontier model could be misused or modified, how it could evade the control of the large developer or user, etc.), (d) state compliance requirements with sufficient specificity to allow for confirmation of adoption and otherwise describe how the developer will comply with the Act and (e) designate senior personnel responsible for ensuring compliance. The protocol must be conspicuously posted (though the posted version may be appropriately redacted) and transmitted to the NY Attorney General and the Division of Homeland Security and Emergency Services (with redactions only where required by federal law) upon request. Frontier model developers must further annually review and, where applicable, modify and republish the protocol to account for changes in model capabilities and industry best practices. Finally, developers are required to implement appropriate safeguards to prevent unreasonable risk of “critical harm” and are prohibited from deploying a frontier model if doing so would create an unreasonable risk of “critical harm” (although this last requirement is anticipated to be removed in the chapter amendments).
- Safety Incident Reporting. The most significant operational difference between the New York and California regimes lies in incident reporting timelines. Under the RAISE Act, large developers must disclose reportable safety incidents[6] to the Division of Homeland Security and Emergency Services within 72 hours of learning of the incident or within 72 hours of learning facts sufficient to establish a reasonable belief that a safety incident has occurred. California’s TFAIA, by contrast, requires frontier developers to report “critical safety incidents” within 15 days of discovery, with a shortened 24-hour window only for incidents posing an imminent risk of death or serious physical injury. New York’s uniform 72-hour requirement thus represents a middle ground (i.e., stricter than California’s standard timeline but more flexible than the 24-hour emergency threshold).
- Recordkeeping. Large developers must record and retain (a) copies of their unredacted safety and security protocol, including records and dates of any updates or revisions, and (b) information on specific tests and test results with sufficient detail for third parties to replicate the testing procedure, in each case for as long as the frontier model is deployed plus five years.
In addition, the Act provides that large developers violate the Act where they “knowingly make false or materially misleading statements or omissions in or regarding documents produced” under the Act and, unless removed by the chapter amendments, requires annual, independent third-party compliance audits with detailed reporting that must also be conspicuously published and provided to regulatory authorities.
Enforcement
In addition to oversight by an AI office to be established within the New York Department of Financial Services, the RAISE Act grants the Attorney General authority to bring civil actions for violations of the Act. Following anticipated chapter amendments, penalties will be capped at $1 million for initial violations and $3 million for repeat offenses (substantially reduced from the $10 million and $30 million figures in the originally signed statute). The Attorney General may also pursue injunctive or declaratory relief. Critically, the Act does not establish a private right of action.
By comparison, California’s TFAIA authorizes the California Attorney General to seek civil penalties up to $1 million per violation, scaled to the severity of the offense, and also contains provisions that empower whistleblowers to bring civil actions for injunctive relief and recovery of attorneys’ fees for violations of their rights.[7]
Key Takeaways
Most businesses, including the vast majority of AI developers, will be relieved that the RAISE Act has narrow applicability. With thresholds targeting only frontier models and anticipated chapter amendments further narrowing coverage, the Act is unlikely to materially impact most organizations’ operations. However, compliance remains a moving target, and businesses must stay abreast of legislative developments (particularly in light of the recently issued Executive Order aimed at state AI law preemption)[8].
For the few businesses that may meet the RAISE Act’s applicability thresholds, the alignment between New York’s and California’s frameworks offers a welcome development in what is already slated to be an otherwise fragmented regulatory environment. Just as state privacy laws have created a challenging patchwork of requirements that businesses have learned to navigate, the harmonization of New York’s revenue threshold with California’s TFAIA represents a step toward more coherent multi-state compliance. However, where requirements diverge (such as New York’s stricter 72-hour incident reporting window compared to California’s 15-day standard), covered entities should draw upon the strategies and infrastructure developed through their privacy compliance programs. The same disciplined approach to documentation, risk assessment and incident response that businesses have refined while managing obligations under state privacy laws and the GDPR can be effectively adapted to address the RAISE Act’s nuanced requirements.
To prepare for compliance:
- Prepare for Threshold Alignment: Businesses should (a) anticipate January amendments replacing New York’s compute-cost thresholds with California’s $500 million revenue standard and (b) conduct threshold analyses to determine whether they will qualify as large frontier developers under the harmonized framework.
- Implement Dual-Compliant Safety Protocols: While awaiting confirmation of New York’s amendments, covered entities should develop safety and security protocols that satisfy both states’ requirements, combining New York’s emphasis on pre-deployment implementation with California’s focus on annual public disclosure and risk assessment reporting.
- Prioritize Incident Response Capabilities: New York’s 72-hour reporting window demands robust incident detection and response systems. Covered entities operating in both jurisdictions should build compliance infrastructure around the stricter New York timeline to ensure dual compliance, including by revising contracts, where relevant, to align third-party vendors’ reporting timelines.
- Account for Enforcement Risk: With penalties up to $3 million for repeat violations, New York’s RAISE Act presents potentially higher financial exposure than California’s framework. Risk management strategies should reflect this disparity, with particular attention to documentation practices and compliance verification to avoid repeat violations.
[1] A copy of the RAISE Act can be accessed here.
[2] This article reflects the RAISE Act as it will be implemented following expected chapter amendments that Governor Hochul and legislative leaders committed to enacting in January 2026, including substituting a $500 million revenue threshold for the compute-cost triggers in the enacted text, reducing enforcement penalties and establishing a Department of Financial Services oversight office.
[3] Defined in the Act as “any supervised learning technique that uses a larger artificial intelligence model or the output of a larger artificial intelligence model to train a smaller artificial intelligence model with similar or equivalent capabilities as the larger artificial intelligence model.”
[4] Notably, the Act applies to frontier models “developed, deployed or operating in whole or in part in New York State”, and exempts accredited colleges and universities conducting academic research and persons that subsequently transfer full intellectual property rights in their frontier models to a third party.
[5] The Act defines “critical harm” to mean the death or serious injury of at least 100 people or at least $1 billion of damages to rights in money or property caused or materially enabled by a large developer’s use, storage, or release of a frontier model, through either of the following: (a) the creation or use of a chemical, biological, radiological, or nuclear weapon; or (b) an AI model engaging in conduct that does both of the following: (i) acts with no meaningful human intervention; and (ii) would, if committed by a human, constitute a crime specified in the penal law that requires intent, recklessness, or gross negligence, or the solicitation or aiding and abetting of such a crime.
[6] The Act defines “safety incident” broadly to include known incidents of critical harm, autonomous model behavior, theft of or unauthorized access to model weights, critical failure of technical controls or unauthorized use of a frontier model.
[7] Notably, the RAISE Act expressly (a) prohibits large developers, or their contractors or subcontractors, from preventing an employee from disclosing or attempting to disclose information to the large developer or the NY Attorney General, if the employee has reasonable cause to believe that the large developer’s activities pose an unreasonable or substantial risk of “critical harm”, regardless of the employer’s compliance with applicable law, and (b) permits an employee to seek injunctive relief for any harms caused by retaliation for such disclosure.
[8] For our Firm’s detailed analysis of the Executive Order, see here.