New York has taken a firm step into AI governance: Governor Kathy Hochul has signed the Responsible AI Safety and Education (RAISE) Act into law, creating new disclosure and safety expectations for advanced AI systems. The move places the state alongside California in shaping transparency rules for large AI developers, and together the two states are forming a bicoastal framework that could influence national AI policy.
However, the law will not take effect right away. State officials have confirmed that lawmakers plan to introduce follow-up amendments intended to better align the RAISE Act with California’s Transparency in Frontier Artificial Intelligence Act (TFAIA). Under the current plan, enforcement will begin on January 1, 2027, one year after California’s law takes effect.
Taken together, the two laws rank among the strongest AI transparency measures in the United States. Their passage comes as federal lawmakers continue to debate whether national rules should override state-level AI regulation.
Who the RAISE Act Applies To
Under the RAISE Act, coverage is limited to so-called “large developers.” These are defined as entities that have trained at least one frontier AI model with more than $5 million in compute costs and have spent over $100 million in total compute on frontier model training. Colleges and universities are exempt when their work is conducted for academic research.
However, that definition is expected to change. According to public reporting, lawmakers and the governor have agreed that the compute-based thresholds will likely be replaced by a revenue-based standard. If adopted, the RAISE Act would mirror California’s approach, which applies to frontier AI developers generating more than $500 million in annual revenue.
The law defines a “frontier model” as an AI system trained using more than 10²⁶ computational operations and costing over $100 million to develop. Models created through knowledge distillation are also included, a distinction not found in California’s statute. Importantly, the RAISE Act applies only to models that are developed, deployed, or operated at least in part within New York.
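To make these thresholds concrete, the sketch below expresses the applicability tests described above as simple checks. It is illustrative only: the class, field, and function names are invented for this example, and the statutory text, together with the pending amendments discussed above, controls in practice.

```python
# Illustrative sketch only: names and structure are invented for this
# example; the statute and pending amendments control in practice.

from dataclasses import dataclass

FRONTIER_OPS_THRESHOLD = 1e26               # more than 10^26 operations
FRONTIER_DEV_COST_THRESHOLD = 100_000_000   # over $100M to develop
MODEL_COMPUTE_COST_THRESHOLD = 5_000_000    # over $5M compute for one model
TOTAL_COMPUTE_SPEND_THRESHOLD = 100_000_000 # over $100M total frontier compute

@dataclass
class Model:
    training_ops: float            # computational operations used in training
    development_cost_usd: float    # total development cost
    compute_cost_usd: float        # compute cost of training this model
    distilled_from_frontier: bool  # produced via knowledge distillation

def is_frontier_model(m: Model) -> bool:
    """Frontier model: trained with >10^26 operations at >$100M cost,
    or (unlike California's statute) distilled from a frontier model."""
    trained_at_scale = (m.training_ops > FRONTIER_OPS_THRESHOLD
                        and m.development_cost_usd > FRONTIER_DEV_COST_THRESHOLD)
    return trained_at_scale or m.distilled_from_frontier

def is_large_developer(models: list[Model], academic_research: bool) -> bool:
    """Large developer: at least one frontier model with >$5M in compute
    costs, plus >$100M total compute spent on frontier training. Colleges
    and universities doing academic research are exempt."""
    if academic_research:
        return False
    frontier = [m for m in models if is_frontier_model(m)]
    trained_qualifying_model = any(
        m.compute_cost_usd > MODEL_COMPUTE_COST_THRESHOLD for m in frontier)
    total_spend = sum(m.compute_cost_usd for m in frontier)
    return trained_qualifying_model and total_spend > TOTAL_COMPUTE_SPEND_THRESHOLD
```

If the revenue-based amendment described above is adopted, the two compute-spend tests in this sketch would give way to an annual-revenue test similar to California’s $500 million standard.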
RAISE Act Transparency and Safety Requirements
At the core of the RAISE Act is a mandate for structured transparency. Before a frontier model is deployed, a written safety and security protocol must be created, retained, and publicly disclosed. That documentation must also be submitted to a state agency.
The protocol must outline how risks of “critical harm” are mitigated. These risks include large-scale loss of life or property damage exceeding $1 billion, when such outcomes are materially enabled by an AI system. The protocol must also describe detailed testing procedures, cybersecurity protections, compliance measures, and internal accountability structures, and it must designate senior personnel as responsible for oversight.
While the law is primarily focused on disclosure, it currently includes substantive obligations as well. Developers must implement safeguards to reduce unreasonable risk, and deployment of models posing such risks is prohibited, although that restriction may be removed in future amendments. Even so, annual review and updating of safety documentation will remain mandatory.
Enforcement, Audits, and Incident Reporting Under the RAISE Act
The RAISE Act introduces several enforcement mechanisms that distinguish it from other state laws. Large developers are required to retain an independent third party each year to audit compliance. These audits must follow recognized best practices, and the resulting reports must be published and submitted to the state.
In addition, safety incidents must be reported within 72 hours of discovery or reasonable belief that an incident has occurred. Covered incidents include unauthorized access to model weights, failures of technical controls, autonomous actions beyond user requests, or unauthorized use of the system.
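As a rough illustration of how the reporting clock works, the hypothetical helper below (all names invented) returns a filing deadline 72 hours after discovery, or reasonable belief, for the incident categories listed above. It is a sketch under those assumptions, not a compliance tool.

```python
# Illustrative sketch only: category labels and names are invented;
# the statute defines what actually counts as a reportable incident.

from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)

COVERED_INCIDENTS = {
    "unauthorized_weight_access",        # unauthorized access to model weights
    "technical_control_failure",         # failure of technical controls
    "autonomous_action_beyond_request",  # autonomous actions beyond user requests
    "unauthorized_use",                  # unauthorized use of the system
}

def report_deadline(category: str, discovered_at: datetime) -> datetime | None:
    """Return the filing deadline for a covered incident, or None if the
    category is not covered. The 72-hour window runs from discovery or
    reasonable belief that an incident occurred."""
    if category not in COVERED_INCIDENTS:
        return None
    return discovered_at + REPORTING_WINDOW

discovered = datetime(2027, 3, 1, 9, 30, tzinfo=timezone.utc)
print(report_deadline("unauthorized_weight_access", discovered))
# -> 2027-03-04 09:30:00+00:00
```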
Enforcement authority is granted to the New York Attorney General. While the signed bill references higher penalties, the governor’s office has stated that future amendments will cap fines at $1 million for a first violation and $3 million for subsequent violations. The law also prevents developers from shifting liability through contracts or corporate restructuring, with courts instructed to impose joint liability where avoidance tactics are found.
Why the RAISE Act Matters Nationally
Although the RAISE Act is a state law, its implications extend well beyond New York. Once transparency disclosures are made public, they become accessible nationwide. As a result, both the RAISE Act and California’s TFAIA are being viewed as early building blocks of a broader national standard for AI accountability.
Rather than restricting innovation outright, the RAISE Act emphasizes visibility into how advanced AI systems are built, tested, and governed. Over time, this growing body of public information may shape federal policy debates and future regulatory frameworks.
For AI developers operating at scale, the direction is increasingly clear. Even with amendments pending and enforcement still years away, compliance planning is already being encouraged. Aligning internal protocols with both New York and California requirements may soon become a baseline expectation for participation at the frontier of artificial intelligence.
