Computer & Communications Industry Association
Published October 14, 2025

An Examination of California and New York Policy on Regulating Frontier AI

As the 2025 California and New York legislative sessions drew to a close, it was clear that regulating artificial intelligence was at the forefront of debate. California Governor Gavin Newsom signed SB 53 (the Transparency in Frontier Artificial Intelligence Act) into law to establish AI safety and transparency regulations, after vetoing SB 1047 last session over concerns that it was too broadly written and not appropriately risk-based.

Meanwhile, New York Governor Kathy Hochul has yet to take action on S6953/A6453, known as the New York RAISE Act (Responsible AI Safety and Education Act), which regulates frontier AI models and places obligations on their developers. Among those obligations is a requirement that developers be held liable for the actions of third parties using their models.

As California and New York strive to remain powerhouses of AI technology in the world, it is essential that legislatures focus on a workable regulatory framework that will build public trust and support research, rather than enacting measures that will propel innovation elsewhere. Outlined below are similarities and differences between the two state policies, highlighting the impact on AI development in the U.S. moving forward.

Similarities Between California and New York 

Both CA SB 53 and NY S6953/A6453 seek to place frontier AI systems and their developers under a regulatory microscope. Developers must prepare safety and security protocols documenting how they plan to reduce risks, and must provide incident reports. Both bills allow developers, when establishing their protocols, to redact trade secrets, proprietary security information, or details that could compromise safety or intellectual property. While neither bill requires third-party audits, there is cause for concern that the New York bill calls for state oversight, and some have lobbied to include mandatory audits, something not even the EU requires. Some in academia have sounded warnings about the EU, which has attempted to regulate the AI sector in a similar manner, as noted in the article here.

CCIA has been actively engaged throughout the California and New York legislative processes. Both bills inappropriately focus on large developers without considering the consequences of AI models from smaller developers. The bills also fail to recognize that multiple actors, including downstream deployers, can modify models in ways that could increase safety concerns. CCIA joined a coalition letter opposing this proposal in California. Similarly, in New York, CCIA submitted several comments and a veto request letter noting the harm to the AI industry overall.

Key Differences 

While the two policies aim at a similar goal, they differ in impact, which could create a plethora of compliance questions and stifle innovation nationwide.

Before releasing a frontier model, NY S6953/A6453 requires a developer to implement safety and security protocols, including for risks created by third-party uses of the model outside of the developer’s control, and to implement safeguards to prevent unreasonable risks of critical harm. The bill provides no criteria for what constitutes an appropriate safeguard, despite the impossibility of assessing risks from speculative uses by third parties.

The severity thresholds also differ, with New York’s being more stringent than California’s. NY S6953/A6453 defines “critical harm” as either the death of or serious injury to 100 or more persons, or greater than $1 billion in damages. CA SB 53 uses “catastrophic risk” and “frontier AI risk” thresholds, covering events that cause greater than 50 casualties or greater than $1 billion in damage.

In terms of incident reporting, the timing windows differ significantly. NY S6953/A6453 requires reporting of a “safety incident” within 72 hours of the developer becoming aware of, or forming a reasonable belief about, the incident, whereas CA SB 53 sets a 15-day window for reporting critical safety incidents to the California Office of Emergency Services.

The two bills also diverge on public compute infrastructure. CA SB 53 includes a provision creating a public cloud compute consortium (“CalCompute”) within the state’s Government Operations Agency, intended to support safe, equitable, and public-interest AI research and infrastructure. NY S6953/A6453 has no such provision.

Finally, NY S6953/A6453 proposes significantly heavier penalties: up to $10 million for a first violation and up to $30 million for repeat violations. CA SB 53 imposes civil penalties of up to $1 million per violation.

Impact 

In California, CCIA maintains the position that small entities can develop hugely influential and potentially risky models with capabilities similar to those of models built by “large developers.” The law also fails to clarify that a frontier developer’s obligations do not extend to models that have been substantially modified by unaffiliated parties; without that clarification, accountability will be muddled and innovation chilled. And requiring developers to justify redactions is less effective than simply not requiring them to disclose trade secrets, cybersecurity information, or other confidential or proprietary information in the first place.

In New York, the proposed law would be especially harmful for publicly shared AI models, which allow researchers and startups to freely build on and improve each other’s work. Because original developers cannot oversee every use by every user, legislation that blocks standard legal safeguards would expose them to lawsuits over harms they did not cause or intend. Such a change would force many projects to shut down or move out of state. Some legislators have argued that the RAISE Act should go beyond the EU’s AI Act by mandating third-party audits; under the EU’s new Code of Practice, providers of general-purpose AI models may opt for internal assessments in lieu of external ones. An independent audit requirement would create inflexible compliance costs without increasing safety. Alleviating public fear of AI has been the intent of many legislatures passing such laws, but the consequences may be the exact opposite.

Conclusion

While both bills focus on the most powerful AI systems, New York’s version is more aggressive in fines and oversight, while California’s offers a more interconnected framework and whistleblower protections, though it still presents major concerns regarding its impact on innovation and safety. With both states moving to regulate frontier AI, companies may face a patchwork of compliance burdens. The differences between the two offer a snapshot of how diverse state-level AI regulation may become, and of the unintended consequences such regulations will have on the digital ecosystem in the United States.

Aodhan Downey

State Policy Manager, West Region, CCIA

Kyle Sepe

State Policy Manager, Northeast Region, CCIA