Washington – The Computer & Communications Industry Association released a report today examining the 2025 surge of state-level legislation introduced to regulate artificial intelligence. Nearly every state in the country considered AI-related measures this year. The surge reflects growing legislative interest but also raises concerns about an emerging patchwork of rules that could affect innovation, research, and business compliance nationwide.
The report analyzes several categories of AI-related proposals, including safety guardrails for advanced AI models, restrictions on chatbot use among minors, digital watermarking requirements, deepfake liability rules, and expanded right of publicity protections. Many of these measures seek to address legitimate concerns about misuse of AI technologies, but the report cautions that overly broad or unclear legislation could limit beneficial applications, increase compliance risks, and stifle responsible development.
In California, lawmakers advanced multiple proposals focused on online safety and AI accountability. These include SB 53, which establishes transparency and reporting requirements for certain high-risk AI systems, and SB 243, which addresses chatbot interactions and the protection of young users. CCIA joined coalition efforts in the state to encourage a more balanced, risk-based approach that can adapt to new AI developments.
In New Hampshire, legislators examined liability concerns related to chatbots and automated systems. CCIA engaged with state leadership to help ensure that new requirements do not unintentionally limit access to beneficial digital tools or expose responsible developers to unreasonable legal risk. The final version of HB 143 reflected several improvements recommended by stakeholders.
In New York, lawmakers passed the RAISE Act (A 6453), which would make AI developers liable for outcomes outside of their control and restrict safe, open research practices. CCIA submitted testimony and a veto request letter outlining the potential negative economic and innovation impacts of the legislation and encouraging state leadership to focus on clear, targeted rules that address harmful conduct without discouraging investment in research.
While state interest in AI governance continues to grow, the report emphasizes the importance of aligning policy solutions with the party actually responsible for a technology’s use. Effective AI governance should recognize the different roles of developers, deployers, and users, and ensure accountability is placed where it can actually prevent harm. An updated map and one-pager illustrating AI-related legislation introduced or advancing across all 50 states are available on CCIA’s website.
The following statement can be attributed to Megan Stokes, State Policy Director for CCIA:
“Policymakers across the country are working to understand how best to address emerging AI use cases. As these conversations continue, it’s essential that legislation be precise, risk-based, and workable in practice. Approaches that are too broad or assign responsibility to the wrong actor may unintentionally limit innovation and access to useful tools. We look forward to continuing to work with lawmakers to develop frameworks that support safety, accountability, and continued technological progress.”