Computer & Communications Industry Association
Published September 15, 2025

California’s Legislative Push for Tech Regulation: A Glimpse into the 2025 Session

During the 2025 California legislative session, lawmakers continued their aggressive approach to regulating the technology industry, with a particular focus on artificial intelligence, online safety, and data privacy. For the Computer & Communications Industry Association (CCIA), this session has been defined by proactive engagement and a push for legislation that leaves room for innovation.

Artificial Intelligence and Automated Decision-Making

A major theme of the 2025 session has been regulating automated decision systems (ADS), which use AI to make or facilitate consequential decisions, particularly in the workplace. The California Civil Rights Council finalized new regulations that will take effect on October 1, 2025. These regulations aim to prohibit employers from using ADS that result in discrimination and require employers to keep records and conduct bias audits. However, the new rules define “automated decision system” broadly, likely sweeping in any computational process used for employment decisions, from resume screening to performance evaluations. CCIA submitted comments on the proposed regulations.

On the legislative front, bills like AB 1018 aimed to impose new compliance requirements on developers and deployers of AI systems designated as “high-risk.” This bill would have mandated impact assessments and required businesses and government agencies to establish risk governance programs. CCIA and its members have consistently advocated for a balanced approach to AI regulation that targets actual risks. AB 1018 could have cost the state hundreds of millions of dollars, though the true cost was unknown due to contradictory information from state agencies regarding their current use of AI systems. The measure failed to move through the legislature before adjournment and will carry over to the 2026 session.

SB 7, known as the “No Robo Bosses Act,” would require an employer to provide written notice that an ADS is in use at the workplace to all workers who would foreseeably be directly affected by it. The measure would also limit the purposes for which, and the manner in which, an ADS may be used to make decisions. Many larger firms would be able to handle this shift; however, there are no exemptions for small businesses or independent contractors. The efficiencies created by this technology would likely be lost to smaller firms, making it that much harder for them to compete in the digital marketplace. The broad language of SB 7 would apply even to low-risk ADS software, such as an automatic scheduler, and would require an expansion of human resources capacity, as labor-intensive activities would need human oversight. This measure passed the legislature but was vetoed by Governor Gavin Newsom, who cited concerns about overly broad restrictions on businesses using ADS tools as his reasoning for vetoing the bill.

SB 53 attempts to establish AI safety regulations after Governor Gavin Newsom vetoed SB 1047 over concerns that it was too broadly written and not appropriately risk-based. Building off the report from the Joint California Policy Working Group on AI Frontier Models, SB 53 focuses on frontier models deemed to present a “credible risk,” mandating transparency requirements and whistleblower protections. SB 53 inappropriately focuses on large developers without considering that small companies can create powerful models that may pose safety risks. The bill also does not recognize that multiple actors, including downstream deployers, can modify models in ways that could increase safety concerns. This measure passed the legislature and was signed by Governor Gavin Newsom. The majority of the measure will take effect on January 1, 2026, with some provisions taking effect January 1, 2027.

SB 259 and SB 295 were two measures focused on the use of algorithms in pricing. Even after multiple amendments, neither measure was able to address the concerns of interested stakeholders. Both will carry over to the 2026 session, as they were not passed before the legislature adjourned.

Privacy and Data Brokers

California’s privacy landscape, already defined by the California Consumer Privacy Act (CCPA), is seeing further refinement and new enforcement efforts. A key bill, SB 361, expands data broker registration requirements. It requires data brokers to provide more detailed information about the types of data they collect and whether they have sold or shared personal information with law enforcement or AI developers. The bill also requires data brokers to process CCPA opt-out requests within 45 days. This measure passed the legislature, was signed by Governor Gavin Newsom, and will take effect January 1, 2026.

The CCPA’s rulemaking process has also continued, with the California Privacy Protection Agency (CPPA) drafting new regulations focused on Automated Decision Making Tools and Risk Assessments. CCIA filed comments expressing concerns that some of the draft rules could create unnecessary burdens and confusion, particularly given overly broad definitions of “automated decision-making technology.” Experts urged the CPPA to refine the rules in a way that protects user privacy while preserving companies’ ability to innovate. The Office of Administrative Law approved the proposed rules, which have since been filed with the Secretary of State’s office.

Social Media and Online Safety

The debate over social media’s impact on youth mental health has also been a central part of the 2025 session. AB 56, a bill that requires “social media platforms” to display a “black box warning” about mental health harms, advanced through the legislature. The bill requires a government-mandated warning label to be shown to any user that a covered platform “has reasonably determined” to be under 18 years old. That means children will face warnings even when accessing safe, educational, or supportive content. This measure passed the legislature and was signed by Governor Gavin Newsom. It will go into effect January 1, 2027.

AB 410 would have required a chatbot to disclose that it is a bot and not a human being, to answer truthfully when asked to verify that it is a bot, and to refrain from attempting to mislead a person regarding its identity as a bot. This measure was not voted out of the Senate Appropriations Committee before the deadline and will carry over to the 2026 legislative session.

AB 1064 was drafted so broadly that it would have effectively prohibited the development and use of artificial intelligence (AI) systems that would benefit children in the state. Restrictions this severe would disadvantage California companies developing AI technology in the state. This measure passed the legislature but was vetoed by Governor Gavin Newsom. The veto message cites concerns about the breadth of the bill and states that the signing of SB 243 creates a framework for chatbot developers to adhere to, providing stronger child safety protections.

SB 243 creates standards that chatbot service operators must follow to provide a safer environment for children, without imposing an overbroad ban on AI products. This measure passed the legislature and was signed by Governor Gavin Newsom. It will become effective on January 1, 2027.

Conclusion

California lawmakers view themselves as being at the forefront of technology regulation. The focus on AI, privacy, and online safety has led to a number of significant bills and regulations that will impact how technology companies operate. For CCIA, this session has required a multi-pronged approach of lobbying, regulatory engagement, and public commentary to ensure that regulations protect consumers without creating an undue burden on innovation. The ongoing friction between legislative intent and industry concerns suggests that the debate over technology’s role in society is far from over in the Golden State.

Megan Stokes

State Policy Director, CCIA