Washington – The Computer & Communications Industry Association is testifying today before the California Assembly Privacy and Consumer Protection Committee to voice concerns over SB 243. While intended to protect children from deceptive chatbot interactions, the bill’s broad scope could impose costly requirements even on AI tools that are not designed to act like human companions or engage users in personal conversations.
Under SB 243, AI models that support everyday tasks like tutoring, mock interviews, or customer service could be classified as “companion chatbots,” even if they weren’t designed to simulate human companionship or meet users’ social needs. These tools would be subject to new rules, including repeated pop-up disclosures, mandatory audits, and detailed reporting requirements.
California law already requires bots to identify themselves under SB 1001, which was enacted in 2018 to prohibit bots from misleading users about their artificial identity during online interactions. CCIA believes layering additional obligations on low-risk AI tools would create compliance confusion without offering meaningful safety benefits.
The bill would also allow private lawsuits for even minor or technical violations, such as a brief delay in a required notification. CCIA recommends a more effective and balanced enforcement approach, such as centralized oversight by the Attorney General, which would promote consistency and allow businesses to seek guidance and demonstrate good-faith compliance.
The following statement can be attributed to Aodhan Downey, State Policy Manager for CCIA, who is testifying before the committee today:
“We agree with California’s leadership that children’s online safety is of the utmost importance, and our members have developed advanced tools that reflect that priority. But SB 243 casts too wide a net, applying strict rules to everyday AI tools that were never intended to act like human companions. Requiring repeated notices, age verification, and audits would impose significant costs without providing meaningful new protections. We urge lawmakers to narrow the scope of this bill and move toward a more targeted, consistent approach that supports both user safety and responsible innovation.”