Computer & Communications Industry Association

Artificial Intelligence

Artificial intelligence (AI) is becoming a central consideration in policy debates around privacy, competition, workforce development, and innovation. As governments and regulators work to balance the benefits of AI with potential risks, stakeholders are increasingly focused on frameworks that protect consumers while preserving the conditions needed for technological progress.

CCIA’s view:

In the fast-evolving field of AI, regulation must strike a balance: rules should not be so rigid that they hinder innovation. Achieving this balance requires thoughtful, adaptable regulation that is informed by the principles of responsible AI and can be applied across diverse contexts. Rather than imposing overly prescriptive rules, policymakers should focus on frameworks that enable the design of AI systems that serve society's best interests, while risks are actively assessed and addressed throughout development and deployment. Moreover, in the absence of a single federal framework governing AI, a state that adopts overly broad regulation risks placing itself at a competitive disadvantage: it would inhibit the use of new technologies to drive growth while other states impose no such obstacles.

Multiple entities are involved in an AI system: developers, deployers, end-users, and providers of compute resources. Assigning liability correctly among them is crucial. Legislation should ensure that developers and deployers are not held liable for the harmful actions of users. Likewise, end-users should not be held responsible for flaws intentionally built into an AI model, such as a model designed to consistently produce biased outcomes. Assigning responsibility correctly ensures that liability falls on the party best positioned to prevent harm and to be held accountable for any damages.