As policymakers consider various proposals for safeguarding younger users online, this article serves as a guide to understanding the most commonly used legal knowledge standards. It describes existing knowledge standards in U.S. civil law — actual knowledge, reckless or willful disregard, and constructive knowledge — and how some federal online safety proposals seek to import these requirements.
Based on these dynamics, we conclude that the existing Children’s Online Privacy Protection Act (COPPA) “actual knowledge” standard is preferable for proposed online safety legislation because it is clearly articulated through a well-defined mix of judicial interpretation and regulatory guidance. Imposing a new legal knowledge standard, especially the constructive knowledge standard in some legislative proposals, would require substantial fact-based analysis, which could lead courts to disagree on its meaning and issue conflicting rulings that frustrate Congress’s purpose of strengthening online protections.
- What is a knowledge standard?
A knowledge standard is the state of mind a defendant must possess to incur liability under law. The legal distinctions between states of mind are more fully developed in criminal law, where they are known as mens rea (“guilty mind”), but analogous standards can be used to determine civil liability. In the online safety context, knowledge standards often determine a covered digital service’s obligations under laws that protect younger users online. Federal courts have summarized some common knowledge standards as follows:
Actual Knowledge: “[A]n awareness or understanding of a fact or circumstance; a state of mind in which a person has no substantial doubt about the existence of a fact.”
Reckless/Willful Disregard: Recklessness is an “unjustifiably high risk of harm that is either known or so obvious that it should be known.” Federal courts generally treat willfulness as equivalent to recklessness.
Constructive Knowledge: “The law will sometimes impute knowledge—often called ‘constructive’ knowledge—to a person who fails to learn something that a reasonably diligent person would have learned.”
Establishing any of these mental states also establishes every mental state below it. For instance, proving that a defendant acted knowingly will also prove that the defendant acted recklessly and with constructive knowledge.
Many online safety and privacy laws regulate digital services based on the information they collect, their user base, and associated risks, which collectively determine obligations under a given knowledge standard. For instance, the Video Privacy Protection Act (VPPA) holds a service provider liable if it “knowingly discloses” a consumer’s personal information. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) scales its civil penalties according to the discloser’s state of mind, distinguishing violations the person did not know of (and could not reasonably have known of) from those involving reasonable cause or willful neglect.
Note, however, that privacy laws use the above knowledge standards to assess penalties, but not to determine which entities the laws cover. Instead, federal privacy laws generally cover entities based on which activities they perform or what records they possess. Apart from the COPPA actual knowledge standard discussed below, federal privacy laws have not used knowledge standards to assess who is covered under the law.
- COPPA relies on an “actual knowledge” standard.
COPPA is the primary federal law regulating the processing of children’s data online. COPPA “imposes certain requirements on operators of websites or online services directed to children under 13 years of age, and on operators of other websites or online services that have actual knowledge that they are collecting personal information online from a child under 13 years of age.” The first of these categories relates to an online service’s purpose; the second relates to its actual knowledge.
COPPA’s statutory text does not define the term “actual knowledge” for digital service “operators” that provide services to children. However, the Federal Trade Commission (FTC) has issued guidance on this point. According to the FTC, “an operator has actual knowledge of a user’s age if the site or service asks for — and receives — information from the user that allows it to determine the person’s age,” e.g., by asking for a user’s date of birth. Actual knowledge can also be established through age-adjacent requests that give operators sufficient notice of a user’s age. For example, questions asking for an individual’s grade in school or what school they attend may satisfy actual knowledge under COPPA.
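The date-of-birth mechanism described above can be sketched in a few lines of code. This is a minimal illustration of the underlying age calculation only, not anything drawn from FTC guidance; the function name and cutoff logic are assumptions for demonstration purposes, and a real operator’s compliance obligations go far beyond such a check.

```python
from datetime import date

def indicates_under_13(birth_date, today=None):
    """Illustrative (hypothetical) check: does a user-supplied date of
    birth indicate an age under 13 as of `today`?"""
    today = today or date.today()
    # Whole-year age, subtracting one year if the birthday has not
    # yet occurred in the current calendar year.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    return age < 13
```

Under the FTC’s guidance quoted above, receiving a date of birth that resolves to an age under 13 is the kind of information that can give an operator actual knowledge, which is why the calculation itself is trivial but the legal consequences of receiving the input are not.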
Third-party sites or services may have actual knowledge under COPPA if an operator directly communicates to the third party about the nature of its product or service, or if an individual informs a third party that an operator is collecting information from children. In sum, operators have actual knowledge that a user is a minor if they receive information allowing them to make such a determination, whether from a third party or through their own requests.
FTC consent decrees have further refined this standard. In 2019, the FTC obtained a consent decree classifying operators’ use of “both automated and manual means” to designate certain content as “generally intended for children ages 0-7” as actual knowledge, even if the operator did not directly ask individuals for their ages. Likewise, in United States v. Musical.ly, the FTC obtained a consent decree holding that users who “self-identify as under 13 in their profile bios or provide grade or school information indicating an age under 13” provide actual knowledge to operators that they are under 13.
Federal courts have clarified this standard as well. In New Mexico v. Tiny Lab Productions, a federal district court held that receiving automated transmissions of data signals containing a user’s age did not establish actual knowledge. In the pending case United States v. ByteDance, a federal district court will decide if a company’s internal algorithmic flags indicating that a user is under 13 provide actual knowledge of the user’s age.
The combination of underlying federal law, regulatory guidance, FTC enforcement actions, and judicial interpretation provides operators with many tools for understanding compliance requirements. This is of particular importance given the numerous privacy and safety requirements that apply under COPPA when operators obtain actual knowledge.
- Recent congressional proposals create novel knowledge standards.
In recent years, numerous federal bills have attempted to create new legal knowledge standards for online safety or modify those in existing laws like COPPA. Many of these proposals seek primarily to promote online safety through new and far more rigorous requirements for digital services or to increase the number of entities that must comply with COPPA. We explore some examples below.
- The Kids Online Safety Act proposals have used several different knowledge standards.
Various iterations of the Kids Online Safety Act (KOSA) exemplify some options Congress is considering for an online safety knowledge standard. When first proposed in the Senate in 2022, the legislation was silent on the issue of what knowledge standard providers would be held to. In 2024 and 2025, the Senate proposed requiring “actual knowledge or knowledge fairly implied on the basis of objective circumstances,” a formulation that most closely aligns with the constructive knowledge standard already existing in civil law. The House of Representatives included a tiered knowledge standard in its 2024 version of KOSA, under which small companies would be held to an actual knowledge standard, larger companies would be held to a standard of actual knowledge or willful disregard, and the largest companies would be held to a “knew or should have known” standard. In 2025, a revised House proposal held all companies to a standard of actual knowledge or willful disregard.
- The Senate’s most recent knowledge standard for KOSA would create legal uncertainty in the online safety space.
The knowledge standard in the Senate’s most recent KOSA proposal mirrors one selected provision in the FTC Act that allows the agency to assess liability and penalties for violations of rules or cease and desist orders using “knowledge fairly implied on the basis of objective circumstances.” This approach raises multiple concerns. The selected standard is just one of multiple knowledge standards found in FTC law and jurisprudence (there is, for instance, no requirement that the FTC prove intent or knowledge to establish that a defendant’s conduct is either unfair or deceptive). If the Senate wished to apply an FTC-minted standard, there are others available, such as COPPA’s, that have been regularly applied in online safety contexts.
Moreover, of these available standards, the selected FTC Act standard represents a curious choice because it was not intended to apply to online safety. Rather, the standard was developed specifically as a tool to enforce administrative cease-and-desist orders or FTC rules.
Finally, no federal privacy law has used constructive knowledge (or an adjacent “knowledge fairly implied” standard) to assess which businesses are covered. Consequently, federal regulations detailing an online safety law’s scope using this knowledge standard have not been developed. Similarly, courts, industry associations, and other organizations that can guide compliance practices have not crafted guidelines for applying this standard in an online privacy context. Accordingly, digital services would have far less clarity regarding their expected compliance measures under a constructive knowledge or adjacent standard than under actual knowledge. If the goal is to protect minors online, a more natural and intuitive fit would be to adopt the knowledge standard currently used by digital services and other online content providers to protect their users: COPPA’s actual knowledge standard.
- The proposed knowledge standards for the Children and Teens’ Online Privacy Protection Act are similarly evolving.
A legislative effort known as the Children and Teens’ Online Privacy Protection Act (COPPA 2.0) has aimed to extend some or all of COPPA’s existing privacy protections for users under 13 to teenage users. As with KOSA, Congress has proposed a variety of knowledge standards. In 2024 and 2025, the Senate proposed a constructive-knowledge-adjacent formulation, while the House proposed a tiered approach holding smaller companies to an actual knowledge standard and larger ones to a standard of actual knowledge or willful disregard. These new formulations of the knowledge standard would also cause substantial concerns, as detailed below.
- Imposing a new knowledge standard for online safety legislation invites uncertainty.
Congress’s proposals for modifying the knowledge standards used in online safety and privacy law raise several key questions. Which digital services does Congress intend to regulate? Would a new knowledge standard negate existing judicial and agency guidance under federal online safety laws like COPPA? Finally, when disputes inevitably arise as to the specifics of a new knowledge standard, should courts start over in their analysis?
- Constructive knowledge or adjacent standards would dramatically broaden liability.
Existing congressional online safety proposals indicate a clear intent to regulate additional digital services. Few legislative drafts maintain an “actual knowledge” standard, instead adopting broader standards which expand regulation of digital services based on their perceived accessibility to children.
Some lawmakers explain that these changes are needed due to purported “compliance avoidance.” Yet, a constructive knowledge or adjacent standard is ultimately much less precise as to which digital services the law intends to regulate. This ambiguity poses real compliance challenges. When liability turns on subjective judgments regarding objective factual circumstances, digital services are left to infer their obligations. Legal uncertainty will complicate product design decisions, risk assessments, and investment planning, particularly for smaller digital services with limited resources for compliance operations. While Congress may view broader regulatory coverage as a feature rather than a flaw, ambiguous standards risk creating a compliance environment where digital services struggle to determine not just how to comply, but whether they are covered at all.
- A constructive knowledge or adjacent standard conflicts with existing regulatory actions.
Pending online safety bills that depart from the “actual knowledge” framework risk upending administrative guidance which has existed for decades. As noted earlier, regulators have developed a body of COPPA guidance clarifying when a digital service is deemed to have actual knowledge that it is collecting personal information from a child. Amending this standard would undercut existing regulatory guidance.
Crucially, no federal privacy law has ever used a constructive knowledge or adjacent standard to assess whether a particular entity is covered. Consequently, federal regulations regarding when an entity “should have known” that it is within a given law’s scope have not been developed. Accordingly, digital services would have far less clarity regarding their expected compliance measures than under COPPA’s existing actual knowledge framework.
Furthermore, a new congressional knowledge standard would undoubtedly conflict with existing regulations. Online safety regulators, like the FTC, often accompany enforcement actions with guidance stating why they have brought a given action, which can help clarify key legal concepts like actual knowledge. Altering COPPA’s actual knowledge standard would therefore generate significant confusion as digital services attempt to reconcile legacy guidance with new statutory obligations. Enforcement agencies would also be left to apply new, amorphous standards without clear legislative direction, increasing the risk of inconsistent or unpredictable outcomes.
Conduct that regulators previously treated as outside the scope of federal laws like COPPA could suddenly become actionable, even where a company has taken steps consistent with prior guidance. This not only complicates forward-looking compliance efforts, but also raises questions about how past and ongoing enforcement actions should be understood under a newly expanded standard that is neither clearly defined nor historically grounded.
Even issuing new regulatory guidance to resolve scoping ambiguities carries its own risks. Agency interpretations are inherently subject to change across administrations, particularly in an area as politically salient as online safety. This dynamic would increase the likelihood of inconsistent enforcement and undermine the stability that clear, durable legislative scoping provides.
- Creation of a new knowledge standard will introduce confusion for consumers, regulated businesses, and the courts.
A new statutory knowledge standard would also impose substantial burdens on the courts. Federal courts have spent more than a decade developing case law regarding what constitutes actual knowledge under COPPA, drawing careful distinctions between awareness and intent. A new standard would effectively reset that jurisprudence, requiring courts to revisit foundational questions that have largely been settled.
However courts ultimately settle these questions, judicial analysis would become highly fact-specific and inconsistent. Determining whether knowledge can be “fairly implied” from “objective circumstances” will almost certainly require case-by-case evaluations of granular details about digital services’ designs, user behavior, and internal processes, balanced against common industry practices. While these analyses do occur in existing litigation, they usually take place only after costly discovery, during which parties to a lawsuit employ experts to opine on the nuances of their conduct. Put another way, every claim would require extensive cost and legal uncertainty before even reaching the merits of liability and damages, a result that our litigation process is designed to minimize.
- Inability to resolve litigation at the motions stage leads to inconsistent decisions and standards.
Courts are the most common venue for resolving disputes over online harms. Introducing a constructive knowledge or adjacent standard would significantly depart from existing judicial interpretation of federal online safety statutes. Courts would therefore be tasked with defining when covered digital services “should have known” a user’s age, or when such knowledge is “fairly implied on the basis of objective circumstances.” These formulations would prove difficult to apply in practice.
Under such a fact-intensive analysis, the early phases of litigation that weed out legally meritless claims would lose much of their effectiveness. For example, a motion to dismiss is designed to test a complaint’s legal sufficiency, requiring courts to dispose of claims that fail on their face before parties incur the costs associated with document production, depositions, and expert discovery. Although surviving a motion to dismiss is a low bar, the motion is nonetheless important in screening out cases built on vague allegations rather than concrete ones.
If the current Senate version of KOSA were enacted, compliance questions would be fact-intensive inquiries regarding whether a digital service objectively “should have known” a user’s age, or whether this knowledge was “fairly implied.” Under this standard, courts may be reluctant — or unable — to ever dismiss claims for legal insufficiency until more facts could be gathered in the discovery phase. This result follows because each digital service is unique, making it difficult to form broadly applicable guidelines regarding when companies “should have known,” or can be “fairly implied” to have known, that a user is a minor.
Another major litigation stage would also suffer. During the motion for summary judgment phase, legal analysis turns on whether there are genuine disputes of material fact. This process is often the most expensive phase of civil litigation, particularly in complex technology cases. Success means that claims have demonstrated a sufficient factual and legal basis to proceed to trial.
Here too, fact-intensive standards like constructive knowledge or adjacent standards would prevent expeditious resolution of online safety and privacy cases. Under such standards, litigating these motions would require examining all information a digital service possesses regarding a given user and then deciding what conclusions about a user’s age could “reasonably” be drawn from this information.
These new dynamics would significantly affect litigants’ incentives. If defendants are routinely forced past the motion to dismiss phase and into full discovery, the cost of litigation itself becomes a coercive force, encouraging settlements unrelated to the strength of the legal claims. This dynamic is particularly troubling in the online safety context, where allegations about knowledge, foreseeability, and reasonableness may rest on broad assertions rather than concrete evidence. By making early dismissal functionally unattainable, an overly fact-intensive knowledge standard risks transforming the litigation process into a blunt regulatory tool — one that imposes substantial costs and uncertainty even in cases that ultimately fail on the merits.
- Legal uncertainty will undoubtedly lead to years of litigation which must be settled by the Supreme Court.
As discussed, judges across the country may diverge sharply in how they interpret a new knowledge standard, particularly when balancing child safety objectives against constitutional and statutory constraints. These divergences are likely to produce conflicting standards across jurisdictions, with some courts adopting expansive interpretations and others applying more restrained readings.
Over time, these inconsistencies would almost certainly produce circuit splits on threshold questions of coverage and liability, leaving the Supreme Court to resolve issues that could have been mitigated through clearer legislative drafting. In the interim, years of uncertainty would persist, frustrating Congress’s efforts to provide clear direction regarding how online services must protect children. In particular, this uncertainty would greatly disrupt the early stages of litigation which preserve claims against bad actors, while quickly dismissing claims that lack merit.
- Legal uncertainty will undermine privacy and competition.
- Fact-intensive knowledge standards will lead to overcollection of data about young people.
Should Congress adopt a more fact-intensive knowledge standard, covered providers would need to seek ways of attaining greater legal certainty to ensure that they are not unwittingly violating the law. Even if a new online safety law does not directly require age verification, covered providers will effectively be forced to adopt age verification measures anyway in order to minimize the risks of noncompliance. Strong algorithmic filters that overblock potentially sensitive, but constitutionally protected, speech may also become the norm, especially as the contours of a law are continually evolving through the courts. Such a result would undermine privacy and data minimization by incentivizing covered businesses to overcollect sensitive data from both minors and adults.
Policies that run contrary to data minimization principles should be avoided. Indeed, data minimization prioritizes important privacy-enhancing features which limit data collection, retention, and processing to defined purposes. Requiring individuals to share sensitive personal information with third parties, including IDs or biometrics, can make recipients a prime target for identity theft, cyberattacks, or other data breaches. These dangers are far from hypothetical: several of the most devastating data breaches in recent years are directly attributable to age verification requirements.
- Forced age verification undermines competition.
Forcing companies to adopt age verification or face increased legal uncertainty undermines competition. Collecting sensitive data from one’s entire population of users, proactively screening it, and properly securing it is cost-intensive and serves as a barrier to entry for smaller businesses. Additionally, having to request a user ID is a deterrent for customers, and can cut conversion rates for new businesses in half.
As noted above, instituting age verification makes companies especially attractive targets for hackers. Startups are particularly financially vulnerable to data breaches, which cost companies an average of $160 per record in 2025, or $4.44 million per breach, enough to bankrupt many small companies. Consequently, over 60 percent of startups close after being hacked. For these reasons, a recent Digital Trust & Safety Partnership (DTSP) report, Age Assurance: Guiding Principles and Best Practices, found that “smaller companies may not be able to sustain their business” if forced to determine user ages.
Finally, introducing a new knowledge standard would deter competition by dissuading risk-averse businesses from entering the digital services marketplace. In areas where potential liability is high and the law is unsettled, entities will self-select in and out of relevant markets based on inherent risk tolerance, and those with the highest appetite for risk will remain. Creating a regulatory environment where the only digital services in the market are the ones willing to take the greatest risk is not in the best interest of online safety.
- Conclusion
The current actual knowledge standard best protects the public because of its legal certainty, and will lead to well-defined enforcement and civil actions that improve online safety. Amending this standard would upend this framework, potentially requiring decades of expensive litigation before covered entities could clarify their legal obligations. By the time these rules are clarified, digital services may well have evolved to the point where enacted online safety rules are now obsolete, frustrating stakeholders and lawmakers’ shared goal of protecting children online.