The proposed NO FAKES Act of 2025 before the U.S. Congress is the latest example of the “we must do something; this is something; therefore we must do this” fallacy in action. While curbing infringing uses of one’s digital name, image, and likeness is an important goal, NO FAKES creates a burdensome federal compliance regime that goes substantially further than any federal or state analog. Notice-and-takedown systems, which allow a rightsholder to notify an online service provider to remove content, are not a new concept. But for the first time under U.S. law, NO FAKES would additionally require around-the-clock monitoring for identical infringing content across the entire internet (also known as notice‑and‑staydown) for most digital services. If enacted, this legal design carries large, recurring costs for online businesses of all sizes, creating enormous barriers to entry and potentially forcing market exit for smaller players. The mandatory filtering and surveillance would also create significant risks for Americans’ speech and privacy.
- Building internal tools capable of complying with NO FAKES would cost a digital service tens of millions of dollars, and a full internal copyright compliance system inclusive of NO FAKES requirements would cost hundreds of millions.
- For smaller services that cannot afford such a capital expense, using vendors for digital fingerprinting and filtering of user uploads is likely to cost hundreds of thousands of dollars per year, and processing NO FAKES notifications is likely to cost hundreds of thousands of additional dollars per year that go to lawyers and compliance staff rather than innovation and serving customers.
- In this article’s example, NO FAKES creates a $1.14 million annual compliance cost for a digital startup, plus a $500,000 fixed cost to fingerprint its existing catalog of user-generated content, for a total first-year cost of $1.64 million. For comparison, a typical early-stage startup has just $8.7 million in total capital after receiving funding.
What NO FAKES Would Really Do
The 2025 version of NO FAKES establishes a notice-and-takedown regime for purported digital replicas, drawing on concepts from the Digital Millennium Copyright Act (DMCA) and from existing laws in more than half of U.S. states that provide rights in one’s name, image, and likeness. Yet NO FAKES goes further than the DMCA and every state law by conditioning its safe harbor on a de facto notice-and-staydown standard. The objective of the bill is to force rapid removal of content from digital services upon receipt of a notice. But it also requires the numerous sites and apps that allow user‑uploaded content to block all future uploads that “match the digital fingerprint” of the notified material. The bill’s definition of “digital fingerprint” expects near‑perfect uniqueness (“effectively certain” not to misidentify), a high bar that pushes covered services toward expensive filters and cautious over‑blocking that may restrict legitimate speech.
Importantly, the bill defines “digital replica” and covered “online services” incredibly broadly, including app stores, search engines, ad networks, e‑commerce, mapping, hosting, and cloud providers, so long as they provide public access to user-uploaded content. It creates aggressive statutory damages—up to $750,000 per work embodying an unauthorized digital replica, or per service or product used to generate an unauthorized digital replica—far in excess of plausible actual harms. It also classifies the law as “intellectual property” which would carve it out of Section 230, removing key protections for intermediary liability.
NO FAKES’ takedown requirements trigger “as soon as is technically and practically feasible” after the service receives a notification making an allegation. The bill would allow services to restore user-generated content only if the user filed a lawsuit against the person sending the notification within 14 days of the content’s removal. Given the costs involved in filing such a lawsuit so quickly, this amounts to an effective prohibition on restoring user content for a typical non-commercial user.
Perhaps most importantly, NO FAKES does not require any actual showing of harm before liability attaches. That goes substantially further than similar state laws, which require a rightsholder to prove a “commercial misuse” before liability attaches. This is particularly notable for the notice-and-staydown obligations in NO FAKES, which would likely lead to even heavier over-filtering of content, including lawful speech, in order to avoid substantial penalties. Additionally, the absence of an actual-harm requirement is constitutionally suspect. As the U.S. Supreme Court ruled in TransUnion v. Ramirez, 594 U.S. 413 (2021), only plaintiffs concretely harmed by a defendant’s statutory violation have standing under Article III of the Constitution to seek actual and statutory damages available under federal law.
The result is a mandate for digital services to take down user-generated content upon receipt of an allegation—without considering whether the content was non-infringing or constitutionally protected. Once taken down, the same mandate would apply to any similar replicated work going forward. The costs associated with complying with such a regime are significant, and the potential for abuse that chills speech is enormous.
Estimating the Compliance Costs Associated with NO FAKES
Fixed Compliance Costs
- NO FAKES will cost covered services either hundreds of millions of dollars to build a bespoke, in-house content identification and filtering system, or up to $0.05 per image, audio, or video file generated by users—hundreds of thousands of dollars up front for a small digital service, and millions of dollars up front for a midsize service.
Designated agent & policy build‑out: Every covered provider must maintain a takedown agent listing in the Copyright Office directory, publish a repeat‑violator policy, and stand up processes to receive and act on notices. The DMCA designated agent fee is fairly trivial ($6 every 3 years) but the operational and legal setup isn’t. Even a small service will incur legal fees and training to operate a compliant pipeline.
Content identification (filters): Notice-and-takedown, and especially notice-and-staydown, requires investment in content identification and related filters, making content identification a key cost driver of NO FAKES. A service can build a bespoke, in‑house system or may be able to license a tool from a vendor. Either path is expensive and imperfect, often producing false positives. Researchers evaluating historic vendor benchmarks and digital service spending found per‑asset monthly pricing that implied tracking costs in the tens of thousands of dollars per month for midsize services in 2011, when user-generated content volumes were far lower than today—implying six‑ to seven‑figure annual bills today for midsize libraries, even before integration and storage.
For a concrete sense of magnitude, a midsize user-generated content site with ~50 million active media files that elects to fingerprint its back catalog at a price of up to $0.05 per asset per year (illustrative pricing from a 2017 regulatory filing by Audible Magic) faces $2.5 million just in base vendor fees, before staff time, compute, and legal costs. At that point, many startups will either restrict uploads (e.g., images only, no audio/video), geo‑block U.S. users, or exit. There are also ongoing costs for fingerprinting new files, which can be treated as variable compliance costs.
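The back-catalog fee above scales linearly with library size, which is what makes it prohibitive for media-heavy services. A minimal sketch of the arithmetic, using the article’s illustrative $0.05-per-asset price point (the function name and defaults are this sketch’s assumptions, not vendor terms):

```python
def backcatalog_fee(num_assets: int, per_asset_fee: float = 0.05) -> float:
    """Base vendor fee to fingerprint an existing media library.

    per_asset_fee defaults to the article's illustrative $0.05/asset
    figure drawn from a 2017 Audible Magic regulatory filing.
    """
    return num_assets * per_asset_fee

# Midsize site from the example: ~50 million active media files.
print(f"${backcatalog_fee(50_000_000):,.0f}")  # → $2,500,000
```

The same function shows why smaller libraries still face six-figure bills: a catalog of just 10 million files implies $500,000 in base fees before any integration work.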
Variable Compliance Costs
- NO FAKES will cost covered services between $9 and $30 per notification ($180,000 to $600,000 per month at 20,000 notifications per month) and up to $0.05 per image, audio, or video file generated by users on an ongoing basis.
New User-Generated Content Fingerprinting: As new user-generated content files are uploaded to digital services, they must be “fingerprinted” as well, whether by an in-house system with a large up-front capital cost or under an ongoing vendor contract. We can treat this as an ongoing variable cost of up to $0.05 per file uploaded by users.
Notice processing: The bill requires removal “as soon as is technically and practically feasible,” plus user and rightsholder notifications and repeat‑violator tracking. Even if each notice consumes just 15–30 minutes of analyst time, at $35–$60/hour, digital services spend $9–$30 per notice. At just 20,000 notices per month, that’s $180,000–$600,000 monthly. This is not a hypothetical scale problem: individual digital services have disclosed hundreds of millions of notices or copyright actions in six‑month windows. Tellingly, fewer than 1% of automated claims were disputed, but a large share (up to 60%) of disputes were resolved in favor of the users uploading content—signaling both the volume of notices and the error/abuse potential of notice‑driven systems.
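The notice-processing estimate above is simple analyst-time arithmetic. A minimal sketch under the article’s stated assumptions (15–30 minutes per notice at $35–$60/hour; the helper name is this sketch’s own):

```python
def notice_cost(minutes: float, hourly_rate: float) -> float:
    """Human-review cost of a single notice, in dollars."""
    return minutes / 60.0 * hourly_rate

low = notice_cost(15, 35)    # $8.75, rounded up to ~$9 in the text
high = notice_cost(30, 60)   # $30.00

NOTICES_PER_MONTH = 20_000
# ~$175,000 at the low end ($180,000 using the rounded $9 figure)
# and $600,000 at the high end.
print(f"${NOTICES_PER_MONTH * low:,.0f} to ${NOTICES_PER_MONTH * high:,.0f}")
```

Note that this captures only direct review time; escalations to counsel, repeat-violator tracking, and user notifications would sit on top of these figures.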
Upload‑time hashing/matching: Staydown requirements turn every upload into a compute job. Perceptual hashing and robust matching across images, audio, and video increase CPU, memory, and storage consumption. Those cloud costs scale with user activity. The aforementioned vendors exist to provide solutions in part because very few firms can do this in‑house at tolerable error rates.
Litigation Risks
- NO FAKES will cost covered services up to $750,000 per work in statutory damages.
The statutory damages structure is punishing. An online service that “undertook a good‑faith effort” but still erred faces $25,000 per work; without good‑faith compliance, it faces $5,000 per display/copy/transmission, capped at $750,000 per work, plus punitive damages for willful conduct. There is an “objectively reasonable belief” carve‑out, but covered websites will still price in high legal risk, higher insurance premiums, and cautious moderation. How could they do otherwise, when just 1,334 user-generated content uploads out of many millions or billions could lead to over $1 billion in statutory damages? A trial court showed no hesitation to hit Cox with $1 billion in statutory damages in a recent copyright case.
Crucially, the bill lacks a counter‑notice regime, which in the DMCA enables allegedly infringing content to be put back up. Instead, NO FAKES prohibits restoration of content unless a lawsuit filed within 14 days proves that the original notice was false or deceptive. That asymmetry all but guarantees over‑removal and makes end users bear the litigation burden.
Finally, the bill creates a fast lane to unmask users: a rightsholder files a compliant notice, attaches a sworn declaration, and the clerk “shall” issue a subpoena compelling the service to disclose identifying information. That exposes creators and critics to doxxing or retaliation and forces providers into rapid, lawyer‑intensive disclosure workflows, without the procedural safeguards of the DMCA’s subpoena provision.
Costs for a Hypothetical Small Digital Service Provider
- NO FAKES first-year compliance costs would consume up to 19% of an average early round of venture capital funding. Ongoing NO FAKES compliance costs would consume an entire early funding round in as little as 7.2 years.
Consider the illustrative example of a hypothetical small digital service called “SmallTech” that has just raised a typical early round of venture capital (VC) financing. Per the 2024 NVCA Yearbook, in Q4 2023 there were 837 early VC rounds totaling $7.32 billion, meaning the average early VC round was about $8.7 million; that figure represents SmallTech’s entire capital runway before it must either raise another round of funding or shut down. Suppose SmallTech is now faced with new NO FAKES compliance burdens. Assume SmallTech has 10 million user-generated content files in its catalog as of the effective date of the NO FAKES Act, and its users are uploading 1 million new files per month.
SmallTech cannot afford the cost of creating an in-house content identification system, so it uses an outside vendor. Fingerprinting its back catalog of user-generated files costs about $0.05 per file × 10 million files = $500,000 as an up-front fixed cost. Fingerprinting ongoing user-generated uploads at 1 million files per month will cost an additional $50,000 per month, or $600,000 per year. We will assume the vendor service includes ongoing filtering, but that SmallTech must process NO FAKES notifications internally. Assuming 0.5% of user uploads result in a NO FAKES notification, that implies 5,000 notifications per month, each taking at least 15 minutes to process at a total employee cost to SmallTech of about $36 per hour ($9 per notification). This implies a NO FAKES notification processing cost of $45,000 per month, or $540,000 per year.
Before taking any other compliance costs into account, NO FAKES creates a $1.14 million annual compliance cost for SmallTech plus a $500,000 fixed cost to fingerprint the existing catalog of user-generated content, for a total first-year cost of $1.64 million. That means 19% of SmallTech’s entire $8.7 million in capital runway is lost to core NO FAKES requirements in just the first year after the law becomes effective. Digital fingerprinting of user-generated content and NO FAKES notification processing alone would deplete the remaining capital runway within 7.2 years of NO FAKES becoming effective.
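The SmallTech arithmetic can be collected into one small cost model. Every input below is one of the article’s illustrative assumptions rather than a measured figure:

```python
# Illustrative SmallTech cost model; all constants are the article's assumptions.
PER_FILE_FEE = 0.05            # $ per file fingerprinted (vendor pricing)
BACK_CATALOG = 10_000_000      # existing user-generated files at effective date
UPLOADS_PER_MONTH = 1_000_000  # new user uploads
NOTICE_RATE = 0.005            # 0.5% of uploads draw a NO FAKES notification
MINUTES_PER_NOTICE = 15        # analyst time per notification
HOURLY_COST = 36               # fully loaded employee cost, $/hour
RUNWAY = 8_700_000             # average early VC round (2024 NVCA Yearbook)

fixed = BACK_CATALOG * PER_FILE_FEE                            # $500,000 one-time
fingerprint_annual = UPLOADS_PER_MONTH * PER_FILE_FEE * 12     # $600,000/year
notices_per_month = UPLOADS_PER_MONTH * NOTICE_RATE            # 5,000/month
notice_annual = (notices_per_month * MINUTES_PER_NOTICE / 60
                 * HOURLY_COST * 12)                           # $540,000/year
annual = fingerprint_annual + notice_annual                    # $1.14M/year
first_year = fixed + annual                                    # $1.64M

share_of_runway = first_year / RUNWAY                          # ~19%
# Year 1 burns `first_year`; each later year burns `annual`.
years_to_depletion = 1 + (RUNWAY - first_year) / annual        # ~7.2 years

print(f"First year: ${first_year:,.0f} ({share_of_runway:.0%} of runway)")
print(f"Runway depleted after {years_to_depletion:.1f} years")
```

Running the model reproduces the figures in the text, and it also makes the sensitivity obvious: doubling the notification rate to 1% of uploads, for example, adds another $540,000 per year in processing cost alone.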
Competitive Effects from Barriers to Entry
Mandated matching and staydown create economies of scale: the bigger your user base, the more you can amortize expensive filters and trust‑and‑safety teams; the more notices you process, the more automation you can justify. Smaller competitors can’t. The result is a structural barrier to entry and likely market exit among community websites, forums, and other smaller sites carrying user-generated content. A notice‑and‑staydown design will predictably drive over‑removal and disproportionate compliance burdens on small firms.
The bill also sweeps in tools and services “primarily designed” to produce a digital replica of a specifically identified individual—exposure that will chill startups offering synthetic voice, dubbing, accessibility, and creative tools unless they invest in costly gating, licensing, and provenance features. That’s a significant compliance hurdle for early-stage companies.
Costs and Harms for Everyday Americans
The same mechanics that expedite rights-holder requests also maximize friction for lawful users:
- Speech comes down first: Satire, commentary, documentary uses, and other lawful expression can be caught in the net; the end user’s restoration path runs through federal court. That’s an unrealistic remedy for most people, so suppressed speech often stays suppressed.
- Anonymity erodes: The clerk‑issued subpoena route makes it easier to unmask speakers based on a notice and a declaration, not an adversarial hearing. The impacts to political speech and whistleblowing are likely to be deleterious.
- Abuse is cheap; defense is costly: We’ve seen at scale that automated or templated claims are easy to send and rarely contested by uploaders, even when uploaders ultimately prevail. A system that presumes notices are right, requires staydown, and offers no routine counter‑notice will be used to suppress or demonetize opponents’ content. Civil‑society analysis of this bill warns precisely about entrenching filters and facilitating takedown abuse. Experience under the DMCA, which as noted has more safeguards than NO FAKES, is telling: for one company, “Over 6% of videos requested for removal through the public webform in 2024 were the subject of abusive copyright removal requests.” NO FAKES may face an even higher rate, as NO FAKES pushes digital services to lean more strongly toward automated takedown without likely restoration of content than under the DMCA.
The Bottom Line
The NO FAKES Act is a high‑cost “something” that can be done in response to concerns about digital replicas that generates many more problems than it solves. By federalizing a new likeness right and coupling it to notice-and-staydown requirements, aggressive statutory damages, and easy unmasking, it creates millions in new annual costs for digital services, both small and large, which favors incumbents and creates barriers to entry. NO FAKES also encourages over‑removal of content, and incentivizes strategic use of bad-faith notices against ordinary users to suppress legitimate speech such as satire—all while giving ordinary speakers an impractical remedy requiring the costs of filing a federal lawsuit within 14 days of content takedown.