The U.S. Supreme Court just made it much harder to hold at least some internet intermediaries liable for what their users do. And in the process, it may have made key statutory safe harbors largely irrelevant.
The Court’s unanimous reversal of the billion-dollar copyright verdict against Cox Communications has drawn predictable headlines. Some commentators cast it as a reprieve for “mere conduit” internet service providers (ISPs) from overzealous copyright enforcement. At a doctrinal level, that’s at least directionally right: an ISP that provides undifferentiated internet access does not incur secondary copyright liability simply because some subscribers use that connection to pirate content.
But the decision’s real significance likely lies elsewhere, beyond the immediate holding, in a more consequential question: how courts calibrate legal protections for online platforms and their management of user-generated content.
That shift deserves closer attention. It signals a growing judicial tendency to make longstanding immunities beside the point: Cox fits into an emerging line of cases, including Twitter v. Taamneh and Gonzalez v. Google, that render statutory safe harbors decorative by narrowing the scope of background secondary liability. The implications reach directly to issues we, along with Kristian Stout, explored in “Who Moderates the Moderators?: A Law & Economics Approach to Holding Online Platforms Accountable Without Destroying the Internet.”
The Court Draws the Line at Intent
Cox Communications serves roughly 6 million subscribers. Over about two years, it received more than 163,000 infringement notices identifying subscriber IP addresses linked to piracy. It terminated just 32 accounts. (For context, during that same period, Cox terminated “hundreds of thousands of subscribers for nonpayment.”)
Justice Clarence Thomas, writing for seven justices (with Justices Sonia Sotomayor and Ketanji Brown Jackson concurring separately), held that contributory copyright liability turns on intent. Plaintiffs must show either inducement or the provision of a service tailored to infringement. Mere knowledge that users infringe does not suffice:
The provider of a service is contributorily liable for the user’s infringement only if it intended that the provided service be used for infringement. The intent required for contributory liability can be shown only if the party . . . induces infringement [by] actively encourag[ing] infringement through specific acts. . . . [or] if [a service] is “not capable of ‘substantial’ or ‘commercially significant’ noninfringing uses.” (citing Grokster and Sony) (Slip op. at 7).
This Court has repeatedly made clear that mere knowledge that a service will be used to infringe is insufficient to establish the required intent to infringe. (Slip op. at 8).
Given the scale of lawful uses for internet access—and the absence of any evidence that Cox encouraged piracy—the majority treated the case as straightforward.
As a matter of copyright law, the holding is significant but relatively narrow. The Recording Industry Association of America (RIAA) called the decision “disappointing,” while emphasizing its limited scope: it applies only to contributory-infringement claims against defendants that do not themselves copy, host, distribute, or publish infringing material. Services that host or distribute infringing content remain exposed under Grokster’s inducement theory, vicarious liability (not at issue here), and the Digital Millennium Copyright Act’s (DMCA) notice-and-takedown regime under § 512(c). And in practice, the labels’ primary enforcement targets are not conduit ISPs, but platforms and services that more directly facilitate infringement.
A Safe Harbor With No Storm
Section 512(i)(1)(A) of the DMCA offers ISPs a deal: implement a policy to terminate repeat infringers “in appropriate circumstances,” and receive immunity from secondary copyright liability. Congress structured this as a bargain—conditional immunity in exchange for demonstrably reasonable behavior.
Cox effectively wipes out one side of that bargain. If ISPs face no secondary liability to begin with—because internet access is a general-purpose service with substantial noninfringing uses—then the safe harbor has nothing to shield. It becomes a phantom defense to a nonexistent claim—a shield without a sword.
Justice Sotomayor makes the point directly in her concurrence, and Cox’s own counsel conceded as much at oral argument:
The majority’s decision thus permits ISPs to sell an internet connection to every single infringer who wants one without fear of liability and without lifting a finger to prevent infringement. It also means that Cox is free to abandon its current policy of responding to copyright infringement. As Cox’s counsel conceded at oral argument, under the rule the majority adopts today, the safe harbor provision will not “d[o] anything at all” going forward. Congress did not enact the safe harbor just so that this Court could eviscerate it. (Slip op., Sotomayor, J., concurring in judgment, at 7).
Once the out-of-harbor state carries no cost, the harbor stops functioning as a harbor. It instead operates as a subsidy for indifference.
The majority is remarkably unbothered by this. Justice Thomas dispatches the argument in two brisk paragraphs, emphasizing that the DMCA creates defenses—not liability—and that failing to qualify for the safe harbor does not count against a defendant who can show its conduct is not infringing.
Finally, Sony argues that the DMCA safe harbor would have no effect if Internet service providers are not liable for providing Internet service to known infringers. . . . Sony argues that Congress must have enacted the DMCA on the presumption that Internet service providers could be held liable in cases such as these.
Sony overreads the DMCA. Sony does not contend that the DMCA expressly imposes liability for Internet service providers who serve known infringers. It does not. The DMCA merely creates new defenses from liability for such providers. And, the DMCA made clear that failure to comply with the safe-harbor rules “shall not bear adversely upon . . . a defense by the service provider that the service provider’s conduct is not infringing.” (Slip op. at 10).
From Carrots to Blank Checks
Cox spotlights a familiar problem: an exclusive focus on intent risks foreclosing liability when an intermediary sits in the best position to mitigate harm—what law & economics calls the least-cost avoider.
We have seen this before in modern Section 230 jurisprudence. In “Who Moderates the Moderators?,” we argued that early internet law worked because it relied on conditioned immunity. Whether under Section 512 of the DMCA or the original conception of Section 230, immunity operated as a carrot—encouraging socially beneficial behavior, calibrated to costs and benefits:
The animating principle behind Section 230 was always to protect platforms from legal liability for their own efforts to deter undesirable online content… The relevant question attending Section 230 reforms that encourage platforms to engage in more moderation is not whether this will deter some legal/harmless content (it will), but whether the marginal increase in the amount of legal/harmless content deterred is warranted. (pp. 35-36)
But courts have increasingly treated Section 230 as an absolute shield, untethered from a platform’s capacity to mitigate harm. That move removes the stick of potential liability. Cox does something similar to the DMCA safe harbor: it converts conditional immunity into unconditional immunity by eliminating the underlying threat that gives the condition its force.
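The underlying calculus is simple to state. As a stylized sketch (our notation; nothing like it appears in the paper or the opinion), let $H(m)$ denote the expected harm from illegal content that survives at moderation intensity $m$, and $C(m)$ the collateral cost of lawful content wrongly deterred at that intensity. The efficient level of moderation $m^*$ solves

\[ \min_m \; \big[ H(m) + C(m) \big] \quad\Longrightarrow\quad -H'(m^*) = C'(m^*), \]

that is, moderation should expand until the marginal harm avoided just equals the marginal collateral censorship imposed. Conditional immunity is a device for pushing platforms toward $m^*$; unconditional immunity leaves them free, so far as legal incentives are concerned, to sit well below it.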
The Court’s focus on specific intent, rather than knowledge, follows directly from its 2023 ruling in Twitter v. Taamneh. There, the Court held that platforms do not aid and abet terrorism simply by offering a generally available service, even if they know terrorists use it.
The statutes differ—Taamneh arose under the Antiterrorism Act, Cox under copyright law—but the logic converges. In both, the Court permits intermediaries to take no meaningful action despite specific, actionable knowledge of harm, even where they might be uniquely positioned to reduce it. That result sits uneasily with the law & economics of intermediary liability. As we explained:
[T]he law has long wrestled with how to frame the legal duties owed by a service provider to its customers and the public, while also policing the bad acts of third parties. (p. 104)
[T]he common law has developed several standards of care for intermediaries in situations where the intermediary either otherwise prevents or reduces the direct enforcement of the law, or else where the intermediary is the least-cost avoider of harm, such that imposing upon it a duty of care results in the efficient level of precautions and activity to mitigate harm. (p. 106).
Raising the liability bar to specific intent or active inducement carries a predictable consequence: it effectively grants intermediaries a judicially conferred right to ignore particularized knowledge of ongoing harm on their services. That sits in tension with the standard economic rationale for imposing duties on actors who can prevent harm at relatively low cost.
To be sure, Taamneh reached a defensible result. The Court held that Twitter was not liable for aiding and abetting terrorism based on algorithmic recommendations alone. It also left the door open to liability where a platform “consciously and selectively” promotes terrorist content. As one of us noted at the time:
[T]his language could suggest that, as long as the algorithms are essentially “neutral tools” (to use the language of Roommates.com and its progeny), social-media platforms are immune for third-party speech that they incidentally promote. But if they design their algorithmic recommendations in such a way that suggests the platforms “consciously and selectively” promote illegal content, then they could lose immunity.
Perhaps algorithmic recommendations are, in fact, suitably neutral tools to merit immunity under Section 230 and to avoid liability under aiding-and-abetting statutes. The deeper problem lies not in the outcome, but in the reasoning. The Court sidesteps two central questions: whether an intermediary can monitor and control harms on its platform, and whether imposing liability would generate excessive collateral censorship.
We made this point before the Court decided Taamneh:
Taamneh presents a complex question of intermediary liability generally that goes beyond the bounds of a (relatively) simpler Section 230 analysis. . . . [I]ntermediary liability generally cannot be predicated on the mere existence of harmful or illegal content on an online platform that could, conceivably, have been prevented by some action by the platform or other intermediary.
The specific statute may impose other limits (like the “knowing” requirement in the Antiterrorism Act), but intermediary liability makes sense only when a particular intermediary defendant is positioned to control (and thus remedy) the bad conduct in question, and when imposing liability would cause the intermediary to act in such a way that the benefits of its conduct in deterring harm outweigh the costs of impeding its normal functioning as an intermediary. (emphasis added).
In other words, one might conclude—as we did—that Twitter had the technical capacity to monitor and control harmful content, but that imposing liability would generate excessive costs in the form of collateral censorship. The Court, however, largely bypassed that inquiry. In doing so, it risks collapsing the distinction between a dumb pipe and a sophisticated recommendation engine, making an intermediary’s ability to monitor and control increasingly irrelevant to the liability analysis.
Cox follows the same path. The majority centers intent and dismisses knowledge as insufficient. From a law & economics perspective, that is incomplete. Knowledge alone may not justify liability, but it may still identify an intermediary as the least-cost avoider—especially where monitoring costs are low and direct enforcement against end users is impractical.
The record in Cox underscores the point. Rightsholders sent Cox infringement notices tied to specific IP addresses, and Cox terminated at least some accounts. That suggests a meaningful—if imperfect—capacity to monitor and control user behavior. Given the difficulty of pursuing individual end users, that capacity matters.
The remaining question, then, is the one the Court largely leaves untouched: whether imposing contributory liability on Cox would generate costs that outweigh the benefits of reducing infringement.
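That question has a standard law & economics form. In stylized Learned Hand terms (an illustration we supply; the Court frames none of this economically), imposing a duty on an intermediary is efficient when its burden of precaution $B$ falls short of the expected harm the precaution avoids:

\[ B < P \cdot L, \]

where $P$ is the probability of the harm and $L$ its magnitude. The intermediary is the least-cost avoider when its $B$ is lower than anyone else’s, including the cost of enforcing directly against end users. The rightsholders’ theory was essentially that Cox’s $B$ was small: it already received the notices, and it already ran a termination apparatus it used against hundreds of thousands of nonpaying subscribers. The majority’s intent rule forecloses liability without ever asking whether that comparison holds.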
The Unlikely Ally Who Takes Incentives Seriously
There is an irony here worth pausing over. The opinion in Cox that most closely tracks a law & economics approach to intermediary incentives—the one that takes seriously how conditional immunity shapes behavior—comes not from Justice Thomas or Justice Neil Gorsuch, but from Justice Sotomayor.
Her concurrence (joined by Justice Jackson) reaches the same bottom line, that Cox prevails, but through a framework that preserves the possibility of fault-based secondary liability grounded in common-law aiding and abetting. Drawing on Taamneh and Smith & Wesson Brands v. Estados Unidos Mexicanos, she articulates a standard requiring conscious participation in wrongdoing: an affirmative act coupled with intent to help the misconduct succeed.
Cox still wins under that test. Plaintiffs could not show that Cox intended to aid specific acts of infringement. A notice ties infringement to an IP address, not to a particular individual in a household, coffee shop, or dormitory. Without more granular knowledge—and without evidence of “pervasive, systemic, and culpable assistance”—plaintiffs established, at most, indifference:
Without proof that Cox knew more about individual instances of infringement, and without evidence of “pervasive, systemic, and culpable assistance” needed to support a more generalized theory of liability, plaintiffs have at most shown that Cox was “indifferent” to infringement conducted via the connections it sells. Mere indifference, however, is not enough for aiding and abetting liability to attach. (Slip op., Sotomayor, J., concurring in judgment, at 12).
What matters most, though, is not the outcome. It is the structure of the analysis. Sotomayor preserves what the majority discards: the incentive framework Congress built into the DMCA safe harbor.
Her reasoning closely tracks Reinier Kraakman’s gatekeeper-liability model. Conditional immunity works only if the condition has teeth. As she puts it:
The majority’s new rule completely upends that balance and consigns the safe harbor provision to obsolescence. . . . After today, ISPs no longer face any realistic probability of secondary liability for copyright infringement, regardless of whether they take steps to address infringement on their networks and regardless of what they know about their users’ activity. (Slip op., Sotomayor, J., concurring in judgment, at 9).
Under the majority’s rule, an ISP faces no liability even if a customer walks in and announces he needs a new provider because the last one cut him off after years of piracy. The safe harbor’s logic—behave reasonably and receive protection—depends on the inverse: unreasonable conduct must carry risk. By foreclosing knowledge-based secondary liability for general-purpose service providers, the majority eliminates that risk—and with it, the incentive the safe harbor was designed to create.
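The incentive mechanics fit in one line. In a stylized model in the spirit of Kraakman’s gatekeeper framework (our simplification, not Sotomayor’s), an ISP operates a repeat-infringer policy only if

\[ c < p \cdot D, \]

where $c$ is the cost of running the policy, $p$ is the probability of secondary liability absent the safe harbor, and $D$ is the expected damages. The billion-dollar verdict below made $p \cdot D$ enormous, so compliance paid. The majority’s rule drives $p$ to roughly zero for conduit providers regardless of conduct, so the inequality fails for any positive $c$, and the rational response is the one Sotomayor predicts: abandon the policy.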
It is, to put it mildly, an unusual moment. The most rigorous economic analysis of intermediary incentives in a Supreme Court copyright opinion comes from the Court’s left flank, not from the justices more typically associated with economic reasoning. But Sotomayor takes the core insight seriously: conditional immunity aligns private incentives with social welfare, and stripping away the condition while preserving the immunity breaks the mechanism entirely.
Not All Intermediaries Are Created Equal
To be fair, several distinctions caution against reading too much from Cox into the Section 230 context.
Cox concerns a conduit provider—an entity offering undifferentiated internet access. Cox does not host, curate, recommend, or monetize specific content. Content platforms operate differently. They maintain account-level data, deploy content-matching algorithms, exercise editorial judgment through recommendation engines, and receive particularized complaints tied to identifiable users.
That difference matters. As we have argued, intermediary liability makes economic sense only when the intermediary can monitor and control the relevant conduct, and when the benefits of deterrence outweigh the costs of disrupting the service. Cox may fail that test. A platform that hosts and curates content—and knows exactly who uploaded a defamatory post—may not. The informational gap that defeated liability in Cox (and in Taamneh and Smith & Wesson) narrows considerably for platforms with granular user-level data.
Our proposed approach to Section 230 reflects that distinction. In “Who Moderates the Moderators?,” we argue for conditioning immunity on a platform’s own conduct: specifically, whether it exercised reasonable care. That is not a derivative claim based on a user’s underlying offense. It is a straightforward negligence standard, closer to premises-liability doctrine than to the contributory-infringement theory at issue in Cox.
To avoid death-by-10,000-duck-bites litigation, we also propose procedural safeguards: heightened pleading standards, a “certified answer” mechanism allowing platforms that follow industry-developed best practices to secure early dismissal, and agency oversight (with a sunset) to guide the development of those standards.
Notably, the Motion Picture Association’s (MPA) amicus brief in Cox sketches a similar framework: graduated responses, proportionate measures, and conditional protection. In substance, the MPA argued for liability when ISPs fail to take reasonable, proportionate steps in response to known infringement—a duty-of-care standard. The Court rejected that premise by eliminating the underlying liability. But the framework itself remains analytically sound and readily generalizable beyond copyright.
The Genie, the Bottle, and What Comes Next
We should not overstate the holding. Cox may be a defensible application of existing secondary-liability principles to a passive conduit provider on these facts. From a social-welfare perspective, one could reasonably worry about a regime that imposes billions in liability on an ISP because a small fraction of its 6 million subscribers downloaded music. Still, Justice Sotomayor is right to flag the majority’s categorical rejection of knowledge-based theories.
The more interesting question is what comes next. The RIAA has already urged “policymakers” to “look closely at the impact of this ruling”—a thinly veiled call for legislative intervention. The MPA’s amicus brief, meanwhile, outlined a familiar framework: conditioned immunity, proportionate responses, and duties of care.
Cox may also complicate any lingering faith in the common-law process to calibrate intermediary liability in digital markets. The Court has now struggled with this problem across multiple domains: Cox in copyright, Taamneh and Gonzalez v. Google under the Antiterrorism Act, and a long line of Section 230 cases in the platform-liability context. The pattern is hard to miss. Courts preserve the formal structure of liability while hollowing out its practical effect.
Perhaps the genie does not go back in the bottle without legislative action. Unless policymakers restore the conditioned nature of safe harbors, we may continue to operate in a system where the machinery of intermediary accountability remains formally intact but functionally inert. (Of course, recent legislative proposals suggest that any statutory fix could easily make things worse.)
At a minimum, the logic of Cox unsettles our first reform principle:
First and foremost, we believe that Section 230(c)(1)’s intermediary-liability protections for illegal or tortious conduct by third parties can and should be conditioned on taking reasonable steps to curb such conduct, subject to procedural constraints that will prevent a tide of unmeritorious litigation. (p. 106).
And, more concretely:
[O]nline platforms should not face liability for communication torts arising out of user-generated content unless they fail to remove content they knew or should have known was defamatory. . . . Once it has such knowledge, however, it should have an obligation to make reasonable efforts to remove and prevent republication of the defamatory material. This is an extension of the common law rule for offline distributors of tortious content, keyed (again) to the relevant distinctions between offline and online intermediaries cutting both for and against heightened liability. (p. 110).
After Cox, it is no longer clear that this approach remains viable. That is unfortunate.
