
The Hidden Premise: Smuggling Paternalism Through the Back Door

by Staff Reporter

A familiar pattern has taken hold in platform regulation—and in the academic and policy commentary that surrounds it. Critics spot a real phenomenon, recast it as market failure, and then press for intervention that far outstrips what the evidence can support. The result: arguments that read as persuasive but collapse under scrutiny. They conflate distinct problems, gloss over the lack of a limiting principle, and land on remedies that are unadministrable, counterproductive, or both.

A recent Economist essay by a former competition lawyer offers a clean example. Writing in the magazine’s By Invitation section, Marie Potel-Saville argues that digital platforms engage in “cognitive exploitation” through infinite scroll, dark patterns, and dopaminergic feedback loops—practices that, in her view, erode the conditions necessary for functioning markets. Her proposed fix would flip the burden of proof, forcing platforms to show they are “not predatory by design” before deployment. The piece is polished and clearly motivated by real concern. It also neatly illustrates the problem it sets out to diagnose.

When Analogies Do All the Work—and None of the Proof

This style of argument usually rests on a familiar rhetorical move: analogize the conduct at issue to a well-established legal category, then let that category do the normative heavy lifting. The analogy makes intervention feel legally natural, even when it does not hold up. Neo-Brandeisians often reach for infrastructure comparisons—treating e-commerce platforms like Amazon as if they were railways, telephone lines, or electricity grids. The comparison sounds intuitive. It is also deeply flawed.

In the Economist essay, the chosen analogy comes from securities regulation. When a trader manipulates stock prices, the argument goes, the law treats it as structural harm to the market: a corrupted price no longer conveys reliable information. Cognitive exploitation, on this view, works the same way. If platforms “manufacture” the preferences of billions of users, then consumer signals lose their informational value.

It is a tidy parallel. It is also technically unsound—and the problem runs deeper than a loose comparison.

Securities law can identify manipulation because it has an objective benchmark: the unmanipulated price. Courts and regulators can specify a counterfactual and test against it. “Manufactured preferences” offer no such anchor. There is no observable, measurable “authentic preference” that a regulator can point to or that a court can adjudicate against. What would an unmanipulated preference for social media consumption look like? How would anyone identify it? The analogy does not say, because it cannot. Without a workable counterfactual, the framework collapses at the threshold.

This flaw shows up across the genre. The Digital Markets Act (DMA) starts from a similar premise: that platform “gatekeeper” power mirrors traditional bottleneck monopoly, and so justifies ex ante obligations without proof of harm in individual cases. The German Federal Cartel Office (Bundeskartellamt) takes a comparable tack in its Amazon decision, treating price-prominence effects as if they were exclusionary conduct—without establishing the competitive harm that analogy requires. In each instance, the analogy carries the argument where the evidence does not. 

Deception Isn’t the Same as Desire

The deeper problem is a conflation that runs through the Economist essay—and much of the broader literature. It treats dark patterns and engagement optimization as the same phenomenon when, in analytically relevant respects, they are opposites.

Dark patterns work against user preferences. Hidden unsubscribe buttons, fake urgency timers, deliberately obscured cancellation flows—each steers users toward outcomes they did not choose and would not endorse if the interface were honest. The Federal Trade Commission’s (FTC) case against Amazon, which resulted in a $2.5 billion settlement in September 2025, targets this kind of conduct. It focuses on interface design—specifically, the “Iliad Flow,” a four-page, six-click cancellation labyrinth that impedes a decision the user is actively trying to make. The multistate lawsuit against Meta Platforms Inc. (Meta) by 42 attorneys general raises related issues, particularly around disclosure and consent for minors. These cases may rise or fall on the merits, and the legal standards remain unsettled. But they at least point to a coherent target: conduct that allegedly deceives users about what they are getting.

Engagement optimization moves in the opposite direction. A recommendation algorithm that learns, in real time, what a user watches, clicks, and dwells on—and then delivers more of it—matches preferences with a level of precision no earlier market mechanism could achieve. Users do not stay on social media because they have been tricked. They return, often for hours, across demographics and income levels, despite abundant alternatives, because the product delivers what they demonstrably want.
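To see why critics and defenders describe the same mechanism so differently, it helps to make the feedback loop concrete. What follows is a deliberately crude sketch, in Python, of an epsilon-greedy bandit that serves more of whatever a user dwells on. Everything in it (the category names, the dwell-time signal, the ToyRecommender class) is an illustrative assumption, not a description of any platform’s actual system.

```python
import random

# Toy sketch of an engagement-optimization loop: an epsilon-greedy
# bandit that serves more of whatever a user dwells on. Illustrative
# only; the categories and signals are assumptions, not any
# platform's actual system.

CATEGORIES = ["sports", "cooking", "politics", "pets"]

class ToyRecommender:
    def __init__(self, epsilon=0.1):
        self.epsilon = epsilon                           # exploration rate
        self.total_dwell = {c: 0.0 for c in CATEGORIES}  # summed dwell time
        self.impressions = {c: 1 for c in CATEGORIES}    # start at 1 to avoid /0

    def pick(self):
        # Occasionally explore; otherwise serve the category with the
        # highest average observed dwell time -- the revealed preference.
        if random.random() < self.epsilon:
            return random.choice(CATEGORIES)
        return max(CATEGORIES,
                   key=lambda c: self.total_dwell[c] / self.impressions[c])

    def observe(self, category, dwell_seconds):
        # Learn from behavior alone; no stated preference is ever consulted.
        self.total_dwell[category] += dwell_seconds
        self.impressions[category] += 1
```

Note what the sketch does and does not do. Nothing in the loop misrepresents anything to the user; it simply converges on the content the user’s own behavior rewards. That is the sense in which engagement optimization is preference matching rather than trickery.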

Lumping these phenomena together under the label “cognitive exploitation,” as the Economist essay does, lets a defensible claim about deception do the work for a much more contestable claim about efficient engagement. That move is rhetorical, not analytical. It posits a structural unfairness across the entire platform and hands regulators a mandate far broader than the specific harms could justify.

There is a further irony—one that warrants its own treatment. The platforms drawing the most scrutiny often have the strongest internal incentives to limit manipulative design. Facebook and Amazon depend on sustained user engagement and trust; dark patterns that alienate users undermine both advertising and subscription models. The more aggressive dark-pattern ecosystem—fake close buttons, deceptive download prompts, cookie-consent walls designed to exhaust rather than inform—thrives in the long tail of the web, among publishers and ad networks with no ongoing relationship with users and no reputational stake. If the goal is to curb deceptive design at scale, current enforcement priorities look almost exactly backward.

From Choice to Paternalism

Once you separate the two concepts, the engagement-optimization claim runs into a problem its proponents rarely face head-on: revealed preferences.

Users stay. They return. What critics call manufactured compulsion looks, from a standard welfare perspective, like a product doing its job—matching supply to demand with a level of efficiency earlier markets could not reach. To reframe that as exploitation, you have to argue that revealed preferences count for less than some other set of preferences that should replace them. Proponents rarely say what those substitute preferences are, let alone defend why they should override observed behavior.

What they are reaching for—without quite saying so—is the distinction between first- and second-order preferences, i.e., between what people want in the moment and what they would endorse on reflection. That is a serious idea with a substantial literature, from Harry Frankfurt’s work on second-order volitions to Cass Sunstein and Richard Thaler’s “nudge” framework, which at least states its normative commitments openly. But this is political philosophy, not competition or consumer-protection law. It requires answers to hard questions: Which preferences count? Who decides? On what basis should a regulator’s view of reflective endorsement displace what consumers actually do? Dressing the argument in market-failure language does not answer those questions; it sidesteps them.

This is the same move I have identified elsewhere as a hallmark of anti-consumer-welfare antitrust: substituting regulatory preferences for consumer preferences, justified by an abstract appeal to what consumers would want if they wanted differently. Whether the vehicle is neo-Brandeisian antitrust, the DMA’s choice-architecture mandates, or “cognitive exploitation” theory, the logic is the same. Consumers generate outcomes critics dislike—platform dominance, high engagement, preferences for convenience and integration—and the framework steps in to override them, with “market failure” standing in for what is, at bottom, a dispute about values.

A Rule Without Edges

Arguments like this share a second structural flaw: they lack any limiting principle.

The Economist essay’s test—design that engineers behavior users might not endorse on reflection—does not describe anything unique to digital platforms. Loyalty programs create switching costs. Scarcity cues create urgency. Aspirational advertising creates desire. Subscription defaults exploit inertia. Loss-leader pricing skews comparison. If dopaminergic feedback loops are disqualifying, why not manufactured brand attachment? Why treat the sensory engineering of a restaurant, mall, or store as categorically different from the algorithmic engineering of a social media feed? The line never appears, because the framework offers no principled way to draw one.

This is not just an academic concern. Courts and regulators need administrable standards—rules with enough substance to produce predictable outcomes and to constrain enforcement discretion. “Predatory by design” offers neither. Who determines compliance? Against what baseline? Using what method? The vagueness hands regulators sweeping discretion without guidance on how to use it, creating what Brian Albrecht and Erik Hovenkamp have described as a wishing well “into which one may peer and find nearly anything he wishes.” Far from checking platform power, a standard like this injects uncertainty that hits smaller firms hardest while giving regulators a tool they can wield selectively against disfavored companies.

The Startup Filter Disguised as Safety

The proposed remedy adds a structural problem to the analytical one. Precautionary, pre-deployment review—i.e., requiring platforms to prove they are not harmful before launch—is a standard typically reserved for irreversible, catastrophic risks: pharmaceuticals, nuclear plants, medical devices. Extending that framework to consumer software would impose compliance costs that only established incumbents can bear. A startup building a new social app cannot finance the legal and regulatory apparatus needed to clear a “not predatory by design” standard before it even reaches users. Meta can. Google can.

The predictable result is higher barriers to entry—high enough to keep out the very challengers such rules claim to protect. The policy would entrench the incumbents it purports to discipline. We have seen this dynamic before. The General Data Protection Regulation (GDPR), for example, has imposed compliance burdens that fall hardest on smaller firms, often foreclosing the competitors most likely to disrupt larger platforms. 

The progressive framing should not obscure the bottom line. This is incumbency protection by another name.

When Getting What You Want Is the Harm

There is a more candid version of this argument—and it is worth stating directly.

The concern is not that markets are failing. It is that they are working—efficiently allocating attention to content that humans, given their cognitive architecture, tend to overconsume. On this view, the problem is not deception. Users are getting exactly what they want. The problem is that what they want may be bad for them in ways they do not fully appreciate. That concern has familiar roots. It echoes long-running debates over sugar, gambling, alcohol, and cheap calories—markets where the interaction of human preferences and efficient supply produces outcomes critics find troubling. None has been “solved” by regulating product design.

That is a legitimate debate. But it is not a competition or consumer-protection argument. It is about the limits of preference satisfaction as a measure of welfare, about paternalism and autonomy, and about whether—and when—second-order preferences or “human flourishing” (as defined by regulators) should override first-order ones. Those are questions of political philosophy and political economy. Labeling them “market failure” does not make them so. It just obscures the normative premises doing the work. 

If the claim is that human wants can conflict with human flourishing—and that platforms efficiently satisfying those wants create a problem—then the argument should say so, and defend it on those terms. That would at least let readers evaluate whether the proposed remedy—regulatory override of consumer choice, implemented through discretionary design standards and precautionary, pre-deployment review—is proportionate to the concern, and whether it fits within the legal tradition invoked to justify it. Or, for that matter, within a liberal polity at all.

Call the Problem by Its Name

The preference-substitution problem in platform regulation is not, at bottom, about platforms. It is about analytical honesty. Conflating deceptive design with efficient engagement, leaning on analogies that collapse on inspection, skipping any limiting principle, ignoring entry effects—these are features of a style of argument, not defects in any single proposal.

Where dark patterns actually exist—where design works against what users are trying to do—existing consumer-protection frameworks can respond, if applied with care. The Amazon and Meta lawsuits show as much, whatever their ultimate merits. That is no small point.

What those frameworks cannot do—and what no legal regime can do without stating its premises—is condemn preference satisfaction at scale because regulators dislike the preferences being satisfied. Every proposal in this space ultimately turns on a simple question, and most try not to answer it: are digital markets failing—or succeeding too well?
