
The Myth of the Unwanted Internet

by Staff Reporter

In a recent podcast, New York Times journalist Ezra Klein hosted lawyer Tim Wu and writer Cory Doctorow for a conversation titled “We Didn’t Ask for This Internet.” They ran through a familiar bill of indictment against the modern internet: surveillance, manipulation, algorithmic pricing, the squeezing of creators, spam, fraud, and the dehumanization of work. It was an engaging discussion among three thoughtful people. It was also, in important respects, wrong.

I don’t claim expertise across every issue they covered—and, unlike them, I won’t pretend otherwise. But I have spent considerable time studying the evolution of online consumer protection: how trust emerged in a radically new environment, how entrepreneurs built the mechanisms that made online commerce possible, and how those mechanisms continue to evolve in response to AI. On those questions, the Klein-Wu-Doctorow narrative gets the story—and its causes—wrong.

We Built This Internet

The podcast’s central conceit—that “we didn’t ask for this internet”—reflects either a category error or a simple mistake.

Start with the category error. “We” (society?) never made a substantive, collective request for a particular kind of internet. At most, our elected representatives made a high-level choice in the early 1990s to move the internet out of government and academic control and open it to commerce. Congress amended the National Science Foundation’s (NSF) statutory authority in 1992 to permit “additional purposes.” Three years later, the NSF shut down NSFNET, and the acceptable-use policy that had limited commercial activity fell away. Everything else followed.

If “asking for” the internet means revealed preferences—what users chose through their behavior—then today’s internet is, quite literally, what we asked for. It emerged from the actions of billions of individuals, including a small share who try to exploit others, and a set of entrepreneurs who figured out how to counter that behavior by building tools that protect users from malicious actors.

How the Internet Solved for Trust

Klein, Wu, and Doctorow describe the modern internet as if it sprang from a conspiracy between large tech firms and a passive government. The reality is more interesting—and more encouraging.

Commerce depends on trust. Each party must trust the counterparty. In person, that trust grows through repetition, reputation, reliable identification, and legal enforcement. Online, those mechanisms appear only in fragments. Entrepreneurs filled the gap.

Start with identity and security. Taher Elgamal and his team at Netscape developed SSL, using public-key infrastructure to authenticate websites through digital certificates and asymmetric cryptography. That small padlock in your browser bar marks one of the most consequential innovations in the history of commerce.
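To make the padlock concrete, here is a minimal sketch (illustrative, not Netscape's original design) of the trust check a modern TLS client performs before any commerce happens: the connection is refused unless the server presents a certificate chain that traces back to a trusted authority and matches the claimed hostname.

```python
import ssl

# Minimal sketch of what the browser padlock represents: a TLS client
# context that authenticates the server before any data is exchanged.
def make_verifying_context() -> ssl.SSLContext:
    context = ssl.create_default_context()  # loads the system CA trust store
    # These defaults are the whole point: identity is verified, not assumed.
    assert context.verify_mode == ssl.CERT_REQUIRED
    assert context.check_hostname
    return context

# Usage (assumes network access; "example.com" is an illustrative host):
# import socket
# with socket.create_connection(("example.com", 443)) as sock:
#     with make_verifying_context().wrap_socket(
#         sock, server_hostname="example.com"
#     ) as tls:
#         print(tls.getpeercert()["subject"])  # the authenticated identity
```

The design choice worth noticing is that verification is the default: a handshake against an untrusted or mismatched certificate raises an error rather than silently proceeding.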

Then came user ratings. Pierre Omidyar’s feedback forum on eBay, launched in 1996, created a decentralized reputation system that allowed strangers to transact with confidence. Variants of that system now underpin Amazon, Uber, Airbnb, and countless other platforms.
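The core of such a system is simple enough to sketch. The toy below is illustrative only (the class name and scoring are assumptions, not eBay's actual algorithm): each completed transaction lets the counterparty record positive or negative feedback, and strangers rely on the aggregate share of positive ratings.

```python
from collections import defaultdict

# Toy decentralized reputation system in the spirit of eBay's 1996
# feedback forum (illustrative, not the real implementation).
class FeedbackForum:
    def __init__(self) -> None:
        self.positive = defaultdict(int)  # seller -> positive ratings
        self.total = defaultdict(int)     # seller -> all ratings

    def leave_feedback(self, seller: str, is_positive: bool) -> None:
        self.total[seller] += 1
        if is_positive:
            self.positive[seller] += 1

    def positive_share(self, seller: str) -> float:
        # The public signal that lets a stranger decide whether to transact.
        n = self.total[seller]
        return self.positive[seller] / n if n else 0.0

forum = FeedbackForum()
forum.leave_feedback("seller_a", True)
forum.leave_feedback("seller_a", True)
forum.leave_feedback("seller_a", False)
print(forum.positive_share("seller_a"))  # 2 of 3 ratings positive
```

What makes the mechanism work is not the arithmetic but the incentive it creates: a seller's accumulated score is an asset worth protecting, which disciplines behavior without any central licensing authority.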

Payment systems followed. Credit-card networks adapted their rules for “card-not-present” transactions, and intermediaries like PayPal emerged to stand between buyers and sellers.

Each innovation solved a specific trust problem. None came from a government committee. Entrepreneurs identified real needs and built tools to meet them.

No Data, No Defenses

Here is where Klein, Wu, and Doctorow’s analysis goes most seriously off track. They treat the collection and use of consumer data as purely extractive—as if firms harvest data only for their own benefit, offering nothing in return. That framing ignores the basic economics of multisided markets operating on open, distributed networks.

Consider bots. Since at least 1997—when website operators tried to game AltaVista’s rankings by submitting massive numbers of URLs—malicious bots have posed a persistent threat to the internet’s functioning. The countermeasures that followed—CAPTCHA, reCAPTCHA, hCaptcha, and Cloudflare’s suite of protection tools—all rely on collecting and analyzing user data.

Google’s reCAPTCHA used user responses to improve machine word recognition for Google Books and Maps. hCaptcha funds its privacy-preserving bot detection by selling labeled data to train AI models. Cloudflare, which now handles an estimated 21% of global internet traffic, uses analytics to distinguish humans from bots with minimal friction. In each case, user data serves as the fuel that powers the protection system.
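Even the crudest bot defense illustrates why this data collection is unavoidable. The sketch below (a generic rate limiter, an assumption for illustration, not Cloudflare's actual method) can only distinguish a flood of automated requests from human browsing by recording per-client request timestamps, i.e., user data.

```python
import time
from collections import defaultdict, deque
from typing import Optional

# Toy sliding-window rate limiter: even this minimal bot defense
# requires storing behavioral data about each client.
class RateLimiter:
    def __init__(self, max_requests: int, window_seconds: float) -> None:
        self.max_requests = max_requests
        self.window = window_seconds
        self.history = defaultdict(deque)  # client_id -> recent timestamps

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        recent = self.history[client_id]
        # Forget requests that fell out of the window.
        while recent and now - recent[0] > self.window:
            recent.popleft()
        if len(recent) >= self.max_requests:
            return False  # looks automated: too many requests, too fast
        recent.append(now)
        return True

limiter = RateLimiter(max_requests=3, window_seconds=60.0)
print([limiter.allow("client_1", now=t) for t in (0.0, 1.0, 2.0, 3.0)])
# The fourth request inside the window is refused.
```

Production systems layer on far richer signals (mouse movement, TLS fingerprints, IP reputation), but the trade-off is the same: more data, better discrimination between humans and bots.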

It is hard to see how email spam, website infiltration, distributed denial-of-service (DDoS) attacks, and the many other threats to online commerce could have been addressed without tools that monetize—or otherwise leverage—user data. This is the quid pro quo of open, distributed networks. You can dislike it, but you cannot wish it away without also wishing away the protections it makes possible.

The Internet Has Been Here Before

The most productive part of the Klein-Wu-Doctorow conversation focused on AI. Ironically, the strongest rebuttal to their pessimism lies in the very trust ecosystem they discount.

AI poses three familiar trust problems: Can we trust that content is real? Can websites trust that visitors are human? And can either side trust the counterparty’s identity? These challenges are serious, but not new. They are updated versions of problems entrepreneurs have been solving for three decades.

Start with content authenticity. Digital content credentials—the C2PA standard, backed by Adobe, Google, Meta, Microsoft, OpenAI, and Sony—and digital watermarking tools like Google’s SynthID aim to verify provenance. Adoption remains early, but the pattern looks familiar. SSL began as a niche solution for high-value transactions, then spread as costs fell and benefits became clear. Cloudflare’s free Universal SSL reportedly doubled adoption almost overnight in 2014.

Identity presents a parallel challenge. Deepfake-enabled fraud has accelerated demand for multi-factor authentication, digital identity systems like Estonia’s X-Road architecture, and emerging zero-knowledge proof systems that let users verify claims without disclosing sensitive personal data. The next frontier—trusting AI agents that act on users’ behalf—will likely rely on intermediary-trust models similar to those that enabled online travel agencies: accredited, bonded corporate entities that stand behind the transactions their bots execute.

None of these solutions is perfect. All remain in development. But they are developing—driven by the same entrepreneurial forces that created SSL, eBay’s rating system, CAPTCHA, and Cloudflare.

The Unspecified Better Option

The core problem with the “we didn’t ask for this internet” framing is its premise. It assumes a better alternative existed—and that someone blocked it. But what, exactly, is the counterfactual? A government-designed internet? A network where commerce remained off-limits? An internet without user ratings, without SSL, without the data-driven protections that keep most users safe most of the time?

Tim Wu and Cory Doctorow, in particular, have long advocated more aggressive regulation of technology firms. That approach may make sense in some contexts. But their critique consistently discounts the role of decentralized, market-driven innovation in solving the very problems they highlight. User-rating systems—now continuously refined and verified through AI-based fraud detection—often outperform top-down licensing regimes. Entrepreneurial bot-detection tools adapt faster, and more precisely, than regulatory mandates.

The internet took its current form through the interaction between online protection and the use of consumer data—an interaction driven by the inherently multisided nature of markets and the distributed nature of networks. The resulting trust ecosystem is imperfect. It is also remarkable. And it continues to evolve to meet the demands of the AI era.

The Case for This Internet

We should be grateful—not just for the internet’s enormous benefits today, but for the fact that the trust mechanisms built over the past three decades can continue to evolve to meet the risks of the AI era. That requires preserving the conditions that made this progress possible: enforceable contracts, relatively open markets, and a regulatory environment that does not choke off the entrepreneurs who keep building architectures of trust.

Klein, Wu, and Doctorow are right about one thing: the internet has real problems. But those problems do not reflect a fundamental system failure. And they do not show that “we didn’t ask for this.” We did. 

This is the internet we asked for. On balance, it is rather glorious.
