
AI’s Scientific Ethos and the Moat That Wouldn’t Hold

by Staff Reporter

Google may have built the foundation of the modern AI economy—and then published the instructions.

In 2017, eight researchers across Google’s Brain and Research divisions released a paper titled “Attention Is All You Need.” What followed is now familiar: a technological inflection point, rapid diffusion, and an explosion of competitors building on the same core idea. Less appreciated is the mechanism behind it. This is not just a story about a breakthrough. It is a story about why that breakthrough did not—and perhaps could not—remain proprietary.

That dynamic is the focus of this post. The norms that govern AI research—what I will call the “scientific ethos”—systematically undermine any single firm’s ability to hoard knowledge for long. The transformer is the clearest example.

The Transformer Leaves the Building

The 2017 paper introduced the transformer, a neural-network architecture that replaced sequential processing with self-attention. Self-attention allows a model to interpret a sentence by weighing how each word relates to every other word in the same sentence. Instead of processing tokens one at a time, the model can evaluate the entire sequence simultaneously, focusing on the most relevant relationships to determine meaning. That shift—from step-by-step processing to parallel attention—proved decisive.
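To make that concrete, here is a minimal sketch of scaled dot-product self-attention, the core operation the paper describes, written in plain NumPy. The toy dimensions and weight matrices are illustrative assumptions, not anything drawn from Google’s code.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    # Project every token into query, key, and value space at once.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    # Compare every query against every key: one matrix multiply scores
    # all pairwise token-to-token relationships simultaneously.
    scores = (Q @ K.T) / np.sqrt(K.shape[-1])
    # Softmax turns each row of scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted blend of all value vectors.
    return weights @ V

# Toy example: a 4-token "sentence" with 8-dimensional embeddings.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

The decisive step is the scores line: a single matrix multiplication compares every token with every other token at once, which is what frees the model from step-by-step processing.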

Within a few years, the paper became one of the most cited papers in computer science. More importantly, it supplied the architectural backbone for nearly every major AI system in use today—from OpenAI’s GPT series (the “T” in ChatGPT stands for “transformer”) to Google’s Gemini and Anthropic’s Claude.

It is no exaggeration to say that, without this paper, the generative AI boom would look very different—or would have arrived years later. Some might argue that an equivalent breakthrough would have emerged eventually, or that competitors could have reverse-engineered Google’s models. Neither claim holds up. The transformer was not a visible product that others could disassemble. It was a counterintuitive insight: dispense with recurrence and convolutions in favor of self-attention. Without its publication, outsiders would have had no practical way to understand why Google’s models kept improving.

As Geoffrey Hinton—himself a former Google researcher and widely regarded as the godfather of modern AI—told Wired: “Without transformers I don’t think we’d be here now.” Other architectures, such as state space models or recurrent neural networks, may now rival transformers. But they arrived years later—after the paper’s open publication enabled new entrants that otherwise would not have existed.

That raises a basic question: why did Google publish the paper at all? The company could have treated the research as a trade secret and secured a multi-year competitive advantage. Part of the answer lies in the “scientific ethos”—the shared norms among AI researchers. The field prizes competition and individual achievement, but also transparency and the idea that knowledge should be universal and research pursued for its own sake.

Google’s broader culture reinforces that ethos. The company offers many products, such as Gmail and Maps, for free in their basic versions and often shares research openly. It recently released its Gemma model under a relatively permissive license. Even so, Google did not face an obvious incentive to publish the transformer research.

Employee incentives cut the other way. Google struck an implicit bargain with the researchers it recruited. As the Acquired podcast’s episode on Google’s AI era recounts, the company built its mid-2010s leadership by hiring nearly every major figure in the field: Geoffrey Hinton, Ilya Sutskever, Demis Hassabis, Dario Amodei, and many others. Most came from academia and kept joint appointments. Hinton retained his university position while at Google; Yann LeCun did the same at New York University while working with Meta.

These researchers joined Google for its unmatched compute and data—but on the understanding that they would continue to operate as scientists. They would pursue fundamental research, present at conferences, and publish their findings. Google likely could not have assembled the team that invented the transformer without offering that freedom.

Even so, all eight authors eventually left Google. Some sought greater academic freedom; others pursued commercial opportunities. Six went on to found companies that collectively raised more than $1.3 billion and produced multiple unicorns. Noam Shazeer co-founded Character.AI, which reached a $1 billion valuation in under two years before he returned to Google in August 2024 in a $2.7 billion deal. Aidan Gomez co-founded Cohere, now a leading enterprise AI firm. Ashish Vaswani and Niki Parmar left together in 2021, co-founded Adept AI, and later Essential AI. Llion Jones co-founded Sakana AI in Tokyo, explicitly pursuing a non-transformer approach. Jakob Uszkoreit co-founded a biotech startup applying AI to drug design. Łukasz Kaiser did not found a startup but joined OpenAI in 2021, where he worked on GPT-4 and the o1 and o3 reasoning models.

In short, a single openly published paper seeded an entire ecosystem of competitors to the company that produced it. Google gave away the blueprint for what may be the world’s most valuable technology—and its own researchers walked out the door to build on it.

That deeply embedded scientific ethos among leading researchers offers reason for optimism about competition in AI, even as firms scale and commercial pressures intensify.

It’s Hard to NDA a Scientist

The AI research community grew out of academia, and it still carries that culture. Its norms look less like a Silicon Valley product team and more like a university department. Researchers present at conferences, share code, and publish. The Partnership on AI captures the point: openness is a “fundamental scientific value” in artificial intelligence and machine learning (AI/ML). Indeed, one of the field’s signal achievements is its shift toward open-access platforms like arXiv—“going against significant pressure to publish in traditional closed academic journals.”

That choice matters. The AI/ML community embraced openness even as the academic establishment pushed in the opposite direction. Researchers routinely post preprints before peer review, share code on GitHub, and present work at conferences such as NeurIPS and ICML, where the culture rewards novelty and reproducibility over secrecy. Firms such as Google, Meta, and OpenAI reinforced these norms by releasing landmark research and normalizing preprints across both academic and policy contexts.

These norms shape what researchers want. And those preferences, in turn, shape competition. The people building frontier AI systems are not interchangeable employees executing top-down strategy. They are scientists with strong views about how their work should be conducted, shared, and recognized. When firms disregard those views, researchers leave.

The pattern repeats. After Google’s DeepMind merged with Google Brain and shifted toward productization, some researchers began to exit. They had been hired to recreate an academic environment inside a company. As one former DeepMind researcher told Sifted:

We hired a lot of really good, really senior engineers, researchers who we basically asked to replicate an academic setting within industry, which was unique at the time… It’s no longer just an academic setting and rightfully so, in my view. But if you came from that [academic] perspective, you go, ‘This isn’t great—what we were hired to do is no longer the priority.’

The departures followed. Sixteen former DeepMind employees launched ventures in a single 12-month period, up from seven the year before. David Silver, a central figure behind AlphaGo and AlphaZero, left in early 2026 to found Ineffable Intelligence, arguing that the field must move beyond the large-language-model paradigm.

The same dynamic appears across the industry. Half of xAI’s founding team left within 30 months, citing disagreements over research direction and strategy. In 2025, more than 20 leading researchers exited OpenAI, Google, and Meta to join Periodic Labs, which focuses on AI-driven scientific discovery, rather than scaling language models. That same year, DeepMind alumni launched Reflection AI—which raised $2 billion at an $8 billion valuation—along with Generalist AI in robotics and a range of other ventures.

The throughline is clear. As one cross-industry analysis puts it:

Compensation alone does not retain elite AI talent. The researchers and engineers who define the frontier of AI capability are motivated by a mix of intellectual challenge, research freedom, mission alignment, and peer quality. When any of these factors degrade, the financial incentives to stay become insufficient, because every other major lab is willing to match or exceed the compensation package.

The Knowledge Won’t Stay Put

At its core, this is a story about knowledge diffusion. The scientific ethos creates powerful counterweights to any single firm’s incentive to bottle up AI capabilities for long. It operates through several reinforcing channels.

Start with talent mobility. Unlike trade secrets embedded in manufacturing processes or proprietary datasets, AI’s most valuable knowledge sits with people. And those people are often scientists who value publication, autonomy, and intellectual freedom. They move—to competitors, startups, and ventures of their own. Each departure carries institutional knowledge and often seeds a new rival. The transformer paper is the extreme case, but the broader pattern is everywhere. As the Acquired podcast documents in its episode on “Google, the AI Company,” nearly every major AI lab today traces key talent back to Google circa 2014—including OpenAI (Ilya Sutskever), Anthropic (Dario Amodei), and Microsoft’s AI division (Mustafa Suleyman, via DeepMind).

Next, open publication. When researchers publish their methods, anyone with the technical capability can build on them. The Stanford AI Index 2025 underscores the scale: from 2013 to 2023, AI publications more than doubled, rising from roughly 102,000 to over 242,000. AI’s share of all computer-science publications rose from 21.6% to 41.8% over the same period. The field is expanding both absolutely and relatively. Industry now produces nearly 90% of notable AI models, but academia remains the leading source of highly cited research. That division of labor matters. Firms build and scale systems, but foundational insights still flow through open academic channels.

The competitive implications follow. As Stanford Human-Centered Artificial Intelligence (HAI) researchers note:

When research is shared openly, innovation accelerates, duplication is minimized, and ideas build upon one another. In AI research, these shared open-source tools, datasets, libraries, and benchmarks have enabled progress that emerged from one lab and spread globally—from students, to startups, to large industry deployments.

Then there are open-source and open-weight models, which lower barriers to entry. Google’s recent release of Gemma 4—its most capable open-weight model family to date—illustrates the shift. The company released it under the Apache 2.0 license, one of the most permissive open-source licenses available. Earlier Gemma versions used a more restrictive custom license that limited certain uses and reserved Google’s right to terminate access. Gemma 4 allows anyone to download, modify, fine-tune, and deploy the model weights commercially, without fees or special permission.
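To see what “without fees or special permission” means in practice, consider a minimal sketch of running an open-weight model locally with the widely used Hugging Face transformers library. The model identifier below is a placeholder assumption; the actual Hub name depends on the specific release and size you choose.

```python
# A minimal sketch of running an open-weight model locally via the
# Hugging Face `transformers` library. The model ID is a placeholder
# assumption; substitute the real Hub name of the variant you want.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "google/gemma-placeholder"  # hypothetical identifier

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

prompt = "Why might a company publish its best research?"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

No gatekeeper stands between the weights and the user: the same few lines can feed a fine-tuning pipeline or a commercial deployment.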

These dynamics can reinforce one another. Researchers want to share their work, and firms accommodate that preference to attract and retain talent. Some firms release models, which draws more researchers. Meta’s decision to open-source its Llama models, for example, attracted top talent from Google DeepMind and OpenAI. Many open-source contributors later joined Meta’s AI teams. Openness begets talent, which begets more openness.

None of this eliminates countervailing incentives. As AI grows more commercially valuable—and as computing costs rise—firms will sometimes prefer secrecy. The relevant question for competition, though, is not whether any single firm remains fully open. It is whether the industry’s overall dynamics allow knowledge to diffuse in ways that enable entry and rivalry. On that score, the evidence points in a favorable direction.

Why the Moat Won’t Hold

For competition scholars and enforcers, the scientific ethos in AI belongs in the analysis. It does not eliminate the risk of anticompetitive conduct. Firms can still engage in exclusionary practices. The recent wave of acquihires, for example, has raised concerns about merger-control evasion and “further consolidating the Big Tech industry.” Dirk Auer and Onyeka Aralu, however, argue those concerns may be overstated. Acquihires also facilitate labor mobility, and the scientific ethos helps drive that movement.

This is only one piece of the competitive puzzle. Other factors point in the same direction. As I discussed in prior posts, the cost of compute continues to fall, and data remains accessible and replicable. If those conditions persist—and they show every sign of doing so—new entrants will keep emerging, competitive gaps will narrow, and the “inevitable concentration” narrative will keep colliding with a more dynamic reality.

For now, the evidence points one way. The people building frontier AI systems are, at their core, scientists. They want to publish. They want recognition. They want to work on meaningful problems with the freedom to pursue them. When firms constrain those preferences, researchers leave—taking knowledge with them and, more often than not, building the next competitor.

You can’t bottle that.
