The European Commission’s ‘Six-Seven’ Theory of Interoperability

by Staff Reporter

If you have been near anyone under the age of 15 in the past year, you may have heard the phrase “six seven” shouted with great conviction and no discernible content. It usually comes with a hand gesture. It means, as best anyone can tell, absolutely nothing. That is the joke: a number pair masquerading as communication, repeated so often that the repetition becomes the point.

In Brussels, Article 6(7) of the Digital Markets Act (DMA) has begun to suffer a similar fate. The DMA is the European Union’s flagship law for regulating large digital platforms, which it calls “gatekeepers.” Article 6(7) is supposed to require those gatekeepers to make certain hardware and software features interoperable—that is, usable by rival services—while still allowing them to protect security, privacy, and system integrity.

Increasingly, though, the provision gets invoked with great solemnity in every new specification proceeding the European Commission opens, while its actual content keeps shifting to mean whatever the Commission needs in a given case. The trajectory of enforcement—from Apple’s iOS connected-devices proceedings last year to the current Google Android artificial-intelligence (AI) proceedings—suggests the Commission now treats Article 6(7) less like a legal text with an internal structure and more like a slogan: interoperability now, for everyone, on whatever terms the Commission prefers.

That is unfortunate, because Article 6(7) is not a blank check. It has two distinct halves. The provision requires gatekeepers to provide “effective interoperability with, and access for the purposes of interoperability to, the same hardware and software features” enjoyed by their own services. But it also expressly permits integrity measures that are “strictly necessary and proportionate.” Recital 50 of the DMA confirms that integrity-preserving measures are a legitimate part of implementing Article 6(7), not a loophole to be sheepishly apologized for after the fact.

Properly read, the provision establishes a balancing test, not a maximalist openness mandate. “Effective” does not mean “identical,” and “identical” does not mean “unlimited.”

The Commission’s enforcement to date has steadily read the second half of the provision out of existence. The result is weaker security, less inter-platform competition, and a regulatory tool that increasingly puts a thumb on the scale in the rapidly unfolding generative-AI race, often to the detriment of European consumers.

The Curious Disappearance of ‘Proportionate’

The first real-world stress test for Article 6(7) arrived in late 2024 and early 2025, when the European Commission opened specification proceedings against Apple. Those proceedings concerned iOS interoperability with connected devices and the process through which third-party developers can request access to iOS features. 

The features at issue were relatively well-defined: near-field communication (NFC) pairing, Bluetooth, Wi-Fi accessory configuration, AirDrop, AirPlay, notification forwarding, and related functions. In ordinary English, this was about how well non-Apple devices—headphones, watches, fitness trackers, speakers, and the like—could work with the iPhone.

What made the case interesting—as the International Center for Law & Economics (ICLE) pointed out in its January 2025 comments—was that these features were already meaningfully open. Third-party headphones, fitness trackers, and smartwatches already worked on the iPhone. Pairing was generally reliable. Battery information was accessible. First-party apps from Sony, Bose, JBL, and Fitbit already extended the user experience further. 

What Apple’s own devices received on top of that was mostly a slightly smoother pairing process and a handful of additional display and design features. Those sorts of refinements are precisely the kind of product differentiation Apple plausibly needs to compete against the broader Android ecosystem in the first place.

Forcing parity at the level of system access—in pursuit of parity at the level of the pairing animation—was a textbook example of a measure that is not “necessary and proportionate.” The marginal gains for contestability, or the ability of rivals to compete, were trivial. The marginal risks were not.

The July 2024 CrowdStrike incident, which knocked airlines, hospitals, banks, and other critical systems offline for hours, illustrated the problem vividly. Investigations traced part of the issue to a 2009 European Union agreement requiring Microsoft to grant third-party security software the same kernel-level access as Microsoft’s own tools. The kernel is the core layer of an operating system—the part with near-total control over the device. Giving outside software kernel access is not like letting someone borrow a charging cable. It is more like handing over the master key to the building and hoping they do not trip over the wiring. 

Apple stopped giving third parties that level of access to macOS in 2020. Apple devices were not affected by the CrowdStrike outage.

The Commission’s March 2025 specification decisions nonetheless went well beyond what was necessary to achieve effective interoperability. The Commission’s June 2025 noncompliance decision against Meta then revealed why. In that decision, the Commission explicitly disclaimed any obligation to weigh the economic consequences of its enforcement choices on either the gatekeeper or third parties. 

That posture has now bled into the Commission’s interpretation of the integrity exception in Article 6(7). If the Commission need not consider tradeoffs, then “strictly necessary and proportionate” collapses into little more than “necessary in the Commission’s view.”

From Pairing Animations to Ambient Surveillance

If the iOS case was the rehearsal, the Android AI case is the main event. On April 27, the European Commission adopted preliminary findings outlining the measures it proposes Alphabet should implement under Article 6(7) for AI-facing features in Google Android. 

The measures listed in the annex would require Google to provide third parties access, on terms as effective as those enjoyed by Google’s own services, to:

  • continuous background access to a device’s core ambient sensors, including the microphone, camera, screen, speakers, accelerometer, and GPS;
  • centralized, concurrent access to data shared by other apps through on-device databases like AppSearch, including data shared by Google’s own first-party apps;
  • custom always-on wake-word detection—the “Hey Google” or “Alexa”-style listening function—running on the device’s digital signal processor, including while the device is in battery-saver mode;
  • the ability to take agentic control of other apps through screen automation, including observing screen content, imitating user inputs, and executing multi-step transactions in a background virtual window;
  • system-privileged access to AICore, Gemini Nano, and the underlying neural-processing-unit (NPU), graphics-processing-unit (GPU), and random-access-memory (RAM) resources currently reserved for Google’s own on-device AI models; and
  • expanded background-execution privileges equivalent to those enjoyed by Google’s own apps.

Some translation is useful here. “Ambient sensors” are the parts of a phone that can see, hear, locate, and measure the world around it. A “wake word” is the phrase that activates a voice assistant. “Agentic control” means software that does not merely answer a question but can take actions for the user, such as opening apps, reading screens, clicking buttons, and completing transactions. NPUs and GPUs are specialized chips that help run AI models efficiently on the device, rather than sending everything to the cloud.
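
To see how far that list departs from today’s baseline, consider the microphone. Under the standard Android permission model, a third-party app cannot quietly listen in the background; it needs an explicit runtime permission and, since Android 11, a user-visible foreground service to keep capturing audio at all. Below is a minimal Kotlin sketch of that baseline check, using the standard Android permission APIs (the annex’s proposed mechanics are not modeled here):

```kotlin
import android.Manifest
import android.content.Context
import android.content.pm.PackageManager
import androidx.core.content.ContextCompat

// Today's baseline: a third-party app must hold RECORD_AUDIO, and since
// Android 11 it can only sample the microphone while visible to the user
// or while running a user-visible foreground service declared with
// foregroundServiceType="microphone". Continuous, battery-saver-proof
// background listening of the kind the annex contemplates is precisely
// what this model withholds by default.
fun hasMicrophonePermission(context: Context): Boolean =
    ContextCompat.checkSelfPermission(
        context,
        Manifest.permission.RECORD_AUDIO,
    ) == PackageManager.PERMISSION_GRANTED
```
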

Put differently: this is not just about whether a third-party smartwatch gets the pretty pairing animation. It is about whether third-party AI assistants should receive deep, persistent access to the phone’s sensors, app data, computing resources, and user interface.

That is a qualitatively different proposition. As Mikolaj Barczentewicz put it in his April 2026 post, this is “opening Pandora’s interface.” The features at issue involve sensors that run continuously, span the device’s entire app-data layer, and grant programmatic control over other applications. In the wrong hands, those capabilities enable mass surveillance, credential theft, and unauthorized transactions at scale.

Article 6(7) expressly recognizes those risks. The provision permits integrity measures that are “strictly necessary and proportionate.” But the Commission’s annex operationalizes that exception in a way that, as a practical matter, largely closes it off.

Section 5.3 requires that any integrity measure rest on “objective and verifiable evidence showing the existence and magnitude of the integrity risk,” apply symmetrically to Google’s own services, and remain capable of independent verification by parties other than Alphabet itself.

Each of those criteria sounds reasonable in isolation. Taken together, they make the most natural response to genuinely novel risks—declining to expose a sensitive feature until the threat landscape becomes better understood—effectively unavailable. Evidence of a harm that has not yet materialized cannot, by definition, be “objective and verifiable.” Conservative assumptions about attacker behavior, which underpin modern operating-system security architecture, are not “objective evidence” under this framework.

A standard that treats the absence of demonstrated past harm as evidence that a restriction is unjustified would not have produced many of the security practices the European Union now takes for granted.

The symmetry requirement is equally perverse. Google typically distinguishes between its own first-party services and user-installed third-party apps across multiple dimensions of trust, including code-signing provenance, internal review processes, contractual liability with device manufacturers, and the ability to revoke access quickly when problems emerge. Code-signing, in simple terms, is a way to verify where software came from and whether it has been tampered with. These distinctions are not decorative. They are part of how modern platforms keep devices from becoming very expensive malware terrariums.
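
For readers who want that primitive made concrete, here is a minimal Kotlin sketch of signature verification using the standard java.security APIs. It illustrates the concept only; Android’s actual APK signature schemes are considerably more elaborate:

```kotlin
import java.security.PublicKey
import java.security.Signature

// Minimal sketch of what code-signing verification amounts to: check that
// a signature over the package bytes was produced by the key of a known
// publisher. Provenance comes from knowing whose key it is; tamper
// detection comes from the verification failing if the bytes changed.
fun isAuthentic(
    packageBytes: ByteArray,
    signatureBytes: ByteArray,
    publisherKey: PublicKey,
): Boolean =
    Signature.getInstance("SHA256withRSA").run {
        initVerify(publisherKey)   // trust anchored to the publisher's key
        update(packageBytes)       // hash the package contents
        verify(signatureBytes)     // false if tampered or wrongly signed
    }
```
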

The annex insists those underlying trust distinctions are irrelevant: any restriction applied to a third-party app must also apply to Google’s own services.

That leaves Google with a binary choice. Either it extends sensitive capabilities to every third party on equal terms, or it strips those capabilities from its own services entirely. The first option may be unsafe. The second leaves European users with a worse product. Neither option is what users actually want.

Contestability Cuts Both Ways

There is another problem with the Android AI case, and it flips the usual DMA narrative on its head.

In the AI-assistant market itself, Google is the challenger, not the incumbent. Recent StatCounter data puts ChatGPT at roughly 70% of European Union AI-chatbot usage. Anthropic’s Claude has reportedly been adopted by eight of the Fortune 10 and was operating at a $14 billion annualized revenue run rate as of February 2026. Google’s Gemini, despite the company’s enormous investment in its integrated AI stack, arguably trails both.

Google’s competitive strategy depends precisely on that integrated stack: chips (Tensor and TPU), cloud infrastructure (Google Cloud), foundation models (Gemini), platform integration (Search, Maps, Calendar, Gmail, YouTube, and Photos), and distribution channels (Android, Chrome, and Play). Foundation models are the large AI systems trained on vast amounts of data that power tools like chatbots, coding assistants, and image generators. Google’s bet is that it can combine those models with the services people already use and the devices they already carry.

Deep system-level integration on Android—wake-word reservation, on-device database access, preferential NPU, GPU, and RAM allocation, along with structured App Functions—is one of the few ways Google can translate that stack into a differentiated user experience capable of competing with OpenAI’s and Anthropic’s dedicated assistants. App Functions are structured ways for apps to expose actions—say, booking a ride, sending a message, or editing a photo—so an assistant can perform those tasks reliably, rather than simply guessing where to tap.
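
To make the contrast with screen automation concrete, here is a hypothetical Kotlin sketch of what a structured app function might look like. Every name in it is invented for illustration; it is not Android’s actual App Functions API:

```kotlin
// Hypothetical illustration only: these types are invented for this
// sketch. The idea is that an app declares a typed action with explicit,
// validated parameters, so an assistant can invoke it directly rather
// than observing the screen and imitating taps.
data class BookRideRequest(
    val pickup: String,
    val destination: String,
    val seats: Int = 1,
)

data class BookRideResult(
    val confirmed: Boolean,
    val etaMinutes: Int?,
)

interface RideAppFunctions {
    // A structured entry point with a defined contract, in contrast to
    // screen automation's guess-where-to-tap approach.
    fun bookRide(request: BookRideRequest): BookRideResult
}
```
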

To some observers, Google leveraging its ecosystem to compete more effectively in AI may sound like precisely the scenario the DMA was designed to prevent. But in a world where Google is playing catch-up to OpenAI and Anthropic, that integration may be the difference between having two major AI competitors and three.

That is difficult to square with the DMA’s contestability rationale. Proponents often invoke contestability as a tool for disciplining entrenched big-tech incumbents. But contestability cuts both ways. If promoting competition is genuinely the goal of the DMA, it should matter just as much when enforcement reduces rivalry as when it increases it.

The Android Article 6(7) proceedings are not unique in this respect. The same structural pattern is emerging in the Meta/WhatsApp proceedings, where enforcement nominally framed as protecting AI competition on a dominant European consumer platform has the practical effect of requiring the platform owner to grant equivalent system access to the very firms that already lead the AI-assistant market. 

It is also worth questioning the empirical premise underlying the Android case: namely, that Android currently functions as an “important gateway” for AI services. The Commission’s preliminary findings take that proposition largely for granted, but the available evidence looks shakier. AI services today are still consumed disproportionately on desktop devices, not smartphones.

In October 2025, Gemini reportedly recorded roughly 813 million monthly desktop sessions, compared to 369 million mobile sessions—a desktop-to-mobile ratio of more than two-to-one. Usage patterns across other AI services appear similarly lopsided. At present, most users access AI services primarily through desktop browsers. Mobile may become more important over time. It does not yet appear to be the indispensable distribution channel these proceedings assume. 

The broader lesson is uncomfortable, but important. Mandating openness to increase rivalry within Android can simultaneously weaken rivalry between AI firms. The DMA’s twin goals—contestability and fairness—can pull in opposite directions. Right now, the Commission appears to be privileging the wrong side of that tradeoff.

What Six-Seven Actually Means

Article 6(7) has the potential to become a powerful tool for promoting contestability in digital markets. But realizing that potential requires the European Commission to take seriously both halves of the provision: openness and integrity. It also requires the Commission to evaluate enforcement decisions based on their effects in the markets that ultimately matter to European consumers.

Concretely, that means doing three things the Commission has largely skipped so far.

First, reconsider the gateway premise. Article 3 of the DMA conditions gatekeeper obligations on a service functioning as an “important gateway for business users to reach end users.” In plainer terms, the DMA’s special obligations are supposed to attach to services that businesses really need in order to reach customers. If a particular class of business users—in this case, AI-service providers—primarily reaches users through other channels, the case for mandating risky system-level mobile access becomes weaker, not stronger. The Commission’s limited enforcement resources could likely be deployed more effectively elsewhere.

Second, account for cross-market effects. When the platform owner is the trailing competitor in the upstream market a remedy will reshape, the “fairness” gains from mandated equal access may be outweighed by reduced contestability among the firms the platform is trying to catch. That is not an argument for abandoning Article 6(7). It is an argument for applying the provision cautiously, with the recognition that competition policy often affects multiple markets simultaneously.

Third, take integrity seriously. “Strictly necessary and proportionate” cannot mean that gatekeepers must prove a harm has already occurred before they may guard against it. No modern security architecture operates that way. Many of the security practices the European Union now treats as commonplace were built precisely on precautionary assumptions about how systems might fail or be exploited. Time-limited restrictions on the most sensitive feature classes should remain available to gatekeepers without requiring an ex ante demonstration that the threat has already materialized.

The kids are wrong. “Six seven” does mean something. The Commission simply has to read the whole provision—not just the half it likes.
