If the Digital Markets Act (DMA) is going to force open the most sensitive parts of modern smartphones, it will have to answer a basic question it has so far sidestepped: how much security risk is too much in the name of interoperability?
In January, the European Commission opened proceedings to define Google’s duties under the DMA for Android. The focus: how much access third-party AI services should get to features like hotword detection, on-screen content, and audio-output monitoring—capabilities Google currently reserves for its own AI assistants. The Commission has six months to issue a specification decision, and its announcement already signals where it may land.
This marks the first time the Commission has applied Article 6(7) DMA—the interoperability obligation for operating systems—to AI-assistant features. It has already deployed the same provision against Apple. In March 2025, it issued two specification decisions requiring Apple to open iOS connectivity features—near field communication (NFC), Wi-Fi, Bluetooth pairing, and notification forwarding—to third-party devices. In doing so, the Commission developed a narrow “integrity” doctrine that sharply limits when gatekeepers may restrict interoperability on security and privacy grounds.
The key question is whether that doctrine can hold when applied to the more sensitive system access that AI services demand. I argue that the Commission should offer a more robust, explicit account of Article 6(7) for AI-facing features—one that advances the DMA’s aims while accommodating security controls. Otherwise, the DMA risks an awkward outcome: interoperability for an AI assistant’s sensory inputs—what appears on a screen or plays through a device’s speakers—would face a weaker legal safety valve than something like sideloading an app.
The legal basis for the Android proceedings lies in Article 6(7) DMA, which requires gatekeepers to:
… allow providers of services and providers of hardware, free of charge, effective interoperability with, and access for the purposes of interoperability to, the same hardware and software features accessed or controlled via the operating system … as are available to services or hardware provided by the gatekeeper.
Article 6(7) also reaches “hardware or software features” not formally part of the operating system if they are “available to, or used by” the gatekeeper in providing services “together with, or in support of” the operating system.
Article 6(7) in Practice: Apple as Test Case
Article 6(7) DMA is the same provision the European Commission has already enforced—aggressively—against Apple. In March 2025, the Commission adopted two specification decisions under Article 8(2) DMA, directing Apple to implement concrete interoperability measures for iOS and iPadOS.
The first—the “Connected Devices Decision” (DMA.100203)—targeted nine connectivity features Apple had reserved for its own ecosystem, including peer-to-peer Wi-Fi, NFC access in Reader/Writer Mode, background execution for Bluetooth companion apps, automatic Wi-Fi connection, and proximity-triggered pairing. The second—the “Process Decision” (DMA.100204)—required Apple to build a structured, transparent system for handling third-party interoperability requests.
Apple’s experience under Article 6(7) offers a useful preview. It shows both how broadly the Commission reads the interoperability obligation and why its approach has proven controversial.
On the procedural side, Apple created a dedicated engineering team to develop interoperability solutions for new iOS features. It also built a formal request system through its Feedback Assistant. Developers can submit interoperability requests that move through three phases: eligibility assessment, project planning, and development and release. They can also submit technical reference queries seeking documentation on how iOS enables specific features.
Apple publishes an Interoperability Request Tracker and Technical Reference Summaries so developers can follow progress and timelines. For disputes, it set up a two-tier process: an internal Interoperability Request Review Board (IRRB) for initial appeals, followed by external, nonbinding conciliation led by independent experts.
On the technical side, Apple has rolled out several interoperability measures. These include support for Wi-Fi Aware 4.0 for high-bandwidth, peer-to-peer connections; a new NFC API that lets iOS apps use the NFC Controller in Reader/Writer Mode without restrictions on payment-related Application Identifiers; and a Wi-Fi Infrastructure framework that shares network metadata with third-party connected devices. Notably, Apple initially disabled Wi-Fi sync between iPhone and Apple Watch in Europe rather than extend the feature to rivals, before ultimately implementing the framework.
Other features remain in beta. These include proximity-triggered pairing—enabling AirPods-like, one-tap setup for third-party accessories—and an Accessory Notifications framework to forward iOS notifications.
Even as it complies, Apple is contesting the Commission’s approach. The company has appealed both the Connected Devices Decision (Case T-354/25) and the Process Decision (Case T-359/25) to the EU General Court. It has also challenged aspects of its gatekeeper designation as they relate to Article 6(7).
Apple’s DMA compliance reports underscore that tension. The most recent report describes itself as a “factual record,” not a statement of Apple’s position on the “validity, scope and proper application” of the DMA. It emphasizes that all measures are implemented “without prejudice to Apple’s legal position.” Apple also argues that its preexisting developer programs already go “well beyond the scope of effective interoperability as required by Art. 6(7) DMA.”
The compliance burden has produced real tradeoffs. Apple attributes EU-specific delays for features like AirPods live translation and iPhone Mirroring to DMA interoperability requirements, noting that compliance has consumed “hundreds of thousands of hours” of engineering time.
Apple has also raised security concerns. Discussing its new NFC API, the company notes in its compliance report that it was built “[n]otwithstanding the security concerns Apple has repeatedly highlighted.” It makes similar arguments about alternative browser engines—required under Article 5(7) DMA, which bars gatekeepers from mandating their own browser engine—warning that browser engines “are one of the most common attack vectors for bad actors” and are “constantly exposed to untrusted and potentially malicious content.”
In the International Center for Law & Economics’ (ICLE) response to the Commission’s consultation on the proposed connected-devices measures, Geoffrey Manne, Dirk Auer, and Mario Zúñiga raised parallel concerns. They warned that mandated NFC access could enable skimming attacks and unauthorized transactions; that background-execution requirements create battery-drain and data-collection risks; and that third-party developers may lack the resources to match platform-level security protections. They proposed a risk-based, tiered approach—restricting access to sensitive features while allowing lower-risk interoperability—that the Commission’s narrow integrity framework does not easily accommodate.
These objections point to a real tension in the DMA’s interoperability mandate. Opening platform features inevitably trades off against the gatekeeper’s ability to secure its ecosystem. Article 6(7) recognizes that tension by allowing “strictly necessary and proportionate measures” to protect operating-system integrity. But drawing the line between legitimate security concerns and strategic gatekeeping is the hard part—and one the Commission and courts will have to resolve, first for Apple, and soon for Google.
Android AI as the Next Interoperability Frontier
The opening decision—formally, the Android AI specification proceedings (DMA.100220)—is only a procedural step. It initiates proceedings under Article 20(1) DMA “with a view to the possible adoption of an implementing act” under Article 8(2). The Commission now has six months to issue a final specification.
For now, the decision does not mandate specific measures or resolve the scope of “integrity” under Article 6(7). But its reasoning already signals the Commission’s priorities—and, more importantly, the fact pattern it has chosen to test its interoperability doctrine.
That investigation centers on AI services. Google offers a range of AI capabilities on Android—Gemini, Google Assistant (together, “Google AI Assistants”), Gemini Nano (an on-device AI model), AICore (a system service that manages on-device models), and Android System Intelligence. These services benefit from privileged access to features controlled through the operating system.
The Commission highlights several of those features: hotword detection (voice-triggered activation, as in “Hey Google”); Circle to Search (selecting on-screen content to query); access to on-screen content; access to audio output (what media is playing); and tools that integrate AI services with other apps on the device. In the Commission’s view, third-party AI providers cannot access these capabilities on equal terms.
Google responds that the Android Open Source Project (AOSP) already delivers full interoperability. Because the code is open source, third parties can access and interoperate with the operating system just as Google’s own apps do. Google also points to its developer portal for interoperability requests.
The Commission is not convinced. It notes that many of the relevant features remain available only to apps preinstalled by original equipment manufacturers (OEMs) under Google’s Compatibility Definition Document (CDD) requirements. That leaves user-installed, third-party AI apps effectively locked out.
These proceedings are not about permitting sideloading (Article 6(4)) or allowing alternative app-store listings. Instead, the Commission targets deeper system-level access: voice invocation, screen-content reading, audio monitoring, and inter-app communication channels. These sit at the boundary between the operating system and AI services.
And the Commission is pursuing this under Article 6(7), whose only safety valve allows “strictly necessary and proportionate measures” to protect “the integrity of the operating system.” The provision, notably, does not explicitly mention security.
Testing the Boundaries of ‘Integrity’
This is where the Android AI proceedings start to look like a real stress test of the Commission’s approach to Article 6(7). To be clear, the opening decision does not adopt a definitive reading of “integrity.” It largely restates the Article 6(7)(b) exception and suggests that further guidance may be needed on “exact interoperability solutions,” their technical and contractual design, and the modalities of access.
The more important point is the Commission’s choice of case. It has picked a scenario in which simply reusing its Apple-focused interpretation would be especially exposed.
In the Apple specification decisions, the Commission embraced a narrow view of integrity. It held that “integrity has a distinct meaning from users’ privacy and security” (DMA.100204, recital 82), that “some privacy and security aspects fall outside the scope of integrity” (recital 87), and that integrity “does not allow gatekeepers to impose their own model of security and privacy on third-party services” (recital 87).
It also imposed what amounts to a ceiling rule: an integrity measure “cannot be considered strictly necessary and proportionate if it seeks to achieve a higher level of integrity than the one that Apple requires or accepts in relation to its own services or hardware” (recital 91). And it dismissed the argument that, in some cases, third parties are less trustworthy than the gatekeeper, reasoning that “whether a gatekeeper trusts a third party is a subjective assessment exclusively within the gatekeeper’s control” (recital 93).
That framework was designed for a particular set of features: NFC access, peer-to-peer Wi-Fi, Bluetooth companion apps, notification forwarding.
Now consider what is at stake in the Android AI proceedings: hotword detection (always-on microphone access), access to on-screen content (potentially everything a user sees), access to audio output (capturing what a user hears), and tools that integrate AI services across apps on the device.
At that level of access, the line between “platform integrity” and broader security and privacy concerns starts to collapse.
The Structural Gap Between Articles 6(4) and 6(7)
Article 6(4) DMA—covering app sideloading and alternative app stores—includes two distinct safety valves. The first allows measures to protect “the integrity of the hardware or operating system.” The second, in a separate subparagraph, permits “measures and settings other than default settings, enabling end users to effectively protect security in relation to third-party software applications or software application stores.”
Article 6(7), by contrast, explicitly includes only the integrity valve.
The Commission leans on this alleged asymmetry to construct its legal theory. In the Apple Process Decision, it notes that while Article 6(7) measures are limited to integrity, this “does not exclude” separate Article 6(4) measures that enable end users to protect security in relation to third-party apps. The result is a layered model: Article 6(7) governs feature-access restrictions; Article 6(4) governs app-level security settings; and Article 8(1) aligns the entire framework with the General Data Protection Regulation (GDPR), the ePrivacy Directive, cybersecurity rules, and product-safety law.
That layered answer does not fully resolve the problem—and the Android AI case exposes the gap. Article 6(4) is an app- and app-store provision. Article 6(7), however, expressly covers “providers of services and providers of hardware,” not just app developers.
For the AI features at issue here, the main risk may arise from the interoperability interface itself: a third-party AI assistant with access to on-screen content, audio output, and inter-app integration. That risk may not originate in a downloadable app in the Article 6(4) sense, and conditions imposed on such an app may not suffice to manage it.
Treating Article 6(4) as the security backstop for Article 6(7) is therefore underinclusive. It leaves a gap where the most sensitive access patterns sit. The Commission risks giving Article 6(7) a narrower legal safety valve than Article 6(4)—in a context that is at least as sensitive.
AI Interoperability and the Privacy Blind Spot
The Android AI proceedings opening decision makes clear that Article 6(7) interoperability is not privacy-neutral. In the AI-assistant context, the features at issue involve access to some of the most sensitive data streams on a device. Yet, on the Commission’s reading, the integrity exception does not appear to accommodate privacy-protective measures—unless they can be recast as narrow, anti-tamper safeguards.
Android brings that tension into focus. Google relies on secure environments like the Private Compute Core to process AI tasks locally, isolating sensitive contextual data—what users see, hear, and say—from extraction. Apple and Meta have converged on similar architectures: Trusted Execution Environments (TEEs) that provide hardware-level isolation, stateless computation, and cryptographic verification. Apple’s Private Cloud Compute and Meta’s WhatsApp Private Processing reflect the same design logic.
As I have argued, this infrastructure can—and should—be opened to third-party developers. But access should run through the security architecture, not around it. A narrow reading of integrity that forces gatekeepers to bypass their own TEE protections would undermine the very infrastructure that makes safe AI interoperability possible.
The gap shows up elsewhere. The draft joint European Commission–European Data Protection Board (EDPB) guidelines on the interplay between the DMA and the GDPR largely sidestep Article 6(7). They treat the provision as if it raises no serious privacy or data-protection issues.
As I argued in ICLE’s response to that consultation, that omission matters. Article 6(7) inherently raises questions about compliance with the GDPR and the ePrivacy Directive—questions the draft guidelines leave unanswered.
The Android AI case makes the problem hard to ignore. If interoperability means giving third parties access to on-screen content, audio output, and hotword detection, the claim that Article 6(7) is data-protection-neutral does not hold.
From Standard Protocols to Uncharted Interfaces
Even if one accepts the Commission’s narrow integrity doctrine in the context of Apple’s tightly controlled stack, it is much harder to transplant it unchanged into Android’s multi-OEM ecosystem. The Commission acknowledges as much in the Android AI proceedings opening decision: Android is largely deployed on third-party OEM devices, OEM customization produces variation across implementations, and those differences affect how Article 6(7) operates in practice.
In Android’s world, features vary by OEM and by device. A specification decision that mandates uniform access to AI features across this fragmented landscape will likely require a more nuanced integrity framework than the Apple decisions offered.
There is also a difference in kind. The Apple cases involved features—peer-to-peer Wi-Fi, NFC, Bluetooth connectivity—that map onto well-established, industry-standard protocols with known risk profiles.
The Android AI case does not. There are no obvious cross-platform standards comparable to Wi-Fi Aware, NFC, or Bluetooth for third-party access to hotword detection, screen-content reading, audio-output capture, or AI-to-app integration. The Commission is not asking Google to implement an existing protocol. It is asking Google to design new interoperability interfaces for capabilities at the frontier of platform engineering.
That shift raises the stakes. The security and privacy implications of these interfaces remain uncertain, even to specialists. The specification task becomes harder. The evidentiary burden on the gatekeeper becomes more difficult to meet—how does one demonstrate the “existence and magnitude” of risks for interfaces that are still emerging or do not yet exist? And the need for a workable integrity framework becomes more urgent.
Reading Article 6(7) in Light of Alphabet
The Court of Justice of the European Union’s recent judgment in Alphabet and Others (C-233/23, 25 February 2025) points in the same direction—though it requires careful handling. The case arose under Article 102 TFEU, not the DMA, and addressed Google’s refusal to make Android Auto interoperable with a third-party EV-charging app.
The Court held that a refusal to grant interoperability may be justified “where to grant such interoperability … would compromise the integrity or security of the platform concerned.” That is not a direct gloss on Article 6(7) DMA. But it does undercut claims that platform security is legally irrelevant to interoperability analysis under EU law.
At a minimum, the judgment supports a reading of “integrity” that remains tied to security where the two are functionally inseparable—rather than one that treats security as largely out of bounds once the analysis runs through Article 6(7).
Toward a Workable Test for AI Interoperability
The Android AI proceedings opening decision does not settle the scope of “integrity,” but it moves the debate into a far more security-sensitive setting. If the Commission carries over its Apple-based reading unchanged, it risks treating Article 6(7) as offering a thinner legal safety valve than Article 6(4)—in a context involving hotword detection, screen content, and audio output, where evidence-based security controls matter most.
The Commission’s own Apple decisions recognize the constraint. Compliance measures must align with the GDPR, the ePrivacy Directive, cybersecurity rules, and product-safety law. They also emphasize that Article 6(7) must be applied consistently with proportionality and the EU Charter of Fundamental Rights.
The final specification will have to make those commitments concrete. That means articulating a clearer, more predictable test for acceptable AI-access controls than the current Apple line provides.
