Apple Gets Full Gemini Model Access for On-Device Siri Distillation
Apple's artificial intelligence partnership with Google runs far deeper than previously disclosed. According to new reporting from The Information, Apple has been granted full access to Google's Gemini model inside its own data centers, an arrangement that lets Apple use a technique called "distillation" to produce smaller, task-specific models that run directly on iPhones and Macs. What makes this especially significant is that these "student models" can be trained to replicate Gemini's internal reasoning computations, not just its surface-level outputs. That distinction means the compact models Apple deploys on-device inherit far more of Gemini's problem-solving depth than output-only distillation would allow.
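The Information's reporting does not describe Apple's actual training pipeline, so the following is only a minimal NumPy sketch of the distinction the article draws: standard distillation matches the teacher's softened output distribution, while deeper distillation also pushes the student's internal hidden states toward the teacher's. Every name here (`output_distillation_loss`, `hidden_state_loss`, the projection matrix, the random toy data) is hypothetical and for illustration only.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax over the last axis."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def output_distillation_loss(teacher_logits, student_logits, T=2.0):
    """Classic distillation: KL(teacher || student) on softened outputs.
    The T*T factor keeps gradient scale comparable across temperatures."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean() * T * T)

def hidden_state_loss(teacher_hidden, student_hidden, proj):
    """Deeper distillation: map the student's narrower hidden states into the
    teacher's width via a learned projection, then penalize the mismatch."""
    mapped = student_hidden @ proj
    return float(np.mean((mapped - teacher_hidden) ** 2))

# Toy data: 4 examples, 10-way output; teacher hidden width 64, student 16.
rng = np.random.default_rng(0)
t_logits = rng.normal(size=(4, 10))
s_logits = rng.normal(size=(4, 10))
t_hidden = rng.normal(size=(4, 64))
s_hidden = rng.normal(size=(4, 16))
proj = rng.normal(size=(16, 64)) * 0.1  # stand-in for a learned projection

# A combined objective would minimize both terms during student training.
total = output_distillation_loss(t_logits, s_logits) \
        + 0.5 * hidden_state_loss(t_hidden, s_hidden, proj)
print(f"combined distillation loss: {total:.4f}")
```

In a real training loop both terms would be minimized jointly by gradient descent; the hidden-state term is what lets a small student absorb intermediate computation rather than only final answers.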
The implications ripple across the entire mobile AI landscape. Siri's next generation isn't being built on Apple's foundation models alone; it's being shaped, at an architectural level, by the model family Google has spent years and billions of dollars developing. For developers and enterprises watching the AI platform wars, this deal quietly positions Gemini as the backbone of the world's most widely used consumer AI assistant, even if the Gemini name never appears on the screen.
The partnership also raises fresh questions about competitive dynamics. Apple retains full control over how the distilled models behave, what data they're trained on, and how they're deployed — but the lineage traces back to Google's research. It's a form of AI infrastructure dependency that neither company has been eager to advertise, and The Information's reporting suggests the full scope is still not widely understood even inside the industry.