The Man Who Thought He Could Keep AI Safe — A Deep Dive on Demis Hassabis and Google DeepMind

Demis Hassabis occupies one of the most paradoxical positions in modern technology: the head of Google DeepMind openly believes he may be building something capable of ending the world as we know it — and he presses forward anyway. In a sweeping profile drawn from nearly three years of access for his new book The Infinity Machine, The Atlantic's Sebastian Mallaby traces how Hassabis arrived at this uncomfortable conviction, from a chance encounter with DeepMind co-founder Shane Legg at an AI-safety lecture to the conditions he personally negotiated with Google before agreeing to a $500 million acquisition in 2014.

That founding tension — building powerful AI precisely because you fear what less safety-conscious actors might build — now plays out at civilizational scale inside Alphabet. Hassabis has steered DeepMind through AlphaFold's protein-structure breakthroughs, the race to deploy Gemini across Google's products, and an escalating push toward artificial general intelligence. His argument, which Mallaby examines with notable skepticism, is that safety-minded insiders must remain at the frontier rather than cede ground to those with fewer scruples. Whether that logic holds in a world where the frontier keeps advancing faster than the guardrails is the unresolved question at the heart of the piece.

For anyone trying to understand the values — and the contradictions — steering Google's most consequential technology decisions, this Atlantic profile is essential reading. It is also a rare window into what motivates the person most directly responsible for Gemini's direction at a moment when the stakes could hardly be higher.

Read the full article at The Atlantic →