
The last time I interviewed Demis Hassabis was back in November 2022, just a few weeks before the release of ChatGPT. Even then — before the rest of the world went AI-crazy — the CEO of Google DeepMind had a stark warning about the accelerating pace of AI progress. “I would advocate not moving fast and breaking things,” Hassabis told me back then. He criticized what he saw as a reckless attitude among some in his field, whom he likened to experimentalists who “don’t realize they’re holding dangerous material.”
Two and a half years later, much has changed in the world of AI. Hassabis, for his part, won a share of the 2024 Nobel Prize in Chemistry for his work on AlphaFold — an AI system that can predict the 3D structures of proteins, and which has turbocharged biomedical research. The pace of AI improvement has been so rapid that many researchers, Hassabis among them, now believe Artificial General Intelligence (AGI) may arrive this decade. In 2022, even acknowledging the possibility of AGI was seen as fringe. But Hassabis has always been a believer. In fact, creating AGI is his life’s goal.
Summoning AGI will require huge amounts of computing power — infrastructure that only a few tech giants, Google being one of them, possess. That gives Google more leverage over Hassabis than he might like to admit. When Hassabis joined Google, he extracted a pledge from the company: that DeepMind’s AI would never be used for military or weapons purposes. But 10 years later, that pledge is no more. Now Google sells its services — including DeepMind’s AI — to militaries including those of the United States and, as I revealed last year, Israel. So one of the questions I wanted to ask Hassabis, when we sat down for a chat last month on the occasion of his inclusion in this year’s TIME100, was this: Did you make a compromise in order to have the chance of achieving your life’s goal?
For the answer to that question and many more, read my latest profile of Hassabis in TIME:
What else I’ve written
Exclusive: Every AI Datacenter Is Vulnerable to Chinese Espionage, Report Says
Today’s top AI datacenters are vulnerable to both asymmetrical sabotage—where relatively cheap attacks could disable them for months—and exfiltration attacks, in which closely guarded AI models could be stolen or surveilled, [a new report warns]. Even the most advanced datacenters currently under construction—including OpenAI’s Stargate project—are likely vulnerable to the same attacks, the authors tell TIME. “You could end up with dozens of datacenter sites that are essentially stranded assets that can’t be retrofitted for the level of security that’s required,” says Edouard Harris, one of the authors of the report. “That’s just a brutal gut-punch.”
Trump Wants Tariffs to Bring Back U.S. Jobs. They Might Speed Up AI Automation Instead
Rather than enticing companies to create new jobs in the U.S., economists say, the new tariffs—bolstered by recent advancements in artificial intelligence and robotics—could instead increase incentives for companies to automate human labor entirely. “There’s no reason whatsoever to believe that this is going to bring back a lot of jobs,” says Carl Benedikt Frey, an economist and professor of AI & work at Oxford University. “Costs are higher in the United States. That means there’s an even stronger economic incentive to find ways of automating even more tasks.”