A segment on CBS's weekly in-depth TV news program 60 Minutes last night (also shared on YouTube) offered an inside look at Google DeepMind and the vision of its co-founder and Nobel Prize-winning CEO, legendary AI researcher Demis Hassabis.
The interview traced DeepMind's rapid progress in artificial intelligence and its ambition to achieve artificial general intelligence (AGI): a machine intelligence with human-like versatility and superhuman scale.
Hassabis described today's AI trajectory as being on an "exponential curve of improvement," fueled by growing interest, talent, and resources entering the field.
Two years after a prior 60 Minutes interview heralded the chatbot era, Hassabis and DeepMind are now pursuing more capable systems designed not only to understand language, but also the physical world around them.
The interview came after Google's Cloud Next 2025 conference earlier this month, at which the search giant introduced a host of new AI models and features centered around its Gemini 2.5 multimodal AI model family. Google came out of that conference appearing to have taken a lead over other tech companies, surpassing OpenAI, in providing powerful AI for enterprise use cases at the most affordable price points.
More details on Google DeepMind's 'Project Astra'
One of the segment's focal points was Project Astra, DeepMind's next-generation chatbot that goes beyond text. Astra is designed to interpret the visual world in real time.
In one demo, it identified paintings, inferred emotional states, and created a story around a Hopper painting with the line: "Only the flow of ideas moving onward."
When asked if it was growing bored, Astra replied thoughtfully, revealing a degree of sensitivity to tone and interpersonal nuance.
Product manager Bibbo Shu underscored Astra's unique design: an AI that can "see, hear, and chat about anything," a marked step toward embodied AI systems.
Gemini: Toward actionable AI
The broadcast also featured Gemini, DeepMind's AI system being trained not only to interpret the world but also to act in it, completing tasks like booking tickets and shopping online.
Hassabis said Gemini is a step toward AGI: an AI with a human-like ability to navigate and operate in complex environments.
The 60 Minutes team tried out a prototype embedded in glasses, demonstrating real-time visual recognition and audio responses. Could it also hint at an upcoming return of the pioneering yet ultimately off-putting early augmented reality glasses known as Google Glass, which debuted in 2012 before being retired in 2015?
While specific Gemini model versions like Gemini 2.5 Pro or Flash were not mentioned in the segment, Google's broader AI ecosystem has recently introduced those models for enterprise use, which may reflect parallel development efforts.
These integrations support Google's growing ambitions in applied AI, though they fall outside the scope of what was directly covered in the interview.
AGI as soon as 2030?
When asked for a timeline, Hassabis projected that AGI could arrive as soon as 2030, with systems that understand their environments "in very nuanced and deep ways." He suggested that such systems could be seamlessly embedded into everyday life, from wearables to home assistants.
The interview also addressed the possibility of self-awareness in AI. Hassabis said current systems are not conscious, but that future models could exhibit signs of self-understanding. Still, he emphasized the philosophical and biological divide: even if machines mimic conscious behavior, they are not made of the same "squishy carbon matter" as humans.
Hassabis also predicted major developments in robotics, saying breakthroughs could come in the next few years. The segment featured robots completing tasks from vague instructions, like identifying a green block formed by mixing yellow and blue, suggesting growing reasoning abilities in physical systems.
Accomplishments and safety concerns
The segment revisited DeepMind's landmark achievement with AlphaFold, the AI model that predicted the structures of over 200 million proteins.
Hassabis and colleague John Jumper were awarded the 2024 Nobel Prize in Chemistry for this work. Hassabis emphasized that this advance could accelerate drug development, potentially shrinking timelines from a decade to just weeks. "I think one day maybe we can cure all disease with the help of AI," he said.
Despite the optimism, Hassabis voiced clear concerns. He cited two major risks: the misuse of AI by bad actors and the growing autonomy of systems beyond human control. He emphasized the importance of building in guardrails and value systems, teaching AI as one might teach a child. He also called for international cooperation, noting that AI's impact will touch every country and culture.
"One of my big worries," he said, "is that the race for AI dominance could become a race to the bottom for safety." He stressed the need for leading companies and nation-states to coordinate on ethical development and oversight.
The segment ended with a meditation on the future: a world where AI tools could transform almost every human endeavor, and eventually reshape how we think about knowledge, consciousness, and even the meaning of life. As Hassabis put it, "We need new great philosophers to come about… to understand the implications of this system."