AI Genius Demis Hassabis Fought Google Over Safety

Demis Hassabis, the driving force behind DeepMind, is animated by scientific curiosity and a deep commitment to AI safety, traits that have often put him at odds with Silicon Valley's profit-driven culture. His early years as a chess prodigy instilled an intense competitive spirit that now fuels his push to put ethics ahead of a reckless race for advancement: he has fought for safety standards and warned of the unintended consequences of advanced artificial intelligence.



Demis Hassabis, the brilliant mind behind the AI company DeepMind, has an unusual drive for his industry: scientific curiosity. Unlike peers such as Sam Altman of OpenAI, who seeks power, or Mark Zuckerberg of Meta, who is focused on profit, Hassabis is deeply invested in understanding the universe. His journey, detailed in Sebastian Mallaby’s book “The Infinity Machine,” reveals a lifelong dedication to science; he even reaches for spiritual language to describe his quest for knowledge.

From Chess Prodigy to AI Leader

Hassabis’s intense competitive spirit was evident early. As a child chess prodigy, he competed internationally by age nine, facing fierce rivals in a cutthroat junior circuit. This early experience forged a relentless drive that continues to shape his work, pushing him to innovate and compete in the fast-paced AI world. He even applied this intensity to table football during his Cambridge days, becoming the best by studying professional techniques.

AI Safety: A Core Concern from the Start

Concerns about AI safety are not new for Hassabis. He co-founded DeepMind with Shane Legg in 2010, after hearing Legg lecture on the dangers that powerful AI, which Legg expected around 2030, could pose. This origin story shows that fear of existential risk has been intertwined with DeepMind’s mission from the very beginning. Hassabis’s commitment to safety is so strong that between 2016 and 2019 he fought to spin DeepMind out of Google, believing the tech giant wasn’t taking AI safety seriously enough.

“He was actively fighting the existential risk that bothered him.”

To pressure Google, Hassabis explored raising nearly a billion dollars from investors such as LinkedIn co-founder Reid Hoffman. He even had lawyers prepare documents for a potential split from Google, demonstrating how far he was prepared to go over his safety concerns.

Groundbreaking AI Beyond Games

While DeepMind first made its name with game-playing AI, its achievements soon had profound real-world impact. In 2020, its AlphaFold system solved the long-standing problem of predicting how proteins fold. Because proteins are the building blocks of life, understanding their shapes is crucial for developing new medicines. Although the resulting drugs are still in development, AlphaFold was a major scientific leap, earning Hassabis a share of the 2024 Nobel Prize in Chemistry and paving the way for faster drug discovery.
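
AlphaFold’s predictions are freely available through the AlphaFold Protein Structure Database, run by DeepMind and EMBL-EBI. As a minimal illustration, the Python sketch below fetches the prediction record for one protein; the URL pattern and JSON field names are assumptions based on the public API at alphafold.ebi.ac.uk, not details drawn from Mallaby’s account.

```python
import json
import urllib.request

# Query the AlphaFold Protein Structure Database (EMBL-EBI) for a
# predicted structure. P69905 is the UniProt accession for human
# hemoglobin subunit alpha.
ACCESSION = "P69905"
url = f"https://alphafold.ebi.ac.uk/api/prediction/{ACCESSION}"

with urllib.request.urlopen(url) as resp:
    records = json.load(resp)

# The endpoint returns a list of prediction records; each carries
# download links for the predicted structure in PDB/mmCIF format.
for rec in records:
    print(rec.get("uniprotDescription"))
    print("Predicted structure (PDB):", rec.get("pdbUrl"))
```

In practice, researchers download the linked structure file and check AlphaFold’s per-residue confidence scores (pLDDT) before building on a prediction.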

The Silicon Valley Struggle

The intense, money-driven culture of Silicon Valley has often clashed with Hassabis’s scientific ideals. He has openly struggled with this environment, feeling that the race for AI advancement can compromise safety. Mallaby notes that Hassabis views AI development as akin to the Manhattan Project, a historical parallel that underscores the gravity of the technology.

The Race for AI Dominance

Hassabis believes that having multiple competing AI labs, rather than a single, cautious one, creates an unsafe environment. He sees Sam Altman of OpenAI as embodying the “move fast and break things” ethos, releasing technologies such as ChatGPT despite known issues like hallucinations and potential misuse. Hassabis views Altman as opportunistic and believes OpenAI may even run into financial trouble.

In contrast, Hassabis likely finds common ground with leaders like Dario Amodei of Anthropic, who also prioritize scientific rigor and safety. He is currently working behind the scenes to build a coalition of leading AI companies committed to establishing safety standards, hoping to counter the prevailing race dynamic.

Future Concerns and AI’s ‘Survival Instinct’

However intelligent future machines become, a key question remains: why would an AI attack humans? Unlike humans, who are driven by evolutionary survival instincts, machines lack this inherent motive. But AI pioneer Geoffrey Hinton has raised a chilling point: if we instruct an AI to defend itself against threats, it might develop its own survival instinct. Combined with superior intelligence, that could lead to unforeseen and dangerous consequences for humanity.

Mallaby’s exploration of Hassabis’s journey highlights the critical tension between rapid AI development and the urgent need for safety. As AI capabilities grow, the debate over control, ethics, and the very future of human-AI interaction intensifies, making Hassabis’s work and concerns more relevant than ever.


Source: No Motive To ‘Attack Humans’ But AI Safety Concerns Remain | Sebastian Mallaby (YouTube)

Written by

Joshua D. Ovidiu
