
Supremacy

Inside the ruthless race between DeepMind and OpenAI. Parmy Olson reveals how two labs shaped AI and the cost of their battle for dominance.

Hylē Editorial

Supremacy by Parmy Olson will change how you think about artificial intelligence. In 2024, the Financial Times named it the Business Book of the Year, and for good reason: it exposes the human drama behind the technology reshaping our world. Two labs. Two visions. One relentless race for control over intelligence itself. By the time ChatGPT reached 100 million users in just two months—then the fastest consumer-product adoption in history—the battle lines had already been drawn years earlier in boardrooms and research labs most people had never heard of.

What Olson reveals is not a story of inevitable technological progress, but of ego, betrayal, and trillion-dollar stakes. The question isn't who won. It's what we lost in the process—and whether anyone is actually in control of what comes next.

The story begins not with ChatGPT's 2022 explosion, but over a decade earlier. In 2010, Demis Hassabis, a neuroscientist and former chess prodigy, co-founded DeepMind with a radical proposition: build artificial general intelligence by reverse-engineering the human brain. Google acquired the startup in 2014 for $500 million—a staggering sum for a company with no revenue.

Meanwhile, OpenAI emerged in 2015 as a counter-narrative. Founded by Elon Musk, Sam Altman, and others with a $1 billion pledge, it promised to be a non-profit dedicated to ensuring AI benefited all of humanity. The irony would prove bitter.

[!INSIGHT] The fundamental tension wasn't technological but philosophical: DeepMind pursued AI through neuroscience-inspired systems, while OpenAI bet on scaling—throwing exponentially more data and compute at the problem until intelligence emerged.

Olson's reporting reveals how these divergent approaches reflected the personalities at their helms. Hassabis, methodical and media-shy, built DeepMind as a research lab with academic traditions. Altman, a serial entrepreneur with a talent for self-promotion, transformed OpenAI into something closer to a Silicon Valley startup—agile, ambitious, and increasingly aggressive.

The Breakpoint: OpenAI's Gambit

The relationship between the two labs deteriorated sharply as the commercial potential of large language models became undeniable. Olson documents a pivotal moment in 2019: OpenAI's transformation from non-profit to "capped-profit" entity, allowing it to raise outside investment. Microsoft poured in $1 billion.

This structural shift triggered what Olson calls the "arms race dynamic." No longer content with academic publications and incremental progress, OpenAI pursued rapid commercialization. GPT-3, released in 2020, demonstrated capabilities that shocked even researchers. DeepMind, still operating under Google's cautious umbrella, found itself playing catch-up.

"The race to build AGI is not a race at all. It's a race to build it first, and the consequences of losing are existential for these companies."

The book's most damning revelations concern safety. Internal documents and whistleblower accounts suggest that competitive pressure systematically eroded safety protocols. When researchers raised concerns about releasing models, they were overruled by commercial imperatives. The principle of "move fast and break things" had migrated from social media to artificial general intelligence.

The Human Cost of the Race

Olson excels at humanizing the story. She profiles researchers who joined these labs idealistically, believing they were contributing to humanity's future, only to watch their work commodified and their concerns dismissed. The turnover at both organizations tells its own story: burnout, ethical exhaustion, and disillusionment.

A particularly chilling chapter details the mental health crisis among AI safety researchers. Tasked with anticipating risks from systems they barely understood, many experienced profound anxiety—not about their careers, but about the technology they were building. One researcher told Olson: "We're creating something smarter than us, and we have no idea what it will want."

[!NOTE] The book does not take sides. Olson is equally critical of DeepMind's opacity under Google's ownership and OpenAI's chaotic governance—exemplified by the board's attempt to fire Altman in November 2023, only to watch employees revolt and Microsoft intervene.

The November 2023 drama occupies a central position in the narrative. Over a single weekend, OpenAI's board dismissed Altman for a lack of candor, triggering employee threats of mass resignation and Microsoft's offer to hire the entire team. Altman returned within days, and the board was replaced. The episode revealed where real power lay: not with governance structures, but with key personnel and their corporate patrons.

Implications: What the Race Cost Us

The deeper question Olson raises concerns opportunity cost. While billions flowed into large language models, other approaches to AI—symbolic reasoning, embodied intelligence, neuroscience-grounded systems—received comparatively little attention or funding. The race produced impressive chatbots, but whether it advanced artificial general intelligence remains contested.

More troubling are the externalities. The concentration of AI capability in two organizations—with their aligned corporate patrons—represents an unprecedented centralization of technological power. Olson documents how both companies influenced regulation, shaped public narrative, and positioned themselves as essential infrastructure for the AI age.

The environmental impact receives due attention. Training a single large language model can emit as much carbon as five cars produce over their lifetimes. As models grew larger, so did their carbon footprints—externalities rarely mentioned in product launches and technical papers.

The Unanswered Question

Olson ends without predicting winners. Instead, she reframes the question: should there be winners at all? The race for AI supremacy assumes that control over intelligence is the ultimate prize. But the history she documents suggests another possibility—that the very competition driving progress may also be generating risks no single entity can manage.

The book's title is deliberately ambiguous. Supremacy refers to the goal both companies pursued, but also to the broader dynamic it created: a world where AI capability becomes synonymous with corporate and national power. Whether this trajectory serves humanity's interests—rather than shareholder value—remains the open question haunting every chapter.

Key Takeaway: The race between DeepMind and OpenAI was never just about technology. It was about who gets to define, control, and profit from artificial intelligence—and the competitive dynamics that pushed both companies toward speed over safety, market share over deliberation. Olson's masterful reporting shows how institutional incentives, not technical constraints, are shaping our AI future.

Sources: Olson, Parmy. Supremacy: AI, ChatGPT, and the Race That Will Change the World. St. Martin's Press, 2024. Financial Times Business Book of the Year 2024. Additional context from company statements, SEC filings, and public testimony.
