Stephen Witt's The Thinking Machine reveals how a tiny lab conquered AI. The 2025 Financial Times winner exposes power, ambition, and computing's future.
Hylē Editorial
Why The Thinking Machine by Stephen Witt will change how you think about artificial intelligence, and, more importantly, about who controls it. In 2012, a small research lab in Toronto had zero employees and virtually no funding. By 2023, that same lab had spawned technologies worth over $500 billion in market capitalization. How did Geoffrey Hinton's modest academic operation become the epicenter of the most transformative technology since electricity?
This question haunted me for months until I read Stephen Witt's Financial Times Business Book of the Year 2025 winner. The answer is far more unsettling than any narrative about visionary genius or inevitable progress.
Witt's reporting reveals something that decades of Silicon Valley mythology have obscured: the AI revolution was never inevitable, and its current power structure emerged from a series of contingent decisions, lucky breaks, and moral compromises that could have easily gone differently.
The book centers on Geoffrey Hinton, the cognitive psychologist who spent thirty years pursuing an approach to artificial intelligence that the entire field had declared a dead end. Neural networks—the technology now powering everything from ChatGPT to autonomous vehicles—were considered a career-killing backwater as recently as 2010. Hinton persisted not because he foresaw the future, but because he was constitutionally incapable of abandoning an intellectual puzzle.
[!INSIGHT] Hinton's stubbornness wasn't strategic vision—it was intellectual obsession. The difference matters because it suggests that the $500 billion industry now dominated by a handful of tech giants exists largely by accident.
Witt gained unprecedented access to Hinton and his inner circle, including the pivotal moment when Hinton's small company, DNNresearch, was acquired by Google in 2013 for $44 million. That acquisition, barely noticed by the financial press at the time, would prove to be the opening move in the greatest concentration of technological power since the Manhattan Project.
The Great AI Brain Drain
One of the book's most revelatory sections documents the systematic transfer of academic AI talent into corporate laboratories. Between 2015 and 2020, every single one of Hinton's doctoral students accepted positions at Google, Facebook, Apple, or OpenAI. The brain drain wasn't just about salary—though compensation packages routinely exceeded $2 million annually for fresh PhDs. It was about access to computing resources that no university could match.
“"In academia, I could run experiments on maybe 4 GPUs. At Google, I had access to thousands. It wasn't a choice between industry and academia”
— it was a choice between doing the science and watching others do it."
This consolidation has profound implications for the future of artificial general intelligence. The companies that control the most computing power also control the most talented researchers, creating a feedback loop that concentrates rather than disperses technological capability.
The Computing Power Cascade
Witt introduces a concept that deserves wider currency: the "computing power cascade." Traditional industrial monopolies formed around scarce resources like oil or minerals. AI monopolies form around something that technically isn't scarce—computing power—but which becomes effectively scarce through capital intensity.
Training a state-of-the-art language model now costs between $100 million and $1 billion in computing resources alone. Only five companies on Earth have both the financial resources and technical infrastructure to train such models. This isn't a natural monopoly in the traditional economic sense; it's a capital barrier that creates the same concentration of power.
The numbers are staggering. In 2023, Google spent an estimated $12 billion on AI infrastructure. Microsoft committed another $10 billion to OpenAI. Meta, Amazon, and the Chinese tech giants made comparable investments. Meanwhile, the entire National Science Foundation budget for computer science research that year was approximately $1.2 billion.
[!NOTE] The concentration of AI research in corporate labs has accelerated development dramatically—GPT-4 would likely not exist in an academic timeline—but it has also narrowed the range of questions being asked. Corporate AI research tends to focus on commercially viable applications rather than fundamental questions about intelligence, consciousness, or alignment with human values.
The Alignment Problem Is a Power Problem
Witt's reporting touches briefly on AI alignment—the challenge of ensuring that artificial intelligence systems pursue goals compatible with human flourishing—but his more original contribution is framing alignment as fundamentally a question of power distribution rather than technical specification.
The technical alignment research community, concentrated at places like the Machine Intelligence Research Institute and various university labs, assumes that the key challenge is specifying the right objective function for AI systems. Witt's reporting suggests a different framing: the key challenge is distributing the power to specify objective functions broadly enough that no single actor can impose their values on everyone else.
This reframing has significant implications. If alignment is primarily a technical problem, then the solution is better engineering. If alignment is primarily a power problem, then the solution is political—antitrust enforcement, international coordination, democratic oversight of AI development.
What This Means for the Next Decade
The Thinking Machine arrived at a consequential moment. In late 2024 and early 2025, regulators in the United States, European Union, and China began seriously considering restrictions on AI development and deployment. The companies profiled in Witt's book simultaneously lobbied for regulations that would entrench their advantages while warning that excessive constraints would handicap Western AI development relative to Chinese competitors.
The book provides essential context for evaluating these claims. The current AI oligopoly is not a natural outcome of technological progress—it's the result of specific policy choices, corporate strategies, and historical accidents that could have gone differently. Understanding this contingency is the first step toward imagining alternative futures.
Witt concludes with a scene that will linger with any reader: Geoffrey Hinton, having resigned from Google to speak freely about AI risks, testifying before a congressional committee about the technology he helped create. The man who spent decades convincing the world that neural networks were the path to artificial intelligence now spends his time warning that we may not survive the journey.
“"Iconsole myself with the thought that if I hadn't done it, someone else would have. But that's not really comforting, is it? Someone was going to invent the atomic bomb. That doesn't make it less terrifying that Oppenheimer actually did.”
— Geoffrey Hinton, as quoted in The Thinking Machine
Key Takeaway
**The AI revolution was never inevitable: it emerged from specific choices, accidents, and power consolidations that concentrated world-changing technology in the hands of a few corporate giants. Understanding this history is essential if we want different outcomes in the future. Witt's masterpiece shows that who controls AI matters just as much as what AI can do.**
Sources: Witt, Stephen. The Thinking Machine: Geoffrey Hinton and the Quest for Artificial Intelligence. Penguin Press, 2025; Financial Times Business Book of the Year Award Announcement, 2025; National Science Foundation Budget Reports, 2023; Corporate AI Spending Analysis, Stanford HAI Report, 2024.