Karen Hao's investigation reveals how OpenAI transformed from a nonprofit idealist to a $100B corporate empire—and what it means for humanity's AI future.
Hylē Editorial
In 2015, OpenAI was founded with a promise that sounded almost messianic: to ensure artificial general intelligence benefits all of humanity. The initial $1 billion pledge came with an explicit vow—no profit motive, no corporate capture, pure mission-driven research. By 2024, that same organization had secured a valuation exceeding $80 billion, with Microsoft holding a 49% stake and exclusive rights to its most powerful models. What happened in those nine years isn't just corporate evolution. According to Karen Hao's devastating investigation Empire of AI, it represents one of the most consequential bait-and-switches in technological history.
The numbers alone should disturb you. OpenAI's computing costs skyrocketed from roughly $12 million in 2017 to a projected $7 billion by 2024—a 58,000% increase that made dependency on Microsoft not just convenient but existential. Meanwhile, the company's original charter, which explicitly prohibited allowing "subordination of our mission to commercial considerations," was quietly rewritten in 2019 to accommodate a "capped profit" structure.
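The growth figure is easy to verify from the article's own numbers. A quick sanity check (the dollar amounts are the article's estimates, not independently sourced):

```python
# Verify the ~58,000% compute-cost increase cited above.
cost_2017 = 12_000_000       # ~$12M compute spend in 2017 (article's estimate)
cost_2024 = 7_000_000_000    # ~$7B projected spend in 2024 (article's estimate)

pct_increase = (cost_2024 - cost_2017) / cost_2017 * 100
print(f"{pct_increase:,.0f}%")  # ~58,233%, consistent with the ~58,000% cited
```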
Hao's reporting, built on over 300 interviews and internal documents, traces OpenAI's transformation to a fundamental collision between idealism and computational reality. The organization's founders—Sam Altman, Elon Musk, Ilya Sutskever, and others—genuinely believed they could democratize AGI development. What they underestimated was the sheer physical cost of training increasingly powerful models.
[!INSIGHT] The scaling hypothesis—the belief that more compute plus more data inevitably produces more capable AI—became both OpenAI's guiding philosophy and its financial prison. Each breakthrough required exponentially more resources, forcing the organization into the arms of those who had them.
GPT-3's training run in 2020 cost an estimated $4.6 million in computing resources alone. GPT-4? Industry analysts place the figure between $50 million and $100 million. This mathematical reality meant that OpenAI's nonprofit structure became untenable almost immediately after its founding. The mission to benefit humanity required resources that only humanity's wealthiest corporations possessed.
The Microsoft Entanglement
The 2019 partnership with Microsoft initially appeared as a $1 billion investment in shared ideals. Hao reveals it was something far more transactional. Internal communications show Microsoft executives negotiating for exclusive licensing rights, preferential access to research, and increasingly, influence over deployment decisions. The phrase "aligned interests" appeared in press releases while contractual terms told a different story.
“We are basically a Microsoft subcontractor at this point. The mission is secondary to the product roadmap.”
— Anonymous OpenAI employee, quoted in Empire of AI
By 2023, the entanglement had become so complete that Microsoft's cloud infrastructure wasn't just hosting OpenAI's models—it was structurally inseparable from them. Azure OpenAI Service became a billion-dollar revenue stream for Microsoft, while OpenAI's research directions increasingly mirrored what enterprise customers wanted rather than what humanity needed.
The Safety Theater
Perhaps the most uncomfortable section of Hao's book examines what happened to AI safety research within OpenAI. The organization was founded with the explicit goal of ensuring AGI didn't harm humanity. Its founding document stated that if another organization appeared close to achieving safe AGI, OpenAI would stop competing and assist them instead.
[!NOTE] The concept of "alignment" in AI safety refers to ensuring AI systems pursue goals that match human values. OpenAI's original charter committed significant resources to this research, including a dedicated Superalignment team.
The reality proved messier. As commercial pressures mounted, the Superalignment team saw its compute allocations slashed. Key researchers, including cofounder Ilya Sutskever and safety lead Jan Leike, departed in 2024 under circumstances that suggested fundamental disagreements about priorities. Leike's public statement was cryptic but pointed: "I've been disagreeing with OpenAI leadership about the company's core priorities for quite some time."
Hao documents a pattern where safety research was publicly celebrated while privately deprioritized. Press releases announced new safety initiatives while internal budget documents showed the majority of compute going to capabilities research—the work that produced marketable products.
The November Coup That Wasn't
The brief ouster of Sam Altman in November 2023 receives extensive treatment in Hao's account, and her findings challenge the mainstream narratives on both sides. The board's stated concerns about Altman's candor masked deeper tensions about the pace of commercialization, the erosion of safety culture, and the effective surrender of governance to Microsoft.
What's remarkable in hindsight isn't that Altman was fired—it's that the board thought they could fire him. Within days, Microsoft had offered to hire Altman and his allies, 95% of OpenAI employees signed a letter threatening to quit, and the board's authority evaporated. The resolution restored Altman with a new, more compliant board. Hao characterizes this not as governance failure but as governance impossibility—proof that by 2023, OpenAI had become too commercially important to be governed by its own nonprofit charter.
What This Means for AI's Future
The implications of Hao's investigation extend far beyond one company's corporate drama. If OpenAI, the most idealistically founded AI lab in history, could not resist capture by commercial interests, what hope exists for genuinely democratic AI development?
The concentration of advanced AI capabilities in a handful of companies is now effectively complete. Google DeepMind, Anthropic, and OpenAI collectively control the frontier of large language model development. All three are either subsidiaries of tech giants (DeepMind/Google), heavily funded by them (Anthropic, backed by Google and Amazon), or effectively dependent on them (OpenAI/Microsoft). The notion of independent AI research has become almost oxymoronic.
[!INSIGHT] Hao's central argument isn't that profit is evil or that corporations can't produce beneficial technology. It's that the specific structure of AI development—requiring billions in compute, producing products with immediate commercial applications, and operating in a regulatory vacuum—makes genuine public-interest governance nearly impossible under current arrangements.
The book points to alternative models: publicly funded AI research, international coordination on compute governance, and antitrust action to break the cloud-compute duopoly. But Hao is honest about these prospects. They require political will that doesn't currently exist.
The Empire's Shadow
Empire of AI arrives at a moment when the public is being asked to trust the very institutions that Hao shows are structurally untrustworthy. OpenAI's current mission statement has evolved from "benefiting humanity" to "building safe and beneficial AGI," a subtle shift that replaces universal benefit with product development. The company's relationships with government agencies, educational institutions, and research organizations continue to grow, spreading its influence into domains far beyond commercial AI.
Key Takeaway
Karen Hao's investigation reveals that OpenAI's transformation from nonprofit idealist to corporate empire wasn't a betrayal of its mission—it was the inevitable result of founding an organization that required billions in corporate resources while pretending those resources came without strings attached. The lesson isn't that the founders were dishonest, but that the structure was impossible. Any serious attempt to ensure AI benefits humanity must begin by acknowledging this reality and designing governance mechanisms that can actually constrain concentrated power. The empire exists. The question is whether democratic institutions can ever meaningfully constrain it.
Sources: Empire of AI: Dreams and Nightmares in Sam Altman's OpenAI by Karen Hao (2025); OpenAI founding charter (2015); OpenAI corporate restructuring documents (2019); Microsoft Q4 2024 earnings report; industry analyst estimates of AI training costs; public statements by Jan Leike and Ilya Sutskever (2024)