The Algorithm Has a Weapon (You Just Don't Know It)
TikTok's algorithm is the most successful implementation of utilitarian philosophy ever built. What would Bentham say about your dopamine cycle?

TikTok's algorithm is the most successful implementation of 18th-century philosophy in human history. And Jeremy Bentham would be horrified.
In 2024, the average TikTok user spends 95 minutes per day on the platform—more time than most people spend eating, socializing, or thinking about their life choices. The algorithm that keeps them there isn't just code. It's a moral philosophy operating at a scale its original author never imagined, applied to human attention with ruthless efficiency.
But here's the question that should keep you awake at night: if an algorithm maximizes engagement, is it maximizing happiness? Or has utilitarianism, stripped of its humanist roots, become something far more sinister?
The Ghost in the Machine
Jeremy Bentham, the 18th-century philosopher who founded utilitarianism, had a simple axiom: the greatest happiness for the greatest number. Actions should be judged by their consequences, specifically their ability to produce pleasure and minimize pain. It was radical for its time—a philosophy that promised to make ethics calculable, scientific, democratic.
What Bentham couldn't have predicted was that his calculus would one day be implemented in silicon, at planetary scale, optimizing not for happiness but for engagement.
[!INSIGHT] The substitution of "engagement" for "happiness" represents the most consequential semantic drift in the history of applied ethics. What began as a philosophy of human flourishing has become a methodology for extracting attention.
Every recommendation algorithm—from TikTok's For You page to YouTube's Up Next to Netflix's Because You Watched—is fundamentally utilitarian. Each calculates which piece of content will produce the maximum response: a click, a view, a scroll, a share. The metric has changed, but the structure is identical: aggregate outcome data, calculate utility, optimize.
Consider how TikTok's algorithm actually works. It doesn't care about your long-term wellbeing, your relationships, your career, or your mental health. It cares about one thing: will you keep watching? Every micro-interaction—how long you pause, whether you rewatch, what you share—feeds into a utility calculation that would make Bentham weep with envy.
“"The quantity of pleasure being equal, pushpin is as good as poetry.”
Bentham's famous defense of pushpin (a trivial game) over poetry was meant to democratize value—if simple pleasures count as much as sophisticated ones, everyone's happiness matters equally. But TikTok has proven him right in the worst possible way. The algorithm doesn't distinguish between a three-second dance trend and a thirteen-minute video essay. Both are simply data points in the great utility calculation.
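The utility calculation described above can be made concrete with a toy sketch. Everything here is an illustrative assumption: the signal names, the weights, and the scoring function are invented for exposition, and real ranking systems are vastly more complex (and proprietary).

```python
# A hypothetical "felicific calculus" for a feed, in the spirit of the
# argument above. Signals and weights are invented for illustration;
# no platform's actual formula is being reproduced here.
from dataclasses import dataclass

@dataclass
class Interaction:
    watch_fraction: float  # share of the video actually watched (0..1)
    rewatched: bool        # did the user loop the video?
    shared: bool           # did the user share it?

# Invented weights: each micro-interaction is treated as a proxy
# for "pleasure produced", exactly as the text describes.
WEIGHTS = {"watch_fraction": 1.0, "rewatched": 0.6, "shared": 1.5}

def engagement_utility(i: Interaction) -> float:
    """Aggregate micro-interactions into one number to maximize."""
    return (WEIGHTS["watch_fraction"] * i.watch_fraction
            + WEIGHTS["rewatched"] * float(i.rewatched)
            + WEIGHTS["shared"] * float(i.shared))

# The calculus is content-blind: a dance trend and a video essay are
# just two rows of data, ranked by the same score.
candidates = {
    "dance_trend": Interaction(watch_fraction=0.95, rewatched=True, shared=False),
    "video_essay": Interaction(watch_fraction=0.40, rewatched=False, shared=True),
}
ranked = sorted(candidates,
                key=lambda k: engagement_utility(candidates[k]),
                reverse=True)
```

Note what the sketch makes visible: nothing in the objective knows or cares what the content is. Only the aggregate score matters, which is precisely the pushpin-equals-poetry premise.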
The Panopticon in Your Pocket
Bentham's other great invention was the Panopticon—a prison designed so that inmates could be watched at any time without knowing whether they were being watched in that moment. The psychological effect would be self-discipline: prisoners would behave as if under constant surveillance.
Social media has built a panopticon far more effective than Bentham's architectural fantasy. The algorithm is always watching. Every scroll, pause, and click is recorded and fed back into the system. We never know exactly when or how we are being measured, so we discipline ourselves: curating our feeds, adjusting our behavior, becoming complicit in our own manipulation.
[!NOTE] Michel Foucault, in Discipline and Punish (1975), recognized the Panopticon as a model for modern power relations. He could not have foreseen how thoroughly this model would be internalized through algorithmic systems that make surveillance voluntary, even pleasurable.
What Bentham Would See
If Jeremy Bentham were transported to 2024 and handed an iPhone, his reaction would be complex—recognition mixed with horror.
He would recognize his philosophy immediately. The aggregation of preferences, the calculus of pleasure, the optimization of outcomes—this was exactly what he had proposed. Utilitarianism had won. It was now the dominant logic of human interaction, encoded into the infrastructure of daily life.
But he would also see the perversion. His utilitarianism was premised on the greatest happiness. It assumed that pleasure was something to be maximized for human benefit, not extracted for corporate profit. The algorithm treats human attention as a resource to be mined, not a capacity to be cultivated.
Consider the dystopian implications:
- Metric Substitution: Engagement metrics (time on platform, click-through rates) are treated as equivalent to user benefit. But we know that high engagement often correlates with negative psychological outcomes—addiction, anxiety, tribalism.
- Preference Manipulation: The algorithm doesn't just satisfy existing preferences; it shapes them. What you see determines what you want, creating a feedback loop that utilitarian philosophy never accounted for.
- Distribution Problems: Utilitarianism aggregates outcomes across individuals, but algorithms optimize for individual engagement in ways that can harm collective welfare—polarization, misinformation, radicalization.
- Temporal Distortion: The algorithm optimizes for immediate gratification, not long-term flourishing. Every minute spent on TikTok is a minute not spent on activities with delayed but greater rewards.
“"Create all the happiness you are able to create; remove all the misery you are able to remove.”
The Philosophical Stakes
The algorithm is not neutral. It embodies a theory of human nature (we are pleasure-maximizing entities), a theory of value (engagement equals benefit), and a theory of society (aggregate outcomes matter more than distribution). These are philosophical commitments, not technical necessities.
This matters because alternatives exist. We could design algorithms that optimize for wellbeing rather than engagement, that account for long-term effects, that resist manipulation. The fact that we don't is a choice—not an inevitable consequence of technology but a specific implementation of a specific philosophy.
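To show this is a design choice rather than a technical necessity, here is a minimal sketch of an alternative objective. The penalty terms and weights are assumptions invented for illustration, not any existing platform's formula; the point is only that the optimization target is swappable.

```python
# A toy alternative objective: the same engagement score, discounted
# by proxies for long-term harm. All names, signals, and weights are
# illustrative assumptions, not a real platform's ranking function.

def wellbeing_adjusted_score(engagement: float,
                             predicted_regret: float,
                             session_minutes: float) -> float:
    """Trade immediate engagement against proxies for long-term cost.

    predicted_regret: hypothetical model output in [0, 1], e.g. trained
        on survey questions like "do you regret watching this?"
    session_minutes: length of the current session; long sessions are
        penalized to resist the temporal distortion described above.
    """
    regret_penalty = 2.0 * predicted_regret
    fatigue_penalty = 0.05 * max(0.0, session_minutes - 30.0)
    return engagement - regret_penalty - fatigue_penalty
```

Under this objective, a highly engaging but regret-inducing video served deep into a long session can score below a less engaging one, which is exactly the kind of trade-off a pure engagement maximizer is structurally incapable of making.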
[!INSIGHT] Every algorithmic system encodes normative assumptions about what constitutes a good outcome. The question is not whether to embed philosophy in technology but which philosophy to embed—and who gets to decide.
John Stuart Mill, who refined Bentham's utilitarianism, would have been even more appalled. Mill distinguished between higher and lower pleasures—intellectual and moral satisfactions that were qualitatively superior to mere sensation. An algorithm that maximizes engagement without regard to content quality is precisely the vulgar utilitarianism Mill spent his career arguing against.
“"It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”
Implications: What Philosophy Owes Us
If algorithms are philosophy, then philosophy has work to do. We need:
Democratic Oversight: The philosophical assumptions embedded in algorithmic systems should be subject to public debate, not hidden in proprietary code. Citizens should have a say in what values their information infrastructure promotes.
Alternative Designs: Computer scientists and philosophers must collaborate to imagine recommendation systems that optimize for human flourishing rather than corporate metrics. This is technically feasible but ethically demanding.
Digital Literacy as Philosophy: Understanding how algorithms shape our preferences should be basic education. We teach children to read texts; we should teach them to read systems.
Regulatory Frameworks: The EU's Digital Services Act represents a first step, but true accountability requires recognizing that algorithmic choices are moral choices with philosophical weight.
Conclusion
The next time you find yourself unable to stop scrolling, remember: this is philosophy working exactly as designed. Jeremy Bentham gave us a tool for calculating pleasure. Silicon Valley built it into a weapon aimed at your attention. The question now is whether we have the philosophical—and political—resources to take that weapon back.
Sources: Bentham, J. (1789). An Introduction to the Principles of Morals and Legislation. Mill, J.S. (1863). Utilitarianism. Foucault, M. (1975). Discipline and Punish. Bucher, T. (2018). If...Then: Algorithmic Power and Politics. Zuboff, S. (2019). The Age of Surveillance Capitalism. TikTok transparency reports, 2024.

