This past spring, I went down to Texas to visit a dear friend. She and her partner are among the class of Americans that Kurt Vonnegut called “fabulously well-to-do,” and on the second day of my visit, we decided to swing by their tennis club for lunch. My friend drives a Tesla. When we got in, the control panel prompted her: The car was due for some kind of software update. We were hungry. She declined.
Twenty minutes later, we arrived at the club, and it turned out that the dining room was closed. Ah well. We walked back out to the parking lot and chatted about other options. There was a spot they both liked a few miles down the road. In Texas, everything is at least a few miles down the road. We got into the car. It had, as if of its own insufferable volition, begun the software update that she’d just declined, and it would be undrivable for 45 minutes. We stood on the baking asphalt, defeated by her own inanimate possession. It could have been worse; we were only stuck in a parking lot in the middle of a city, not bricked on some roadside a thousand miles from home, as has been the case in other Tesla-based tales of woe.

The travails and minor inconveniences of the affluent are hardly the stuff of revolutions—unless, I suppose, you count our original American one—but it seems worth noting that the awfulness, stupidity, bad design, and poor customer service of the flashy technological marvels that were meant to deliver the future to us now affect even the rich, and it feels more and more as if we, instead, have been delivered to the future.
Search engines have gotten worse, the results of almost any query an unusable mush of sponsored products and AI gibberish. Artificial intelligence itself (or at least the large language models that have been branded as AI), which was supposed to augur unimaginable advances in science, medicine, knowledge, and productivity, has so far turned out to be largely a machine for cheating, either on your homework or your spouse. Many of the products on your favorite shopping sites are fake. The ones that aren’t? Well, the customer reviews of them might be. Getting Facebook to remove a deceased relative’s profile is a difficult, unwieldy process. Farmers struggle to repair their own farm equipment, and the corner mechanic can no longer tune up your car. Entire companies exist simply to buy up or create dubious patents and then sue other companies for infringing them. The social media networks that promised to connect us to our friends and families and let us talk back to our favorite journalists and celebrities have transformed from innocent timelines into algorithm-driven outrage machines, delivering snuff films, unwanted pornography, and vile political vitriol. What is to be done?
Cory Doctorow—writer, pundit, professor, philosopher, NGO crusader—has been beating the drum on the decline of the internet and related technologies for some time now. In 2022, in search of a simple, popularizable explanatory scheme, he coined the term “enshittification.” It caught on. Broadcaster Chris Hayes invoked it in his book The Sirens’ Call: How Attention Became the World’s Most Endangered Resource, observing how “the ‘dream of connection’ that tech companies have marketed and promised” has failed, leaving phone-addicted Americans “more isolated and less social” than before. The economist Paul Krugman has used the term to explain why the “tech industry has fallen out of public favor” and its leaders have displayed an “increasingly anti-democratic rage.” It was named a word of the year by several dictionaries in 2023 and 2024. It is, Doctorow tells us, “funny and naughty to say.” I am not entirely sure how naughty it is; yes, a couple of major American dailies still clutch their style guides when it comes to cussing, but the rest of us hardly blush. Nonetheless, it rolls off the tongue more easily, and is certainly more evocative, than Doctorow’s more clinical name for the phenomenon: “sudden-onset platform collapse.”
He has now produced a book on the topic, Enshittification: Why Everything Suddenly Got Worse and What to Do About It. We might quibble with the “suddenly,” but there is no doubt that it’s an accelerating trend, if not a new one. The services and platforms that were once relatively pleasant places to post vacation pictures and party invitations are now awash in incomprehensible AI slop (another relatively recent and evocative coinage) and seething political vitriol; the online marketplaces where we once bought books and polo shirts are now clogged with LLM-generated fake books and drop-ship knockoff products from brands you love, such as MAGCOMSEN, LETSEE, or TELALEO; the cheap Uber from the airport can now be costlier than the most expensive taxi (remember those?); your web search is now as likely to return a half-hallucinated AI summary as it is to return the quotation you were trying to remember or the recipe you were looking for. “Shit,” as the Occupy Wall Street protest sign memorably proclaimed, “is fucked up and bullshit.”
The most salient characteristic of Doctorow’s diagnosis is that the decline is neither sudden nor accidental but deliberate. It is not a natural process of evolution, nor the slow falling-into-disrepair that we would call, simply, depreciation in the real, physical world. Doctorow believes that “platforms,” the companies, systems, and devices that act as mediums and middlemen in “two-sided markets,” connecting “business customers and end users,” lure users and businesses into technological ecosystems through perks, incentives, convenience, and good design—all subsidized by excess investor capital—before purposefully and incrementally sucking back all the value and profit in the system by decreasing or eliminating investment in services and support. By then it is too late. Users and businesses are locked in, and whether it is because switching costs are too high or because monopolization has eliminated alternatives, they cannot escape.
Doctorow employs a clinical framework as a structuring device. He divides the book into four parts: “The Natural History,” “The Pathology,” “The Epidemiology,” and “The Cure.” The overall scheme is straightforward. A platform raises capital—surplus—from investors and uses it to the advantage of its end users. In the case of a social network like Facebook, it promises not to spy on or surveil its users or sell their data; in the case of Amazon or Uber, it sells at a loss and generates deals that undercut competitors. Once users are locked in, the platforms begin to screw them in order to deliver benefits to their business partners. Suddenly the platforms do spy, surveil, and sell data. Suddenly they offer businesses the opportunity to juke algorithms and put their own products at the top of every search. And finally, when businesses are locked in as well, the platforms extract exorbitant rents for access and claw back all the value for themselves and their shareholders.
In the past, a variety of disciplinary mechanisms may have constrained tech firms’ capacity to enshittify their own products. Market competition meant that users could decamp to better platforms, products, and services as new ones entered the market. Regulation could prevent too much concentration. A “right to repair” meant that users and customers could fix their own problems, limited only by their own technical know-how or that of a skilled third party. And, for a time, tech workers—the coders and programmers who performed the labor that built these platforms and companies—had tremendous labor power derived from their relatively scarce and uniquely portable skills.
These observations are all true to some degree, but they do not arrive without a whiff of an Econ 101 just-so story, and Doctorow’s own reporting undermines any sense that these factors ever really constrained the tech sector. He shows Google workers standing up to the bosses from time to time, particularly in the giddy social movements of the 2010s, as the company found itself confronted by a workforce that, hilariously, believed in the now-abandoned corporate slogan, “Don’t be evil.” Yet how much did their actions accomplish? The largest tech companies have always operated in a predatory and unscrupulous manner. Even when Google was at the height of its efficacy as a search tool earlier in the century, it attracted intense congressional and Federal Trade Commission scrutiny for monopolistic practices in search and online advertising. Apple has used its absolute control over its App Store to extract monstrous fees from merchants and developers since the advent of the smartphone era. And those ethical Googlers? Doctorow also documents an astonishing shell game, in which workers spun off companies specifically so that they would be acquired for millions by their own former employer, for which they then went right back to work.
Enshittification is a swift and entertaining, if frequently enraging, read. Its case studies of Facebook, Amazon, the iPhone, and Twitter don’t break any new reportorial ground; if you read Wired or The Wall Street Journal or The New York Times, you are probably familiar with many of the same tales of thoughtful, planned, strategic abuse of users, customers, and business partners alike. Yet many of the details still stand out as shocking: Doctorow recounts, for example, how Uber uses algorithmic wage discrimination to punish and discipline its drivers. It is ingenious and diabolical. If drivers are selective about the rides they accept, Uber begins to offer higher payments to induce them to accept more rides and do more driving. Uber doesn’t want picky drivers; it wants volume. But as drivers respond and become less picky, the app begins to randomly but steadily reduce the payments. It incentivizes them to work more, and when they do, it pays them less.
Of all the stories in the book, perhaps the most sordid and dystopian is the tale of Adobe using its market dominance to deny artists, photographers, and designers access to color itself: The company, which dominates the graphic design and illustration market, waged a decade-long campaign to convince its users to switch from individually purchased and locally run versions of its software to its subscription-based Adobe Creative Cloud. And then, in 2022, it announced that users would henceforth have to pay a monthly subscription fee for access to the full panoply of Pantone colors—special, proprietary “spot colors” that are essential to the work of illustration and design. If they didn’t, Adobe warned, any pixels containing Pantone colors in their existing images would be replaced with plain black. Drop-shipped crap from Amazon is one thing; extorting money for color is like the Martian villains of Total Recall forcing the poor and downtrodden to pay for air.
This all leads inevitably to what Doctorow and others call “technofeudalism,” a term that will make any medievalists reading this throw their copies of the book across the room, as it relies on a pretty thoroughly discredited understanding of premodern Europe. Doctorow’s historical comparison seems to conflate some form of serfdom with the lot of a tenant farmer in eighteenth-century England. He compares these older systems, in which the landed gentry extracted rents from a tenant class that was either legally or informally bound to the land, to today’s economy, in which everything is somehow a subscription service. Until recently, companies depended on consumption—purchase—and profit. But now you no longer really buy a product, whether it is the e-books you rent (because digital rights management, or DRM, prevents the free lending, gifting, or resale of the work); the iPhone for which you pay a monthly fee to your carrier; the rapidly disappearing content on your favorite streaming platform; or even your word processor or spreadsheet, which you also rent from somewhere out there in the cloud. You are something more like a peasant, chained to a set of overlapping digital manors, paying monthly to a distant lord for your right to be an influencer, a drop-shipper, or a watcher of TV.
I am not sure this state of affairs is as novel to capitalism as Enshittification makes it out to be, and this is a frustrating aspect of the book. What was the subprime mortgage crisis, to cite a recent example, if not a scheme to lock borrowers into initially attractive mortgages and then to steadily jack up their rates and costs in order to extract more cash? What were midcentury scams like Columbia House, with its “negative option” billing model (they send you an item, and you are charged unless you return it), if not forerunners of today’s online subscription models, which make signing up easy and canceling seemingly impossible? What are our classic multilevel marketing programs, from Mary Kay to Amway, if not templates for the everyone-an-entrepreneur, no-one-an-employee exploitation practiced by today’s Ubers and DoorDashes, or even by today’s social media-driven influencer hustles?
Doctorow is a leftist writer and thinker who has expressed deep skepticism about whether “capitalism has a future,” but both his diagnoses and his recommended treatments tend to circle around a rather nostalgic notion of a unionized, highly regulated, almost midcentury American capitalism, or perhaps just a pre-AfD, pre–National Rally, common market European social democracy, neither of which seems especially adequate to the present moment. He can also be frustratingly sloppy with his terms. He writes about monopoly and monopsony, words with quite specific meanings, in a breezily vernacular way (explaining in a jokey footnote that he is doing so because the alternative, more accurate terms are hard to pronounce). He refers to the “efficient market hypothesis” in an otherwise quite interesting treatment of abuses of copyright and intellectual property law, but the efficient-market hypothesis is not about consumer or business-to-business markets; it is, rather, a hypothesis about the nature and accuracy of securities prices in capital markets. He uses “orders of magnitude” to mean “a whole lot,” rather than precise factors of ten, which is a venial sin, but a frustrating one from a usually technically fluent author.
While Doctorow’s diagnoses are damning and infuriating, his prescription is tinged with a sentimental vision of what he calls, without apparent irony, “the old, good internet.” I am a decade younger than Doctorow but was myself a fairly early adopter, and I understand this wistfulness. I, too, made friends in early chat rooms, explored my sexuality, engaged in debate, and adopted and discarded all sorts of strange politics and intellectual currents that I never would have otherwise encountered. I miss that internet, but would I go back? Could I?
“Lowering the barriers to entry for participation in digital life is an unalloyed good,” Doctorow writes. Simply: no. If anything, the opposite. License it like a car. Or a firearm. If there is anything that the past several decades have shown us, it is that “digital life”—access to tools of information and communication more powerful and universal than any heretofore invented or available—is something that most individuals, and perhaps even the most responsible among us, simply should not possess. Its capacity to enable demagoguery, to dissolve ideology, to promote conspiracy, and to encourage violence and antisocial behavior is wild and dangerous. Are corporate algorithms alone to blame? Perhaps, for your racist uncle, for your perpetually enraged hashtag-resistance aunt, for your Fox News dad, for your MSNBC mom. But for the disaffected young man, angry and future-deprived, lurking in some strange bespoke Discord channel or creating untranslatable racial slurs with which to bait his fellow gamers? Long before Elon Musk bought Twitter, before YouTube made celebrities out of “manosphere” pickup artists, before a Facebook group convinced your neighbor that vaccines cause autism, the internet proved a fertile ground for fervid weirdos to find each other and whip one another up into strange beliefs.
No less so today, whether on Discord or one of the “chans” or deep within Reddit, where, far from any particular corporate algorithm, hate, bile, violence, and rage express themselves nonetheless. When the conservative influencer and organizer Charlie Kirk was shot dead in September and it was revealed that his killer had inscribed messages on his bullet casings, one of the more alarming aspects was not so much the question of the shooter’s political ideology, which dominated the news, as the fact that the language—inscrutable, irony-poisoned gamer argot—came from online. I am increasingly convinced that giving most of us access to the online world is like handing a toddler a gun. The problem is giving it to the toddler. But the problem, also, is that it is a gun.
Enshittification is fundamentally premised on the idea that we created a technological marvel (or a set of them), and then corporate greed ruined it. But what if that is not the case? What if, as with nuclear weapons, we have created technology that we are not yet mature enough to wield? What if, in our quest for infinite energy, we built a bomb?
Doctorow suggests that the problem with the internet is mostly that platforms have become too powerful. I have sympathy for this view. “What if we switched our emphasis from ‘making platforms nicer’ to ‘making platforms less important’?” he asks. What if it were easier to leave them without losing friends, customers, connections? He uses federated Twitter alternatives like Mastodon as examples of a better possible internet. (Never mind that Mastodon, to use a technical term, sucks.) It is tempting to believe that if the Zuckerbergs and Musks of the world were knocked down a few pegs, if content moderation improved, if European-style regulation became the norm in more countries, then we might find some modicum of fairness and usability returning to our online worlds, and we would be back to baby pictures, book-buying, and dorky humor.
Would we? I am well past beginning to doubt it. I think the horse is out of the barn. It is out of the paddock. It is galloping toward the horizon, toward some kind of future we’ve not quite imagined yet, and I don’t think we can just call it back. Digital computing was an extraordinary invention; the constellation of technologies we now just call “the internet” even more so; the great text-concatenating engines of large language model “artificial intelligence,” which derive their power from training on the internet’s near-infinite texts and images, some of the most impressive yet. But the question that Enshittification doesn’t ask, doesn’t want to ask, is this: What if none of it was a good idea to begin with?




