
Westenberg.

Field Notes on Now.

rss
15 posts · 1 narrative
Feed metadata
Generator Ghost 6.33
Status active
Last polled Apr 29, 2026 01:39 UTC
Next poll Apr 30, 2026 01:39 UTC
Poll interval 86400s
ETag W/"38ce4-NNNa3Rd/zmQ+tgzSMHtoansxxic"

Posts

The Loop: everything has happened before, and everything will happen again
We keep replaying the same human mistakes - bubbles, strongmen, scapegoats, and panics - because the operating system in our skulls hasn’t updated in ten thousand years.

This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

In February 1637, a single tulip bulb in Haarlem sold for 5,200 guilders - the price of a canal house on the Keizersgracht // ten times the annual salary of a skilled craftsman. The bulb was a Semper Augustus, streaked white and flame-red, and the buyer never saw it. He bought a piece of paper representing a future flower, and within 3 weeks, the market collapsed. Men who had mortgaged their workshops to buy futures on something they would never hold went home to explain it to their families. Within a century, the Dutch and English were back, inside the South Sea Company, whose directors had printed prospectuses for an undertaking they refused to describe. Within two centuries, French investors were holding the worthless scrip of John Law's Mississippi Scheme, which had promised them a share of Louisiana gold that didn't exist. By the 1840s it was railway shares, with one in ten English investors buying into lines that were never built. By the 1920s it was radio stocks. By the late 1990s it was dot-com. By 2008 it was tranched American mortgages, rated AAA by people paid by the banks that issued them. By 2021 it was JPEGs of monkeys. By 2026, it was AI stocks.

Every one of these episodes was preceded by someone writing a book about how the last one could never happen again, and every single one ended with the same sentence murmured like a prayer on the way up:

This time is different.

…But it never is.

Is it?

The claim I want to make…

Humans do near-identical things, over and over again, across history. And we do it because our cognitive equipment hasn't changed - the brain running a 21st-century civilization is a Paleolithic brain, shaped by 200,000 years on the savannah and another 10,000 years in small agricultural settlements, and it fears the same things our ancestors feared, and it wants the same things they wanted, and it fails in the same ways.

The loop itself is, in fact, our operating system.

Everything else - the political systems, the technologies, the languages, the ideologies - is the application layer. Applications change, but the operating system doesn't. When an application throws the same error message in Rome, in Berlin in 1933, in Phnom Penh in 1975, and on a Saturday afternoon in a suburban American town in 2024, the error sits in the kernel - and the kernel is not getting patched.

The bubble

The financial bubble (and by that I mean every financial bubble) is the cleanest version of the loop there is. Prices rise, greed overrides caution, debt piles on debt, and the floor gives way. Within ten years the same people, or their children, do it again. And again. And again.

Every bubble is catalogued and studied before the next one begins. Charles Mackay wrote Memoirs of Extraordinary Popular Delusions and the Madness of Crowds in 1841. The book became a bestseller among the same London financiers who would soon be pouring money into Latin American mining schemes that required them to invest in countries they couldn't find on a map. In 1929, Irving Fisher, one of the most published economists in America, declared that stocks had reached a permanently high plateau - and the crash began nine days later. In 2005, Alan Greenspan testified to Congress that American housing prices reflected local conditions and there was no nationwide bubble. In 2008, there was. The brain has a failure mode around probabilistic risk: it discounts low-probability catastrophic outcomes in favor of high-probability mild gains, it reads social consensus as information, and its dopamine circuit rewards the anticipation of gain more reliably than the gain itself.

The hunt feels better than the meal.

Humans pretty reliably miscalculate risk at every step of the process, but somehow the profession of finance is built on the assumption that markets aggregate these miscalculations into wisdom. They don't. They aggregate them into stampedes, and herd cognition does the rest. When everyone around you is buying, the cost of not buying is financial + social. You miss the gain, and your neighbor gets rich, and your brother-in-law mentions it at dinner. The brain treats this as a threat to status, and status, in primate terms, is survival. Solomon Asch's conformity experiments in 1951 showed that ordinary people will deny the evidence of their own eyes rather than disagree with a confident group, and bubbles are Asch experiments with money on the line.

Every bubble ends with the same discovery, which is that the asset was never worth what it traded for; every bubble starts, though, from the matching belief that this time, it is.

The strongman

The strongman arrives on schedule, and the preconditions are consistent. A frightened middle class + institutions that have stopped delivering + an establishment that has lost the trust of the people it governs. Put those pieces in a room together and within a decade someone walks in who promises to cut through all of it. Caesar in 49 BCE. Napoleon in 1799. Mussolini in 1922. Hitler in 1933. Perón in 1946. A catalogue since then that hardly needs naming. The strongman is a phenotype; he's what the interaction between primate dominance hierarchies and political instability produces. Chimpanzee troops have alpha males, and human societies have them too. Under stable conditions, the alpha position is distributed across institutions, softened by law, and rotated by elections. Under unstable conditions, the position re-concentrates around a single body. Frans de Waal watched the same sequence play out among captive chimpanzees at Arnhem; Hannah Arendt watched it play out among human beings in the twentieth century. The mechanics were the same. The stakes differed only in body count.

Apparently, the human brain under stress doesn't want deliberation; it wants authority. Uncertainty burns more energy than bad news, and so the prefrontal cortex tries to resolve ambiguity, and when it fails, it hands control to older circuits that prefer a simple answer to the right answer - any simple answer. MRI studies of people presented with ambiguous political images show amygdala activation patterns close to indistinguishable from fear responses; the feeling of not knowing whether your world is safe is, in brain-chemistry terms, very close to the feeling of being in danger.

And, sooner or later, there will always be someone willing to supply the simple answer. The man who says he alone can fix it believes it, because the crowd that believes it first has already told him so; what his opponents call a lie, he experiences as a revelation. The feedback loop between a frightened population and a would-be strongman runs on the same neurology in both directions: he needs them as much as they need him, and they produce each other.

The cycle tends to run thirty years from collapse to collapse. Long enough for the generation that lived through the last strongman to die, and short enough that their grandchildren are available // ready // willing to repeat the experiment.

The scapegoat

When a society is in pain, it finds someone to blame. Rarely the structure. Rarely the people who benefit most from the structure. Always someone weaker, someone already marginal, someone who can be sacrificed without the majority feeling the cost. Jews in medieval Europe during the Black Death, when entire communities were burned alive on the accusation that they had poisoned wells, and Jews again in Weimar Germany during the hyperinflation. Catholics in Elizabethan England, hunted by priest-catchers who were paid by the head, and Chinese merchants in Indonesia in 1965, and again in 1998. Tutsis in Rwanda in 1994, 800,000 dead in a hundred days, killed with machetes by neighbors who had lived next door for generations. Muslims in post-9/11 America. Immigrants, always, everywhere.

The mechanism was described by René Girard, a French literary critic who argued that violence against the innocent is the engine of social cohesion. His book Violence and the Sacred in 1972 laid out the structure: a community in conflict with itself discovers that it can reconcile by turning collectively on a single victim, and all the violence has to be is unanimous. Guilt is beside the point, which is the part of this I find hardest to sit with; once the blow lands and the crowd goes quiet, the community feels cleansed. Girard's work sits uncomfortably among the more respectable social sciences because it says something his colleagues didn't want to hear: the crowd's sense of unity is purchased with the body of someone who didn't deserve to die, and the mechanism doesn't give a shit about ideology. It works for medieval Catholics, for Jacobin revolutionaries, for Nazi party members, for Twitter mobs. The crowd needs its victim, and the victim needs to be innocent enough that the guilt of destroying him is too heavy to carry, which is why the sacrifice must be followed by denial.

The scapegoat loop is neurology under pressure. The brain performs in-group and out-group sorting in under 200 milliseconds, before conscious perception arrives - a feature of human vision that kept small bands of primates alive on the savannah. A stranger at 40 meters could be trade or death. You didn't have time to think it through. Demagogues know this, or they feel it, which amounts to the same thing. They weaponize a perceptual shortcut human beings can't turn off, and they provide a face for a pain that has no face. The crowd does the rest.

The invention that eats its children

The printing press was going to democratize knowledge. And it did! But first, it launched two centuries of religious war. Johannes Gutenberg pressed his first Bible in 1455. By 1517, Luther's theses were being reproduced across Europe in weeks, and by 1618, the Thirty Years' War had begun. By its end in 1648, a third of the German-speaking population was dead. Elizabeth Eisenstein's The Printing Press as an Agent of Change in 1979 documented how the technology that was supposed to bring light to the masses also industrialized the production of astrology, witch-hunting manuals, and anti-Semitic pamphlets. The press amplified everything, including the things its advocates hoped it would abolish.

Radio was going to educate the masses. It gave Hitler a direct line to every kitchen in Germany, and Father Coughlin a direct line to thirty million American listeners in the 1930s, and Radio Rwanda the tool it needed to coordinate a genocide in 1994. Television was going to create an informed electorate - but it simultaneously created a visual electorate, which turned out to be a different thing. Marshall McLuhan saw all of this in Understanding Media in 1964 and was called a charlatan for saying so.

Social media was going to connect the world.

Well, it has, and the connection is the problem.

Every new tool that reshapes a society follows the same arc: it gets pitched as utopia, adopted before anyone understands it, panicked about ten years too late, and regulated (badly) ten years after that. By the time the culture has a theory of what the tool does, the social fabric has already been re-stitched around it; the mismatch between the speed of technological change and the speed of social adaptation is structural. The brain adopting a new tool has never been the brain that understands its second-order effects, because the lag is biological. The telegraph took 50 years to saturate the industrialized world, but the internet took 20, and the smartphone took 10.

And generative AI has taken half of that to be near-ubiquitous…

The adaptation lag stays constant, meaning each new technology is more disruptive than the last. We're adopting tools - right now - that will shape the next century without having metabolized the last century's tools; the printing press hasn't been fully understood, radio hasn't been understood, television hasn't been understood, and the side effects of social media are being "lived" through in real time by people who haven't yet admitted what it's doing to us.

The war that ends all wars

Humans don't go to war despite knowing what war does; they go to war because the knowledge of what it does fades, even though it technically still exists. Nobody forgot the pain of WW1. It just became less vivid…

The generation that fought swears never again, and their children believe them, but their grandchildren might not. By the fourth generation, war is an abstraction, something that happened to other people, in old photographs, with outdated weapons. William Tecumseh Sherman spent his last years giving speeches against war to audiences who listened attentively, and then sent their sons to Cuba in 1898.

The French generation that survived 1918 built the Maginot Line because it couldn't imagine living through another Somme, but their sons were overrun by a tactic that didn't exist when the walls were poured, by an enemy they had forgotten to fear. Robert McNamara, the architect of the Vietnam War, sat for a 2003 documentary called The Fog of War in which he admitted that the policies he had designed had been wrong for reasons he had understood at the time.

The film was released during the invasion of Iraq; the lessons were on screen, broadcast to millions, but the tanks kept rolling.

You can teach someone that fire burns, but you can't make them feel the heat. A lesson that can't be felt won't prevent the behavior it describes. Wilfred Owen wrote Dulce et Decorum Est in 1917 about the sweet lie that dying for your country was noble. The poem is taught in every British secondary school; it has stopped zero wars. The interval between great-power wars in Europe from 1648 to 1945 averaged around forty years. That’s how long it takes for the generational memory of the last war to fade from the bodies of the people who vote in the next one. The post-1945 peace in Europe is the longest stretch in recorded history, which means we have a decade or two before the generation that could say "I remember" no longer exists in political life.

What happens then is what always happens.

The moral panic

Witches in Salem, 1692, where twenty people were executed on evidence so thin the colony issued an apology within a generation. Catholics in Elizabethan London. Comic books in the 1950s, after Fredric Wertham's Seduction of the Innocent triggered a US Senate investigation and forced the creation of the Comics Code Authority. Rock and roll. Dungeons & Dragons, where a generation of American parents were convinced their children were being recruited into a satanic cult by a dice game. Video nasties in 1980s Britain. The Parents Music Resource Center, co-founded by Tipper Gore in 1985, pushing Senate hearings against heavy metal. Rap music. Violent video games after Columbine. Social media. TikTok. Transgender rights. I feel like I’m reciting a depressing cover of We Didn’t Start the Fire…

The moral panic follows the same sequence every time. A new thing emerges that the older generation doesn't understand, and someone somewhere claims it's destroying children. The media amplifies the fear, and legislation follows. The panic burns out.

Then, twenty years later everyone agrees it was overblown.

Then, the next one begins.

The moral panic is a reaction to a loss of control; it's the terror that arises when a parent, or a culture, realizes the next generation is building a world they can't enter. The target changes every twenty years, but the terror doesn't change at all. The sociologist Stanley Cohen named the phenomenon in 1972 in Folk Devils and Moral Panics, writing about British seaside brawls between mods and rockers. The book could have been written about anything: the template - a manufactured villain, a media cycle, a legislative response disproportionate to the threat - maps cleanly onto every subsequent panic, including the ones he couldn't have predicted. QAnon is a moral panic. So is the 1980s satanic-ritual-abuse craze that it grew out of, during which adults were sent to prison for crimes that forensic evidence later showed had never occurred. A panic doesn't have to be wrong to be a panic. It just has to be out of proportion, and they almost always are.

The empire

We all think our specific empire is the exception.

Rome believed it was eternal. The Chinese dynastic system believed in the Mandate of Heaven as a stable arrangement between rulers and cosmos, which is why each new dynasty claimed to have received the mandate from the last. The British believed their empire was a civilizing force that would last centuries. The Americans believe they're not an empire at all. Every imperial project follows the same arc: expansion driven by economic need, sold to the public as ideology. Overextension, and the cost of maintenance exceeding the benefit of possession. Internal rot funded by external extraction, and the slow or sudden loss of the periphery while the center insists everything is A-OK.

Edward Gibbon began publishing The Decline and Fall of the Roman Empire in 1776, the same year the American colonies declared independence from the British. Joseph Tainter's The Collapse of Complex Societies in 1988 argued that civilizations fail when the marginal return on complexity flips negative, which is a technical way of saying that empires break when each new administrative layer costs more than it adds. The Romans kept adding layers until the layers collapsed under their own weight, and every subsequent empire has done the same.

Empire is an emergent property of human social organization at scale. Dominance hierarchies scale, as they always have, and they always produce the same endpoint: a system too large to govern, too expensive to maintain, too proud to contract voluntarily. The final stage is denial. The senators in Honorius's Rome debated traditional agricultural policy in 410 CE while the Visigoths were sacking the city; the Ottoman Porte in 1911 was still issuing decrees about the administration of the Balkans after it had lost them; the bureaucrats of the Third Reich were still setting records for memos written and sent in the month before Hitler’s suicide; the British government after Suez spent a decade insisting that the empire was managing an orderly transition, a phrase that meant nothing because nobody was managing anything; the Soviet Politburo in 1988 was discussing the modernization of Cuban sugar exports while its own economy imploded.

When the center begins to legislate the future of a periphery it no longer controls, the collapse is already underway.

The God cycle

Religions rise when the existing structures of meaning collapse. They institutionalize, they accumulate power and wealth, they become the thing they were founded to resist and they calcify. A new crisis of meaning arrives, and a new religion, or reformation, or spiritual movement rises up to replace them. The cycle runs from the Axial Age (Karl Jaspers's name for the period between 800 and 200 BCE when Confucius, the Buddha, Zoroaster, the Hebrew prophets, and the pre-Socratics emerged, each proposing a new relationship between the human and the cosmos) through the European Reformation, the First and Second Great Awakenings in America, the political religions of the 20th century, and the current explosion of secular faith substitutes - from Wellness to Bitcoin.

The true believers in CrossFit and the true believers in early 4th-century Arianism have more in common than either would like to admit. So do the adherents of long-form supplement protocols and the followers of Girolamo Savonarola, who burned the vanities in Florence in 1497 and was himself burned at the stake a year later. The impulse to purify the self through ritual deprivation is older than any of the current practitioners know - Bryan Johnson is reinventing the early Christian ascetic and selling it as biometrics. The brain requires narrative. When one narrative fails, it doesn't default to no narrative at all; it just grabs the nearest replacement, however ragged. Underneath is a neurological need for meaning and coherence that no rational framework has ever been able to satisfy. The experiments of Michael Gazzaniga on split-brain patients in the 1960s showed that the left hemisphere of the human brain will manufacture an explanation for any observed behavior, including behaviors it didn't cause, rather than admit it doesn't know.

The brain won't tolerate a gap in the story, and if you don't give it a religion, it will invent one. The content of belief changes but the need for it doesn't, which is why the most confident atheists end up sounding the most religious. It’s the same apparatus, different idol.

The exhaustion

Deforestation in Mesopotamia by 2000 BCE left the fields salt-crusted and the population migrating; soil depletion in Roman North Africa turned the granary of the empire into desert within three centuries; the residents of Easter Island cut down every tree on the island, lost the ability to build canoes, and were reduced to eating the dead by the time Europeans arrived in 1722; the 19th-century guano trade reshaped Pacific geopolitics around bird excrement until the deposits ran out. Whale oil, coal, petroleum, silicon, compute, housing. And on it goes.

Every civilization finds a resource, builds itself around that resource, burns through it, and either collapses or scrambles for the next; it’s temporal discounting, the brain's systematic undervaluation of future consequences relative to present rewards, running at civilizational scale. The Atlantic cod fishery off Newfoundland was fished every year for 500 years, and then it collapsed in 1992 and has never returned.

The Canadian government knew the catch was unsustainable in the 1980s, but the boats went out anyway. They had mortgages to pay. Every generation knows it's borrowing from the future, but no generation stops. The cognitive machinery that would allow them to care enough doesn't exist.

The revolution that becomes the thing it replaced

The French revolutionaries executed a king and installed an emperor.

The Bolsheviks overthrew a tsar and built a new one, with secret police larger and more thorough than the Okhrana had ever been. The Iranian revolution deposed the Shah in 1979 and produced a theocracy whose morality police have arrested more women than SAVAK ever did. The anticolonial movements across Africa and Asia expelled foreign rulers and produced domestic dictators within a generation. The tech companies “disrupted” monopolies and became monopolies. Every revolution promises a break from the past and delivers a reproduction of it.

This is close to structural; the act of seizing power requires the construction of hierarchies, the concentration of authority, and the suppression of dissent, the exact things the revolution was against. The tools of liberation turn out to be the tools of control…they have to be, because they're the only tools that work.

Robespierre in 1793 believed he was defending liberty by executing 17,000 people in ten months. By the time the guillotine took him too, in July 1794, the mechanics of the Terror had built a state apparatus more centralized than anything Louis XVI had commanded. Milovan Djilas, once a senior official in Tito's Yugoslavia, wrote The New Class in 1957 from his prison cell, describing how the communist revolution had produced a bureaucratic elite with privileges indistinguishable from the aristocracy it replaced. He was right, which is why he was in prison.

The reproduction is built into the method; you can't win by being peaceful against a state that isn't, and you can't build by refusing to govern. But the moment the revolutionaries become the government, they become the state, and the state has structural interests. Those interests don't care who's running it. George Orwell, who had seen the Spanish Civil War up close in 1937, understood this well enough to write Animal Farm about it in 1945 and Nineteen Eighty-Four about it in 1949. Both books are taught in schools, and both are cheerfully ignored in practice. The revolutionaries who most need to read them are always the ones who believe the books can't possibly be about them…

The Cassandra

Every loop has someone who sees it coming, and they're never believed.

The evidence is strong, but the warning is unwelcome, and unwelcome beats true...

Jeremiah in Jerusalem before the Babylonian conquest was ridiculed in the temple courts, and thrown into a cistern, only to be vindicated after the fact by the destruction of everything he had warned about. Cato the Elder ended every speech with Carthago delenda est until his colleagues stopped listening. Churchill in the 1930s was frozen out of government, still warning about German rearmament to a House of Commons that preferred to discuss cricket. Eugene Stoner testified before Congress in the 1960s about the inadequacy of the M16 rifle he had designed, and the Pentagon ignored him until American soldiers in Vietnam started being found dead with their rifles in pieces in their hands. The climate scientists of the 1980s, whose testimony was televised and archived and treated, for four decades, as the background noise of cable news. The economists who called the 2008 crash, including Raghuram Rajan at Jackson Hole in 2005, and were told by Larry Summers that their analysis was "slightly Luddite." The epidemiologists who warned about pandemic preparedness in 2015, whose reports were filed, then forgotten, then pulled off the shelf in March 2020 when there was no longer time to act on them.

Accurate prediction doesn't lead to prevention.

The reason is partly political: acting on a warning is expensive, and ignoring it is free, up until it isn't. But it's equally cognitive: the brain treats unfamiliar threats as less real than familiar ones, regardless of probability. We fear shark attacks over car crashes, plane crashes over heart disease, terrorist attacks over obesity. The risks that kill us are not the risks that frighten us, because the brain evolved in an environment where the frightening things were almost always the things that killed us, and we haven't updated the pattern.

Cassandra herself was cursed by Apollo to tell the truth and never be believed; Virgil's Aeneid remembers her as the prophet whose warnings were never believed by the Trojans. Even without the divine curse, the outcome is the same: truth has rarely been sufficient.

The loop

The loops are caused by the species. Bad luck, bad leaders, and bad cultures show up in every story but they don't generate the pattern. The pattern is downstream of the brain that produces the stories. That's the argument.

But can the loops be broken?

So far, the answer is discouraging; but they have occasionally been lengthened. The interval between crises has been extended, the damage mitigated, and the recovery accelerated. The post-1945 international order bought 80 years of relative peace in Europe by building institutions designed to resist the strongman loop - a massive, landmark accomplishment, but an accomplishment with an expiration date, because the institutions are only as good as the generation running them, and that generation is dying off in real time.

The bubble loop has been shortened in some respects by regulation and lengthened in others by cheaper borrowing; the scapegoat loop has been softened in many places by norms of tolerance, which the current decade is stress-testing; the empire loop has been delayed for the United States by a combination of military spending and currency dominance, neither of which is permanent; the invention loop has been accelerated by every successful attempt to regulate it, because the regulation creates markets for jurisdictional arbitrage that didn't exist before.

We're very good at making the loops run faster.

We're not so good at stopping them.

The loops persist because the brain persists, and you can't build a fence around a feature of human cognition. You just can't. The loops are a tendency of the species, and you can push back against a tendency only so far.

Seeing the loop while you're inside it is a good deal harder than it sounds. Every bubble feels like a new era, and everyone saying otherwise sounds like a total bore. Every strongman feels like a savior, at least until the night he stops taking questions, and every scapegoat feels like a real enemy, because your cousin lost his job last month and somebody has to have taken it. Every war feels necessary. Every panic feels justified. Every empire feels eternal and every new God feels true. Every resource looks infinite right up until it isn’t. Every revolution feels pure for about eighteen months. Every Cassandra looks hysterical.

Every mistake of the past was made by people who were certain they weren't making it.

The move, if there is one, is the move the Trojans couldn't make, the one the Weimar voters couldn't make in 1932, the one the subprime borrowers couldn't make in 2007, the one the Canadian cod fleet couldn't make in 1991. Treat the thing that feels obviously true with the utmost suspicion. Look for the loop in the direction you most want to walk. Ask whether the people you most agree with are the same people who would have agreed with the crowd at every previous iteration of this same mistake. It won't save you - but it might slow you down. The loop is older than any of us, and the loop has been true for 10,000 years. I think it will be true tomorrow. The only thing we get to decide is what we do with the knowledge in the interval between now and whichever loop is already closing around us.

SPONSORED

Westenberg is designed, built and funded by my agency, Studio Self. Reach out and work with me:

Work with me
Extensions
Why prediction markets are a sure sign that our civilisation is in decay
Prediction markets are the clearest single sign our civilisation has entered a late and decadent stage. The reason isn't that they're new or sinister. It's that the case for them is defensible, the technology works, the outputs are useful, but the long-term effect is corrosive anyway.

This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

In July 2003, the public found out that DARPA (the research and dev agency responsible for the internet itself) had been funding a futures market called the Policy Analysis Market. Traders would bet on Middle East political events, including assassinations, coups, terror attacks, and regime changes. The program had been proposed in 2001 by a small San Diego research firm called Net Exchange, with intellectual scaffolding from the economist Robin Hanson at George Mason; and by 2003 it sat inside the Information Awareness Office, whose director was Admiral John Poindexter, Reagan's former National Security Advisor.

Poindexter had been convicted in 1990 on five counts of lying to Congress over Iran-Contra; his convictions were vacated on appeal a year later, on the grounds that trial witnesses had been contaminated by his own immunised Congressional testimony, but the taint never went away.

Senators Ron Wyden and Byron Dorgan went public with the futures market on July 28.

The program was killed within 24 hours, and Poindexter resigned two weeks later, because the public reaction - way back in 2003 - was utter revulsion. The idea of betting on whether a head of state would be murdered struck almost everyone as obviously gruesome and beyond redemption; editorial writers called it grotesque; and Pentagon officials spent days apologising.

Twenty-two years later, we seem to have drifted a long way from that moral high watermark.

Polymarket ran live contracts in 2024 on whether Vladimir Putin would remain in office, whether Joe Biden would drop out, whether a ceasefire would hold in Gaza by a given date, whether Donald Trump would be assassinated before the November election. Kalshi, the CFTC-regulated American competitor, took hundreds of millions of dollars in volume on the 2024 presidential race. In 2026, folks have been betting on the deaths of Iranian officials and Israeli civilians and nuclear war.

Nobody has resigned, and no senator has been forced to hold a press conference. The markets are covered in the financial press as an actual innovation in retail trading.

We’ve gone from "this is too ghoulish to exist" in 2003 to "this is the new wisdom-of-crowds infrastructure" in 2026. And it's a symptom of how we, all of us, are coming apart.

Prediction markets are, I think, the clearest single sign that our civilisation has entered a late and decadent stage.

The dream and the pitch

The pitch for prediction markets has been the same since Robin Hanson started writing about "idea futures" in 1988 and 1990, and since the Iowa Electronic Markets launched their political futures market that same decade. Markets aggregate dispersed information better than polls, pundits, or committees; if you put money on the line, people stop posturing and start estimating, and prices become a running readout of collective belief.

Hanson's version of this ran deep. He proposed "futarchy," a system where citizens vote on values and markets decide on policy. You'd ask the market whether a given policy would raise GDP, reduce childhood poverty, or cut CO2, and whichever policy the market priced highest would get implemented.

Philip Tetlock's Expert Political Judgment in 2005 and Superforecasting in 2015 supplied the scientific underpinning. Tetlock found that generalist forecasters who updated on evidence, tracked calibration, and competed in open tournaments routinely beat credentialed experts. The Good Judgment Project, funded by IARPA starting in 2011, showed this was repeatable.
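
An aside for the curious: "tracked calibration" cashes out as a scoring rule. The sketch below uses the Brier score - one standard metric, and not necessarily the exact one any given tournament used - with invented forecasts, just to show what the number measures.

# A minimal sketch of calibration scoring: the Brier score is the mean squared
# error between probability forecasts and what actually happened.
# Lower is better; always answering 50% scores 0.25.

def brier_score(forecasts, outcomes):
    """Mean squared error between probability forecasts and 0/1 outcomes."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# A hypothetical forecaster: four yes/no questions, probabilities assigned to "yes".
forecasts = [0.9, 0.2, 0.7, 0.4]
outcomes = [1, 0, 1, 0]  # what actually happened

print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.075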

Markets do aggregate information. Forecasting tournaments do beat pundits. The humiliation of the 2003 Iraq WMD consensus, and of nearly every major think tank's prediction record in the decade after, gave the prediction-market crowd a genuine argument.

So if the pitch is good, why is the product a sign of rot?

Because the pitch was about epistemics.

The product is about something worse.

What the markets price

Open Polymarket in April 2026. Scroll the trending contracts. You'll find markets on celebrity divorces, CEO firings, troop movements, drone strikes, papal health, celebrity deaths recast as "will X still be alive on December 31," and whether a given pop star will release an album in Q3. The biggest volumes cluster around elections and the personal misfortunes of public figures.

These are bets on whether bad things will happen to specific people and groups of people, on whether institutions will hold, on whether the world will feel more or less stable in 90 days.

The prediction-market community will tell you the content of the contracts doesn't matter, because the market's function is to produce accurate probabilities and nothing more - and I don't buy this for a single second. What a society chooses to price reveals what it actually gives a shit about, in the same way that what a society chooses to memorialise reveals what it honours. Tell me which contracts move size and I'll tell you what your civilisation has decided is interesting.

In Renaissance Florence, the biggest public wagers were on papal elections, the outcomes of condottieri campaigns, and whether the Arno would flood before June; you can reconstruct the city's anxieties from the betting books. Our betting books show a civilisation fixated on the humiliation and removal of a small number of public figures, and on the probability that large systems will crack on a short timescale.

This is an unflattering portrait.

Assassination contracts

Polymarket listed a contract in summer 2024 on whether Donald Trump would be assassinated before the election. The contract was scrubbed after the Butler, Pennsylvania shooting in July, for obvious reasons, but crucially it had traded. There was liquidity. There were people on both sides of the bet.

In 2005, Nick Szabo wrote about the dangers of what a crypto-anarchist named Jim Bell had called "assassination politics" back in 1995. Szabo came close to inventing Bitcoin before Satoshi did, and he knew what he was looking at. Bell's original proposal was a market where anonymous donors could pool money that would pay out to whoever correctly "predicted" the date of a public official's death; and the prediction would, of course, be a contract for the hit.

Every prediction-market platform that goes live has to run the gauntlet of Bell’s ghost. Polymarket's terms of service prohibit contracts that could function as murder contracts, and Kalshi does the same - the lawyers know the argument.

But the argument doesn't depend on intent. Hanson himself has written that you can’t cleanly separate a prediction market on whether X will be killed from an incentive to kill X, because the market is information to a would-be assassin about how much financial upside exists in acting on their impulse; it’s a relatively clean way for a hostile state actor to hedge a covert operation. A sovereign that wants a rival head of state dead can, in principle, acquire a large position on a thinly traded market, wait for someone to commit the act, and pay for the operation with the winnings.

In 2003, this argument was enough to kill a DARPA program and end a career.

In 2026, the same argument is background noise. We've collectively decided that the information value of these markets outweighs the moral cost of treating human lives as tradable securities, and that decision (to me, at least, and I accept that I may be alone in this) is a bleeding mistake.

The dead pool and the decline

Tudor Londoners wagered on the life expectancy of public figures so routinely that life insurance, as we understand it, grew out of the same market. Geoffrey Clark's Betting on Lives, published in 1999, traces the 18th century English insurance market as a functioning prediction market on the deaths of dukes and royal mistresses. Parliament shut it down in 1774 with the Life Assurance Act, which required insurable interest, because the legislators of the era understood something we've apparently, conveniently and somewhat profitably forgotten. Permitting strangers to bet on whether a named person would live or die produced, in aggregate, darker incentives than the information-gathering benefit could justify. This should be obvious. In fact, to anyone paying attention, this is obvious.

The 18th century London markets at scale were disastrous. Ambassadors were assassinated. Heirs were poisoned. The statute was, by the standards of the 1770s, a moral intervention.

But we repealed that moral intervention, and we repealed it with software. Each new prediction market opens with a standard disclaimer that the platform doesn't allow murder contracts, and then lists contracts on the lives of named public figures, reinventing 18th century betting practices and rebranding them, too, as innovations and disruptions.

The Roman Empire late in its decline had booming gambling markets on gladiatorial outcomes. The Byzantines had a full betting economy around chariot racing that produced the Nika riots of 532 CE, which killed tens of thousands. Late Qing China had opium-fueled fan-tan parlors that functioned as quasi-markets on political outcomes. Weimar Germany had the Tauentzienstraße betting shops that took wagers on the next Chancellor and, after 1930, on which faction would be next to be shot in a street brawl.

None of this is to claim that gambling causes decline; that would be a cheap causal argument, and I’m not yet in that business...

My claim is a little narrower, at least.

In each case, a civilisation under strain stopped prosecuting its disputes through argument and institution, and started pricing them; the bettors were reading the decline the way a barometer reads a storm, even if the storm came from somewhere else.

Sandel's objection, twenty years late

Michael Sandel, the Harvard political philosopher, published What Money Can't Buy in 2012. The core argument of the book is that some goods are corrupted the moment they’re priced. A Nobel Peace Prize that can be bought at auction isn't a Nobel Peace Prize, something that Donald Trump may or may not have grokked; a friendship that's bought and sold cannot possibly qualify as a friendship; a citizenship that has a purchase point, in the Maltese Golden Visa sense, isn't a citizenship that matters in any philosophical sense.

Sandel's objection to prediction markets is that certain questions change their nature when you put them in a market frame. Markets don't need to produce bad information for this to go wrong; they do the damage by producing any number at all. Ask "is the Secretary of Defense going to resign by June 1" in a newsroom and you get a political question - you talk about his relationship with the President, the policy disputes inside the cabinet, the institutional pressures from Congress etc. The question is embedded in a set of relationships and public obligations.

Ask the same question on a prediction market and you get a probability between 0 and 1. The market has no view on whether he should resign, whether the policy fight is worth winning, whether the institutional damage is worth the political cost and so on - it only has a price, because it only needs a price.

Prediction markets route around normative argument without destroying it; they provide a parallel answer, priced and continuous, that makes the unpriced conversation feel slow and unserious by comparison. Why listen to a journalist reason about whether the ceasefire will hold when you can see that it's trading at 34 cents?

The laziness dividend

Cass Sunstein and Richard Thaler wrote Nudge in 2008 with a section on prediction markets that reads, now, as a period piece. They praised the markets as a way to get past groupthink and expert capture and perhaps they were right about the epistemic problem, but I think it’s easy to see that they were wrong about where the pressure would move. The pressure has moved toward laziness - once a price exists, a journalist stops reporting and an analyst stops analysing and a decision-maker stops deciding. Everyone's waiting for Polymarket to update.

During the 2024 US election, major news outlets, including the Financial Times and the Washington Post, quoted Polymarket's implied probabilities in running coverage. The number was treated as a live readout of election reality, and when the numbers moved, articles were written about the movement. The question of what was driving the movement, which is the actual journalism, came second. In 2024, Nate Silver shifted to publishing both his own forecast and the Polymarket number, and spent much of October explaining why they diverged. His model at 538 had dominated election coverage for 12 years before that. The work of explanation became a reaction to the price.

Silver is one of the more honest figures here. He's said in print that prediction markets are his competitors, that they force him to sharpen his reasoning, and that he thinks the aggregated number contains signal his model misses. And fair enough - I can accept that as a good faith position. But the broader effect, across the field, has been that journalism about uncertain future events has collapsed into price commentary, and the markets have become the story, and the story about the markets has replaced the story about the world.

Replacing argument with price

Alasdair MacIntyre argued in After Virtue, published in 1981, that modern moral discourse is a ruin. We use the vocabulary of older ethical traditions, Aristotelian virtue, Christian duty, Kantian rights, without the shared community of practice that gave those words their meaning, and so we shout past each other, trading fragments that no longer cohere. His example was the debate over abortion - but you could use almost any political question from 1981 forward.

The prediction market is the ultimate post-MacIntyre moral technology, asking only what will happen. It has no machinery for questions about what we owe each other, what justice requires, what a good outcome would be, or what a morally defensible position would look like. Values drop out of the picture, because the price is the only fact.

The defenders rarely argue that the markets produce better outcomes in any thick sense of "better." They argue that the markets produce more accurate probabilities, as though accuracy is the only remaining virtue; but it's the virtue you keep when you've stopped believing in any of the others.

When a civilisation loses its ability to answer "what should we do?", it retreats to answering "what will happen?" The late Romans did it, and late medieval astrologers did it, and late 19th century social Darwinists did it too. Each of these movements felt, to its practitioners, like a rigorous clarification, and each, in retrospect, is closer to a surrender.

Prediction markets are the 21st century version of that surrender: a technology for converting questions of value into questions of fact, and then trading the facts.

The Scott Alexander problem

Scott Alexander Siskind, writing as Scott Alexander at Astral Codex Ten, is the most thoughtful public defender of prediction markets working today. His argument, refined across a dozen essays from 2012 to 2025, comes down to this: prediction markets are useful tools for aggregating information and forcing experts to put money where their mouths are. They have costs, yes, but the costs are manageable, and so we should want more of them, not fewer.

But the question that matters isn't "do prediction markets produce accurate probabilities." They do, sometimes, on questions where they have enough liquidity and no manipulation incentive. I think the question is whether a civilisation that routes more and more of its public life through these markets is in good health or coming apart at the seams.

The rationalist position is that better epistemics is always a good: knowing what's true is the first step to making things better, and you can't improve what you can't measure. But some things, some parts of our existence are degraded by measurement. Marriage quality, and artistic achievement, and the sanctity of a deliberative process. When you take a thing that was embedded in relational or political context and reduce it to a number, you may have made the thing easier to understand; but you've also changed what the thing is.

This was James Scott's argument in Seeing Like a State, published in 1998, about forestry and city planning. The state, in order to manage a forest, has to render it as timber volume; once rendered, the forest is managed as timber; and so the ecological complexity, the cultural meaning, the local knowledge of which stands of trees matter for which villagers, all of it disappears into the measurement...it’s all timber, all the way down. Which, of course, is not so different from defining an entire population as so many corpses.

Prediction markets render deliberation as probability, and once rendered, public questions are managed as probability, and the deliberation that produced the question vanishes - the argument for why the question matters vanishes too.

What's left?

The price.

Who benefits

The money on Polymarket and Kalshi comes from a few identifiable sources: crypto-native traders looking for a new volatility surface after the 2022 collapse of the lending markets, quant firms running information-arbitrage strategies, political operatives testing narratives, and journalists and hobbyists putting down small stakes for entertainment.

In the 2024 US election, Polymarket data showed a single French trader, named in a Wall Street Journal piece by Alexander Osipovich and Shane Shifflett in October 2024, putting down about $30 million across several accounts to bet on Trump. The bet moved the implied probability for weeks. And he won about $85 million when the results came in.

Set aside whether he had inside information. The point is that the "market consensus" on the most important political question of the year was shaped by the convictions of one rich person willing to take a large position. The market aggregated information, yes, but the information it aggregated was dominated by the bankroll of one participant.

This is the manipulation problem in miniature. In any market with thin liquidity and high civic importance, the price is going to reflect the beliefs of whoever's willing to put the most money in. The people who gain from this arrangement are the same people who gain from any financialisation of public life. Traders, platform operators, and a small cohort of well-capitalised political actors who can now move the apparent consensus on a question by buying it. The people who lose are everyone else. The citizen who reads the Polymarket number as a fact is consuming a number produced in part by someone's willingness to spend. The journalist who quotes the number is laundering that person's money into public knowledge. The policy-maker who uses the number to justify a decision is delegating to the bankroll. This was always the critique of modern financial markets at scale, from Hyman Minsky in the 1980s through to Adair Turner and Mariana Mazzucato in the 2010s. Prediction markets inherit it, and the civic stakes make it worse.
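
A toy sketch of those mechanics, for anyone who wants to see the shape of the problem: assume an automated market maker running Hanson's logarithmic market scoring rule (LMSR) - Polymarket itself runs an order book, so this illustrates the principle rather than that platform - with an invented liquidity parameter and an invented trade size.

import math

def lmsr_cost(q_yes, q_no, b):
    """LMSR cost function C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))."""
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def lmsr_price(q_yes, q_no, b):
    """Implied probability of YES (the derivative of the cost function)."""
    e_yes, e_no = math.exp(q_yes / b), math.exp(q_no / b)
    return e_yes / (e_yes + e_no)

def buy_yes(q_yes, q_no, b, shares):
    """Dollar cost of buying `shares` YES shares from the market maker."""
    return lmsr_cost(q_yes + shares, q_no, b) - lmsr_cost(q_yes, q_no, b)

b = 20_000           # liquidity parameter: how deep the market is (invented)
q_yes = q_no = 0.0   # no trades yet, so the market sits at 50/50

print(f"before: P(yes) = {lmsr_price(q_yes, q_no, b):.2f}")   # 0.50
shares = 30_000      # one determined trader's position (invented)
cost = buy_yes(q_yes, q_no, b, shares)
q_yes += shares
print(f"after one buyer spends ~${cost:,.0f}: P(yes) = {lmsr_price(q_yes, q_no, b):.2f}")  # ~0.82

The exact numbers are beside the point; the shape is the point. In a shallow market, the "consensus" is mostly the conviction of whoever showed up with the biggest position.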

Why this feels like decay

Joseph Tainter argued in The Collapse of Complex Societies, published in 1988, that collapses share a signature. The society develops expensive institutions to manage complexity, and the returns on those institutions decline. The society can't afford them, and so the institutions fail or are abandoned. Reading Tainter alongside the current prediction-market boom is a strange experience. The pattern fits, but it fits sideways. The expensive institutions are things like the professional press, the civil service, the academy, the peer-reviewed journal, the national statistical agency - these were built through the 19th and 20th centuries to produce reliable public knowledge. They're all, right now, in various states of crisis // collapse…

Prediction markets are cheap. They don't need credentialed staff, or editorial judgment, or institutional memory. They produce a continuous stream of apparently reliable numbers, on any question you can phrase, for the cost of a small trading fee. From a pure cost-benefit standpoint, they look like a massive improvement on the old institutions. From a civic standpoint, they're a replacement, like a drive-through replacing a dinner table. The drive-through feeds you faster and cheaper; but the thing the dinner table was for, a slow shared practice of attention, isn't among the things the drive-through provides. Tainter's argument is that societies rarely notice they're replacing the dinner table with the drive-through until the dinner tables are gone. The cost savings look real and the institutional loss is invisible until a crisis demands the capabilities that only the old institutions had.

We've seen previews of this. During the 2020 pandemic, prediction markets priced the course of the disease with about the same accuracy as public health agencies, sometimes better. Many technologists used this as evidence that the CDC and WHO should be replaced in part by forecasting infrastructure. But the CDC was built to coordinate the response, distribute vaccines, run surveillance, and train the next generation of epidemiologists so the country would have them when the next crisis came - and forecasting was a small piece of a much larger public-health organism.

When you propose to replace the organism with a market, you're trading a capability for a number. The number is cheaper. When the next crisis comes, the number won't help. This is what civilisational decay looks like in detail. Expensive institutions are eaten by cheap substitutes that can do only one of the things the institutions did; the other things, the work of being a polity that can act, drop out. And by the time the polity needs to act, the infrastructure is long, long gone.

Late moves, short histories

Late-period civilisations discover elegant, efficient-looking technologies right before they're unable to use them...

The late Roman Empire had glass blowing, sophisticated concrete, hydraulic engineering, and long-distance banking on a level the West wouldn't rebuild for 800 years. The late Song dynasty had printing, gunpowder, movable type, and a paper currency system centuries before European equivalents. Each civilisation's late period looks, to the historian, like a technological peak right before a collapse. This is partly survival bias, I’ll admit: we notice the technologies because they survived the collapse in the written record. But civilisations under pressure do accelerate innovation as a substitute for institutional repair - because the efficient new tool is cheaper than the expensive old practice, and the new tool gets adopted fast. The old practice atrophies, and when the new tool runs up against a problem it can't solve, the civilisation has no fallback.

Prediction markets fit this. They're elegant, they're efficient, they're sold to all of us as a modern replacement for expensive institutional practice, and they do solve one real problem, which is that credentialed experts are overconfident in their forecasts. But that problem is a tiny piece of the problem that the old institutions were built to handle. If you read Polybius on late Rome, or Ibn Khaldun on the Maghreb dynasties of the 14th century, or Gibbon on the Antonine age, you’ll highlight the same shit: clever technical solutions proliferate, and civic competence declines. People blame the institutions for their inefficiency, without noticing that the efficiency of the replacements is achieved by discarding the functions that the institutions existed to provide. Khaldun called this stage haḍara, the settled, luxurious phase of a civilisation where the original virtues have been hollowed out by comfort and specialisation.

What's missing from the price

Take a contract on "will Israel and Hamas reach a ceasefire by December 31, 2026." The price, on any given day, is some number between 0 and 1. Say it's 0.22.

What the price doesn't contain: an account of why a ceasefire would be a moral good, an account of who bears responsibility for the failure of previous ceasefires, a theory of what international pressure could shift the outcome, a map of which hostages are still alive, a record of what the killed journalists were writing before they died, or any sense of what it would mean, for the children now growing up in the Middle East, for the war to end in November instead of January.

None of this can be priced - the market cannot hold it.

The market can only hold the collapsed summary.

The market's defenders will say that all of this exists elsewhere, in the journalism, the NGO reports, the academic analysis, the long-form commentary. But "elsewhere" is losing its funding, losing its audience and losing its status, while the market is gaining all three. The price is becoming the authoritative output, while the elsewhere is becoming the decorative commentary around the price. And when you invert the relationship between the deliberation and the summary, you change what the summary means. In a healthy system, the probability number is a shorthand for a rich debate; in a decaying system, and we are in a decaying system, the debate is a shorthand for the probability number.

From orbit…

If I were diagnosing a civilisation from orbit, I'd look at what it bets on and what it refuses to bet on. A healthy civilisation bets on games, on contests, on horses, on private entertainments; it draws a line around the sacred or the civic, and refuses to price what's inside that line, and the line might move around, but at least it exists. A civilisation in decay erases the line: everything becomes a contract, from the death of a public figure, to the course of a war, to the outcome of an election, the next pandemic, the marriage of a celebrity, the survival of a pope. Nothing is held out of the market, because nothing and no one is sacred.

The first prediction markets, the Iowa markets in the late 1980s, confined themselves to electoral outcomes. Intrade, which launched in 2001 and collapsed in 2013, pushed the envelope into celebrity deaths and ran into legal and reputational trouble; Polymarket, since 2020, has been willing to list almost anything that generates volume. Even when a platform that pushed the boundary didn't get away with it, the boundary still moved. The social response got weaker, and so did the legal response. I don’t think the boundary actually exists anymore, not in any meaningful sense. You can bet, right now, on the death of almost any named public figure, on the outcome of active military operations, on whether specific children of specific celebrities will be arrested. I don't think this happened because we decided as a society that it was fine. I think it happened because we stopped having a mechanism for deciding anything as a collective bunch of normal bloody people.

There's no golden age of public deliberation to return to. The 18th century betting books I described earlier coexisted with slavery, wife-selling, and press-ganging, and the 20th century public sphere excluded roughly half the population. My claim isn't that we've fallen from some prior height of mora superiority. I’m no fool. But civilisations - ours, specifically - can build institutions that hold certain questions out of the market, treat them as scred, and handle them through deliberation instead. When those institutions are healthy, the society can argue and act together; when they rot, the market floods in and prices what the institutions held out.

A version of us that wasn't decaying would have, in 2003, rejected the Policy Analysis Market, built better public forecasting inside the civil service, and kept the private prediction markets confined to commerce and entertainment. A version of us that wasn't decaying would treat Polymarket contracts on assassinations the way we treat snuff films, as something the market can technically produce and that the society refuses to consume. But we don't treat them that way, do we?

We cite them in the Financial fucking Times.

A small hopeful note

A few people are still holding the line. The UK's Government Office for Science has experimented with internal prediction markets limited to scientific and technical questions, while keeping political questions out. Singapore's civil service uses forecasting tournaments of the Tetlock kind, carefully scoped. The Metaculus platform, non-monetary and governed by a research norm, has tried to build forecasting infrastructure with stronger civic guardrails than the commercial markets.

These are small efforts, fighting against a much larger tide, but they suggest that the choice between "no forecasting at all" and "price everything" isn't the only choice available. You can have institutions that use prediction-market techniques on some questions, under constraints, while defending a line that keeps other questions civic.

I think Polymarket and Kalshi are early rather than final. The infrastructure is cheap, the regulatory fights are mostly won, and the cultural objection has collapsed. Over the next 10 years, you'll see prediction markets embedded in news apps (they’re already live in Substack), used as the primary data feed for political coverage, integrated into corporate decision-making, and deployed inside political campaigns as both polling infrastructure and voter-suppression tools.

You'll see a second wave of markets on things that now seem unthinkable: markets on the outcomes of specific criminal trials, markets on marriages and divorces of named ordinary people who become briefly famous, markets on child custody outcomes, markets on refugee-camp mortality. There's no principled line that stops the expansion once the line against "civic questions" is gone, and that line fell in the early 2020s.

The prediction markets are the clearest sign of decay because they're the case where the pitch is most defensible, the technology works, the outputs are useful, and the long-term effect is corrosive anyway. You can't argue against them on their own terms; the terms are already the problem.

What you can do is keep asking the questions that the market can't price. What do we owe each other? What should we refuse to sell, even if someone wants to buy it? What are the things we used to know and have started forgetting? Those questions produce arguments, and if we’re lucky, the arguments sometimes produce institutions, and if we’re luckier still, the institutions are the load-bearing walls of a civilisation that's still alive.

Our civilisation can still produce them. It mostly doesn't.

SPONSORED

Westenberg is designed, built and funded by my solo-powered agency, Studio Self. Reach out and work with me:

Work with me
How we lost the living Now
Before 1840, noon in Bristol happened about ten minutes after noon in London, and nobody much cared. The railway needed a common minute or it couldn't run - and that common minute is now a common nanosecond, shipped in real time.

In 1840, England’s Great Western Railway started running the trains on “railway time” - a single standard, set by Greenwich, instead of the local // solar time each town had kept independently for centuries.

Before the railway, noon in Bristol happened roughly ten minutes after noon in London, and nobody much gave a damn - they had no reason to. Time was...time. After the railway, people had to care - because a train leaving Paddington at 12 couldn’t mean one thing in London and another thing in Reading, or the passengers would miss it, or the signalmen would have no ability to coordinate, and the whole apparatus would fall apart.

That moment is, I believe, when we started losing our hold on the present.

Before the railway, time belonged to the place where you stood. Your noon was the noon of the sun over your head; a farmer in Wiltshire and a clerk in Liverpool would share a year, and a season, but they didn’t share a minute. The minute was solely the possession of your immediate surroundings, and you owned it.

But the railway needed a common minute - or it couldn’t run.

And then - once we had the common minute - we discovered that it could be commoditised. It could be bought and sold.

In 1911, Frederick Winslow Taylor turned the commoditisation of the minute into a science when he published The Principles of Scientific Management - which he had assembled by standing over the shoulders of various factory workers, wielding a stopwatch, breaking their labour into fractions of a minute. He calculated how long it should take to lift a pig iron bar, and how long to carry it across a yard, and how long to drop it onto a pile. He paid workers more if they hit his numbers, and less if they “whiffed” - and he wrote all of this down in tables which became, eventually, an entire philosophy of industrial productivity...

Taylor’s “innovation” - if we can call it that - was treating a human’s time, and by extension their very mortality, as a commodity priced by the single second; and building on that foundation the idea that time, left unoptimised, was “theft.”

After Taylor, time was something you either used, or you wasted - no third option. The present moment became a quantity.

The telegraph had started this work almost seventy years before. Samuel Morse’s first public transmission in 1844 (“What Hath God Wrought”) collapsed the time between Baltimore and Washington, from days into seconds. The phone would collapse it further, and radio would collapse it for everyone all at once...

Every technological acceleration is framed as a gift of time to all mankind, but every acceleration arrives, in practice, with increased expectations, with increased demand, with more and more pressure. The letter you could answer on your own time became the telegram you had to answer today. The phone call you could ignore in 1950 (because you simply weren’t home to take it) became a call you had to return in 1985 because the answering machine upped the ante. Then the answering machine was replaced by your mobile phone and (insert montage of technological advances here) by 2026, a 2-hour delay replying to a Slack message became a social failure...

Hartmut Rosa, the German sociologist, wrote a book in 2005 called Beschleunigung - translated as Social Acceleration. It traces this pattern across three layers: technological acceleration speeds up the machines, the acceleration of social change speeds up the rate at which institutions and relationships change, and the acceleration of the pace of life speeds up how much we can (or are forced to) cram into a single day. Rosa’s argument is that these layers feed off each other; faster machines let us change faster, which means we need faster machines to keep up, and the loop tightens, and so...

Well, here we are.

The original promise of acceleration was always more free time. Washing machines would give us more leisure, email would cut our labour, automation would give us a 4 day work week, and so on. None of this really happened; a rising floor of expected output swallowed the gains, and so we signed up for more, and we ended up running faster to stay in the same damn place.

And somewhere in the early 2000s, this crossed a cursed threshold. Before that point, tech was mostly compressing the time between events - the telegram, and the fax, and the email and the IM each shortened the gap between when you sent something and when it arrived; the gap was the thing getting smaller and smaller.

After the smartphone, the gap just...vanished. The feed became real-time, and the notifications constant. Information stopped arriving as discrete, gapped packets and started arriving as a continuous drip, and then a steady flow, and then a firehose, timed by the network’s ambient activity and no longer by anything you happened to be doing. And suddenly, you weren’t receiving mail anymore. You were drowning in a raging river of information.

Paul Virilio, the philosopher, called the condition of real-time media an accident of time itself; he argued that when everything happens at once, nothing actually happens at all, because events lose their distinguishing temporal edges, and the past // present // future collapse into a single undifferentiated smear. A 2021 RescueTime study found that the average knowledge worker checked communication tools roughly every six minutes; other studies put the average smartphone user at around 2,000-3,000 touches per day. We interrupt ourselves, or we get interrupted, enough that sustained attention has become a minority activity. It no longer happens naturally; if it happens at all, it must be scheduled.

Each notification is a tax on the present moment - pulling you into either a micro-past (what did I just see?) or a micro-future (what should I do about this?) while the here and now is skipped over like the intro to a Netflix show. And ironically - we consented to this. We signed up for it without thinking twice. The telegram was imposed on us by commerce, the factory clock by management, but we installed and embraced the push notifications ourselves, app by app, in exchange for convenience - in exchange for acceleration - in exchange for collapse.

If the Now has any place at all, it’s as “content.” We watch an event happen, and we’re already narrating it for a future audience, for a draft post, for a video - as if the event itself isn’t quite real until it has been recorded in some way. Call it what you want, but it describes a condition in which our tools have trained us to convert the present into its sole acceptable format. It becomes raw material for a feed. You’re standing inside it and outside of it, holding a lens and prepping a caption.

The French sociologist Henri Lefebvre wrote in 1947 about the colonisation of everyday life. He saw the structure of a world that would eventually (perhaps, inevitably) produce Instagram. Commerce, and then bureaucracy, each laid claim to a bigger piece of the ordinary everyday, until the ordinary itself became a product. A few decades later, we started calling that product “content.” Not a bad word for it, actually, considering that Content only has to be Contained - it doesn’t have to offer anything of value on its own.

We know this is happening - all of us. We talk about it constantly - I just did, and you just read it, and we both probably felt briefly quite pleased with ourselves for noticing, and that’s part of the problem too. Every other bestseller is about mindfulness and slow living, digital detoxes and offline retreats, sabbath practices and meditation apps that send you push notifications reminding you to experience the moment.

Silence is a malfunction. Grief is harder now, because to grieve means to sit inside a moment, and we’ve lost the practice of it. Joy is thinner, because joy needs a present it can occupy, and the present has been divided into micro-slices already claimed by the next scroll, the next ping, and the next thing we should be looking at instead...

The generational data is looking bleak.

In his 2024 book The Anxious Generation, Jonathan Haidt argued that the cohort born after 1995 - the first to get smartphones before their brains were fully developed - shows a sharp increase in anxiety, depression and self-harm, and that the increase tracks the rollout of social media. Haidt’s causal story might be contested, but the numbers aren’t. We’re all a little broken. We’re all breaking a little more.

My own take is that it’s not only about screen time; it’s about a generation who never had the chance to experience a present moment without a second channel running underneath it all. The backchannel of the phone, the draft message, the group chat, the algorithm - all of it humming under whatever is supposedly happening in the room, until the hum gets so loud it takes over everything else. A childhood of partial presence creates an adulthood where you can’t watch Sabrina Carpenter and Madonna share the stage without the intermediary of an iPhone camera and screen...

The older generations lost the present slowly, and can still remember what it was like to have one. The younger are trying to reconstruct it from second principles, if at all.

I have no program to offer here. The essays that end with a neat 5-step plan to reclaim attention are almost without exception published by those who sell courses, and may my bank account forgive me, I still don’t have such a product. I do think the present can return in small pockets, and under specific conditions - when you make something with your hands and the thing resists, when you’re looking after your friend’s dog, who is the best dog in the world and has no opinion about the future, or in the middle of a long walk, after the internal monologue has run out of fresh grievances...


It returns when the compressors and the accelerators are out of reach for long enough that your nervous system remembers it has other settings.

The railway clock runs across server farms in places you have probably never been and will probably never go. The minute is measured by atomic oscillation and shipped out, in real time, to the watch on your wrist, the phone in your pocket, the Tesla in your driveway, the smart fridge that can tweet better than it can regulate its internal temperature - all of it synced to the same atomic pulse.

It’s the same common minute. But it’s only ever the minute gone by or the minute yet to come. The minute we used to have and hold is gone.

SPONSORED

Westenberg is designed, built and funded by my solo-powered agency, Studio Self. Reach out and work with me:

Work with me
I truly hate mostpeopleslop


In 2006, Joe Sugarman published a book called The Adweek Copywriting Handbook - and an axiom stuck...

"The sole purpose of the first sentence in an advertisement is to get you to read the second sentence."

That line, more or less, explains how social media turned into a pile of shit.

Sugarman's advice became the core system prompt for 300,000 tech assholes on Twitter. They've run it through algorithm after algorithm and produced the most soul destroying rhetorical tic of the 2020s. I'm talking about "Mostpeopleslop." "Most founders don't know this yet." "Most people aren't paying attention to this." "Most founders skip [thing my startup sells] because [bad reason]." "Most founders treat [normal activity] like [wrong version of activity]." "Most founders say they want [thing]. Few actually [thing] well." "Most founders confuse [vague concept A] with [vague concept B]." You've seen it, you've scrolled past it, and you've maybe even liked one or two of these excretions before your brain caught up to your thumb, because it's bloody everywhere. It breeds in the dark spaces between LinkedIn notifications, it has colonized every timeline on every platform where a man with a podcast and a Calendly link can post for free, and I hate it. May God forgive me, I hate it.

Why it works (and why that's the problem)

I'll give the format its due: it works // performs. And the reason why is simple. "Most people" is a tribal signal - when you read "most people don't know about this," your brain does a quick calculation: Am I most people? Do I want to be most people? No? Then I better keep reading, so I can be the Holy Exception. But you're not actually learning fucking anything. You're being told you're special for having stopped to read, and the poster is offering you membership in an in-group, and the price of admission is a like, a retweet, any scrap of engagement. It's a scarcity play - people pay more attention to shit that feels exclusive.

"Most people don't know this" is exactly that.

It comes in a few different flavours...

The Reframe Artist goes "Most people are treating [recent tech acquisition] as a media story. It's a distribution story." This guy read one Ben Thompson article in 2019 and has been repackaging the word "distribution" as a personality trait ever since. The point underneath might even be fine! But he can't say it straight.

The Trojan Horse is "Most founders skip analytics because setup is painful. [My startup] is native. Zero setup." These are just ads. They are indistinguishable from late-night infomercials. "Are YOU tired of [thing]? Most founders are! But wait, there's more and if you follow and reply CRAP now, you get a set of steak knives..."

The Self-Eating Snake: "Most founders treat building in public like a highlight reel. They're doing it wrong. 7 ways to build in public without being cringe." Followed by a numbered list that packages a real idea in the same exact format it claims to be critiquing.

The Fortune Cookie: "Most founders confuse motivation with desperation." "Most founders mistake speed for progress." These sound wise if you scroll past them fast enough. They're fortune cookies, and they get engagement because they're perfect for screenshotting into your Instagram story, but there's nothing actually there...

And the Parasite: some guy quote-tweets "What keeps you moving? Progress or Pressure?" and adds "Most founders confuse which one they're running on." You take someone else's thought, bolt on the "most founders" frame, and now you've "created content." The confidence-to-effort ratio should embarrass anyone. It's intellectual house-flipping, with all the integrity attached.

The content industrial complex

Mostpeopleslop has metastasized because Twitter started rewarding engagement bait at the same time the creator economy started demanding you post all day // every day. If you're a tech influencer in 2026, you probably post 10 to 20 times a day, maybe more - this is what the gurus tell you to do. You need formats you can crank out fast that reliably get impressions, and "most people" threads do exactly that. There's no research required, and no original data - you barely need an opinion. You could generate these in your sleep, and thanks to OpenClaw some of these guys clearly do...

The easiest content to produce is the content that mimics existing successful content. The "most people" format is the shallow work of tech Twitter. It looks like thought leadership. It reads like wisdom. It's still slop.

The result is a timeline full of people telling you what "most people" get wrong, while they all say roughly the same things, in roughly the same format, to the same audience with a near-uniform contrarianism. Everyone is standing on a soapbox yelling "wake up, sheeple" at a crowd of other people on soapboxes.

The aesthetic crime of reading the same tweet structure 40 times a day isn't even the worst part - it's that mostpeopleslop degrades the information environment. When every piece of advice is framed as something "most people" don't know, you lose the ability to distinguish between underappreciated ideas and stuff someone repackaged from a blog post they read that morning...

And it trains audiences to value framing over substance - if you read enough "most people" posts, you start evaluating ideas based on how they're packaged rather than whether they're true. A well-formatted "most people" thread with a mediocre idea will outperform a useful post that doesn't use the formula, and so yes the medium becomes the message, but the message is: style points matter more than being right or even being valuable in the first place.

Everyone is an insider and an outsider at the same time; you're an insider because you're reading this post, you're an outsider because "most people" haven't figured this out yet, but since everyone is reading these posts, everyone is an insider, which means the distinction is fictional and we seem to have a collective hallucination of exclusivity.

The incentive structure on Twitter (and LinkedIn, where this format is somehow even more prevalent) rewards this kind of posting. If you're building an audience to sell a course, a SaaS product, a consulting practice, or a $249/month community where you teach other people to build audiences to sell courses, you need impressions, and you need followers, and mostpeopleslop delivers both. The people posting this stuff aren't stupid; some of them (a select // rare few, I'll grant) are sharp, have real experience, and could write things worth reading, but the format is a trap. Once you see that it performs, you keep using it, and every time you use it, you get a little further from saying something real and a little closer to being a content-generation machine optimized for engagement metrics. You have become the slop.

I want people to say the thing. If you have an observation about distribution, share the observation. If you built a product that solves a problem, describe the problem and describe the solution and have done with it. You don't need to frame every single post as a correction of what "most people" believe, and you don't need to position yourself as the lone voice of reason in a sea of ignorance. You can just ~say the thing.

The best writers and thinkers in tech have never needed the "most people" crutch. You can be interesting without being condescending. You can build an audience by being useful rather than by manufacturing a false sense of exclusivity 280 characters at a time.

But most people don't know that yet. (Sorry. Had to.)

Sometimes powerful people just do dumb shit


This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

In June 1812, Napoleon Bonaparte marched 685,000 soldiers into Russia - the largest military force ever assembled in European history up to that point, and one of the largest military fuckups of all time.

He had no coherent supply plan for feeding them, he had no realistic timeline for when, exactly, the Russians would agree to fight a decisive battle on his terms, and he couldn’t even articulate a coherent goal for his gamble, beyond ~beat the Russians in some vague way.

He had been warned by Talleyrand, his own former foreign minister, that invading Russia was a catastrophic idea - and he did it anyway.

By December, roughly 400,000 of his soldiers were dead, mostly from starvation and exposure and the consequences of field surgery, and another 100,000 had been captured. The Grande Armée, the most feared fighting force on the continent, clawed its way back across the Niemen River as a frozen, shattered remnant of itself. It was the beginning of the end for Napoleon, who would never again be able to field an army of the size // quality he squandered on his pointless excursion into Russia.

Napoleon was, by any reasonable accounting, a genius - a military mind who rewrote the rules of European warfare, a political operator who fought his way up from minor Corsican nobility to Emperor of France by the time he was 35 and ruler of most of the continent soon after, and a reformer whose ideas around the judicial system and the liberal order still echo today.

But none of that stopped him from making one of the dumbest decisions any leader has ever made, because he was arrogant, because he’d gotten away with so much for so long that he confused his luck for a system, and because (with the exception of Talleyrand) most of the people around him had simply stopped telling him no.

There’s a particular kind of person who can’t accept that story at face value, and you’ve met them. I am absolutely sure of it. They show up in every comment section and reply thread where someone powerful does something that looks, on its face, like a mistake - and their argument always runs the same way: you don’t understand, this is actually part of a larger plan, there’s a strategy here that you and I can’t see because we’re not operating at that anointed and elevated level…

They’re the 4D chess crowd.

And they are fucking everywhere.

When Elon Musk bought Twitter in October 2022 for $44 billion, a price he himself had tried to back out of after waiving due diligence (a decision so baffling that the presiding judge, Kathleen McCormick, openly marveled at it in court), the 4D chess analysts fired up immediately. You haven’t seen the inside of the honeycomb, they insisted! You don’t get it! You’re not the richest man on earth - how could you possibly hope to process his brilliance?

The mass layoffs that gutted the company’s accessibility team and its content moderation staff were, obviously, equally strategic. The verification fiasco that let someone impersonate Eli Lilly and tank their stock price with a fake tweet about free insulin had to be part of The Plan™️. The advertiser exodus that cratered the company’s revenue was just Musk shaking off the dead weight, building something new, playing a longer game than any of us could understand.

But a jury in 2026 found Musk liable for deliberately misleading investors during the acquisition. People he’d fired had to be re-hired weeks later because nobody had bothered to check whether they were, you know, running anything important before they were shown the door. The company lost roughly 80% of its value under his ownership - because there was no 4D chess. There was simply a billionaire who’d gotten used to being the smartest person in every room he walked into, who didn’t even have a Talleyrand of his own to hint that it might be a bad call, who bought a company on impulse, and then made it worse through a series of decisions that were exactly as bad as they looked from the outside.

The same thing plays out with Trump; every chaotic press conference, every contradictory policy announcement is immediately reframed by his most sycophantic supporters (and, weirdly, by a certain type of opponent who wants to believe they’re up against a mastermind).

“He’s distracting you.”

“He’s controlling the news cycle.”

“He’s flooding the zone.”

“He knows exactly what he’s doing.”

“Wake up, sheeple.”

I’ll grant you - sometimes Trump does know what he’s doing. Sometimes a provocation is calculated and the outrage does serve a purpose. But the 4D chess crowd can’t distinguish between those moments and the moments where the simplest explanation is just that a 79-year-old man with a phone, no impulse control, and an audience of millions is posting whatever dumb shit he feels like posting at 2 AM.

But the powerful don’t get to be powerful without being special, right?

And if they’re special, if they’re smarter than all the rest of us, everything they do must be for a reason, right?

And if we can’t see that reason, that must be a problem with us - mere mortals - not the divinely appointed titans, right?

Right?!

The most recent entry in this genre is OpenAI’s acquisition of TBPN, the daily tech talk show hosted by John Coogan and Jordi Hays. OpenAI reportedly paid in the low hundreds of millions of dollars for a show with 58,000 subscribers on YouTube. The show reports to Chris Lehane, OpenAI’s chief political operative. And predictably, the rationalizers have lined up.

Fortune ran a piece titled “3 reasons OpenAI buying daily tech show TBPN for hundreds of millions isn’t totally crazy.” The argument boiled down to: OpenAI is buying influence; packaging distribution with narrative control; and positioning itself to shape public conversation about AI at a moment when that conversation will determine the regulatory environment the company operates in.

And look, some of that might be true.

But it’s worth sitting with the simpler read for a second: a company whose own executives told staff to stop chasing “side quests” and focus on core AI model development spent hundreds of millions of dollars on a podcast. CNBC’s headline called it “chasing vibes.”

Slate called it “sleazy.”

Ben Thompson at Stratechery compared OpenAI to “the short bus at the end of the rainbow.”

OpenAI’s strategy has become catastrophically fragmented: they were against ads until suddenly ads were the plan, Apple was a partner until they poached Jony Ive, and Anthropic is over there shipping models while Sam Altman is signing checks for a talk show. As Thompson put it, “there just isn’t much evidence that anyone knows what they are doing or that there is any sort of overarching plan.”

And that’s the rub.

The 4D chess read asks you to believe that Altman - Google breathing down his neck, Anthropic breathing down his neck, Meta breathing down his neck - sat down and decided a talk show with fewer subscribers than most mid-tier gaming streamers was the best possible use of hundreds of millions of dollars.

The boring read asks you to believe a CEO did something that served his ego. Pick whichever one requires less of a leap of faith. I know which one I’m going with...

Why do people resist the boring read? Melvin Lerner had a theory. He published a book in 1980 called The Belief in a Just World, and his argument was that most of us walk around with a bone-deep need to believe that people Get What They Deserve. If someone is rich, they must be smart. If they’re smart, their decisions must make sense. And if their decisions look dumb, well, you must be the one who’s missing something. It’s a warm blanket of a worldview. It just doesn’t survive contact with reality.

There’s something else going on, too; less intellectual // more animal. We see patterns everywhere. We see them when they’re not there. Kahneman built half his career on this - we are so desperate to find signal in the noise that we’ll construct entire narratives out of nothing, and a narrative where the powerful guy is playing 12 moves ahead is just a better story than one where he fucked up because that’s what people do.

But the 4D chess framing also flatters the believer. If you can see the hidden strategy that everyone else is missing, you’re the smart one, you’re the one who gets it. Which rather stops being funny when you realize what it costs…when you insist that every action a powerful person takes is part of a grand strategy, you strip away accountability and you make it impossible to call a bad decision a bad decision.

Every failure becomes a setup for a future success that never arrives, and every scandal a distraction from a larger game that never materializes. The goalposts disappear entirely, because the frame has become unfalsifiable; any outcome can be absorbed into the theory. If the plan works, it was genius. If it doesn’t, the real plan hasn’t been revealed yet.

This is how cults of personality sustain themselves - through interpretation, and through a community of believers who will do the intellectual labor of making sense of the nonsensical, who treat confusion as evidence of their own limited understanding rather than evidence that the thing they’re looking at is, in fact, confused.

The higher someone climbs, the fewer people around them will push back.

The richer they get, the more their bad ideas get funded instead of challenged.

The more successful they become, the more they start to believe that their success came from skill rather than from some volatile, unrepeatable cocktail of skill, timing, luck, and other people’s labor.

Napoleon was brilliant. He was also surrounded, by 1812, by marshals who were tired of arguing with him and a court that had learned it was safer to agree, and the invasion of Russia was precisely what happens when a brilliant person loses the feedback mechanisms that kept them brilliant.

Musk buying Twitter wasn’t 4D chess.

OpenAI buying a podcast for a price that could have funded a mid-sized AI research lab wasn’t a strategic fucking masterstroke.

Sometimes powerful people just do dumb shit, and sometimes there is no plan.

The people who will pay the highest price for the 4D chess delusion are, ironically, the people most devoted to it; because if you can’t look at a powerful person’s decision and say “that was a bloody stupid thing to do,” you can’t learn anything from their mistakes, and you can’t see the world clearly.

But when the choice is between speaking up and watching an unchecked megalomaniac march 685,000 soldiers into a Russian winter without a fur coat in sight, clarity is the only thing worth having.

SPONSORED

Westenberg is designed, built and funded by my solo-powered agency, Studio Self. Reach out and work with me:

Work with me
Optimism is not a personality flaw


This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

On October 4, 1957, the Soviet Union launched Sputnik - and the United States lost its collective mind. Newspapers ran headlines about Soviet nuclear weapons raining from orbit, and schools held duck-and-cover drills. Eisenhower's approval rating cratered, and the smartest people in Washington agreed that America had fallen behind for good, that the free world was in terminal decline, and that their enemies were about to launch space-faring nukes.

Then, eleven years and nine months later, Neil Armstrong stepped onto the moon. One small step, etc.

Folks in 1957 had at least some reason to be afraid, and the fear was grounded in something real: you could measure the gap in rocket technology down to the pound of thrust. But the people who responded to that fear by building things (the Apollo program, the engineers who decided the problem was solvable) landed on the moon. The people who responded by predicting doom were forgotten before the decade was out.

I can't think of a better summary of the argument I'm trying to make here...

In the last 15 years, a specific kind of intellectual posture has taken hold everywhere. I've started calling it "competitive pessimism" - which might not be perfect, but it's the best I've got.

Whoever can list the most reasons something won't work gets treated as the smartest person in the room. If you say "I think this could go well," you get ~the look. That slight tilt of the head. Optimism is treated like a belief in astrology.

Pessimism reads as intelligence now.

Optimism reads as naivety.

This has gotten so baked into educated Western culture that most people don't notice they're doing it. But it's toxic, all the same.

Where this came from

The instinct has some logic to it, I'll be fair about that.

The 21st century opened with the dot-com crash, which wiped $5 trillion off the NASDAQ between March 2000 and October 2002. Then September 11th, and the Afghanistan War, and then the Iraq War. Then the 2008 financial crisis, which destroyed 8.7 million American jobs in eighteen months. Then Obama! And then Trump. Then a pandemic that killed over a million people in the US alone. Climate reports from the IPCC kept landing, each one worse. If you paid attention to any of this, bracing for impact started to look like base common sense.

The internet of course poured gasoline on all of it. In 2012, Jonah Berger and Katherine Milkman at Wharton went through about seven thousand New York Times articles and tracked which ones readers actually emailed to their friends. The anxious // angry pieces won. The hopeful writing just sort of...sat there. The platforms didn't need an academic paper to work this out. Doom = clicks. Doom = ad revenue. Doom got you a booking on Joe Rogan. Pessimists built media empires, and optimists built water treatment plants in sub-Saharan Africa and nobody wrote a magazine cover about them.

Pessimism is useful and I won't be glib about that. You want a pessimist reviewing the specs on the I-35W bridge before it goes up. You want a pessimist reading your bloodwork. Risk assessment is a discipline that saves lives every single day. What happened over the last two decades is that risk assessment slid from being a discipline into being a disposition; worry stopped being something you ~did and became something you ~were.

And that's where the trouble started.

What the doomers predicted

In 1894, the Times of London published a calculation: at current rates of growth, the city's streets would be buried under nine feet of horse manure by the 1940s. The math was technically on the money: fifty thousand horses working in London at the time, each producing 15 to 35 pounds of dung per day, with the horse population still growing...the arithmetic pointed one way.
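
If you want to check that arithmetic, here's a rough sketch using only the figures quoted above - fifty thousand horses, 15 to 35 pounds each per day. The tonnage conversion is mine, and the projection to nine feet also assumed a horse population that kept growing, which isn't modelled here.

```python
# Rough reconstruction of the 1894 projection's arithmetic, using only the figures
# quoted in the paragraph above. The growth of the horse population - the part that
# pushed the projection to "nine feet by the 1940s" - is not modelled.

horses = 50_000
lbs_per_horse_per_day = (15, 35)   # low and high estimates

daily_tons = [horses * lbs / 2_000 for lbs in lbs_per_horse_per_day]
annual_tons = [tons * 365 for tons in daily_tons]

print(f"Daily output: {daily_tons[0]:,.0f} to {daily_tons[1]:,.0f} tons")
print(f"Annual output: {annual_tons[0]:,.0f} to {annual_tons[1]:,.0f} tons")
# Roughly 375-875 tons a day; call it 135,000-320,000 tons a year, before growth.
```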

What nobody at the Times knew was that Karl Benz, tinkering in a shop in Mannheim, had already patented a gasoline-powered vehicle eight years earlier. The car scaled up, the manure problem disappeared and a completely different set of problems showed up in its place.

Thomas Malthus did the same thing a century earlier, in his 1798 Essay on the Principle of Population. Population grows geometrically, food arithmetically, and therefore - famine is mathematically inevitable. He published this at the start of the Agricultural Revolution. Paul Ehrlich doubled down in 1968: "The battle to feed all of humanity is over," he wrote on page one of The Population Bomb, "In the 1970s hundreds of millions of people will starve to death."

Well, food production tripled instead.

Peak oil in 2005, same story; the doomsday logic was always internally consistent. The world just did not cooperate.

I know I'm cherry-picking here. Listing reversals is easy. Plenty of problems didn't get fixed. Plenty of hopeful people were dead wrong. But the doomers were also wrong, and the doomers didn't build anything while they were busy ~being wrong.

The optimists who failed at least generated attempts.

This is the asymmetry I keep snagging on.

Why it feels smart to be grim

In 1984, a psychologist at Berkeley named Philip Tetlock started cornering experts at conferences and asking them to make specific, testable predictions.

  • Would the Soviet Union collapse within five years?
  • Would inflation top 6%?

He accumulated tens of thousands of these forecasts from 284 political scientists, economists, and government advisors, and after twenty years he scored them all against reality. Most of the experts performed about as well as "dart-throwing chimpanzees" - Tetlock's line, not mine. The very worst forecasters were the folks with One Grand Theory™️ who bent all incoming data to fit it.

Nobody puts the careful, uncertain forecasters on television.

"I see arguments on both sides and I'm not confident" doesn't fill a segment.

This is an ongoing complaint of mine.

And then there's the reputation factor. If you predict a catastrophe and you're wrong, nobody circles back to check - and your wrong call just dissolves. If you predict things will work out and they don't, that shit follows you around forever. The lopsidedness of the payoff alone is enough to push smart, careful people toward the darkest possible forecast even when the evidence is genuinely mixed.

Being wrong about doom costs you nothing.

Being wrong about hope costs you your career.

Hope as a decision

Cornel West split optimism and hope into two separate things...

  • Optimism is a spectator sport, in his framing. You watch the data and decide whether the trend lines look good.
  • Hope is a commitment to act as though improvement is possible, because without that commitment you guarantee it isn't. Nobody serious is claiming things will get better on their own. In fact, things can only get better if enough people act as though they might.

Every hospital that got built started with someone saying "we can treat those people." Every civil rights movement and every vaccine that reduced suffering began with someone who looked at a bad situation and decided to treat it as a problem to solve, and a problem that ~could be solved.

"What about the problems that didn't get solved? What about the optimists who were wrong?"

Well, what about them?

The optimists who were wrong still attempted something.

The pessimists who were right attempted nothing.

And the world runs on attempts, not on accurate // profound predictions of failure.

The permanent bracing costs

At the University of Pennsylvania, Martin Seligman spent decades studying what he called "learned helplessness." He found that people who explain bad events as permanent and personal, baked into the fabric of reality, are more likely to become depressed and less likely to keep trying. I should know. That pretty accurately describes my emotional state for most of my 20s.

The cultural pessimism that passes for intelligence has the same structure - if your default explanation for every problem is "systems are broken and people are selfish, so nothing will ever be different," you've adopted a worldview that is indistinguishable from despair, and you might call it realism but it produces the same behavior as hopelessness.

When pessimism becomes the default in public conversation, it starts building the world it claims to be describing. People who believe nothing can be different don't vote, don't volunteer, don't start companies, don't run for office, don't build the thing that might have mattered.

Pessimism at scale is a self-fulfilling prophecy.

Rebecca Solnit put it well in Hope in the Dark: "Hope is not a lottery ticket you can sit on the sofa and clutch, feeling lucky. It is something you do."

The stubborn, irrational case

In 1903, Simon Newcomb - a professor of mathematics at Johns Hopkins and probably the most credentialed scientist in the country - wrote a widely-read essay arguing that powered heavier-than-air flight was a practical impossibility. And on December 17 of that same year, Wilbur and Orville Wright flew four times at Kitty Hawk. The longest flight lasted 59 seconds and covered 852 feet. Newcomb never revised his position.

Pessimism is more accurate in the short term - almost always, I'll give it that. Things do go wrong in roughly the ways people predict they will. But optimism is more productive over decades. Optimism is the thing that generates attempts, and without attempts nothing changes.

Blind cheerfulness ignores evidence, crashes planes, builds the Humane AI Pin and bankrupts companies. Nobody wants that. But the choice to look at bad data and act anyway, because sitting still is the one move that guarantees the bad outcome, is a noble one.

The most dangerous idea I keep running into is that there is nothing to be done. It's the one idea that, if enough people hold it, comes true - and I refuse to treat that as a serious intellectual position. I refuse to let Quiet Quitting become the dominant intellectual model of our age.

I would rather be wrong about what we're capable of than right about why we shouldn't bother trying...

SPONSORED

Westenberg is designed, built and funded by my solo-powered agency, Studio Self. Reach out and work with me:

Work with me
Why I quit "The Strive"


This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

I spent about a decade waking up at 6am and checking my follower count before I brushed my teeth. Refreshing analytics while the coffee brewed, reading Y Combinator essays, networking on Twitter and trying to reverse-engineer what made people break out. I'd look at every piece of creative work I produced and ask "will this scale?" Every night, I'd calculate the gap between where I was and where Mark Zuckerberg was at my age, or Marc Andreessen, or Om Malik, or Ryan Holiday etc etc etc - which is an unhinged thing to do.

But I did it all the time.

I was obsessed with whatever it was that would come Next. After all of this, after I had made it, after I'd stopped "plateauing" and reached my potential.

This is what I've come to call The Strive.

And you know it too, even if you've never called it that. It's the obsession with making it, going viral, founding a billion-dollar company, becoming the next TBPN, raising millions of dollars, getting profiled in Wired, landing on the Forbes 30 Under 30 list (or, for those of us who aged out, the Forbes 40 Under 40 list that doesn't exist but should). The Strive is what makes people put "serial entrepreneur" in their Twitter bio. It's what makes a 24-year-old describe their note-taking app as "disrupting knowledge work." It's what kept me awake at 2am calculating how many followers I needed per week to hit my Q3 growth target.

Back in the 70s, a researcher named Philip Brickman studied lottery winners and found that people who won big were no happier than anyone else within a few months. He called it the "hedonic treadmill." You get the thing, the feeling fades, you chase the next thing.

The Strive is the hedonic treadmill all over again, but it comes with a mandatory pair of white sneakers and a Claude Max subscription.

How the machine works

From the outside, The Strive looks like that thing we call passion; from the inside, it feels like a crushing, soul-destroying anxiety disorder.

You set a goal, and with a bit of luck and a whole lot of blood, you hit it, and you feel good for somewhere between 4 hours and 2 days (and I'm being generous with the 2 days), and then it fades and you set a bigger goal because the old one, the one you were sure would change everything, turned out to be a thing that happened and nothing more.

I hit 10,000 followers once and felt good for about an afternoon, and I got published in an outlet I'd been pitching for months and the dopamine lasted maybe 36 hours before I was back at my laptop, pitching the next thing.

Your brain's ~wanting system and your brain's ~liking system are separate circuits, and the wanting system is bigger and louder and more connected to everything else.

As it turns out, dopamine fires for the chase, not for the catch.

Infrastructure

The Strive doesn't live in your head alone - no, it has infrastructure. It has an entire litany of podcasts and conferences, Twitter threads and LinkedIn posts, co-working spaces and pitch competitions, and all of it keeps you sprinting.

Everyone around you is Striving, too. And when you stop, you feel like you're falling behind. But falling behind who? People who are also on the treadmill, also wondering why their last win didn't stick?

Once you're in it, opting out feels like abject failure. If you're not growing, you must be dying; if you're content, you must be complacent. Peace and satisfaction are a disease that infects the uninitiated.

Well, I went to the events and read the books (the ones with single-word titles like "Grit" and "Hustle") and optimised my mornings and tracked my habits with an app that gave me a score, and I tweeted through it, and I cold-emailed strangers to ask if I could "pick their brain," which is a phrase that should be fucking illegal.

It got me nowhere.

What it costs

You're told to calculate the upside: money, status, influence. You're told to weigh that against the "sacrifice," which is always framed as temporary. Grind for a few years, miss some dinners, put some relationships on hold. There's a finish line out there and once you cross it you get to enjoy things.

But this is bullshit! There's no finish line! That's the trick, and it's a damned good one!

I watched a friend raise a Series A and immediately start losing sleep over Series B. I know someone who hit a million in annual revenue and couldn't enjoy it because all he could think about was getting to 10. Every finish line is a checkpoint. You cross it and the next one appears, further away, higher up. This is intentional design.

What The Strive actually cost me was a large number of physical years. Years I spent optimising instead of living, years I spent going to networking events instead of seeing friends, building an audience when I could have been reading a book for fun. I worked on things that were always about to become The Thing that Mattered, and they never quite did. And even if they had, it wouldn't have been enough. It would never, ever have been enough.

Enough

What if the right scale for interesting work is whatever scale that lets you keep doing it? What if a good idea doesn't need to become a billion-dollar company? What if it can be a good idea that you work on because it's interesting and it pays your rent?

Kahneman and Deaton published a study in 2010 showing that after about $75,000 a year (probably closer to $100,000 now), more money doesn't make your day-to-day life feel any better. Your abstract "life satisfaction" might tick up, but the actual texture of your days doesn't change.

You can't ask "how much is enough?" inside The Strive's framework because "enough" is a word it doesn't recognise.

I run a solo creative studio called Studio Self. I write things I love writing. I work with clients I actually like. I pick projects that challenge my brain in ways I find interesting and I turn down the ones that don't. I am not optimising for maximum revenue. I am not building a unicorn. I am unlikely to go down in history.

By The Strive's metrics, I'm a failure. I should be raising money. I should be building a team. I should be positioning for an exit. Instead I'm writing a blog and reading comic books and playing Doom II and working on ideas that will almost certainly never make anyone a billion dollars, and I'm fine with that. Better than fine. I wake up and I'm not dreading the day. I work on things I care about. I stop working when I'm done. My brain feels alive and I have time to read for fun and I think that's worth more than a Series A.

The MASH test

The Strive doesn't believe in hobbies. In its twisted framework, everything is either productive or wasteful. Reading a comic book: wasteful. Reading a business book: productive. Playing a video game: wasteful. Building one: productive. It can't process the idea that you might do something because you enjoy it. Every hour has to be an investment. Every experience has to compound.

That is a psychotic way to live. I mean that close to literally. It's a disconnection from the basic human experience of enjoying things because they're enjoyable. A 6-year-old knows you read a book because it's fun. The Strive beats that out of you and replaces it with a spreadsheet.

I have a test for whether The Strive still owns you, and I call it the MASH test. MASH ran for 11 seasons, from 1972 to 1983. The finale pulled 105.9 million viewers, still the most-watched series finale in American television history. It's a show about people stuck in the Korean War trying to stay human by being funny and caring about their work even though everything around them is insane.

The test: can you sit down and watch an episode of MASH on a Wednesday afternoon without feeling guilty about it?

If you can't, if you feel The Strive pulling at you, telling you that you're wasting time, that you should be producing something, that every hour not spent growing your brand is an hour wasted, then The Strive still owns you. You've handed your peace of mind to a system that will never give you permission to stop.

I can read a comic book now without running the mental math on whether I should be making content instead. I can go for a walk without listening to a business podcast. I can talk to someone without wondering if they're a useful connection. These sound like small things but mate I promise you, they're the whole game.

Why it sticks

You'd think, given everything I've described, that people would stop. But they don't, and I didn't for a long time, even though I knew the math was bad.

Part of the reason why is survivorship bias. The Strive only shows you the people who won: the fundraise posts, the exit announcements, the profile in TechCrunch. You don't see the thousands who ground it out for five years and ended up back at a day job, older and more tired and with a credit card balance that makes them nauseous. The expected value of The Strive is way lower than it looks from the outside, but the outside is the only view you get.

Part of it is identity. Once I'd been Striving long enough, it stopped being something I did, and started being something I was. My friends were Strivers and my self-concept was "ambitious person building something." When I started pulling back, it felt like I was killing a version of myself. That's not a comfortable feeling.

But the biggest part, the part nobody wants to cop to is this: The Strive is a really good way to avoid sitting with your actual life. If you're always chasing the next milestone, you don't have to ask whether you like the way you spend your days. You don't have to look at the possibility that catching the thing wouldn't make you happy either. I think, for me, a lot of The Strive was... running, running from the question of whether any of it was what I wanted, or what I'd been told to want.

Boring on purpose

Contentment is boring. "I woke up, I wrote something I liked, I read a book, I had dinner, I went to bed" doesn't make for a great narrative. Nobody's making a podcast series about the person who decided things were just fine and they were just fine with that.

The Strive has narrative tension. Will they raise the round? Will the product launch? Will they make it? My life doesn't have narrative tension right now and I think that might be the whole point.

A life that makes a good story tends to be a life that was awful to live. The chapters people want to read about in biographies tend to be the ones the person would have skipped if they could. The Strive sounds great in the abstract - work hard, dream big, change the world - but on the ground it's a prescription for chronic dissatisfaction, because it trains you to put your well-being somewhere in the future, always in the future, and the future never arrives.

I'd rather be bored and present than excited and perpetually somewhere else. I'd rather have a Wednesday than a narrative arc. I realise that's not a sexy position to hold. I'm directionally ok with that.

I work hard, I care about quality. I want the things I make to be good and I want to find clients who push me and I want projects that make my brain hurt in a good way.

But the specific cultural program that equates your worth with your scale, your follower count, your funding round, is a bad deal for most people. It takes the normal desire to do good work and bolts a set of impossible metrics onto it. It takes "I'd like to make a living doing something I care about" and inflates it into "you need to build an empire." Those are different things. The Strive collapses them into one.

Where this goes

"Is this enough?" is a better question than "How do I get more?" and I wish someone had told me that 10 years ago, but I probably wouldn't have listened.

I was too busy Striving.

The Strive isn't going anywhere, not any time soon. There's too much money in it, too many people selling the dream. The conferences will keep going, and the podcasts will keep recording, and LinkedIn will keep telling you that your comfort zone is where dreams go to die, which, as motivational slogans go, really is something, isn't it?

But I think, I hope, the math is starting to catch up with more people. We spend years marinating in anxiety for a feeling that lasts 48 hours, on repeat, forever - and that is a bad fucking trade.

I'm going to go watch MASH now. I've got nowhere to be and nothing to optimise and I don't feel guilty about it at all.

The Hacker News tarpit

This newsletter is free to read, and it’ll stay that way. But if you want more - extra posts each month, no sponsored CTAs, access to the community, and a direct line to ask me things - paid subscriptions are $2.50/month. A lot of people have told me it’s worth it.

Upgrade

Hacker News is a web application with the following features: a list of links, sorted by votes. Comments under those links, also sorted by votes. User accounts with karma. A text submission option. A jobs board. That's it; that's the entire product.

The database schema would take a handful of tables. You've got users, posts, comments, votes, and some metadata. A first-year CS student could design it. And I don't mean that as an insult to either the first-year CS student or to Hacker News.
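
For the curious, here's roughly what I mean - the handful of tables sketched as TypeScript types. This is my illustration of the shape of the thing, not HN's actual schema:

```typescript
// Illustrative only: the whole data model, more or less.
interface User    { id: number; username: string; karma: number; createdAt: Date }
interface Post    { id: number; authorId: number; title: string; url?: string; text?: string; createdAt: Date }
interface Comment { id: number; postId: number; parentId?: number; authorId: number; text: string; createdAt: Date }
interface Vote    { userId: number; itemType: "post" | "comment"; itemId: number; value: 1 }
```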

Well, I spent a Saturday last month vibe coding a Hacker News clone. Took about 3 hours, most of which was me arguing with the AI about CSS. By the end I had a working link aggregator with voting, comments, user accounts, and a ranking algorithm roughly equivalent to the one HN uses. It looked like Hacker News. It functioned like Hacker News. It sorted stories by a points-over-time decay function and everything.
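
The decay function is nothing exotic, by the way. Here's a minimal sketch of the idea, using the approximation of HN's ranking that gets passed around (a "gravity" exponent of about 1.8) - folklore, not a claim about what news.ycombinator.com actually runs today:

```typescript
// Toy points-over-time decay ranking: newer stories with the same points win.
function rank(points: number, ageHours: number, gravity = 1.8): number {
  return (points - 1) / Math.pow(ageHours + 2, gravity);
}

rank(100, 2);  // ≈ 8.2  - a fresh, popular story
rank(100, 24); // ≈ 0.28 - the same points a day later
```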

My 9-year-old could have used it.

...but nobody will ever use it.

To be clear - this is not a post about how hard it is to build software. It's a post about how easy it is to build software, and how that easiness fools people into thinking they understand what they're looking at when they see a successful product.

Every developer who sees HN thinks, "I could build that in a weekend." And they're right; they absolutely could. In fact, I'd assume they're pretty shite at their jobs if they couldn't. What they couldn't build in a weekend // month // year // probably ever, is the thing that makes Hacker News actually work. And that ~thing is not the software.

Let me list every Hacker News clone I can think of off the top of my head: Lobsters, Tildes, Barnacles, EchoJS, Hashnode, various subreddits pretending to be link aggregators, that one site called Squishy or Squidgy or something that I remember existing briefly in 2019. Some of these are ok. Lobsters is genuinely good. But none of them are Hacker News in the way that matters, which is: none of them are the place where you go when you want to know what several hundred thousand programmers think is interesting right now.

You can't build Place People Go as a feature. It's a thing that happened over time, through a specific and unrepeatable sequence of events, most of which were not planned and some of which were just luck.

Hacker News launched in 2007 as a side project by Paul Graham, who ran Y Combinator. The initial user base was people who read Paul Graham's essays. Think about what that means for a second. The seed community was a self-selected group of people who were (a) programmers or startup founders, (b) interested enough in ideas to read long essays about programming languages and startup strategy, and (c) already connected to the Y Combinator network.

This is an absurdly good seed community for a tech link aggregator. You could not assemble it on purpose. Or rather, you could assemble something similar, but you would need to already be Paul Graham, which is an unreasonable prerequisite for a product launch.

PG has talked about this a bit. He's said the key to HN's moderation is that they basically hand-tuned the community for years. Daniel Gackle (dang), who has moderated HN since around 2014, reads an almost superhuman volume of comments and applies a consistent but hard-to-formalize set of norms. The guidelines say things like "Be civil" and "Don't be snarky" and "Please don't post shallow dismissals." These rules are not special. Every forum has rules like this. What's special is that someone actually enforces them, every day, across thousands of comments, with at least an attempt at consistency.

A link aggregator is only as good as its community, and the community is only as good as the people in it, and the people are only there because the other people are there. This is a Schelling point problem; everybody needs to coordinate on the same place, and which place they coordinate on is partly arbitrary, and once they've coordinated it is very expensive to move.

There's a bar in your city where all the interesting people go on Thursday nights. The bar is not special. The drinks are mediocre, the lighting is bad, the bathrooms are questionable. But interesting people go there, which makes it interesting, which makes more interesting people go there. If you open an identical bar across the street with better drinks and better bathrooms, nobody is going to switch, because the interesting people are at the other bar. They all know they're at the other bar. There is no mechanism for coordinated switching.

You could build a better Hacker News. Better ranking algorithm, better comment threading, better search, dark mode, an API that doesn't feel like it was designed in 2008 (because it was designed in 2008). None of this matters. The readers, the commenters, the founders who show up for "Show HN" and "Ask HN" and "Who is hiring?" are already at Hacker News. You can't move them by building a nicer website. They are not there because the website is nice.

Vibe coding has made it trivially easy to build software. I can spin up a functional web app in hours. So can most developers. Increasingly, so can people who aren't developers at all. The cost of building the thing has collapsed toward zero.

But most successful software products were never gated by the difficulty of building the thing. They were gated by distribution, network effects, community, trust, brand, regulatory capture, some tangle of these. Making the building part free doesn't touch any of those. It arguably makes them worse, because now you have a thousand competitors who also built the thing over a weekend and are all fighting for the same pool of users who are already using something else.

Imagine you could conjure a fully equipped restaurant out of thin air. Kitchen, dining room, the works. Free. What happens? You don't get a golden age of dining. You get a million empty restaurants, because the scarce resource was never the building. It was the chef who knows what she's doing, the corner spot with foot traffic, the regulars who show up on Tuesdays. Those things take years.

Hacker News is nearly two decades of community norms, trust, moderation decisions, accumulated habits, and network effects. You can't build that. It isn't a technical problem. It's closer to an archaeological one. The thing that makes HN work is deposited in layers over time and you cannot speed up the deposition.

There's a lazy version of this argument that says "network effects make incumbents invincible, so never try." I don't buy it. Digg was the Hacker News before Hacker News and it self-destructed. Reddit almost died several times. Twitter did die, sort of, depending on how you score it. These things can break. But they almost always break because the incumbent does something stupid, not because a competitor does something smart.

Digg didn't lose because Reddit was technically superior. Reddit in 2010 was ugly and confusing and had the subreddit system, which I maintain to this day is one of the worst information architecture decisions ever made for a site that size. Digg lost because Digg redesigned itself in a way that enraged its entire user base, at the exact moment Reddit was standing there as an alternative. The coordination problem solved itself because one of the two options eliminated itself.

If you want to replace Hacker News, you don't need a better Hacker News. You need Hacker News to screw up badly enough that people are motivated to leave, and you need to already exist when they start looking for the exit. This is a patience and luck problem, and last I checked neither of those ships with an npm package.

There's a related thing happening all over my Twitter feed. Someone builds a beautiful project management tool over a weekend. They tweet a screen recording. It gets 500 likes. The tool dies off because project management tools don't compete with each other on features. They compete with Jira, and Jira's moat is that your company's entire workflow is caked into it like geological strata. Nobody is migrating away from Jira because some guy's weekend project has nicer fonts.

Same with note-taking apps. Every week there's a new one. Every week it's gorgeous. I have probably tried forty of them since 2015 and I still use a folder of plain text files, because at some point I realized the switching cost isn't money or even time, it's the habits in your fingers, and those are basically impossible to override on purpose. The new app would need to be so much better that it overcomes years of muscle memory, and none of them are, because text files are actually fine.

The demo is not the product. The product is the ugly part that comes after, where you have to convince real people to actually change what they're doing, and that has never been a software problem. I don't think vibe coding makes it any easier. If anything it makes it harder because you have more competition from other people who also demoed something nice and also can't get anyone to switch.

I think the vibe coding discourse has a hole in it, and the hole is shaped like the question: "what is software for?"

If software is a thing you build, then vibe coding changes everything. Anyone can build. We have democratized building. Congratulations to building.

But software is mostly a thing people use, and getting people to use things is not a building problem. It never was. The reason most software fails is not that it was too hard to code. The reason most software fails is that nobody wanted it, or everybody wanted it but was already using something else, or the right people wanted it but couldn't find it, or they found it but didn't trust it, or they trusted it but couldn't get their team to switch.

Hacker News works because Paul Graham had an audience before he had a product, Y Combinator had a network that seeded the community, and dang has been doing the same moderating job every single day for over a decade with what I can only describe as an unreasonable level of dedication. The whole thing has been accumulating social capital for almost twenty years...

I built a Hacker News clone in an afternoon. To me, it's perfect and for everyone else it's empty, and those two facts are going to remain true forever. If that doesn't tell you something about what software is and isn't, I don't know what will.

The AI writing witchhunt is pointless.

Alexandre Dumas ran what was essentially a content production house in 19th century Paris. His most famous collaborator was Auguste Maquet, who wrote substantial portions of The Three Musketeers and The Count of Monte Cristo. Maquet would produce drafts and outlines, and Dumas would rewrite and polish them, but the books went out under Dumas's name alone. Maquet eventually sued him over it in 1858 - and won a financial settlement - but the court ruled Dumas was the sole author.

At the peak of his Factory, Dumas had something like 73 collaborators working with him at various points. A contemporary writer named Eugène de Mirecourt published a pamphlet in 1845 called Fabrique de Romans: Maison Alexandre Dumas et Cie ("Novel Factory: The House of Alexandre Dumas and Company") accusing him of running a ghostwriting sweatshop. Dumas sued for libel and won, but nobody really disputed the underlying facts.

Dumas published around 100,000 pages in his lifetime.

Even his defenders admitted he couldn't have written all of it alone.

Put a pin in that, we'll come back to it later...

In November 2025, Hachette published a horror novel called Shy Girl by Mia Ballard. It is, decidedly, not my cup of tea. But it had sold about 1,800 copies in the UK, and it had almost 5,000 ratings on Goodreads, averaging 3.51 stars. It was an ordinary debut with a built-in fanbase.

And then the internet decided it was written by AI, and the world began a witchhunt.

A Reddit thread blew up, followed by a YouTube video titled "I'm pretty sure this book is ai slop" pulling in 1.2 million views. Goodreads reviewers started dissecting individual sentences like forensic linguists with a grudge, and by early 2026, Hachette had pulled the book from shelves, cancelled the US release, and scrubbed it from Amazon.

Ballard says she didn't use AI herself.

She says an acquaintance she'd hired to work on an earlier self-published version had incorporated AI tools without her knowledge.

"This controversy has changed my life in many ways and my mental health is at an all time low and my name is ruined for something I didn't even personally do," she wrote to the New York Times.

And I'll stand up right now and say - fuck it.

Maybe she's telling the truth.

Or, maybe she isn't.

I don't actually give a shit, because I don't actually know, and neither do you, and neither do the thousands of people who participated in destroying her career.

We just. Don't. Know.

What I do know boils down to pretty much this: the tools // methods people used to reach their verdict are fucking garbage. The culture that's grown up around AI detection is poisonous, and I refuse to have anything to do with it.

AI detection tools are unreliable.

It's been shown over and over.

OpenAI launched its own AI text classifier in January 2023, and by July 2023, they'd shut it down because it correctly identified AI-written text only 26% of the time - worse, if I may point out, than a coin toss...

GPTZero, Turnitin's AI detection feature, Originality.ai, Pangram etc.: the whole cottage industry that's sprung up here shares the same limitation. They're pattern matchers trained on statistical likelihoods, flagging text that looks like it could have come from a language model. The problem is that a lot of perfectly human writing also looks like it could have come from a language model, because language models were trained on human writing, and even the AI-based AI detection tools are just playing an eternal // infernal game of whack-a-mole with this model and that model and the next model.

Snake, meet tail.

You're going to get along swimmingly.
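
If you want to see how thin the underlying idea is, here's a toy sketch - mine, in TypeScript, not any vendor's actual method - of the statistical-likelihood test this whole category is built around: score how predictable the text is to some language model, and call "too predictable" machine-made.

```typescript
// Toy illustration only. `tokenLogProbs` is assumed to come from some language
// model scoring the text, one log-probability per token.
function perplexity(tokenLogProbs: number[]): number {
  const avgNegLogProb =
    -tokenLogProbs.reduce((sum, lp) => sum + lp, 0) / tokenLogProbs.length;
  return Math.exp(avgNegLogProb);
}

function looksMachineMade(tokenLogProbs: number[], threshold = 20): boolean {
  // Low perplexity = the model found the text easy to predict = "suspicious."
  // Plain, conventional, genre, or second-language prose is also easy to
  // predict, which is exactly where the false positives come from.
  return perplexity(tokenLogProbs) < threshold;
}
```

Give or take ensemble models and "burstiness" scores, that's the family of signal these tools lean on.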

Researchers at Stanford found in 2023 that AI detectors disproportionately flagged writing by non-native English speakers as AI-generated, based on simpler sentence structures, based on fewer idioms, based on predictable word choices, based on all the things a person writing in their second or third language might produce.

All the things a detector reads as "probably a robot."

The same thing happens to neurodivergent writers.

Autistic writers.

Such as myself...

The tools are biased and inaccurate, they spit out false positives at rates that should make anyone uncomfortable using them as evidence of anything, and yet people treat the output like a blood test that came back positive, forgetting apparently that blood tests are retested and retested because no one test is entirely accurate.

But most of the people who went after Shy Girl weren't even using formal detection tools; they were reading passages and going: "This sounds like ChatGPT to me" - and maybe it did, and maybe it was, but a gut feeling seems like an awfully precarious thing over which to fuck an entire career.

Someone on Reddit reads a sentence that feels generic, or a metaphor that lands a little flat, and (increasingly) concludes with absolute certainty that a machine wrote it, as if mediocre prose were a new invention, as if bad writing didn't exist before November 2022. And may God forgive us if we condemn each other to permanent damnation for producing shitty prose; sans the production of shitty prose, no writer has ever grown one jot.

I've been writing professionally for years, and I've read thousands of self-published and traditionally published books, and a huge percentage of them contain clunky sentences, and overused phrases, and cliché metaphors, and prose that reads like it was assembled by so many monkeys with so many MacBooks. But that, dear reader, is writing. Most writing is ok. Functional at best. Some writing is good enough to create // destroy empires and so on, and that was true in 2005 and it was true in every moment of our crummy, bargain-bin history up to the introduction of ChatGPT, and damn it, it's true now.

You can't read a paragraph and reliably, with a human life on the line (because those are the stakes when you destroy a writer's career and a writer's reputation), tell beyond any reasonable doubt whether a human or a machine produced it. Humans writing in familiar genres, leaning on conventions and common phrasings, leaning on their own context windows - containing everything from Ian Kershaw to Ursula Le Guin to a smattering of Harry Potter fanfiction from 2005 to a series of brain-rotted TikTok reels - are doing the best they can to find the right words and shove them into something resembling the right order. A romance novel that uses "his eyes darkened with desire" isn't necessarily AI-generated, even if it reads like a steaming pile to those of us enlightened enough to call ourselves the Literati. Following genre conventions doth not a fraud make. A horror novel with clunky exposition isn't ChatGPT. It might just be a first-time author who hasn't found their voice yet, and they'll never find their voice if we wave pitchforks and torches at every line we personally dislike.

The big publishers are not the ones who'll get hurt by this, obviously. Hachette pulled Shy Girl and moved on, with a swiftly issued statement about "protecting original creative expression." Back to business, and so it goes.

No, the folks getting hurt are the writers. Not only the ones who are tarred - all of us. Every single God-forsaken one of us. We are all made smaller by the pursuit of unproven and unprovable purity. Whether Ballard used AI or not (and she says she didn't, and naive or not I'm inclined to throw my cynicism to the wind and just take her at face value, and you can mock me if you like), the punishment landed before any verdict was reached, because no verdict can ever be reached. Not beyond a reasonable doubt. Never beyond a reasonable doubt.

She's not going to be the last. This whole setup, where anyone can accuse any writer of using AI based on gut feeling, and broken detection tools get treated as proof and publishers fold at the first hint of controversy because the PR cost outweighs the book's revenue, is going to grind up a lot of people into a fleshy, bloody, bony paste. Most of them will be small-time writers, debut authors, indie-published folks without the platform or the money to fight back.

The motivations of the accusers are more complicated than they'd like to admit.

First - the writers who feel threatened by AI are channeling that fear into vigilante enforcement, and I get the fear. I share it, ~to a point. I think it's clear that AI is flooding the market with cheap content, even if I can't confidently crucify any individual for it. But destroying individual careers on the basis of speculation doesn't fight that problem - it simply gives you someone to punish, and the drive for revenge is, while altogether human, altogether bullshit.

Second - beyond the slighted writers, you've got the internet sleuths who've found a new game. The same energy that drove Reddit to misidentify the Boston Marathon bomber (remember that?) is now being applied to prose style analysis, with the same overconfidence, and the same total absence of accountability when they get it wrong.

Third - the BookTok etc. influencers who smell blood (and engagement) in the water. "I dissected this book and found some awkward sentences" doesn't get 1.2 million YouTube views. "This book is AI slop" apparently does.

Finally - the readers who feel betrayed by the idea that something they read might not have been "real." I understand that impulse, too - but the logical endpoint is a world where every writer is suspect, and every flat passage becomes evidence, and the act of reading itself is poisoned by constant suspicion.

What unites all of them is the conviction that they, ~they can tell. That they've developed a sixth sense for machine-generated prose through sheer exposure. Well, they haven't. Nobody has. The researchers who build these models can't reliably tell, and the companies that created the AI can't reliably tell, and I am comfortable concluding that someone with a Goodreads account and strong opinions sure as shit hasn't cracked it either.

Give me a break.

The "human-written" certification badges that have started popping up deserve a closer look, because they reveal how badly this whole discourse has gone off the rails...

The Society of Authors' logo and the Authors Guild's certification both operate on the honor system. You register, you say "I wrote this myself," and you get a sticker on your book. There is no forensic review (wouldn't make a difference), no manuscript audit (to what end?). Nobody's testifying under oath that they watched you type every word.

So what do these badges actually prove? That someone was willing to check a box? A person who used AI and wanted to hide it would check that box too. And a person who didn't use AI but can't afford the registration fee, or doesn't belong to the right trade association, or just didn't know the program existed, doesn't get the badge. The absence of the badge becomes its own accusation.

We've been here before. The "organic" label in food. The "fair trade" stamp on coffee. These things start as consumer protection and end as marketing advantages for folks with the resources to participate, and the writers who need protection the most - debut authors // the self-published // writers without agents or industry connections - are the ones least likely to know about or access these programs.

By creating a "certified human" category, you've implicitly created an "uncertified" category. Every book without the badge now carries a faint question mark, and so the presumption of innocence gets torn to shreds, and nobody has to take responsibility for it, because it happened through a logo, not a law.

I'm not going to use AI detection tools on other people's writing.

Not privately, not publicly, not ever.

I'm not going to participate in crowdsourced investigations of whether someone's novel or essay or blog post was "really" written by a human, and I won't share threads that claim to have found proof, and I won't add my voice to the chorus of outrage. My fingers are better employed typing out my own work than pointing at people I've never met.

The cost of a false accusation is a person's career and their mental health, while the cost of letting an AI-assisted book sit on a shelf is... a book sitting on a shelf, and I find I just do not give a shit about the latter. That asymmetry is so extreme I can't wrap my head around how more people aren't troubled by it.

If a publisher wants anti-AI clauses in their contracts, fine. If a literary prize wants attestation that no AI was used, that's their call. Those are agreements between parties who chose to be there and good luck to them. But the mob version of this, where anonymous internet users appoint themselves the AI police armed with broken tools and absolute conviction, is something I want no part of.

Writing has always been messy, and writers have always borrowed, imitated, recycled, and leaned on formulaic structures. Ghost writers exist, and editors sometimes rewrite entire chapters. Collaborative writing has been around for centuries. The line between "authentic" and "assisted" has never been as clean as people are pretending it is right now.

If The Three Musketeers were published today, and someone published the 2026 version of Mirecourt's pamphlet, what would happen? Would a Reddit thread decide that the prose felt too formulaic? Would a YouTube video rack up a million views dissecting the sentence structure? Would Hachette pull it from shelves?

The answer is: fucking probably. Because the current system doesn't care about the actual quality of the work, or the process behind it, or the centuries of collaborative tradition that produced some of the best writing we have. It cares about the appearance of purity. It cares about whether a mob can be convinced that something smells wrong.

Dumas is in the literary canon, and his books are assigned in schools.

But the way he made them would get him destroyed on the internet in 2026.

This seems suboptimal, to say the least.

I don't know what the right "policy framework" for AI and publishing looks like. Nobody does. We're probably going to spend years figuring it out and we're probably going to get a lot of it wrong.

But I am 100% sure that I know what the wrong version looks like. It looks like a YouTube video with a smug title and a million views, and a Reddit thread full of folks who've never published anything cosplaying as literary forensic experts, and a debut author's name becoming synonymous with fraud because her prose wasn't polished enough to survive a vibe check run by strangers on the internet.

Mia Ballard sold 1,800 books. She had a 3.51 on Goodreads. She was nobody. Most writers are nobody. The internet ate her alive because it felt good to have a villain.

She won't be the last.

And I still won't be any part of it.

The "Passive Income" trap ate a generation of entrepreneurs

I had coffee last year with a guy - I won't use his real name - who told me he was "building a business." I asked what it did. Dropshipping jade face rollers.

I made him say it twice.

Jade face rollers.

He'd found them on Alibaba for $1.20 each, and started selling them through Shopify for $29.99. Never used one himself. Didn't really know what they were for - something about lymphatic drainage? Reducing puffiness? He said "lymphatic" the way you say a word you've only ever read and never heard out loud.

Some guy on YouTube said jade rollers were "trending," the margins looked insane on paper, so he'd "built" a website with stock photos of a dewy-skinned woman rolling a green rock across her cheekbone and started running Facebook ads at $50 a day. Customers would email asking where their stuff was - shipping from Guangzhou, three to six weeks, sometimes way longer - and he'd copy-paste a response he found on a dropshipping subreddit. He had a Google Doc full of pre-written customer service replies.

Never talked to a single customer.

I swear to god.

Five months in, he was $800 in the hole.

He told me all this like he'd invented the wheel.

I bought him another coffee. I genuinely had no idea what else to do.

Jade Roller Guy has become my go-to example of something that went drastically, terribly wrong with how a whole generation of would-be entrepreneurs thought about work and money. A specific ideology - I've been calling it Passive Income Brain - grabbed a huge chunk of the people who were, by temperament and ability, most likely to start real businesses, and it gave them a completely fucked set of priorities.

Somewhere between 2015 and 2022, "passive income" stopped being a boring financial planning term and became, I don't know how else to put this, a salvation narrative. I mean that literally. There was an eschatology if you want to get nerdy about it. The Rapture was the day your "passive income" exceeded your monthly expenses and you could quit your job forever. People talked about it with that exact energy.

But, of course, the folks making any actual income, of any kind, were the ones selling courses about making passive income. It was an ouroboros. It was an ouroboros that had incorporated in Delaware and was running Facebook ads.

The pitch went something like: you, a sucker, currently trade your time for money. This is what employees do, and employees are suckers. (I'm paraphrasing, but not by much.) Smart people build SYSTEMS. A system is anything that generates revenue without your ongoing involvement. Write an ebook. Build a dropshipping store. Create an online course. Set up affiliate websites.

The specific vehicle doesn't matter because the important thing isn't what you build, it's the structure. You want a machine that generates cash while you sleep, and once you have that machine, you are free.

Free to do what? Sit on a beach, apparently. Every single one of these people wanted to sit on a beach. I've never understood this. Have they been to a beach? There's sand. It gets everywhere. You can sit there for maybe three hours before you want to do literally anything else.

The "Passive Income" trap ate a generation of entrepreneurs

But I digress.

The allure is real. Who doesn't want money that shows up while you sleep?

I'd fucking love that. I'd love it very much indeed. But "passive income" as an organizing philosophy for your entire business life, for how you think about work, is almost perfectly designed to produce garbage.

When you make "passivity" the thing you're optimizing for, you stop caring about anything a customer might actually want. Caring is active. Caring takes time. Caring is work.

Giving a shit is, by definition, not passive.

Between 2019 and 2021, roughly 700,000 new Shopify stores opened. The platform went from about a million merchants to 1.7 million in two years. About 90% of those stores failed within their first year. Which is really more a meat grinder than a business model...

We started drowning in a million businesses nobody was actually running. Dropshipping stores with six-week shipping times and customer service that was just copy-pasted templates. Guys who'd put their "brand name" - usually something like ZENITHPRO or AXELVIBE, always in all caps, always vaguely aggressive - on a garlic press identical to four hundred other garlic presses on the same Amazon page. AXELVIBE! For a garlic press!

And the affiliate blogs! Hundreds of thousands of them, pumped full of SEO-optimized reviews of products the authors had never touched, never even seen in person. A fractal of bullshit that technically qualifies as commerce but puts zero dollars of actual value into the world.

Leverage is real; I'm not disputing that. There is a difference between trading hours for dollars and building something that scales. Software does this. Publishing does this. You write a book once, sell it many times, nobody calls that a scam. Fine! That part they got right!

Where it went wrong is that the whole movement confused "build a good product that scales" with "build any mechanism that extracts money without you being involved." I don't think that confusion was accidental. I think the confusion was the point. Because if you're teaching people to build real businesses, you have to sit with hard, boring questions about whether anyone actually wants what you're selling. But if you're teaching people to build "passive income streams" you can skip all of that and go straight to the fun tactical shit. How to run Facebook ads, how to set up a Shopify store in a weekend, how to write email sequences that manipulate people into buying things they don't need.

Nobody talks enough about what the passive income movement did to the content quality of the entire internet. If you've tried to google "best [anything]" in the last five years and gotten a wall of nearly identical listicles, all with the same structure ("We tested 47 blenders so you don't have to!"), all making the same recommendations, all linking to the same Amazon products, you've experienced the results.

Those articles weren't written by people who cared whether you bought a good blender. They were written by people who cared whether you clicked their affiliate link, because that's what generated passive income, and the incentives made honesty actively counterproductive.

The honest review of blenders is: "most blenders are fine, just get whatever's on sale, the differences below $100 are basically meaningless." That review generates zero affiliate revenue. So nobody wrote it.

Instead you got "The Vitamix A3500 is our #1 pick!" with a nice affiliate link, written by someone who has never blended anything in their life. Multiply this across every product category and you start to understand the informational desert we've been living in. We broke Google results, at least partly, because an army of passive income seekers had an incentive to flood the internet with plausible-sounding garbage.

(Someone is going to object that Google should have filtered this stuff out, and yes, sure, but also, "the people creating the pollution aren't at fault because the EPA should have caught it" has never been a great argument.)

I've met dozens of smart, capable people who had actual energy, and who spent their entire twenties bouncing between passive income schemes instead of building real skills // real businesses // real careers. The pattern was always the same: six months on a dropshipping store, it fails, pivot to Amazon FBA, that fails, pivot to creating a course about dropshipping (because of course), and then the course doesn't sell either because by 2021 there were approximately forty thousand courses about dropshipping and the market had been saturated since before they started.

And the whole time they were getting further and further from the thing that actually creates economic value, which is: find a real problem, solve it for real people, care enough to stick around and keep improving. The boring thing. The thing that takes years. The thing that is, to be absolutely clear about this, not passive.

I once saw a guy ask whether he should start a dog walking business and the top response was something like "dog walking isn't scalable, you should build a dog walking platform instead." This person liked dogs! He liked walking! He lived in a neighborhood full of busy professionals with dogs!

But the Passive Income Brain thing had gotten so deep into how people talked about business online that "do the simple obvious thing that works for you" was considered naive, and "build a technology platform for an activity you've never actually done as a business" was considered smart.

The dog walking guy could have been profitable in a week.

The app guy would have burned through his savings in six months and ended up with a landing page and no users.

By 2020 the passive income world was absolutely crawling with grift: guys posing with rented Lamborghinis in YouTube thumbnails, "digital nomads" whose actual income came entirely from selling the dream of being a digital nomad to other aspiring digital nomads, podcast hosts interviewing each other in an endless circle of mutual promotion where everyone claimed to make $30K/month and nobody could explain what they actually produced. By 2021 or so it started to look like a distributed, socially acceptable MLM. The product was the dream of not working. The customers were people desperate enough to pay for it.

Not everyone in this world was cynical. I genuinely believe that. A lot of the people selling passive income content believed their own pitch. They'd had some real success with a niche site - pulled $3,000/month for a while, it does happen - read the same books everyone else read, figured okay, I'll teach other people my system. Why not. I would have done the same thing at 24. I'm almost sure of it.

But zoom out and what you had was just an enormous machine converting human ambition into noise. Affiliate spam // dropshipped junk // ebooks about passive income // courses about courses. An entire layer of the internet that was nothing but confident-sounding bullshit produced by people who had optimized for everything except making something worth buying.

The people near the top made money. Everyone else spent months or years chasing a mirage and came out with nothing but a Shopify subscription they forgot to cancel. They thought they'd failed. They hadn't failed. The system, every system, failed them.

What actually makes money hasn't changed. You find something people need. You get good at providing it. You charge a fair price and you keep showing up even when it's tedious and even when you don't want to. You build relationships over years. You build reputation over years. None of it is passive, and none of it has ever been passive! All of it revolves around giving a shit, day after day, about something specific. I don't think anyone has ever found a way around that and I don't think anyone will.

The passive income thing was a fantasy about not having to give a shit.

This is a terrible foundation for pretty much anything.

The affiliate SEO blogs are being slaughtered right now by AI-generated content. The people who spent years producing algorithmically optimized content of no value to humans are getting outcompeted by software that does the exact same thing, faster and cheaper. Facebook ad costs went through the roof and took the dropshipping gold rush with them. The biggest passive income gurus have already pivoted to selling AI courses. The machine keeps running. It just swaps out the brochure.

But I've noticed more people talking about what I'd call "give a shit" businesses - people who make furniture, run plumbing companies, write software they actually use themselves. Stuff where the answer to "why does your business exist?" isn't "to generate passive income for me." This works a lot better than the laptop-on-the-beach fantasy.

Jade Roller Guy, if you're out there: I hope you found something real.

I hope it keeps you busy.

The Entire Internet Is a UGC Reaction Video Now

I keep a folder in Apple Notes called “cursed websites,” where I save various artefacts that make me feel like the social contract has dissolved. Call it an act of self-loathing. Call it collecting evidence of the fall. Dansugc.com went straight into the folder this morning.

It’s a site where marketers // entrepreneurs (and I find the line between those two groups has become blurred to the point of being illegible) can buy pre-recorded “Reaction” videos for $3 apiece. You browse a library of 2k clips, sorted by emotion (shocked! happy! crying! excited!), pick a face you find particularly appealing, download a 5-10 second clip of a stranger performing surprise // delight at nothing in particular, and splice it into your own content. The idea is to make it look like an actual someone had an actual emotional response to your app on TikTok. Custom orders run to $8 and let you specify outfits, emotional arcs etc.

The tagline reads: “100% Real Humans. Zero AI.”

And I think that tells you almost everything you need to know about where we are. And where we are is a place where “at least the fake fuckery was produced by a biological organism” counts as a premium feature. The pitch for selling manufactured authenticity at scale is that at least the people in the factory are still real people. That’s the floor. That’s what passes for premium. We are drowning in content that is functionally the same as so many designer handbags stitched up alongside so many dupes.

The internet is the most powerful communication technology in human history, and we’re using it to sell each other $3 clips of faked surprise.

I don’t blame Dan, if that’s even his real name. He’s running a business filling a niche. He’s recognised that the entire internet advertising “ecosystem” now runs on simulated, casual, spontaneous “cool girl” energy. He’s simply the shovel-seller in an authenticity gold rush; except the gold is parasocial trust, and the shovels are clips of various women pretending to have their minds blown by your calorie-counting app.

100 ready-to-post UGC videos per month cost $800. A fully managed campaign for 500 videos goes for $10k. Dan claims over 5 billion total views generated, and I don’t doubt his numbers at all. But if this stuff doesn’t set off your alarms, even a little, you’ve probably been marinating in it so long you’ve lost the ability to smell it.

What Dan’s business lays bare - if you actually sit with it - is that the internet, as a social and cultural space, is almost entirely performance. The whole apparatus has been hollowed into a content mill that grinds human attention into micro-conversions. I’m aware that I’m not the first person to make the complaint that the internet sucks - but every point of suck has now compounded into the final boss of shitty experiences. The algorithmic timelines, the social media homogeneity, the death of truth, the proliferation of monetisation strategies and side hustles etc. have all contributed to this moment: a growth-hacked, engagement-optimised, brand-building logic that has destroyed our ability to distinguish between a person sharing something they give a shit about, and a person executing a “content” strategy.

Open TikTok right now and ~try to find a video that isn’t, at some level, attempting to sell you something - a political identity, a digital product, a lifestyle, a personal brand. It's next to impossible. Every piece of content carries this faint whiff of ~strategy behind it. The girl doing a “Get Ready With Me” video has an affiliate link in her bio, and the asshole ranting about immigration has a Substack he can’t wait to funnel you to. The therapist explaining attachment styles is, naturally, building a course she’ll launch next month, and the couple doing a “day in our van-life” vlog is negotiating a brand deal in their DMs. There is always a funnel, always a CTA, and the output, no matter how “down to earth” it’s designed to feel, is always doubling as a mechanism to convert your attention into revenue.

Jean Baudrillard (read Simulacra and Simulation) identified how modern society replaces reality with the symbols and signs of reality. He mapped the process in four stages: first the image reflects reality, then it masks reality, then it masks the absence of reality, and finally it has no relation to reality whatsoever. A UGC reaction video purchased for $3 and spliced into a TikTok ad is operating at that fourth stage, because the reaction doesn't reference a real reaction, there was never a real reaction, and the whole thing is a sign pointing at nothing...

You might say who cares, advertising has always been manipulative, and sure, that's true. When Grigory Potemkin allegedly erected fake village facades along the Dnieper River in 1787 to impress Empress Catherine II during her tour of Crimea, he was doing UGC marketing for the Russian Empire (the historical consensus is that the villages were probably real settlements that had been tidied up rather than total fabrications, but the legend stuck because the concept is so useful as shorthand). The instinct to manufacture the appearance of prosperity for the benefit of powerful onlookers is old as dirt. What's different is the scale and the fact that regular people are doing it to each other all day long, voluntarily, for free or for pennies.

There's a phrase you hear in marketing: "everyone is a creator now." Like the whole internet has become a Renaissance workshop where artisans and thinkers reach audiences directly. In practice, everyone is a marketer now. The "creator economy" turned out to be an economy where the thing being created, more and more, is demand for more and more of yourself. Your aesthetic, your opinions, your morning routine, your trauma, your fitness journey, your face: all raw material for the content machine, all measured against so many growth metrics.

The result is an internet that feels, to use a technical term, like shit. Scroll any platform and you're wading through a river of optimized slop, and what makes it depressing is how same-y it all is despite the theoretically infinite diversity of human expression available online.

Political content looks like beauty content and beauty content looks like finance content and finance content looks like fitness content, because they’re all using the same hooks and they’re all built on the same emotional beats. The provocative claim in the first 2 seconds, the false tension, the extreme language, comment if you agree, like and subscribe and so on and on. Don’t forget to share this with someone who ~needs to hear this. Make sure you follow for part 2. The playbook is identical, whether someone’s raving about the best skin serum, or about their least favourite ethnic groups...

AI makes all of this both worse and darkly funny at the same time. AI slop and human slop have now converged to the point that Dan can credibly market “zero AI” as a premium feature, while his customers’ output offers no real elevation from the realm of deepfakes. And as much as Real Humans is a selling point for the internet today, the AI is getting better, too. It can produce damn-near the same hooks, the same engagement-bait captions, the same dead-eyed reaction that a human can churn out today. AI content creators aren’t even poisoning the well; not really. They’re simply drawing from a well we already puked in years ago.

AI slop is human slop with the labor costs removed, and that's why nobody can tell the difference, and that's why Dan has to specify that his product is made by real humans, like a carton of eggs stamped "cage-free."

In Don DeLillo's White Noise a character visits "The Most Photographed Barn in America" and realizes that nobody can actually see the barn anymore because the barn has been completely replaced by the aura of the photographs of the barn. Once you've seen the signs about the barn, he says, it becomes impossible to see the barn. The internet has done this to basically everything. You scroll past enough UGC reaction videos and you can't encounter a real reaction without wondering if it's bought, you read enough performative vulnerability posts and you can't encounter real vulnerability without suspecting it's a hook. The constant presence of the fake thing corrodes your ability to trust the real thing, and the really vicious part is that a lot of the "real things" were fake too, which means the thing you're mourning the loss of may never have existed in the form you remember it.

This is, I suspect, why nostalgia for "the old internet" has become its own genre of content (which is, of course, itself being optimized for engagement, because there's no exit door). People remember a time when someone's blog was their blog and nothing more, when a forum post was written because a person had a thing to say and they said it and moved on. Whether that era was actually as good as we remember is debatable, and I think there's a strong case that we're romanticizing it. Sturgeon's Law applied then too: 90% of everything was crap. But the crap was sincere crap. The crap was some guy with a Blogspot writing 3,000 words about his favorite Star Trek episodes because he liked Star Trek and had opinions about the Borg, with zero intention of building an audience or selling a course called "How I Built a 6-Figure Blog About Star Trek."

The World's First Bullshit

I opened Twitter this morning and three different startups were announcing "the world's first" something. An AI CMO, an autonomous AI marketer, and a design agent "with taste," which is a phrase that made me close my laptop for about ten minutes.

None of them are the world's first anything. I'd bet money there are 40 AI marketing tools already shipping, maybe more, and the category lines are so blurry that "first" really comes down to how specific you're willing to get with your label. I could build an AI that writes haikus about SaaS pricing pages. I could call it the world's first AI SaaS Pricing Haiku Engine. Nobody could argue, because nobody else would have tried. That's what "first" means now. You've found an unclaimed sliver of territory so narrow that being the only person there is trivially easy.

But OK, forget that the claims are fake. What bothers me more is that "world's first" is the wrong thing to want in the first place. It's the wrong trophy entirely.

Thomas Newcomen bolted together the first commercially successful steam engine in 1712 and the thing was, by every measure, awful. Maybe 1% thermal efficiency. It ate coal like a bonfire eats kindling and it threw away most of its heat on every cycle. James Watt showed up 57 years later, added a separate condenser, and built a version you could run a factory with. Newcomen got a Wikipedia footnote. Watt got a unit of measurement named after him. Google wasn't the first search engine. Facebook wasn't the first social network - Friendster beat it by years, and SixDegrees beat Friendster, and if you remember Friendster, congratulations, you're old. The iPhone showed up years after the Blackberry and the Palm Treo. Who ended up mattering? The ones people actually liked. Every single first mover on that list became a piece of bar trivia.

Founders keep doing this, I think, for two overlapping reasons and one of them is almost sympathetic. Silicon Valley has a mythology, basically a religion at this point, where the inventor is the saint and the timestamp is the sacred relic. The Xerox PARC researchers who built the graphical user interface were visionaries; Steve Jobs was the guy who walked through their lab, took notes, and shipped something your mom could buy at a mall. The mythology says PARC mattered more. In practice, Jobs built the product, and products are what people use.

But I think the bigger driver is more cynical than that, and it has to do with Twitter specifically. "World's first" is a cheat code for the algorithm. You can't verify it while you're scrolling, it sounds historic, and it carries a weight that "we also built an AI marketing tool, we think ours is pretty good" never will. The most extreme claim gets the most retweets, and retweets are what your investors see before they decide whether to write a check. I get why people do it. I'd probably be tempted too.

The problem is who it attracts. Novelty-chasers. People who'll try your product once, talk about it at a dinner party, and never open it again. The people who actually make a product successful long-term are the ones who care that your thing works well, and those people could not care less whether you were first or forty-first.

The second version of something is almost always better than the first, because the second version watched the first one break. Every good product is a correction of somebody else's bad product. That's how engineering works in practice: you watch someone else's bridge come down and you build yours with thicker cables.

What would it look like if founders were honest about this? "We looked at the 14 AI marketing tools that already exist and we think we've solved the 3 problems they all share." A less exciting tweet. But it's certainly more useful. It tells potential customers that you've done your homework and you're competing on substance rather than on who filed their launch post on Tuesday instead of Wednesday. Most founders would rather sound historic than sound competent, and their customers can tell.

The AI startup space right now feels like the early days of a gold rush, when the most important thing is to stake your claim loudly before anyone else reaches the same patch of dirt. But gold rushes end.

The people who built lasting wealth in the California Gold Rush were largely the ones selling picks and denim jeans to miners, the ones who understood that being best at serving a need that wasn't going away beat being first to a plot of land every time.

Levi Strauss arrived in San Francisco in 1853, 3 years after California became a state, and he wasn't first to anything, but he's still around.

Every founder tweeting "world's first" today should ask one question about their product: would anyone still care about this if 10 other people had built the same thing? If yes, you might have something, and if no, if the only interesting thing about the product is the claim of novelty, what you're looking at is marketing copy where the product should be.

You can announce "world's first" on launch day, before anyone has used your product, before anyone has even seen a demo. You can never announce "world's best." Other people have to say that about you, and they'll only say it if you've given them a reason to.

Notes on going solo: celebrating 6 years of Studio Self

Since roughly // broadly 2020, I’ve been running a solo-powered minor empire. I have no employees, and my only office is my home office, filled as it is with cat hair and various comic books. My business is: me, a laptop, a set of AI tools that scale the parts I hate, and a personal network. I’ve done brand strategy, naming, GTM, messaging, content and growth marketing for a list of folks I respect and genuinely give a shit about - SaaS cos, VC firms on 3 continents, and an initiative I still believe has the power to change the world.

And I’ve done it all without once - ever - wishing I had a team, or wishing I was any “bigger” than I am.

Well, I’m coming up on the 6th anniversary of going my own way, and I wanted to take a moment to put down on paper what worked and why, how I think about the future of what I do, and why the structural economics of being a solo-operator are only going to improve from here.

The trad agency model is a staffing arbitrage play. You hire 23-year-olds, bill them out at senior rates, and scale with headcount. You technically lose money on the first project with each client (the first being the only time senior staff do the work) and make it up on the long-tail retainer work that hopefully follows. Founders sell, juniors grind, and middle managers translate between the two groups. To varying degrees of success. If you’ve ever found yourself wondering why so much agency work feels so mediocre when those same agencies are winning awards, this is why: the person who sold you on the engagement, the person entering the Lions and the person doing the work on your actual brand are almost never the same person. There’s an incredibly lossy compression mechanism in between, and it strips out most of the taste, judgement and contextual awareness that made the pitch compelling in the first place.

This model is the go-to because - thus far - it has been the only way for an artisan to scale their work to the level of financial freedom // validation the entrepreneurial set promote as the be-all-end-all of human existence.

When I started Studio Self, I had just left a tech startup that had been acquired by MYOB, and along the way we’d struggled with precisely that experience: attempting to scale our own capabilities by hiring someone else, only to find ourselves mired in a level of mediocrity that can only be defended by a recognisable brand. My goal was to eliminate that experience entirely.

The thesis was that creative agencies should be small. They should be born small, and they should stay small, and they should be entirely focused on ~the work. The agency is a different beast to a tech startup, and it should act as such. Call it an anti-LARP manifesto.

There has never been a translation layer. There’s no game of telephone between “what the client actually needs” and “what the junior assigned to the account understood from the brief.” I have shipped every engagement with full context intact, from first meeting to final deliverable.

I have made a deliberate choice to not scale the creative aspect of my work. Note the word “creative.”

For the first few years, that meant my workload was roughly 1-2 clients max.

The constraint on solo operations has always been bandwidth; one person can only do so much in a day, and the non-creative operational overhead of running a business (invoicing, scheduling, CRMing, formatting, bookkeeping, task // project management etc) eats into the hours and the cognitive bandwidth available for actual creative work. I challenge anyone who has spent more than an hour a day looking at a spreadsheet to pull a piece of good actual work out of their ass. In my experience, the work suffers any time the creative mind is faced with documentation.

This is dangerous, because the appeal // selling point of an agency is the creator who drives it. Call this the Mind (the originator of the business). They and their specific output are the product.

In a traditional agency, you solved this by hiring operations people. And of course, to justify the cost of the ops folks, you then hired other creatives, and the agency enshittification almost inevitably began. Adding layers between the Mind and the client leads to a decay in quality. If you wanted to maintain solo-operator status, you solved it by working nights // weekends and then slowly burning out. The road to alcoholism is (believe me) paved by folks who attempted to do it all.

LLMs have changed this math completely.

The choice a creative makes about how they use AI is critical, and I think it can evolve, but you need to be clear about it. The distinction between what you do and don’t outsource to a thinking machine with a non-apparent brain matters enormously, and I worry that most folks running solo businesses are getting this - shall we say - a little backward.

I have a simple formulation when it comes to AI:

My purpose in my work is to be creative.
My purpose is to enjoy my work.
I enjoy the work that is creative.
I do not enjoy the work that is not creative.
Therefore...well.

I use AI for task and project management, operations, proposals, documentation, formatting, scheduling, meeting transcripts, project timelines, email triaging (but never, ever replying or managing it), and the thousand small tasks that used to consume over 50% of my working week. Example: setting an agent loose on my Google Drive to clean up a folder of poorly filed (whoops) drafts, or to sort out unpaid invoices, or to capture and share receipts with my accountant. I use AI to surface topics and ideas from Hacker News and other sources that I don’t want clogging up my personal RSS reader. I apply automation heavily to this work because there is nothing creative about it, and it brings me no joy. The quality of that AI output is probably better than what I produce manually, because AI doesn’t go down a mental tangent at 4pm on a Monday while staring at invoices and become submerged in self-hatred. I am liable to do that. I use AI for coding and dev work - largely through Claude and Mistral - and I’ve found it invaluable in shaking the dust off the coding skills I first learned modding Wolfenstein 3D at the age of 14. Lastly, I use it to come up with YouTube titles and descriptions, and I use Descript for AI-assisted editing.
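
For the curious, the “surface topics from Hacker News” piece doesn’t need anything exotic. Here’s a minimal sketch of the sort of plumbing an agent or a cron job can own, using the public Hacker News API; the keyword list and the story count are purely illustrative, not my actual pipeline.

```python
import requests

# Public Hacker News API endpoints (Firebase-hosted).
HN_TOP = "https://hacker-news.firebaseio.com/v0/topstories.json"
HN_ITEM = "https://hacker-news.firebaseio.com/v0/item/{id}.json"

# Illustrative keyword list - swap in whatever you actually care about.
KEYWORDS = ["branding", "marketing", "solo founder", "positioning"]

def surface_topics(limit=100):
    """Return top stories whose titles mention any keyword of interest."""
    story_ids = requests.get(HN_TOP, timeout=10).json()[:limit]
    matches = []
    for story_id in story_ids:
        item = requests.get(HN_ITEM.format(id=story_id), timeout=10).json() or {}
        title = item.get("title", "")
        if any(k.lower() in title.lower() for k in KEYWORDS):
            url = item.get("url", f"https://news.ycombinator.com/item?id={story_id}")
            matches.append((title, url))
    return matches

if __name__ == "__main__":
    for title, url in surface_topics():
        print(f"- {title}\n  {url}")
```

The point isn’t the script; it’s that none of this touches the creative work, which is exactly why I’m happy to hand it off.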

But I don’t use AI to write copy, or to develop brand strategy.

I have been a heavy user of Grammarly in the past, and I’ll admit to that - once upon a time, it was like having an editor in your pocket. But their increased reliance on AI has made their editing tools next to useless: any output run through them comes out homogeneous and unreadable, and after a review of the last 6 months of work, I’ve finally kicked them to the curb. I’d rather my work include however many technical imperfections Grammarly might have fixed, as long as I can avoid Grammarly’s phrasing proclivities...

I don’t use AI to make taste-based decisions about what a company should sound like, or look like, or feel like. And the reason isn’t actually a romantic attachment to the artisanal purity of human creativity, as much as I might believe in that - it’s entirely strategic.

The market is already flooded with AI-generated “creative” work. You can feel it in the sameness of the YC landing pages, and how every Series A-B deck now has that exact tone of voice. Everything has started to blur together, and nobody sounds specifically like themselves. It’s beginning to feel like everyone has the same marketing team, because everyone’s marketing team uses the same agency, and that agency is ChatGPT. I’m not anti-AI, but I’d certainly count myself amongst those who dislike bad outputs...

My theory is that messaging and work that sound like they were produced by an actual human with actual taste and actual opinions (even if those opinions run the risk of being ~wrong) are going to stand out more than ever. Which is, I suspect, going to be seen as the great irony of the AI era. The technology that makes it trivially easy to create competent creative work simultaneously makes merely competent creative work nearly worthless. The value transfers to the stuff that can’t be generated: points of view, idiosyncrasies, judgements, strong opinions. The premium shifts from execution to context, and personal // individual context will always beat documented context.

Taste is - almost by definition - a solo-operator product. It doesn’t survive committees and Slack channels and standups. It lives in one person’s brain and how that brain produces work.

So the model I’ve landed on - AI handles everything that doesn’t require creative judgement, and a human handles everything that does - turns out to be accidentally well-positioned for the next decade of competition. I’m faster than most agencies, because I don’t have coordination costs, and I’m better than AI-only solutions because the work I produce has (arguably) a pulse.

There’s a Borges story (isn’t there always) called “Pierre Menard, Author of the Quixote” in which a fellow attempts to write Don Quixote word for word, identical to the original, but through his own creative process. It’s worth a read. Borges’ point was that identical outputs can have completely different meanings, depending on who produced them, and why. I find myself returning to this when I look at AI-generated brand work sitting next to human-created brand work. They look, sound, feel similar on the surface - but the difference is in the intention. It’s in whether someone actually made a choice to put a comma here or there, or whether an algorithm produced the most statistically likely arrangement of words. The right clients know the difference, even if they can’t always articulate it, and they’re increasingly willing to pay for it.

The history of business is (and I’ve ranted about this before) a history of declining coordination costs. Ronald Coase, in his 1937 paper “The Nature of the Firm”, argued that companies exist because of the transaction costs of coordinating work. You hire people instead of contracting everything out because managing 100 contracts costs more than running an org chart. But every time coordination costs have dropped, the optimal firm size has dropped with them. The internet made it possible to run a 10-person company that would have needed 50 people in 1989. Cloud computing and SaaS tools made it possible to run a 5-person company that would have demanded 10 in 2005. AI is making it possible to run a 1-person company that would’ve asked for 5 in 2020. Etc.

I think it’s fair to assume we’re still early on that curve.

The tools available to solo operators today are absurdly powerful compared to even 1-2 years ago. I can generate documents, produce financial models, deploy websites and self-hosted tools, manage multi-channel marketing campaigns, handle customer support and run project management workflows without hiring anyone. This is a boon to the creative side of my brain that hates every single one of those tasks.

What hasn’t changed - and what I don’t expect to change for a long time - is that the top of the value chain still requires human judgement. The lower-level work is getting cheaper, as it probably should; but for experienced folks who know how to sell that experience, the premium is rising. Judgement-dense, taste-heavy, context-dependent tasks justify a price at a multiple of a Claude Max subscription, and for good reason. Which creates an interesting (to me, at least) opportunity: a single operator who uses AI to handle the entire operational substrate of a business while personally delivering the high-judgement creative work that clients actually value. The margins on this model are high, because the revenue scales with the operator's expertise and reputation while the costs stay essentially flat.

There’s a “cult of personality” aspect to this, too. I think we care more about individual founders and what they represent than we used to. We invest in companies because a human being has become the face of the operation, and we like that face, and we like the things they say. You can see this playing out everywhere from Mr. Beast to Elon Musk, and no matter how distasteful you or I might find their brands, it’s a real phenomenon. Arguably, it’s a good chunk of what makes my own business work: people like me. They like the way I think, and they want to associate with it or apply it to their own brand.

Could this model produce a billion-dollar solo business? I don’t think the math is as crazy as it sounds. I’m not sure it’s something I have any interest in pursuing, but I find the idea fascinating...

Software companies with zero-marginal-cost products have shown that revenue can scale independently of headcount - if a solo operator builds a productized service // SaaS tool // content platform, or pulls off a brand licensing operation on top of their core expertise, and uses AI to handle everything else, the revenue ceiling starts to look very different from what we've historically associated with one-person operations.

You don't need to sell a billion dollars worth of consulting hours. You need to build a leveraged asset on top of consulting insight.

I'm not predicting this will happen next year, but the structural barriers that made it impossible are being removed one by one, and faster than most folks seem to realize. The first person to do it will probably be someone who looks a lot like the current generation of solo operators: technically sophisticated, creatively excellent, strategically sharp, and running their entire operation on a stack of AI tools that didn't exist three years ago.

My “mini-empire” is this blog, this email newsletter, products like Kerouac and Distributism etc, powered by a solo agency. I wake up (most days...) passionate about hacking away at all of it, and I deeply give a shit about the clients I have and the words I get to put on the page.

And if I were starting that “empire” today, I don’t think I’d do anything differently. I’d still go solo from day one, and I’d still choose to position myself at the higher end of the market, and I’d still guard my own creative work jealously.

The advice I’d give to anyone else:
  1. You need to actually be good at something. Yes, the solo model magnifies skill, but it also magnifies any // all mediocrity. If you’re a B-minus operator hiding in an agency where the brand does the work, going solo will expose your shortcomings with alacrity.
  2. You need an actual opinion and an actual perspective on your domain that clients can’t get from a chat bot. I’ve had a fair few VCs and tech “luminaries” advise me to be less opinionated over the years; play it safe, don’t ruffle feathers etc. But that’s the entire value proposition - a real person with a real worldview. If your only pitch is “I’ll do competent work at a reasonable price,” you’ll be automated out of every opp. A chat bot won’t tell you its frank opinion on xAI and Elon Musk. I will, and I’ll do it without being asked twice. This is not a weakness.
  3. You need to build systems. As many as you can. And make them legible, too... every repeatable process should be automated, every template refined, every workflow documented. The goal is to make the operational side of your business so efficient that it functionally disappears, leaving you with nothing but the high-value creative work that only you can do. (A minimal sketch of what one of those systems can look like sits just after this list.)
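
By way of illustration only - the file name, columns and wording below are made up, not my actual books - here’s the shape of the kind of small, legible system I mean: a script that reads an invoice ledger and flags anything overdue, so chasing payments stops being something I carry around in my head.

```python
import csv
from datetime import date, datetime

# Hypothetical ledger: one row per invoice with client, amount, due_date, status.
LEDGER = "invoices.csv"       # illustrative file name
DATE_FORMAT = "%Y-%m-%d"      # assumed date format in the due_date column

def overdue_invoices(ledger_path=LEDGER, today=None):
    """Return unpaid invoices whose due date has passed, most overdue first."""
    today = today or date.today()
    overdue = []
    with open(ledger_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["status"].strip().lower() == "paid":
                continue
            due = datetime.strptime(row["due_date"], DATE_FORMAT).date()
            if due < today:
                row["days_overdue"] = (today - due).days
                overdue.append(row)
    return sorted(overdue, key=lambda r: r["days_overdue"], reverse=True)

if __name__ == "__main__":
    for inv in overdue_invoices():
        print(f"{inv['client']}: {inv['amount']} due {inv['due_date']} "
              f"({inv['days_overdue']} days overdue)")
```

Documented, boring, and it never has a bad Monday.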

130 years ago, musicians and orchestras were concerned that the advent of the phonograph would put them out of business. We’re several tech disruptions removed from that moment, and recorded music has created entirely new categories // genres // grifts of music that were previously impossible, but live music has only become more valuable.

The solo operator is in a similar place.

AI-generated creative work is the recording. Your work is the live performance. And as the recordings proliferate and all start to sound exactly the fucking same, the live performance becomes the thing people will cross state lines to experience.

I’m six years in, and I don’t think I’m likely to change course now. And I’m frequently surprised by how durable the model has actually turned out to be... I've watched agencies and tech giants alike go through layoffs and restructurings and pivots and identity crises, and I've watched AI startups promising to “automate creative work” launch with terrible fanfare and then discover that their output was too generic to be even slightly competitive.

Meanwhile, the solo model keeps working // compounding. The core value proposition becomes more relevant as the alternatives get noisier and more desperate...

My goal from here is to follow whatever side quests Studio Self happens to turn up, let it continue to support this blog, and keep having conversations with founders and teams who I actually like (important).

I think the future probably belongs to small companies // individuals more than sprawling conglomerates. There is a huge opportunity for people who use AI to remove everything that isn't judgement from their workload, and apply that judgement to good products and good services. My theory is that one person with an AI-augmented operational layer plus taste is the company of the future - and I'd bet on that again and again.

Finally - if you’re interested in talking about how to work with me at Studio Self, you can reply here or drop me a line: joan@thisisstudioself.com
