GeistHaus

Sentiers

A carefully curated weekly newsletter finding signals of change and imagining better futures in technology, society, and culture.

15 posts
Feed metadata
Generator Ghost 6.34
Status active
Last polled Apr 29, 2026 01:37 UTC
Next poll Apr 30, 2026 01:37 UTC
Poll interval 86400s
ETag W/"45d9a-cbOkl1raGa93YYJ2ONy4aStUlsg"

Posts

Scenario planning for the “jobless future” ⊗ Software brains & statistical engines
Newsletter
No.400 — Extrapolated futures archive ⊗ The dissonance is expanding ⊗ South Korea to spur a renewables revolution ⊗ Eye contact with a humpback whale
Scenario planning for the “jobless future”

This is the second time I’ve shared a piece by Tim O’Reilly where he applies scenario planning to AI—last time was back in issue No.384. I like this “series.” In this one he’s looking at some of the signals around employment and comes up with useful quadrants. As we do in foresight, he maps divergent possibilities rather than predicting a single outcome; his goal here is to find robust strategies that hold across all quadrants. He plots two vectors: the scale and pace of AI’s economic impact (capability combined with adoption speed), and whether that impact flows toward efficiency (doing the same with fewer people) or toward doing more, serving previously unmet needs.

The lower two quadrants differ mainly in speed: the Slow Squeeze erodes entry-level work quietly, while the Displacement Crisis delivers unemployment topping 10%. The upper-left Augmentation Economy sees AI widening individual workers’ capacity. The upper-right Great Transformation is the scenario worth dwelling on: rapid AI adoption directed toward new applications—drug discovery, personalised education, care at scale. O’Reilly draws on economist Alex Imas and Noah Smith to argue this quadrant isn’t merely a moral preference; it’s where the stronger businesses end up. Three-quarters of AI’s economic gains, per a PwC study he cites, are flowing to the 20% of companies focused on growth rather than cost-cutting.
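For readers who like the grid spelled out, the two vectors and four quadrants can be written as a small lookup table. The quadrant names come from O’Reilly’s piece as summarised above; the (pace, direction) encoding is my own illustration:

```python
# O'Reilly's 2x2: pace/scale of AI's economic impact crossed with whether
# AI is pointed at efficiency or at doing more. Quadrant names are from
# the piece; the tuple keys are an illustrative encoding.
QUADRANTS = {
    ("slow", "efficiency"): "Slow Squeeze",          # lower left
    ("fast", "efficiency"): "Displacement Crisis",   # lower right
    ("slow", "doing more"): "Augmentation Economy",  # upper left
    ("fast", "doing more"): "Great Transformation",  # upper right
}

def scenario(pace: str, direction: str) -> str:
    return QUADRANTS[(pace, direction)]

print(scenario("fast", "doing more"))  # Great Transformation
```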

After creating the scenarios, O’Reilly turns to signals, which he calls “news from the future”: data points that gradually reveal which quadrant the world is entering, and the ones he shares are quite mixed. The robust strategy that holds across all four is the same: orient toward doing more rather than toward efficiency. Every time AI is used to do the same work with fewer people, it’s a vote for the lower half of the grid; using it to do something previously impossible, to serve previously unmet needs, pushes toward the upper half. As long as there is this kind of demand and there are unsolved problems, he argues, AI augments rather than replaces; it’s only when we stop looking for new things to do that the machines come for the jobs.

They model jobs as bundles of tasks, and distinguish between “strongly bundled” jobs (where the same person has to do multiple interdependent tasks) and “weakly bundled” ones (where tasks can easily be split between a human and an AI). AI replaces the weakly bundled jobs first. But even for weakly bundled jobs, automation only replaces human labor after demand becomes inelastic, after AI is so productive at the task that making more of the output hits diminishing returns. […]

In the upper quadrants, all three categories thrive. Specialists do well because AI expands the scope of what their bundled expertise can accomplish. Salarymen thrive because companies that are doing more, not just doing the same with less, need people who can adapt to constantly changing tool capabilities within the context of their business. And small businesses proliferate because AI gives a one-person shop the productive capacity that used to require a department. […]

Create professional associations that lean into mentorship and an AI-enriched career ladder, but aren’t afraid to take a political stance. The idea that providers of capital are entitled to all of the gains is a pernicious idea that has created an engine of inequality rather than of wide prosperity. It doesn’t have to be that way. Professional associations and other forms of solidarity are a possible source of countervailing power.

Software brains & statistical engines

Nilay Patel coins “software brain,” the cognitive habit of seeing the world as a series of databases controllable through structured language. In Sentiers “lingo,” I’d refer to this as a lens; here Patel means the way some people in tech perceive the world around them, and the opportunities they see. I find the concept very useful: it’s in line with Marc Andreessen’s 2011 declaration that “software is eating the world,” which is still quoted regularly (even broligarchs are correct sometimes). In this view, if you can model something as a database, you can manipulate it, optimise it, scale it. Uber is a database of cars and riders; YouTube is a database of videos; DOGE arrived in government and immediately tried to govern by controlling databases. The legal system’s formal language (statutes, precedents, case citations) is seductively similar to code, which is why tech people and lawyers share an intoxicating mutual recognition; both believe the other has found a way to issue commands to reality.

I’m almost done watching a fantastic interview with Karen Hao on The Diary of a CEO podcast, and what she describes about Ilya Sutskever’s and Geoffrey Hinton’s conviction that human brains are “statistical engines” (a hypothesis, she’s careful to note, not established science) connected immediately for me with what Patel is describing. Both rest on the same equation of mind with machine, approached from two directions: one looks at brains and sees computers; the other looks at the world and sees data it can process with code. In a way, squinting a bit, the researchers trying to build AGI while relying on the statistical engine hypothesis are applying “software brain” to biology. Patel’s observation is that everyone downstream from that hypothesis has been applying the same cognitive style outward, to society at large. They see data that can be computed.

To connect to a number of other pieces shared in the past, I’d also mention that Patel basically proposes a version of Alfred Korzybski’s “the map is not the territory,” that “the word is not the thing.” The database is always a simplification of the territory it models, and at some point every database stops matching reality. Korzybski’s insight was that we routinely confuse our representations of things with the things themselves and act on the map as though it were the world. Software brain takes this confusion as its operating premise: when the map and the territory diverge, the response is to change the territory, not the map. Asking people to make themselves legible to AI is that inversion turned into a business model.

Quinnipiac just found that over half of Americans think AI will do more harm than good, while more than 80 percent of people were either very concerned or somewhat concerned about the technology. Only 35 percent of people were excited about it. […]

Any business process that looks like code talking to a database in a repetitive way is up for grabs. That’s why Anthropic has been so relentlessly focused on enterprise customers, and it’s why OpenAI is now pivoting to business use. There’s real value in introducing AI to business, because so much of modern business is already software: collecting data, analyzing it, and taking action on it over and over again in a loop. Businesses also control their data, and they can demand that all their databases work together. […]

But: not everything is a business. Not everything is a loop! The entire human experience cannot be captured in a database. That’s the limit of software brain. That’s why people hate AI. It flattens them. […]

Even taking the time to consider how much of your life is captured in databases makes people unhappy. No one wants to be surveilled constantly, and especially not in a way that makes tech companies even more powerful. But getting everything in a database so software can see it is a preoccupation of the AI industry.


“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Futures, Fictions & Fabulations
  • Extrapolated futures archive. “The Extrapolated Futures Archive is a reverse-lookup for speculative fiction. Describe a situation you are facing, and find the SF stories that already worked through the implications. The catalog connects stories (novels, novellas, short stories, films) to the speculative ideas they explore: thought experiments about technology, governance, biology, society, and more. Every idea is tagged with domains, scenario types, and outcome types so you can filter by the kind of future you are thinking about.”
  • Beyond Tomorrow: Four Scenarios for the World of 2050. “Explore four distinct visions for 2050—representative of the range of plausible futures and based on detailed quantitative analysis of 100 megatrends and a century of historical data. The journey to 2050 could take several routes. By developing a strategy that accounts for multiple possible futures, leaders can safeguard long-term competitiveness while understanding how best to position their organizations today.”
  • Transforming with STILE: a practical method for making decisions with the future in mind. “… this is where the STILE framework comes in—a strategic tool designed to assess the feasibility and readiness of emerging ideas across five critical dimensions called STILE Elements: Social Acceptance, Technological Capability, Infrastructure, Legal Clearance, and Entrepreneurial Zeal.”
Algorithms, Automations & Augmentations
  • The dissonance is expanding. You should read Kai’s intro to his latest issue, well put! “I find it increasingly hard to be part of an industry that is building a future I fear is becoming deeply anti-human. The person with seventeen browser tabs and a Claude Code subscription and the person who considers human creativity and the arts indispensable – they both feel like me. I’m just not sure they can fully coexist anymore. The tension is real.”
  • Open-world evaluations for measuring frontier AI capabilities. “AI models have started to saturate most major benchmarks. But does that mean they can build and ship a real product, or conduct a scientific experiment end-to-end, or navigate a government bureaucracy? Researchers have started testing AI in such real-world settings. We call these evals ‘open-world evaluations’. This essay defines open-world evaluations, surveys the lessons learned so far, and lays out best practices for conducting them.”
  • The AI Roadmap: How We Ensure AI Serves Humanity. “In our new report, we offer seven principles that outline how the technology should be built, deployed, and governed. They’re a roadmap and an invitation. Together, we can take the first steps toward that better future.”
Built, Biosphere & Breakthroughs
  • How South Korea plans to use the Iran crisis to spur a renewables revolution. “In Guyang-ri, a farming village of 70 households about 90 minutes south-east of Seoul, people gather for communal free lunches six days a week. The meals are funded by the village’s one-megawatt solar installation, which generates roughly 10m won ($6,800) in net profit each month.”
  • The disappearance of the public bench. “Benches are microcosms of an expansive debate about who belongs in urban public spaces. When they are removed or made uninviting, we lose more than just a place to rest.”
  • The US offshore wind industry finally gets a break. “Burgum had vowed to fight back, but last week, the department quietly let the final deadline for appealing the courts’ decisions lapse. The move means construction of the nation’s first five major wind farms along the eastern seaboard can continue absent a change in the case. When complete, the wind farms will generate enough electricity to power well over 2 million homes.”
Asides
  • Eye contact with a humpback whale. “This moment of eye contact was beyond my wildest dreams. I’ve never encountered a whale like this one, and it was the most profoundly beautiful experience of my life.”
  • This Filipino man transformed his home into a free library for all. Respect! “What makes Reading Club 2000 so special isn’t just that it has books; there are many libraries, after all, but that it truly belongs to everyone. At Mang Nanie’s library, there’s: No membership card. No library card. No borrowing limits. No fees or fines. You can walk in, take a stack of books with you, and never be asked to return them. You can even keep them.”
  • “Is it life? We can’t tell”: Nasa’s Curiosity rover finds organic molecules on Mars “including chemicals widely considered building blocks for the origin of life on Earth. Five of the seven molecules identified in a dried lakebed near the equator had never previously been observed on the red planet.”
Civilizational optionality ⊗ The social edge of intelligence
Newsletter
No.399 — The term “AGI” is almost useless at this point ⊗ Frugal AI ⊗ Design futures in infrastructure ⊗ The AI revolution in math has arrived ⊗ First Indigenous Group to ban data centers from its land ⊗ A macro array of colorful slime molds
Civilizational optionality

A few weeks back I watched a talk by Indy Johar at the Long Now Foundation and then one by Kate Crawford (links below). I thought I’d do a members’ issue featuring those two but haven’t gotten around to it, like so many ideas. Thankfully, they’ve published a new shorter piece by Johar, on civilizational optionality, which is basically a more focused excerpt of his longer talk. I often talk about inventing futures, whatever form that endeavour takes; here Johar is talking about preserving futures.

He explains that civilisation’s most pressing strategic task is not solving discrete crises but preserving what he calls “civilizational optionality,” the degrees of freedom that allow societies to adapt across multiple possible futures, rather than narrowing into a single brittle trajectory. The framing distinguishes optionality from longtermism: longtermism prioritises civilisational continuity, which can be achieved with a thin slice of humanity intact; optionality requires that plural developmental pathways remain open. The urgency comes from what Johar describes as a forced “recoupling”: externalities like climate breakdown, soil loss, and hydrological disruption, long treated as separate from the economic operating model, are now feeding back as active destabilisers across food, energy, legitimacy, and security systems simultaneously. I quite like this, especially in opposition to the much dreamt about decoupling of GDP and energy consumption/carbon emissions. The essay works through ten such logics, each one a different vector through which the recoupling is already manifesting.

The practical proposal is to fund not solutions but the conditions under which solutions remain possible: stable food and water systems, legitimate institutions, and governance architectures capable of holding long-horizon commitments. He calls these “exstitutional wrappers,” coordinating structures that operate outside existing institutions, since no single institution can hold this kind of multi-generational responsibility. The essay closes with a self-correcting clause: any such structure must be designed to revise its own assumptions, or it risks becoming the thing it was built to prevent—a system that forecloses futures in the name of protecting them.

More → His talk I mentioned above, Civilizational Optioneering, and Kate Crawford’s, Mapping Empires.

Through humans, machines, biological systems, and their entanglements, this stored [fossil fuel] energy has been mobilized into a planetary-scale cognitive system: distributed sensing, modeling, and acting capacities across biological, technological, and institutional substrates. The first photographs of the whole Earth were images of that system perceiving itself. […]

What we are now entering is a phase of forced recoupling: the externalities are no longer “outside” the operating model. They are re-entering the system as active constraints and destabilizing feedbacks. Carbon becomes heat stress and food volatility. Plastics become endocrine risk. Biodiversity loss becomes disease dynamics. Hydrological disruption becomes energy instability. […]

Even where the most critical typologies of optionality collapse are visible — glaciers, heat, soil, hydrology, fertility, legitimacy — the places most exposed are often not where effective prevention or optionality expansion can be financed, governed, and executed at speed and scale. […]

In a world where wealth is systemically entangled with the continuity of civilization itself, optionality becomes a foundational asset class: not one among many, but the first-class asset. Without allocation to optionality, wealth becomes terminally exposed to collapse as systems spiral toward zero-sum dynamics and mutually assured destruction.

The social edge of intelligence

I’ve often written and spoken about the gap between (somewhat) disappearing junior roles and senior ones. How do you become senior if the whole system training you to that level collapses? We’ve also looked to AI as collective knowledge, as a synthesis of what humanity knows (yes, a subset, for sure). We’ve also looked into model collapse—which I prefer calling Habsburg AI, as per Jathan Sadowski. Here Bright Simons brings together these concepts by arguing that the intelligence embedded in AI systems is not primarily a function of architecture or compute; it is a function of the social complexity of the civilisations whose language those models absorbed—the argumentation, institutional friction, and collaborative problem-solving that left linguistic traces worth learning from. As organisations offload cognitive work, eliminate junior roles, and reduce the messy human-to-human interaction that produces rich language, that “substrate” degrades. Simons calls this the Social Edge Paradox: the technology’s own deployment undermines the conditions that made it possible, endangering its continued progress, and our own.

The mechanism operates at civilisational scale. Human collaboration, argumentation, institutional friction—the social processes that produce expertise and contested knowledge—generated the rich linguistic record that made training useful models possible. AI deployment that substitutes for that interaction, rather than scaffolding it, doesn’t just deskill individuals; it progressively impoverishes the social substrate from which future training data draws. The linguistic traces of genuine social reasoning thin out, and models trained on what remains inherit statistical averaging rather than the argumentative complexity of civilisation. The studies Simons cites show this operating at the organisational level already: consultants using GPT-4 performed 19% worse on tasks requiring contextual judgment; early-career employment in AI-exposed fields has dropped 13% since 2022. These are early readings of a longer civilisational process.

Two directions Simons doesn’t address that I’d throw in there. The first is AI as sparring partner: human-to-human dialogue has qualities that LLM interaction cannot replicate, but working with an LLM rather than alone does preserve something, some friction, some counterpoint. Whether that is enough to meaningfully slow the substrate degradation remains to be seen, but to my mind it’s a counter force. The second is synthetic data, which some researchers position as a path around the training data wall. Perhaps his argument absorbs it: synthetic data addresses quantity, not the social complexity of what gets generated. More tokens of statistically averaged output do not reconstitute the rich disagreement that fed the original models.

The Social Edge is more than a metaphor. It is the literal boundary between what AI can do well and what it will keep struggling with due to fundamental internal contradictions. Furthermore, the framework asks us all to pay attention to how the very investment thesis behind AI also contains the seeds of its own failure. And it reminds leaders that AI’s frontier today is set by the richness of the social world that produced the data it learned from. […]

The Social Edge Framework says: yes, scaling matters, architecture matters, and compute matters. But none of these will continue to deliver if the social substrate—the complex, argumentative, institutionally diverse, perspectivally rich fabric of human interaction—is allowed to thin. And thinning is very possible. […]

Language is often mistaken as an information pipe, but it is really a social coordination technology. […]

Getting intelligent minds to sync around an issue and work towards a common cause has always been the hallmark of human mental effort, whether it is raising giant pyramids or landing on the moon. A complex vision must radiate into the hive-mind to generate an interconnected consciousness that takes us from the solitary genius of apples falling on scientific heads to finally defying earthly gravity en route to Mars.


§ The term “AGI” is almost useless at this point. Some weeks ago I shared Robin Sloan’s AGI is here (and I feel fine) and got some pushback. Although the titles seem to oppose each other, I think this piece by Helen Toner is an excellent “follow up” and makes clearer everything Robin was saying. “But that’s changed. Today’s best AI systems are good enough that they’re now inside the fuzzy conceptual cloud of ‘AGI-ish’: that is, they’ve surpassed some people’s definitions of AGI, while falling well short of others’. As a result, talking about ‘AGI’ is no longer a helpful way to gesture in a rough direction—instead, it’s likely to make some people think you mean one thing, and others imagine something totally different.”


§ Frugal AI helps countries priced out of Big Tech. This might be my new favourite LLM term, “Frugal AI.” “This is perhaps the most important dimension of frugal AI, […] it is about building leaner, more efficient systems from the ground up. By design, the systems use less compute, less memory, and less energy, which directly translates into a smaller carbon footprint.”


Futures, Fictions & Fabulations
  • Tobias Revell: Design futures in infrastructure. “Introduces Arup Foresight’s approach to helping organisations think and act more effectively in conditions of deep uncertainty. The talk frames futures thinking as a critical, designerly practice that goes beyond prediction, using scenarios, worldbuilding and speculative design to surface assumptions, stress test decisions and make long term change tangible today.”
  • The Protopian Prize. Incredible list of judges and advisors. “A fiction contest inviting you to share your vision of people working toward liberatory futures, meeting obstacles, and making real change. ‘Protopian’—a word coined by Kevin Kelly, one of our contest’s judges—means an achievable, optimistic future characterized by continuous, incremental progress rather than revolutionary leaps or a static, perfect state. Protopian stories imagine a future that is neither flawless nor catastrophic, but instead workably better than today. It’s about plausible progress rather than perfection or collapse.”
Algorithms, Automations & Augmentations
  • The AI revolution in math has arrived. “While no single new result is a world-beating breakthrough, some of them are on par with discoveries published in professional mathematical journals. In some cases, algorithms formulate a conjecture, prove it, and verify the proof with minimal human intervention. In others, extensive chats with large language models such as ChatGPT, Claude, or Gemini lead to novel proof strategies.”
  • The 2026 AI Index Report. “The AI Index offers one of the most comprehensive, data-driven views of artificial intelligence. Recognized as a trusted resource by global media, governments, and leading companies, the AI Index equips policymakers, business leaders, and the public with rigorous, objective insights into AI’s technical progress, economic influence, and societal impact.” They also published Inside the AI Index: 12 Takeaways from the 2026 Report
  • India’s frugal AI startups Sarvam and Krutrim build sovereign models. Not the same piece as the one above the link blocks! “The new book “LeanSpark” examines frugal innovation in India, including how startups Sarvam AI and Krutrim overcome cost and infrastructure constraints.”
Built, Biosphere & Breakthroughs

Asides
  • Barry Webb documents a marvelous, macro array of colorful slime molds. “This fungi-like form is one of hundreds of kinds of slime mold, and it typically only reaches a height of about two centimeters at the most. Thanks to Webb’s macro photos, we glimpse a phenomenally beautiful world up-close that is otherwise virtually invisible.”
  • Pejac transforms basic graph paper into detailed, trompe-l’œil tableaux. “[The artist] often turns to the precise geometry of gridded sketchbooks in order to challenge perception and think instead about depth and movement.” (Via Kottke.)
  • Underwater volcano eruption. “We went on an expedition to capture Kavachi, one of the world’s most active underwater volcanoes, erupting beneath the Pacific Ocean in the remote Solomon Islands. This short cinematic piece showcases selected field cinematography captured during an expedition to the Solomon Islands. Steam explosions, sulfur-rich plumes, and superheated seawater collide in one of the most extreme environments on Earth.” (Also via Kottke.)
A peculiar mixture of omniscience and impotence ⊗ Electrostates & physics
Newsletter
No.398 — A scathing LLM YouTuber ⊗ “We ran a business LARP” ⊗ Making Sense of Slow AI ⊗ Solar power and chili peppers ⊗ Skeleton of Three Musketeers hero d’Artagnan
“You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’”
—Edgar Mitchell, Apollo 14 astronaut (hat tip)

A peculiar mixture of omniscience and impotence

Will Self uses the “Pentagon Pizza Index”—the folk observation that sudden surges in pizza deliveries near the Pentagon or CIA headquarters often precede major geopolitical events—as a lens for examining how historical perception has changed in the digital age (archived here). His argument runs through Baudrillard’s 1991 essay on the Gulf War, which claimed modern warfare had lost peripeteia, the decisive reversal that gives a narrative its tragic shape. Self suggests the world of 2026 complicates that diagnosis in two ways: the orbital asymmetry Baudrillard described has mutated into something distributed across cyber intrusions, sanctions, drone strikes, and algorithmic propaganda; and peripeteia itself has not vanished but fractured into innumerable tiny reversals, each too small to constitute history alone, yet collectively producing outcomes no one fully intended. We lowly citizens respond by scanning trivial data for hidden meaning, treating delivery patterns, Google Maps traffic, and shipping insurance rates as the auguries of a new geopolitical astrology.

The paradox Self identifies is that digital networks have democratised the tools of intelligence analysis and, with them, manufactured a feeling of involvement that has no matching power behind it. Everyone can track satellite imagery and monitor open-source signals, yet the systems producing those signals operate at speeds and scales beyond any individual’s influence. Participation becomes spectatorship disguised as agency; prediction markets transform uncertainty into tradable assets while substituting speculation for action. The result is what Self calls “orbital history,” events that descend from technological systems encircling the planet rather than emerging from human communities, leaving only “faint fingerprints on the mundane surfaces of everyday commerce.” We know more about the signals of history than any previous generation, and can act on almost none of it. The pizza box becomes a modern augur’s entrails: the oracle tells us something is happening somewhere beyond our reach, but confirms there’s nothing we can do about it.

More → Less central to the piece, there’s a bit on prediction markets, which also makes me think that in the midst of what he’s explaining, we also have the monetisation of everything, including betting on when conflicts start and missiles rain down. Bleak. Self also uses the expression “the swarm” a couple of times, as a descriptor of our times, which reminded me of The Churn in The Expanse, but that might be because I’m just finishing up a rewatch.

And because these events occur within the digital medium itself – within data flows, algorithms, and predictive markets – they are experienced primarily as signals: a spike in pizza orders, a sudden surge in shipping insurance premiums, a viral rumor about a dead prime minister. Each signal hints at catastrophe while simultaneously reducing it to a pattern in the data. […]

In the swarm, however, there is no singular protagonist. Decisions are dispersed across institutions, algorithms, and networks of actors. The consequences of those decisions propagate through systems too complex for any individual to control. […]

Events no longer emerge from the ground of human communities but descend from technological systems that encircle the planet: satellites, financial networks, algorithmic media. These systems operate at speeds and scales beyond ordinary human perception. Their effects appear sudden and inexplicable, like meteorological disturbances in the political atmosphere.

Electrostates & physics

A wide-ranging conversation between Paul Krugman and David Roberts, recorded against the background of an idiot-created oil crisis. Roberts’s central frame is that the US has chosen, more or less deliberately, to be the last petrostate, while China is building itself into the first electrostate—dominating batteries, EVs, critical minerals, and the full physical stack of an electrified economy. Emerging economies weighing a 50-year infrastructure bet are not going to choose LNG (Liquefied Natural Gas) dependency when the learning curves on solar and storage keep doing what they have been doing. The Iran war, Roberts argues, is the most apt demonstration anyone could have scripted of what fossil fuel dependency actually costs.

Some of their discussion we’ve covered here before with other pieces, but the efficiency point Roberts makes is one I don’t think I’ve seen before, and he flags it as one of the most important things to understand about the transition: electrifying the economy delivers a 50 to 60% efficiency gain before anyone changes their behaviour or uses less of anything. The number comes from Saul Griffith, and it follows directly from the physics—combustion engines waste most of their input energy as heat, losing something like 60 to 66% of what goes in; electric motors convert around 80% into motion. “Same cold beer, same warm showers,” just electrified: that alone is the largest efficiency gain humanity has ever had access to. Roberts also argues that the case for clean energy is overdetermined, justified independently by climate, by particulate pollution, by AI competitiveness, by energy security, and that leaning on whichever rationale lands best in a given room is not a concession, it’s strategy.
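The arithmetic behind that 50 to 60% figure is easy to check. Assuming combustion drivetrains at roughly 34-40% efficiency and electric ones around 80% (the figures quoted above, not my own measurements), the saving in primary energy for the same useful output falls right in that band:

```python
def primary_energy_saving(ice_eff: float, ev_eff: float) -> float:
    """Fraction of primary energy saved delivering the same useful work.

    The same useful energy E needs E/ice_eff of fuel input for a combustion
    drivetrain and E/ev_eff of electrical input for an electric one.
    """
    return 1 - ice_eff / ev_eff

# Combustion at 34-40% efficient (i.e. losing 60-66% as heat) vs ~80% electric:
for ice in (0.34, 0.40):
    print(f"ICE at {ice:.0%}: {primary_energy_saving(ice, 0.80):.0%} less input energy")
# Savings come out at roughly 50-58%, in line with the 50 to 60% gain cited.
```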

On other topics, such as density and windmills in the US, he argues that NIMBYism among wealthy liberals is no longer defensible: you cannot oppose wind farms or new housing density and call yourself a progressive, and he wants social sanction attached to trying. The right-wing effort to flood local planning meetings with anti-renewables propaganda is well-funded and effective; the asymmetry in how communities treat renewable and fossil fuel infrastructure is glaring. On autonomous vehicles, he’s worried that cheaper, more pleasant car travel means more car travel, and that we will need to push for shared use rather than individual ownership.

Roberts’s overall tone is somewhat optimistic. In the end, physics wins, and the world will probably figure the transition out even if the US chooses to be the stubborn holdout. He closes by telling young people not to go into finance or spend their lives getting burritos delivered five seconds faster, but instead to do something that matters.

More → On solar and energy, he’s broadly aligned with what Azeem Azhar was saying in The case for radical solar optimism, which I shared in No.395.

I’m interested in the social and political and economic implications and basically, the implications are we need to decarbonize as fast as possible. That’s all I need to know from climate change science. So let’s get on with it, let’s do that, let’s decarbonize. I don’t dwell on the science itself anymore. […]

If the learning curve just keeps doing what it’s doing for ten more years, clean energy is going to be wildly, trivially cheap. I think we’re not that far from a situation where we’re going to have during sunny days, a surge of solar energy so big that the new policy problem is going to be, “what do we do with all this energy?” […]

We could theoretically be the first species ever to be in a state of energy abundance, of having all the energy we want or need, we have no idea what that could lead to. […]

There’s a reason China has grabbed and dominated the physical substrate of the electrotech economy. The minerals, the batteries, the magnets, all that stuff. We’re trying to dominate AI with just the top froth and they are trying to dominate AI by owning the whole stack all the way down to the electrons, you need clean electrons to run your AI. So they are building with that in mind.


§ Mo Bitar is my absolute favourite YouTuber of the last couple of weeks; I had to share. Think The Daily Show but only about LLMs, more scathing, and from a coder who knows his stuff. Hilarious and on point.


“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Futures, Fictions & Fabulations
  • We ran a business LARP. People went deep. Scott Smith’s write-up of a workshop Changeist ran. “It was not a lecture about futures methods. It was not a strategy session. It was something closer to a case investigation: participants arrived as independent panel members convened to review six fictional regional organisations, looking back seven years from 2035.” And Paul Graham Raven also interviewed Smith and Susan Cox-Smith about the same LARPy workshop.
  • FF x RCA March 2026. “On the 26th of March, [Future Friends] held a special evening in collaboration with RCA Design Futures. 11 speakers graced the stage—all varied and wonderfully vibrant. Sarah Owen sparked the “333” format: 3 slides, 3 signals, 3 minutes. Here is your invitation to peek into their thinking and get to know them better.”
  • Science fiction and innovation: A literature analysis on science fiction-related methods mapped into the innovation process. “This study investigates the literature in search of science fiction-related methods able to support the development of innovations. With around 60.000 publications considered based on a high-level search, a refined search combined with a manual search led to 17 science fiction-related methods to support the development of innovations.”
Algorithms, Automations & Augmentations
  • The enterprise AI playbook: Lessons from 51 successful developments. “We set out to build something empirical. To document real-world use cases that have actually delivered business value. To map the practices of organizations that are not just experimenting with AI but successfully deploying it at scale. We wanted depth.”
  • How to make AI serve the public. “A democratic agenda for AI governance. The problem is structural. The solution has to be too. No small group deserves the power to direct a technology this consequential. Not even the best-intentioned CEOs, nation-states, or AIs. Anything short of democratic governance is a priesthood or oligarchy.”
  • Making Sense of Slow AI: A zine about slow AI imaginaries by AIxDESIGN & internet teapot. “What if AI didn’t run on Silicon Valley logic? Making Sense of Slow AI, compiles 40 pages of eclectic stories on Small, Esoteric, and Ancestral AI – inviting you to think small, make it magical, and plan for the past.”
Built, Biosphere & Breakthroughs
  • Solar power in Africa is heating up — thanks in part to chili peppers. “Another challenge was that the Malawian government pays JCM in Malawian kwachas, which is volatile compared to other currencies and can devalue quickly. JCM Power’s solution was to invest the kwachas into community farming of African bird’s eye chili peppers in and around the solar panels. These, in turn, are sold in U.S. dollars, largely to Nando’s Peri-Peri, a chain of chicken restaurants (there are locations in Canada) with a signature hot sauce.”
  • How Montana tribes are using sovereignty to restore their waterways. “After a decade of negotiations, however, one of the most significant tribal settlements in U.S. history created the 2015 Confederated Salish and Kootenai-Montana Compact Water Rights Compact. … The combination of Indigenous-led restoration, shared management structures and targeted funding may help the tribe recover the rivers and the lifeways inextricably intertwined with them.”
  • Trump administration orders dismantling of the US Forest Service. That f/cker will leave nothing standing. “The headquarters is going to Utah. Every regional office is being shuttered. The research program is being destroyed.”
Asides
  • Skeleton of Three Musketeers hero d’Artagnan may have been found. “Workers repairing a church in the Dutch city of Maastricht have discovered a skeleton that could belong to the 17th-century Gascon nobleman Charles de Batz-Castelmore – better known as d’Artagnan – whose exploits led Dumas to make him the hero of the Three Musketeers.”
  • Why so many control rooms were seafoam green. “What caught my eye as a designer, as with most industrial plants and control rooms of that time, besides the knobs, levers, and buttons, was the use of a very specific seafoam green, seen here on the reactor’s walls and in the control panel room.” In turn this caught my eye, having just assembled a Tintin Moon Rocket, which itself has a seafoam green control room. (Via Kottke)
  • The Deep Sea. A loooooooooong fascinating scroll by Neal Agarwal.
Geist in the machine ⊗ The prospect of Butlerian Jihad
Newsletter
No.397 — Ending the AI arms race ⊗ Insight and future-fit decisions ⊗ Better AI creative collaborators ⊗ Progressive Paris ⊗ The asteroid ryugu and the building blocks of life
Geist in the machine ⊗ The prospect of Butlerian Jihad

This is one of those articles that I find tougher to follow, as the author juggles multiple philosophers’ theses. It’s worth the effort though. Peter Wolfendale argues that the current AI debate recapitulates an 18th-century conflict between mechanism and romanticism. On one side, naive rationalists (Yudkowsky, Bostrom, much of Silicon Valley) assume intelligence is ultimately reducible to calculation; throw enough computing power at the problem and the gap between human and machine closes. On the other, popular romantics (Bender, Noë, many artists) insist that something about human cognition, whether it’s embodiment, meaning, or consciousness, can never be mechanised. Wolfendale finds both positions insufficient. The rationalists reduce difficult choices to optimisation problems, while the romantics bundle distinct capacities into a single vague essence.

His alternative draws on Kant and Hegel. He separates what we loosely call the “soul” into three capacities: wisdom (the metacognitive ability to reformulate problems, not just solve them), creativity (the ability to invent new rules rather than search through existing ones), and autonomy (the capacity to question and revise our own motivations). Current AI systems show glimmers of the first two but lack the third entirely. Wolfendale treats autonomy as the defining feature of personhood: not a hidden essence steering action, but the ongoing process of asking who we want to be and revising our commitments accordingly. Following Hegel he calls this Geist, spirit as self-reflective freedom.

Wolfendale doesn’t ask whether machines can have souls; he argues we should build them, and that the greater risk lies in not doing so. Machines that handle all our meaningful choices without possessing genuine autonomy would sever us from the communities of mutual recognition through which we pursue truth, beauty, and justice. A perfectly optimised servant that satisfies our preferences while leaving us unchanged is, in his phrase, “a slave so abject it masters us.” Most philosophical treatments of AI consciousness end with a verdict on possibility. Wolfendale ends with an ethical imperative: freedom is best preserved by extending it.

I can’t say I agree, unless “we” win a perfectly executed “Stieglerian Revolution” (I just made that up, see the next essay) and end up with a completely different relationship to our technology and capital. However, his argument up to that point is a worthy reflection, and pairs well with the one below and another from issue No.387. I’m talking about Anil Seth’s The mythology of conscious AI, where he argues that consciousness probably requires biological life and that silicon-based AI is unlikely to achieve it. Seth maps the biological terrain that makes consciousness hard to replicate; Wolfendale maps the philosophical terrain that makes personhood worth pursuing anyway, on entirely different grounds. Seth ends where the interesting problem begins for Wolfendale: even if machines can’t be conscious, the question of whether they can be autonomous persons, capable of self-reflective revision, remains open.

Though GenAI systems can’t usually compete with human creatives on their own, they are increasingly being used as imaginative prosthetics. This symbiosis reveals that what distinguishes human creativity is not the precise range of heuristics embedded in our perceptual systems, but our metacognitive capacity to modulate and combine them in pursuit of novelty. What makes our imaginative processes conscious is our ability to self-consciously intervene in them, deliberately making unusual choices or drawing analogies between disparate tasks. And yet metacognition is nothing on its own. If reason demands revision, new rules must come from somewhere. […]

[Hubert Dreyfus] argues that the comparative robustness of human intelligence lies in our ability to navigate the relationships between factors and determine what matters in any practical situation. He claims that this wouldn’t be possible were it not for our bodies, which shape the range of actions we can perform, and our needs, which unify our various goals and projects into a structured framework. Dreyfus argues that, without bodies and needs, machines will never match us. […]

This is the basic link between self-determination and self-justification. For Hegel, to be free isn’t simply to be oneself – it isn’t enough to play by one’s own rules. We must also be responsive to error, ensuring not just that inconsistencies in our principles and practices are resolved, but that we build frameworks to hold one another mutually accountable. […]

Delegating all our choices to mere automatons risks alienating us from our sources of meaning. If we consume only media optimised for our personal preferences, generated by AIs with no preferences of their own, then we will cease to belong to aesthetic communities in which tastes are assessed, challenged and deepened. We will no longer see ourselves and one another as even passively involved in the pursuit of beauty. Without mutual recognition in science and civic life, we might as easily be estranged from truth and right – told how to think and act by anonymous machines rather than experts we hold to account.

The prospect of Butlerian Jihad

Super piece by Liam Mullally, who uses Herbert’s Dune and the Butlerian Jihad as a lens for what he sees as a growing anti-tech “structure of feeling” (Raymond Williams’s term): the diffuse public unease about AI, enshittification, surveillance, and tech oligarchs that has not yet solidified into coherent politics. The closest thing to a political expression so far is neo-Luddism, which Mullally credits for drawing attention to technological exploitation but finds insufficient. His concern is that the impulse to reject technology wholesale smuggles in essentialist assumptions about human nature, a romantic defence of “pure” humanity against the corruption of machines. He traces this logic back to Samuel Butler’s 1863 essay Darwin Among the Machines, which framed the human-technology relationship as a zero-sum contest for supremacy, and notes that Butler’s framing was “explicitly supremacist,” written from within colonial New Zealand and structured by the same logic of domination it claimed to resist.

The alternative Mullally proposes draws on Bernard Stiegler’s concept of “originary technicity”: the idea that human subjectivity has always been constituted in part by its tools, that there is no pre-technological human to defend. If that’s right, then opposing technology as such is an “ontological confusion,” a fight against something that is already part of what we are. The real problem is not machines but the economic logic that shapes their development and deployment. Mullally is clear-eyed about this: capital does not have total command over its technologies, and understanding how they work is a precondition for contesting them. He closes by arguing that the anti-tech structure of feeling is “there for the taking,” but only if it can be redirected. The fights ahead are between capital and whatever coalition can form against it, not between humanity and machines. Technology is a terrain in that conflict; abandoning it means losing before the contest begins.

Wolfendale’s Geist in the Machine above arrived at a parallel conclusion from a different direction: where Mullally argues that rejecting technology means defending a false vision of the human, Wolfendale argues that refusing to extend autonomy to machines risks severing us from the self-reflective freedom that makes us persons in the first place. Both reject the romantic position, but for different reasons.

To the extent that neo-Luddites bring critical attention to technology, they are doing useful work. But this anti-tech sentiment frequently cohabitates with something uneasy: the treatment of technology as some abstract and impenetrable evil, and the retreat, against this, into essentialist views of the human. […]

If “humanity” is not a thing-in-itself, but historically, socially and technically mutable, then the sphere of possibility of the human and of our world becomes much broader. Our relationship to the non-human — to technology or to nature — does not need to be one of control, domination and exploitation. […]

As calls for a fight back against technology grow, the left needs to carefully consider what it is advocating for. Are we fighting the exploitation of workers, the hollowing out of culture and the destruction of the earth via technology, or are we rallying in defence of false visions of pure, a-technical humanity? […]

The anti-tech structure of feeling is there for the taking. But if it is to lead anywhere, it must be taken carefully: a fightback against technological exploitation will be found not in the complete rejection of technology, but in the short-circuiting of one kind of technology and the development of another.


§ Ending the AI arms race: why safer futures are still possible & what you can do to help. I can’t really start sharing a Nate Hagens interview or essay every week, and I’ve already mentioned him a few times recently. But I’ll still do a quick share here, for this excellent chat with Tristan Harris, in part for this bit on doom, which I’ll keep in mind. “We can see the truth, and we’re not seeing that because we’re trying to be doomers. You’re seeing that so that you can try to be honest, and it’s the deepest form of optimism to look that truth in the eye and say, and now here’s what we’re gonna do instead.”



Support Sentiers

Futures, Fictions & Fabulations
  • Futures Intelligence — Closing the gap between insight and future-fit decisions. “A new integrative capability that brings different forms of future-related insights in one connected sense-making flow to turn them into shared, decision-ready understanding”
  • 2026 AXA Foresight Report. “From dense megacities facing increasing pressures on resources and infrastructure to coastal regions where ocean dynamics reshape risks and opportunities, to agricultural areas adapting to shifting demographics and economic conditions, each territory comes with its own set of transformations. Our exploration focuses precisely on how these futures emerge and unfold differently depending on the territory.”
Algorithms, Automations & Augmentations
  • Stanford scholars train generative AI to be better creative collaborators. “The conversation around AI and art generally swings between two extremes: A flood of AI slop or the total automation of creative work. The more desirable approach may be an AI that behaves as a useful collaborator.”
  • The cognitive costs of AI. “In the space of two years, the discourse around AI and knowledge work has produced an entire family of concepts: Cognitive Offloading, Cognitive Debt, Cognitive Atrophy, Cognitive Drift, Cognitive Surrender. Each more alarming than the last. In sequence they read like an escalation. That escalation is worth examining.”
  • Local opposition is slowing AI data centers. Wall Street has noticed. “A lot of the commitments and the build-out of data centers where it’s easy has kind of been done, so you’re getting marginally more difficult. From a markets perspective, expectations might be, maybe not reset, but realigned with the fact that it’s hard to put a couple trillion dollars in the ground in a short time.”
Built, Biosphere & Breakthroughs
Asides
We’re all living in the “Mirror World” now ⊗ The AI boom is a polycrisis
Newsletter
No.396 — Tough week for Zuck ⊗ Learning in Motion ⊗ The people pushing back on AI ⊗ A wind-powered tumbleweed ⊗ Books, plants and playgrounds
We’re all living in the “Mirror World” now ⊗ The AI boom is a polycrisis

Are two Kleins better than one? In this excellent conversation between Ezra and Naomi, the answer is a resounding yes (video and transcript at the NYT). They cover a lot of ground: starting with Naomi Klein’s book Doppelganger, including the concept of the “mirror world.” The pandemic as a fork in the road, the Epstein files as a real-world echo of QAnon’s structure, diagonalism and the wellness-to-far-right pipeline, the Mamdani campaign in New York, and the question of what a more welcoming left might look like. It’s wide-ranging and sometimes meandering, as these interviews tend to be.

There are some specific sections that had me nodding in agreement more vigorously. The mirror world: when people are ejected from liberal institutions, through public shaming, through cancellation, through the mute button, they land in a parallel world with replica platforms, replica publishing, replica narratives. The liberal instinct to block and shun meant these structures grew in private, invisible to the people who created the conditions for them, until they “exploded into dominance” after the 2024 US election. Her framing of contemporary fascism as “a pathology of injured power” feels like it’s on the money, literally. These aren’t powerless people revolting. These are elites, tech oligarchs, billionaires, men exposed by #MeToo, who experienced the mildest forms of accountability and responded as if they were under siege. Marc Andreessen described basic crypto regulation as “terror.” The Epstein files reveal powerful men seeking advice on how to survive a reckoning they knew was coming. Klein reads the Trump administration as, in part, a tech revolt against AI regulation; oligarchs who were told they were gods in the 1990s and are furious that anyone might now tell them otherwise.

The spiritual dimension is quieter but, I believe, just as important. Klein argues that technocratic liberalism has become arid; it knows how to offer a tax credit but not how to speak to the feeling that something is lost in modernity. RFK Jr.’s appeal, she suggests, comes partly from his ability to talk poetically about the natural world, a register that mainstream politics has almost entirely abandoned. Climate discourse became “carbon trading,” the drabbest way to talk about something alive. The interview closes on what Klein calls “the irreplaceable”: art, the canonical version of universities, nature, human connection, the things tech oligarchs seem willing to replace with AI without pausing to ask whether anyone wants that. She sees the emerging movements (data centre resistance, neighbourism in Minneapolis, the Mamdani campaign) as rooted in a shared impulse: people cherishing where they live, learning their difficult histories, and refusing to let economic logic colonise every remaining corner of life.

A few weeks ago, I shared how Nate Hagens responded to Dario Amodei’s essay on AI risk. A key point was asking: who picked this direction for AI and the economy? Even if we make it past the Amodei risk period, why are “we” rushing in that direction? While coming at it from a different direction, the Kleins are pondering much the same thing: the pandemic showed us a glimpse of a slower world, and we must imagine and push towards one we’d get to pick, instead of being virused into it.

In doppelganger literature and film, the storyline is usually you’ve got a protagonist and then somebody comes along who’s a double of them, and they’re so good at performing you, so much better at performing you, that they eventually overtake you. At the end of Dostoevsky’s The Double, the protagonist gets carted away and sent to an asylum while the double just takes over. I think that’s kind of happened in our culture. The doppelganger is at the wheel. […]

Mark Andreessen sees the most mild accountability as an existential attack. The way he talks about basic regulation for crypto or AI as terror. These are men who came up in the 1990s, when Mark Andreessen was on a throne on the cover of Time magazine. I think that may have gone to their heads. I think we did that as a culture just because people were rich. And I think they’re angry that they no longer get treated like gods. And that feels like being terrorised to Mark Andreessen. […]

These are technologies that exist because they fed off of the accumulation of all of human knowledge and output. I believe we own them already. […]

There is a fundamental failure to appreciate that which is irreplaceable. And that failure seems to me to be very connected with the willingness to just replace art with AI, replace universities with AI. Shouldn’t we have a conversation about whether or not we want to get rid of that whole concept?

The AI boom is a polycrisis

In this piece, Matteo Wong and Charlie Warzel explain how the AI industry has become a single point of failure for the global economy. Trillions in investment have created a supply chain that runs through a handful of chokepoints: advanced chips made by two South Korean and one Taiwanese company, fuelled by Persian Gulf energy and helium, financed in part by Gulf petrostates. The war in Iran has functionally closed the Strait of Hormuz, sending oil, gas, and helium prices surging, and earlier this month Iran bombed Amazon data centres in the UAE. Hyperscalers collectively spent nearly $700 billion on AI last year, much of it financed through private-equity firms that are themselves leveraged against institutional investors, pensions, and insurance funds.

The underlying business model compounds the risk. Chips depreciate rapidly as new generations arrive, so the physical assets backing all this debt lose value on a schedule. Token pricing, the main revenue mechanism, is deflationary; as models improve, each unit of output costs less, creating what one analyst calls “a death spiral to zero.” Even the optimistic scenario, where AI revenues keep growing, implies years before profitability and millions of job losses along the way. The authors argue that speed has been prioritised over supply-chain redundancy, energy independence, and financial resilience, while the administration that encouraged the “let it rip” ethos has simultaneously destabilised the geopolitical conditions the industry depends on.
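The squeeze described above can be sketched with toy numbers: hardware bought with debt loses value on a fixed schedule while per-token revenue deflates as models improve. Everything here is invented for illustration; the rates are hypothetical, not figures from the article.

```python
# Toy model of the dynamic described: depreciating collateral,
# deflating unit revenue. All numbers are made up.

asset_value = 100.0        # initial GPU fleet value (arbitrary units)
depreciation_rate = 0.30   # value lost per year as new chip generations ship
price_per_token = 1.0      # revenue per unit of output
price_deflation = 0.40     # yearly drop in per-token price as models improve
tokens_served = 100.0      # output volume, held flat for simplicity

for year in range(1, 4):
    asset_value *= (1 - depreciation_rate)
    price_per_token *= (1 - price_deflation)
    revenue = tokens_served * price_per_token
    print(f"year {year}: assets {asset_value:.1f}, revenue {revenue:.1f}")

# Unless volume grows faster than prices fall, revenue shrinks
# while the collateral backing the debt shrinks along with it.
```

In this toy run both curves decay geometrically; the only escape is output volume growing faster than unit prices fall, which is exactly the bet the optimistic scenario rests on.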

Much as in the piece above, or the Hagens response I mentioned, or Shapiro’s The Next Great Transformation shared another week earlier, it’s one more perspective from which we can see how much the economy is not going in a direction that serves society’s purpose. In large part because society doesn’t really get to choose right now. “As Polanyi put it in his most succinct formulation, ‘instead of economy being embedded in social relations, social relations are embedded in the economic system.’” We need to flip this and re-orient: planet, life, humanity, society, economy. And there’s probably another few things to insert in there before the last one.

More → I found the piece in a LinkedIn post by Matthieu Dugal [fr], where he referred to Olivier Hamant’s robustness [pdf]; you should check him out. He’s not very present in English, so I don’t think I’ve written about him here, although I did in French here, and it’s a fascinating framework. But I’m also thinking about brittleness, as in Jamais Cascio’s BANI framework. In either case, the situation depicted in the featured piece is very worrying.

The fear is that too much money is coming in too fast and that generative-AI companies still have not offered anything close to a viable business model. If growth were to stall or the technology were to be seen as failing to deliver on its promises, the bubble might burst, triggering a chain reaction across the financial system. Everyone—big banks, private-equity firms, people who have no idea what’s mixed into their 401(k)—would be hit by the AI crash. […]

The hyperscalers are spending far more, but investors have started to notice that they are not generating anything near the revenue they need to. The data-center boom’s top players—Google, Meta, Microsoft, Amazon, Nvidia, and Oracle—have all lost 8 to 27 percent of their value since the start of the year, making them a huge drag on the overall stock market. […]

At every step of the way, AI firms have appeared to prioritize speed above the physical security of data centers, supply-chain redundancy, energy efficiency and independence, political stability, even financial returns. And in that quest for unbridled growth, the AI industry has wrested ungodly amounts of capital from investors all looking for the next big thing, ensnaring the entire economy.


§ Tough week for Zuck. Ain’t that just a damn shame? LoL. Meta and YouTube were found negligent in a landmark social media addiction case, and Meta was hit with a $375M verdict in a New Mexico child safety case.


📧 If you love reading newsletters, you can find more at CrossPromo Club. When you subscribe to one, they swing more recommendations my way, so you help me grow Sentiers.

Futures, Fictions & Fabulations
  • Learning in Motion. “What is mobile? And how will we learn? These two guiding questions served us on our journey into the future. First, we imagined a training program that would have participants fully mobile, traveling in a bus along a designated route. Before making this a reality, we need to unpack possible futures regarding how learning and mobility will unfold and intersect.”
  • The Pocket Box™. “Gunnar Anderson’s story explores a mysterious new discovery that breaks the bounds of physics as we once understood them. But quickly, scientific awe morphs into commercial prospecting—with little regulation and horrifying consequences.”
  • In 2035, nothing is natural any more. And that’s good news. “In 2035, the most trusted products are not the ones grown in undisturbed soil. They are the ones you can trace, verify, edit, and regenerate.”
Algorithms, Automations & Augmentations
Built, Biosphere & Breakthroughs
Asides
  • Books, plants and playgrounds: Montreal creates a place to come together. I have, I’m ashamed to admit, not visited yet. “But this is a public place: Sanaaq Centre in downtown Montreal, which upends the conventions of North American public space. Rather than offering a single function, the 57,000-square-foot space on downtown’s western edge contains multitudes. It combines a public library, a black-box theatre, a media lab, a social services hub, urban agriculture and the café under one roof.”
  • Dani Guindo’s dramatic aerial photos reveal the ghostly outline of an Icelandic glacier. “His latest series, Terminus, captures a glacier’s many rivulets amid a rocky landscape, along with a ghostly, rounded outline revealing evidence of the glacier’s earlier phases.”
  • The Curious 100 2026. “A celebration of one hundred courageous leaders and creative minds across the US who are harnessing the transformative power of curiosity to address today’s most pressing problems and shape the cultural conversations that define our moment—whether it’s redesigning democracy to include everyone, advocating for school-supported agriculture, or editing groundbreaking music videos.”
Habermas and his coffeehouses ⊗ The case for radical solar optimism
Newsletter
No.395 — Possibilities literacy ⊗ Proof ⊗ Energy falling below $100 ⊗ Roots and the meaning of life
Habermas and his coffeehouses ⊗ The case for radical solar optimism

The German philosopher Jürgen Habermas passed away last week, so his name was top of mind, and I always jump at the chance to think about cafés, coffeeshops, salons, and the like. Double that if “scenius” is mentioned. So I was drawn to this post by Jeff Jarvis, where he argues that Habermas’s famous theory of the coffeehouse as the birthplace of rational, civil public discourse was based on an idealisation that never matched reality. Habermas drew heavily on The Spectator and The Tatler, publications that were actively shaping the culture they claimed to describe. The actual coffeehouses were chaotic, smelly, full of rumours and sedition. Coffeehouses and publishers had a “frenemies” relationship: proprietors resented the expense of newspapers but depended on them for content, while publishers relied on coffeehouses for distribution and news-gathering. Posted house rules from 1674 prohibited swearing, quarrels, and wagering: evidence that all three were common enough to require management. (A side note to flag the very Eurocentric framing: such settings were found all around the world, around coffee and tea, way before either drink made it to Europe.)

I preferred the more historical parts, but the piece’s central point is that the internet-as-degraded-public-sphere framing rests on a comparison to something that never existed. Jarvis draws on Nancy Fraser’s feminist critique: Habermas’s public sphere excluded women, the poor, and the marginalised, a limitation he himself came to acknowledge over the course of his career. He also brings in McGill’s Making Publics Project, which challenges the idea of a single public sphere altogether. Publics formed around books, theatres, ballads, and languages, creating what Benedict Anderson called “imagined communities,” not through rational coffeehouse deliberation but through shared cultural forms.

The coffeehouses-and-publishers tension maps onto today’s platform-media dynamics with uncomfortable precision: mutual dependence, mutual resentment, and a shared inability to control the discourse they generate. Regina Rini’s framing is useful here: she characterises the ongoing negotiation of online norms as a tension between those seeking “Marginal Protection” and “Status Quo Warriors,” which describes the coffeehouse dynamic just as well. Jarvis’s conclusion, leaning on James Carey, is that republics require “cacophonous conversation.” The coffeehouse was always messy, and reframing online discourse as genuine public formation—chaotic, imperfect, but real—is more honest than mourning a fall from Habermas’s impossible standard. But really, first and foremost, I just enjoyed a nice historical look at coffeehouses.

🤖 Note: I don’t know much about Habermas’ writing, but according to Claude: “Jarvis argues against a somewhat simplified version of Habermas, one that he himself partially outgrew. He acknowledged the exclusionary nature of the bourgeois public sphere and engaged with Fraser’s critique over the course of his career. His model was partly normative (what public discourse should aspire to), not purely descriptive.”

Nancy Fraser made a compelling feminist argument against Habermas’ presumption of a public sphere there: “We can no longer assume that the bourgeois conception of the public sphere was simply an unrealized utopian ideal; it was also a masculinist ideological notion that functioned to legitimate an emergent form of class rule. … In short, is the idea of the public sphere an instrument of domination or a utopian ideal?” […]

It’s not just that the old rules may not apply in new environments. It’s also that old rules were imposed by the powerful — white, male, and privileged — upon other sectors of society — among them in America, Black, Latino, LGBTQ+, disabled, immigrant, poor — who did not have seats at the table where standards were set for all. Now the formerly disenfranchised have an opportunity to seek new rules — and those who set the old rules resent their intrusion; when their old ways are criticized, they cry that they have been “canceled.” […]

Print had been around for two centuries by the time coffeehouses arrived, but newspapers were new and were too expensive to be bought by commoners. Coffeehouses changed that: “The coffeehouse was the place to read broadsides, pamphlets, and periodicals,” said Klein. “As a specifically discursive institution, the coffeehouse should be viewed in the context of the history of discourse and communicative practices in society.” […]

Cowan quoted English lawyer Roger North fretting that “not only sedition and treason, but atheism, heresy, and blasphemy are publicly taught in diverse of the celebrated coffee-houses.”

The case for radical solar optimism

When it comes to solar, I always have in mind Deb Chachra’s view on abundant energy and what we might be able to do with it. That comes from having read her thinking for years, including her excellent book How Infrastructure Works. In this piece it’s actually Azeem Azhar making the case for radical solar optimism, but there’s a good deal of overlap between the two. Azhar’s core argument rests on Wright’s Law: every doubling of cumulative solar production has cut module prices by roughly a quarter, a pattern that has held for nearly five decades. Forecasters consistently underestimated this; the IEA’s projections were off by a factor of seven. The argument is a manufacturing learning curve, the same kind of dynamic that drove semiconductors. And like semiconductors, each cost threshold unlocks new markets that fund the next doubling.
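To make Wright’s Law concrete, here’s a minimal sketch of the arithmetic, assuming the roughly 25% price drop per doubling described above (the function name and round numbers are mine, for illustration only):

```python
import math

def wright_price(p0, doublings, learning_rate=0.25):
    """Module price after a number of doublings of cumulative production,
    assuming each doubling cuts the price by `learning_rate`."""
    return p0 * (1 - learning_rate) ** doublings

# Two doublings take a $100 module to $56.25.
print(wright_price(100, 2))

# Doublings needed for solar to grow from 9.5% of global electricity
# to 100%, if demand stayed fixed: the "three-and-a-bit" figure Azhar cites.
print(round(math.log2(100 / 9.5), 1))  # ~3.4
```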

The piece gets more interesting once it moves past grid electricity, which accounts for only a fraction of total energy use. Azhar walks through what each further cost reduction makes viable. Cheap green hydrogen undercuts fossil-derived hydrogen, which opens the door to green steel and synthetic fertiliser. Cheaper still, and desalination becomes affordable for agriculture, not just wealthy coastal cities. Direct air capture, currently prohibitively expensive, starts approaching workable costs. Synthetic jet fuel, chemically identical to kerosene but carbon-neutral, remains expensive but the gap narrows with each doubling, and carbon pricing closes it further. Even the remediation of PFAS “forever chemicals,” where the core obstacle is the sheer energy needed to break carbon-fluorine bonds, becomes tractable. Azhar also spends time on the bear cases—intermittency, land use, the fact that panels are now cheap but the rest of the system isn’t—and handles them seriously. The piece is long and detailed but the central argument is simple: cheap energy doesn’t just replace fossil fuels, it makes previously impossible things merely expensive, and previously expensive things cheap.

Caveat → I might have to make up some kind of personal scale I can use here, from Amish to techno-solutionist. Azhar is definitely quite a bit closer to the right side of that hypothetical scale than I am. And probably than most of you. So maybe dial back the article a bit as you read or as you reflect after the fact, and probably dial back another couple of notches when reading him on AI. Contrary to many, he’s not going crazy though; the basics are solid and directionally correct, just a tad over-enthusiastic, imho.

Further reading → The bit about “every material object is atoms in a configuration” reminded me of the Wil McCarthy novel Bloom (good read) and its ladderdown tech: “A system of converting elements into other elements (‘nuclear transmutation’) further down the Periodic Table. As a result of using this system, the inhabitants of the Immunity have a plethora of high-numbered elements lying around. For example, they have so much gold they use it to weigh down their shoes and pave the streets.”

If that sounds familiar, it’s because you’ve lived it. This is the exact flywheel that built the computer industry over the past fifty years. In 1971, Intel’s first microprocessor had 2,300 transistors. Today’s chips hold 100 billion because a learning curve cut the cost per computation by a factor of 10 billion. That curve did not produce a single product. It produced an industry: mainframes gave way to PCs, PCs to smartphones, smartphones to cloud computing, cloud computing to AI. Each market was unimaginable at the price point of the previous one. Each one funded the next doubling. […]

Solar now generates 9.5% of global electricity, from 1% a decade ago. If the market were fixed, if electricity demand stayed where it is, there would be only three-and-a-bit doublings left before solar hit 100%. But the market is not fixed. Every time the price falls far enough, the ceiling moves and a use that was previously uneconomical becomes viable. […]

But the next generation of DAC is electrochemical: Verdox’s electroswing adsorption uses electrons directly to capture CO₂, with no thermal regeneration step, and ocean-based approaches like Equatic use electrolysis to move seawater chemistry at scale. […]

By printing perovskite on top of the silicon, you essentially create a device that captures different parts of the solar spectrum in each layer – the perovskite absorbs higher-energy light while letting lower-energy light pass through to the silicon beneath. This opens up more of the spectrum for efficient conversion into electrical energy. This increased the theoretical efficiency limit to ~45%. […]

We use more than 30 million hectares of land to grow biofuels. That’s about the size of Poland; it powers just 4% of land transport. If we were to cover the same area in solar panels, we would meet the global electricity demand today. […]

Fossil fuel land use is cumulative and destructive; solar is permanent infrastructure that can improve the land it sits on.


📧 If you love reading newsletters, you can find more at CrossPromo Club. When you subscribe to one, they swing more recommendations my way, so you help me grow Sentiers.

Futures, Fictions & Fabulations
  • This section is often the one I have most trouble filling in. I’m trying to keep it “meta” or practice-focused, i.e. about foresight (whatever you want to call it) and futures itself, not about signals, as well as some imagination- and speculation-focused stuff. This week I have only one and ran out of time looking for more, so I’m using this “hole” to ask: do you find this link block useful? Why? Why not? Where could it go? Hit reply if you have thoughts!
  • Possibilities literacy: empowering learners for an uncertain world. “Introduces Possibilities Literacy as a new meta-framework for engaging with the possible. Defines five dimensions: Perception, Crafting, Engagement, Stewardship, Mindsets. Shows how constraints, alternatives, and agency structure possibility-making.” (Via Weekly Wandering.)
Algorithms, Automations & Augmentations
  • Proof. This is more tool than news, but even if you don’t end up using it, it’s a type of application/tool to pay attention to, it’s “an online document editor built for agents and humans to collaborate. Fast, free, and no login required.” To put it simply, you invite your favourite LLM as a collaborator and work on a document together. Very multiplayer à la Matt Webb.
  • Pokémon Go players unknowingly trained delivery robots with 30 billion images. I’m surprised anyone is surprised. “In other words, all that time users spent wandering around playing Pokémon Go will now help determine how well a courier robot can deliver your take out. It’s a stark example of how crowdsourced data, seemingly collected for one purpose, can be quietly repurposed years later for something quite different.”
  • Will AI make the workplace more human? “Gensler’s Global Workplace Survey 2026 reveals how AI is making the physical workplace more essential — not less.”
Built, Biosphere & Breakthroughs
  • Energy falling below $100 shows the world a way out. “Lithium-ion battery packs have slumped permanently beneath $100, with grid electricity from four-hour batteries costing $78 per megawatt-hour at the end of last year. The cost of lithium-iron-phosphate batteries used in cars has dropped below $100, to $81 per kilowatt-hour, making electric vehicles more affordable to buy.”
  • This Paris tour reveals how Hidalgo made the city greener, more car-free. Lots of pictures and graphics about this fantastic transformation! “Visitors will discover that it’s a dramatically different place than a decade ago: lines of bikes and throngs of pedestrians where lanes were once jammed with cars, greenery encroaching on former pavement, summer swimming in the once-grimy Seine river — and a corresponding drop in air and water pollution.”
  • The Scottish island that bought itself. “To this day, the trust is run by three entities: The Isle of Eigg Residents Association (representing island residents), the Highland Council (representing the local government), and the Scottish Wildlife Trust (which ensures long-term environmental stewardship of the island). Board members are appointed by their communities and serve staggered three-year terms, ensuring the island runs in the interest of all three stakeholders.”
Asides
  • Extreme macro photos of insect wings by Chris Perani layer thousands of images. “The images reveal details we’d otherwise only be able to see clearly beneath a microscope, and a meticulous process illuminates undulating, scaled surfaces that resemble chromatic pixels, stained glass, or even beadwork.”
  • Roots and the meaning of life. “They are so far out of sight for us, creatures of the upper world, that we don’t readily think of them. But as soon as we do, as soon as we plunge the mind into the cold dark humus to which the body will one day return, they become a spell against despair and a consecration of all that is alive.”
  • Recommendations of 25 medieval manuscripts to explore online. “Almost every institution with a significant collection of medieval manuscripts digitizes many of their most significant works and makes them freely accessible online.”

“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers
Syndicates of capital ⊗ Three modes of cognition
Newsletter
No.394 — Jobs below the AI ⊗ Seneca Falls Convention · 2048 ⊗ Manufacturing + AI ⊗ The psychological distance to climate disaster ⊗ GPS jamming
Syndicates of capital ⊗ Three modes of cognition

Early in her piece, Jessica Burbank writes that “you’ll be surprised how quickly things click and how easily your mind makes connections when you absorb the news with a conception of syndicates of capital.” She’s right. Burbank defines these as groups of wealthy individuals whose primary objectives involve securing new capital flows and preserving existing ones. They operate in what she calls the “extralegal sphere”—she’s explicit that none of this is hidden, that their multi-billion dollar deals and contracts are publicly disclosed. States are just no longer the highest form of power globally. That role has shifted to these syndicates, which use state power where useful but circumvent it wherever possible.

Burbank extends political scientist Joseph Nye’s analysis of global power systems, arguing the transition from state-centred to syndicate-centred power began around 1920 (I’d say even earlier than that) and completed before 2020. The mechanism was largely neoliberal ideology: privatisation and free enterprise, enforced globally, created the conditions for capital accumulation through resource extraction and labour exploitation by multinational corporations. Her opening example grounds the theory in history—Patrice Lumumba’s 1961 assassination by the CIA, triggered not by Cold War ideology alone but by his challenge to the resource extraction apparatus in Congo. The state acted as enforcer for syndicate interests, a pattern she argues has only deepened since.

The sharpest part of the piece is her explanation of why “globalism” became a political slur. Syndicates actively benefit from populations rejecting global thinking, because any meaningful analysis of power across borders reveals their control. Fascism, in her reading, emerges as a reaction to syndicate dominance—nationalist movements attempting to reclaim military and economic control for state leaders—but syndicates typically co-opt fascist leaders rather than lose ground to them. The framework doesn’t name current figures, which is both its strength and limitation. It’s a framework for reading power rather than an exposé, and the kind of structural thinking I’ve been finding increasingly valuable in parsing world events. (Via KDO.)

Rather than states cooperating to develop a system of global governance, or succumbing to exogenous threats to sovereignty, state power eroded from within. Official government leaders today have less power than wealthy private citizens who demonstratively exert more control over economic policymaking and the use of military force. State leaders often directly serve syndicates of capital instead of the public or the state, though some try to do both. […]

Liberals insist that a right to sovereignty is lost when any of the fundamental principles of liberal society are missing within a state. This is when liberals label countries ‘failed states,’ and insist intervention by other states is justified. For example, if a state nationalizes natural resources through legitimate democratic processes, and therefore jeopardizes free enterprise (resource extraction by global ‘market’ forces), that state loses their right to liberty and self determination. […]

Liberalism was the driving dogma behind the establishment of free markets globally, which, in practice, was the violent enforcement of resource extraction and labor exploitation by multinational corporations. These conditions facilitated the accumulation of global capital in the hands of the few. Those few formed syndicates. Therefore, neoliberal hegemony catalyzed the shift of how world power is organized, facilitating the fall of an anarchic system of states, and the rise of syndicates of capital. Liberalism effectuated a new world order in which liberalism is obsolete. […]

The end of the anarchic system of states is difficult to identify precisely because syndicates did not replace states. Instead, syndicates developed a new global power structure that includes states.

Three modes of cognition

One of the things that interests me in AI is all the discussions it has provoked around the various forms of intelligence and our understanding of them. Here Kevin Kelly explores what he sees as three distinct modes of cognition that together might compose intelligence. The first, knowledge reasoning, is where current large language models excel—the “super-smartness” that comes from ingesting every book and message ever written; that’s the bulk of today’s LLMs. The second, world sense, is spatial intelligence: an understanding of how objects behave in physical space, incorporating gravity, continuity, and common sense about physical reality. We’re talking world models and robotics, an emerging field. The third, continuous learning, is the ability to improve incrementally from daily experience and mistakes—something humans do constantly but current AIs lack entirely. Other than memory systems and using user data for subsequent phases of model training, continuous learning is not yet really applied.

Kelly’s argument builds on his earlier piece proposing a “periodic table of cognition,” where he compared our understanding of intelligence to early (and wrong) theories of electricity. Intelligence, he argues, is not a single elemental force but a compound of dozens of cognitive primitives, much as water turned out to be a compound rather than an element. The three modes framework is a simplified version of that larger project, and it explains a specific puzzle: why AIs that surpass human expertise in book knowledge still can’t replace human workers. The answer is that knowledge reasoning alone is insufficient. Without world sense and continuous learning working alongside it, the compound we recognise as intelligence remains incomplete.

These are sometimes called world models, or Spatial Intelligence, because this kind of cognition is based on (and trained on) how physical objects behave in the 3-dimensional world of space and time, and not just the immaterial world of words talking about the world. […]

A major reason why AI agents have not replaced human workers in 2026 is that the former never learn from their mistakes while the latter, even if not as smart, can learn on the job, and can get better each day. […]

When AI experiences another sudden quantum jump in capabilities, it will likely be when someone cracks the solution for a continuous learning function. Human employees are unlikely to lose their jobs to AIs that can not continuously learn because a lot of the work we need done requires continuous learning on the job.


§ “Jobs below the AI” is the new “jobs below the API.” I haven’t read AI agents are recruiting humans to observe the offline world yet, and it’s not the first instance of AI hiring people, but just reading “Agents need us — as sensors, as verifiers, as bearers of liability — in ways we have barely begun to account for” immediately took me back >10 years ago, to the idea of “jobs below the API.” This piece on Forbes is the oldest in my bookmarks but likely not the source of the expression.

It’s roughly the same thing: then, you were a tool under an algorithm accessed through an API, the Uber app assigning you rides. Now you’re a tool called upon directly by an AI. A digital agent tasking a carbon unit to grab a case and get it to another carbon unit, perhaps working for another AI.


⚜️ Did you know I also write a foresight newsletter in French? Si vous lisez le français, ou voulez pratiquer votre compréhension de la langue de Molière Tremblay, et évidemment apprendre des belles affaires dans un contexte de prospective, jetez un coup d’oeil à l’infolettre Télescope que j’écris pour la Société des demains. Abonnez-vous!

Futures, Fictions & Fabulations
  • Seneca Falls Convention · 2048 is a speculative event set in the future, and at the real online event Artifacts from a Matriarchal Future, Tracey Worley will “share the methodology I used to create Seneca Falls 2048—a speculative project featuring artifacts from the 200th anniversary of the first women's rights convention. Then you'll make your own artifact from a matriarchal future. Think of it as practicing the future. Building muscle memory for worlds we can barely imagine yet desperately need.”
  • Amy Webb launches 2026 Emerging Tech Trend Report. “This year, though, there’s a twist you won’t see coming—one that could change the way we track, understand, and act on trends forever. The theme is Creative Destruction. While there's no official dress code for the session, we do recommend you wear black.”
Algorithms, Automations & Augmentations
  • Assembling the future: manufacturing + AI. “This also makes the third wave of automation the most accessible yet. Previous waves required specialized programming skills (G-code, PLC logic, robotic path planning). AI systems meet people where they are. Robots can learn tasks through imitating human behaviors; shop floor workers can query data in plain English. The interface to manufacturing automation is more approachable than it's ever been”
  • The real AI talent war is for plumbers and electricians. “The AI boom is driving an unprecedented wave of data center construction, but there aren’t enough skilled tradespeople in the US to keep up.”
  • A data center opened next door. Then came the high-pitched whine. “Unlike most of her neighbors, she preferred a supercomputing hub to a shopping mall, which might bring a crush of car traffic. She was even more pleased when she learned the data center would generate its own power — rather than connecting to the grid and driving up her electric bills. But then the data center turned on, along with the eight natural gas turbines powering it. Now her home is barraged by a high-pitch whine that she says has made her newly screened-in porch unusable.”
Built, Biosphere & Breakthroughs
  • The strange and persistent psychological distance between us and climate disaster. “An analysis of dozens of previously published studies reveals people systematically underestimate their own vulnerability to climate threats.”
  • The global journey of fast fashion’s discarded clothes. “Discarded garments often enter complex second-hand markets, with huge volumes ending up in places like Accra, Ghana, where unsellable clothing accumulates in waterways and landscapes. Researchers argue the real solution lies not just in recycling but in reducing overproduction, adopting circular design and encouraging consumers to buy fewer, longer-lasting garments.”
  • BYD just killed your EV argument with a battery that competes with gas engines. “The Blade Battery 2.0, a new battery that can drive more than 621 miles on a single charge. In the process, the company has exposed just how far behind the rest of the EV industry has fallen. … BYD’s new charging architecture kills the ICE pit stop advantage entirely by pushing 1,500 kilowatts of peak power through a single cable, or up to 2,100 kilowatts if using a dual-gun setup. To understand the sheer power of that electrical flow, you have to look at the current industry standard.”
Asides
  • GPS jamming and its use in the Iran war, explained. “Windward in its analysis identified 21 new clusters where ships’ AIS were being jammed in the region in the first 24 hours after the Iran war began. A day later that number had jumped to 38, Bockmann said. Maritime data and analytics company Lloyd’s List Intelligence said it had logged 1,735 GPS interference events affecting 655 vessels, each typically lasting three to four hours, between the start of the war and March 3.”
  • Ailing “Megaberg” sparks surge of microscopic life. “By 2026, the iconic iceberg, sopping with meltwater and shedding smaller bergs as it moved into warmer ocean waters, put on one more show. The chunks of ice and frigid glacial meltwater left in its wake appear to have fueled a surge in phytoplankton abundance, known as a bloom, observed in surface waters by NASA satellites.”
  • Mystery orcas from afar thrill Seattle-area whale watchers. “When somebody gets the thrill of seeing an orca in Northwest waters, that whale is almost always well known. Scientists have probably given it a number and documented its family tree, perhaps even its DNA. Whale lovers have probably given it a cutesy name, like Yoda or Kelp. … But on March 6, a trio of orcas showed up in Canada’s busy Vancouver Harbour, later heading south to Seattle, Tacoma, and Olympia, that were a mystery to scientists.”

An obscure media theory ⊗ World literacy
Newsletter
No.393 — The future belongs to the creative generalists ⊗ Media’s bad idea with Kalshi ⊗ Reimagining realism ⊗ Labor market impacts of AI ⊗ World record for fusion plasma ⊗ Apocalypse no
An obscure media theory ⊗ World literacy

The conversation between Derek Thompson and Joe Wiesenthal draws on mid-century media theorists—Walter Ong, Marshall McLuhan, and Joshua Meyrowitz—to argue that the shift from oral to literate culture was among the most consequential transformations in human history, and that digital media is reversing it. Literacy didn’t just change how people communicated; it restructured how they thought, enabling solitude, abstraction, and the institutions built on those capacities. The return of oral conditions—conversational back-and-forth, viral repetition, large memorable characters—helps explain “everything” from the appeal of certain politicians to declining trust in expertise.

Meyrowitz’s 1985 book No Sense of Place gets particular attention. He argued that electronic media would collapse the distinction between public and private behaviour, leading audiences to distrust anyone who code-switched between contexts, and to gravitate toward those who remained consistent across all of them, whatever the content of that consistency. Writing before the internet existed, he also predicted that broader access to information would simultaneously deepen our dependence on experts and erode our faith in them—a description that reads, as Wiesenthal notes, like something written last year.

The conversation closes on AI, which both speakers see as genuinely strange territory: a technology trained on the written word but experienced as conversation, more interior than social media’s agonistic noise, closer in feel to the solitary act of reading than to being online.

I should note that Wiesenthal moves between documented arguments and his own opinions or experiences without always marking the transitions. Well-grounded ideas and personal speculation end up side by side, presented with similar confidence, which kind of softens the solidity of the argument, to me anyway. Still, the theories discussed are worth pondering, even if their chat sometimes feels more like opinion than research.

“Human beings in primary oral cultures do not study. They learn by apprenticeship, hunting with experienced hunters, for example, by discipleship, which is a kind of apprenticeship by listening, by repeating what they hear, by mastering proverbs and ways of combining and recombining them, but not study in the strict sense.” […]

I remember rereading that section on a plane recently and I jolted up in my seat. I was like, that’s what AI has changed. You can enter into conversations with text. That is true either at a literal level—like I can download a PDF of a book, and give it to Claude and be like, Claude, can we talk about this book? But also, at a higher abstract level, we’re talking about a technology that is pre-trained on text. It’s pre-trained on literacy. But we have an oral, which is to say conversational, relationship with that training corpus. It’s weird. […]

The age of literacy made possible a set of abstract systems of thought—calculus, physics, advanced biology, quantum mechanics—that are the basis of all modern technology. But that’s not all, Ong and his ilk said. Literacy literally restructured our consciousness, and the demise of literate culture—the decline of reading and the rise of social media—is again transforming what it feels like to be a thinking, living person. […]

We had the age of orality, which was the age of the ear. Then we had the high watermark of literacy, which is the high watermark of the age of the eye. And now we’re in this messy third stage where it’s like there’s some human facial organ that’s an eye and an ear mashed together because we have TV and radio and social media and TikTok. And what’s interesting about these technologies is that they are all oral. What is radio, if not oral? What is television if not oral? What is TikTok if not spoken and live? […]

I think if you look at the modern world, the modern world has elevated a lot of what I think Ong would call heavy characters. I certainly think Trump is a heavy character, with his makeup, and his hair, and his whole visual presentation. I think Elon is a heavy character. I think if you look at the visual way that a lot of sort of YouTube stars look with their ridiculous open-mouthed soy faces when on their YouTube screenshot.

World literacy

Jay Springett explains his concept of “world literacy”—the fluency audiences have developed for navigating fictional universes that exceed any single text. People now arrive at a new film, game, or series already carrying a set of expectations: that the story is a window onto something larger, that the world has a history and a governance structure, that changes to established canon require explanation. This intuition wasn’t consciously acquired. It accumulated across five decades of nerd culture—Dungeons & Dragons, the Star Wars Extended Universe, MMOs, trading card games—before leaking into the mainstream. Now it’s just how audiences relate to fiction.

The mechanism Springett identifies is a “leapfrog”: one world raises the standard for how a fictional universe should be maintained, and audiences carry that raised expectation into every other world they inhabit. This means world-builders are no longer competing only against their own previous work; they’re competing against the literacy their audience acquired elsewhere. The practical consequences are significant—if creators don’t manage the “present tense of canon,” the audience will, loudly and messily. Disney’s post-acquisition handling of Star Wars is the cautionary example: an audience that had built genuine competence across decades of transmedia participation, met with governance that treated that competence as an obstacle.

World literacy is the extent to which an audience treats a piece of media as a window onto something larger than itself. That the thing you are reading/watching/playing is a window into an implied world. The world exceeds any single text, or release. […]

D&D wasn’t the first fictional “world”, but it was the first techno-social system to make world-inhabitation mechanical and ongoing. […]

The Star Wars Extended Universe trained a generation of nerds like me in the late 90s early 00s to expect the world to be bigger than the films. […]

Though audiences don’t necessarily treat all windows as equally authoritative. They build hierarchies, show > book > tie-in comic > wiki > reddit opinion. The interesting thing is that they treat the whole ecosystem as part of the world, even while ranking its authority. Implicit hierarchy is another part of world literacy.


§ The two following articles might seem disparate (I haven’t had time to think it through fully), but I feel there’s an intriguing overlap. Michelle Higa Fox, writing about generalists and creative careers, argues that the people who will last through AI are those who follow curiosity into territory that doesn’t obviously connect to their work—improv classes, poetry, public speaking—trusting that the oblique path builds something the direct path can’t. Nick Foster, writing about software design, makes a case that the interesting move with AI isn’t to sand down its probabilistic roughness into something deterministic and familiar, but to treat its inconsistency as a material property worth working with rather than around. If you squint a certain way, both are saying that productive confusion—friction that doesn’t immediately resolve into task completion—is a feature, not a flaw.


§ This is a really bad idea. Kalshi has been rolling out a series of high‑profile media deals that put its prediction data directly into mainstream coverage, including becoming CNN’s official prediction market partner with exclusive odds integrated across TV and digital, and signing an exclusive multi‑year partnership with CNBC to feature Kalshi markets on air, online, and in premium products. It is also deepening its elections footprint by licensing “gold standard” U.S. vote counts and race calls from the Associated Press into Kalshi’s platform ahead of the coming midterms, effectively turning AP results into a live input to Kalshi’s election markets and dashboards. Technically, I guess you could integrate this into some form of “wisdom of the crowd” intelligence balanced alongside analysts and have a more curated/responsible view, but that’s not what I’m expecting to see. It’s going to be exactly what sports betting is in sports coverage, i.e. anything-goes monetisation pron.


“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers

Futures, Fictions & Fabulations
  • Reimagining realism: notes on the work of a time between worlds. “We are living in a time between worlds: a place where the harms of old systems that have shaped life for generations are becoming clearer and starker by the day — and the new futures we hope for are still contested and unevenly emerging. We are already navigating a period of instability, precarity and uncertainty.”
  • The two decisions that make (or break) a strategic foresight project. “Before you scan trends, build scenarios, or run workshops, there is a moment that quietly determines whether your foresight project becomes decision-useful—or merely interesting. That moment is the Designing / Scoping stage.”
Algorithms, Automations & Augmentations
  • Labor market impacts of AI: a new measure and early evidence. Economic research by Anthropic. “We introduce a new measure of AI displacement risk, observed exposure, that combines theoretical LLM capability and real-world usage data, weighting automated (rather than augmentative) and work-related uses more heavily. AI is far from reaching its theoretical capability: actual coverage remains a fraction of what's feasible. Occupations with higher observed exposure are projected by the BLS to grow less through 2034. Workers in the most exposed professions are more likely to be older, female, more educated, and higher-paid.”
  • Anthropic made pitch in drone swarm contest during Pentagon feud. I’m disappointed but not surprised. “Anthropic PBC submitted a proposal to compete in a Pentagon prize challenge to produce technology for voice-controlled, autonomous drone swarming, according to people familiar with the matter. The company's submission focused on using its Claude AI tool to translate a commander's intent into digital instructions and to coordinate a fleet of drones, with humans having oversight of the system.”
  • Can we run experiments on history with AI? “By ‘experimental history’ here, he does not mean speculative fiction or playful alternate timelines. The aim is more austere and methodological. Can we build models that behave in ways that are recognizably constrained by a particular historical moment, and then use those models to ask structured ‘what if’ questions about how literary worlds evolve? The project sits at the edge of both digital humanities and AI research, and it exposes some surprisingly basic problems in how today’s AI models understand time.”
Built, Biosphere & Breakthroughs
  • France beats the world record for fusion plasma duration. “France’s WEST tokamak held a hot plasma for 1,337 seconds, a little over 22 minutes. That performance matters because long, steady plasma operation is a core requirement for future nuclear fusion power plants. The run also edged past the mark set weeks earlier by China’s EAST, improving the duration by about 25 percent.”
  • Interactive visualization examples. Lovely! “An interactive spatial lens visualization of NYC 311 complaints filed between 2022 and 2025. Drag any lens onto the map to explore how noise, parking violations, heating failures, rodent activity, and illegal dumping are distributed across New York City neighborhoods — and how each category has shifted year by year.”
  • An exclusive look inside the largest effort ever mounted to keep the Great Barrier Reef alive. “In response to these existential threats, the government launched a project called the Reef Restoration and Adaptation Program (RRAP). The goal is nothing less than to help the world’s greatest coral reef survive climate change. And with nearly $300 million in funding and hundreds of people involved, RRAP is the largest collective effort on Earth ever mounted to protect a reef.”
Asides
A country of geniuses ⊗ Collapse: A framework
No.392 — We can move beyond the capitalist model and save the climate ⊗ Our emerging planetary nervous system ⊗ A handbook for strategic foresight ⊗ A Cookie for Dario? ⊗ This city turned its rooftops into a climate shield ⊗ “Viking” was a job description
A country of geniuses ⊗ Collapse: A framework

Once in a while I think I should do a Nate Hagens special; so many of his interviews are worth spreading far and wide. This is not such a special, but it does first feature an essay by Hagens: one of the better analyses of that Dario Amodei essay that made a splash not long ago, The Adolescence of Technology. Don’t be fooled by the title; Nate summarises the essay but largely focuses on filling in the missing pieces with a series of “wide boundary” arguments. The second featured article is by someone I didn’t know about, Adrian Lambert, who presents his framework for collapse, which feels very Hagensian. Maybe make your coffee a bit stronger this morning.


A country of geniuses

Nate Hagens’ meta-commentary about Dario Amodei’s essay on AI risk begins with a useful declaration of intent: this is not AI doomerism or AI cheerleading, but an attempt to look at the world this technology is actually entering—with all its incentives, constraints, and fragilities. He borrows a metaphor from Amodei, that of a “country of geniuses”—the idea that sufficiently advanced AI would function like 50 million highly capable minds operating in coordination, able to copy themselves, work without pause, and act across every interface of modern life, from scientific research to markets to media. That framing sets up the question Hagens finds most absent from Amodei’s analysis: what is all this actually for? He draws on Dennis Meadows’ observation that tools don’t change goals, they amplify the priorities of whoever holds them. If those priorities remain growth, power, and competitive throughput, a more capable optimisation engine won’t redirect civilisation toward different ends—it will pursue the same ones faster.

The deeper challenge Hagens poses to Amodei’s framework is that it assumes surviving technological adolescence leads somewhere worth arriving. Amodei imagines a managed abundance on the other side: AI-accelerated scientific progress, sustained GDP growth, a kind of settled stability. Hagens questions whether that destination is physically possible. A “country of geniuses” doesn’t float above the biosphere, it plugs directly into it, competing for energy, water, and materials that are already strained. The “goal function” question, then, isn’t just philosophical. As Hagens puts it, a system that optimises the wrong objective can perform brilliantly while destroying the things you actually value: “Think King Midas meets the Terminator.” The real issue isn’t whether AI can make us richer, but what kind of richness we’re aiming for in the first place.

Further reading → He writes that “people focus on the alignment of models as if that’s all that matters. It does matter quite a bit, but the larger alignment problem, in my opinion, is societal alignment.” The piece I shared last week, The Next Great Transformation (which quite a few readers seem to have appreciated) connects directly with that.

This is not a smart research assistant or a brilliant colleague, rather it’s something closer to a vast workforce of highly capable minds that can operate quickly, copy themselves, and act through all the interfaces of the modern world like emails, code, design tools, scientific papers, research labs, bureaucracies, markets, and media. If and when this workforce arrives, it will be a civilizational event. For better or worse it will change everything. […]

A datacenter is also basically a physical machine plugged into the Earth. It’s made of silicon chips, copper, and cooling systems, and uses water, concrete, and transmission lines. Most of these things rely on geopolitically-tenuous supply chains and the reality that those source materials are not infinite or frictionless to access. […]

Most of the real world harm in modern life doesn’t come from any lack of intelligence, it comes from incentive structures, institutional capture, and organizations that can externalize costs while still declaring success and cultural status. […]

All definitions of wisdom (from every language and knowledge system) have an element of restraint. Restraint in ourselves, our species, our tech, and our institutions.

Collapse: A framework for understanding and navigating the decline of industrial civilisation

Adrian Lambert’s starting point is pretty bleak: industrial civilisation is collapsing, and prevention is no longer the relevant question. He lays out a framework along six domains—thermodynamics, ecological overshoot, the myth of decoupling, capitalism’s growth imperative, an epistemic crisis, and renewal within limits—to argue that collapse is not a future risk but an active process already underway. The core mechanism is thermodynamic: civilisation is a system that survives by consuming concentrated energy and releasing it in degraded form, and fossil fuels have allowed it to grow far beyond what the planet can sustain. As Joseph Tainter’s work on historical civilisations shows, complexity yields diminishing returns, and when the energy surplus required to maintain it falls below a threshold, rapid simplification follows.

Lambert is equally sceptical of the cultural narratives that obscure this trajectory. He sees the promise of “decoupling” (growing GDP while shrinking environmental impact) as an accounting trick, with high-income countries offshoring resource-intensive production rather than reducing it. The epistemic crisis runs deeper still: modernity’s reductionist frame, fixated on quantifiable metrics and human exceptionalism, leaves societies poorly equipped to reckon with relational and ecological realities. He prescribes a shift in orientation: towards ways of living that work within biophysical limits, regeneration of the commons, and acceptance that the work of renewal matters even if human survival is not guaranteed.

Further reading → Also on collapse, this episode of the Farsight podcast from a few months ago is a very good interview with Luke Kemp about his book Goliath’s Curse, a historical study of the collapse of human societies.

The Great Turning that Macy describes will not arrive as a single, coordinated revolution. It will be local, fragmented, and adaptive, small communities woven into the fabric of the ecosystems they depend on. In the absence of central planning, these efforts will be messy and incomplete. But they will also be experiments in survival. […]

Overshoot is therefore both an ecological and political reality: it is as much about the unequal distribution of consumption as the aggregate scale of it. As a result, collapse will not arrive everywhere at the same time, but will unfold along different timescales across the world. […]

The political economy of industrial civilisation has locked itself into a high-cost complexity trap: maintaining and expanding complexity requires ever-greater inputs, but the returns on that complexity are shrinking, pushing the system toward instability. […]

How we think shapes how we act. Without epistemic renewal, technical and political interventions will remain trapped in the logic of overshoot. […]

This demands a shift from the industrial growth economy to what Joanna Macy calls a “life-sustaining society.” It is a cultural, political, and ecological pivot: from growth to sufficiency, from extraction to kinship, from centralised control to distributed resilience.


§ We can move beyond the capitalist model and save the climate—here are the first three steps. By Jason Hickel and Yanis Varoufakis, whose diagnosis and description of capitalism fits right alongside the collapse piece above. “By capitalism we mean something very odd and very specific: an economic system that boils down to a dictatorship run by the tiny minority who control capital – the big banks, the major corporations and the 1% who own the majority of investible assets.”


§ Our emerging planetary nervous system. There’s something in the language of this essay that I find too … spacey? But there’s something to it. “We’ve been separating signal from judgment, letting speed outrun significance, and ignoring our bodies until alarms blare. We, as a species, are grimly out of tune. Reimagining civilization’s cognition is no longer optional — it is the design imperative for a world that must learn to steer, not spin.”


Futures, Fictions & Fabulations
  • Thinking About Tomorrow: A handbook for strategic foresight. “There is no single right way to do foresight. Throughout, we invite you to treat what you read as a starting point. Whether you are new to foresight or have practised it for a long time, we hope this handbook will be a useful and enjoyable companion on your journey into the future we are all heading into.”
  • Refracting the Futures: Megatrends PRISM. “Foresight is not about accumulating information but about developing the capacity to make sense of change in ways that inform action. Similarly, the field of anticipatory governance highlights the need for mechanisms that link long-term analysis to present-day institutional choices. The PRISM was designed as such a mechanism. It offers a disciplined yet flexible process for refracting megatrends into implications that are strategic, ethical, and actionable.”
  • Foresight Africa 2026. “This year’s Foresight Africa report brings together leading scholars and practitioners to illuminate how Africa can navigate the challenges of 2026 and chart a path toward inclusive, resilient, and self-determined growth.”
Algorithms, Automations & Augmentations
  • A Cookie for Dario? Anthropic and selling death. “To be clear: I am glad that Dario, and presumably the entire Anthropic board of directors, have made this choice. However, I don’t think we need to be overly effusive in our praise. The bar cannot be set so impossibly low that we celebrate merely refusing to directly, intentionally enable war crimes like the repeated bombing of unknown targets in international waters, in direct violation of both U.S. and international law. This is, in fact, basic common sense, and it’s shocking and inexcusable that any other technology platform would enable a sitting official of any government to knowingly commit such crimes.”
  • AI taxonomy. An operational framework for precision in AI discourse. “‘AI’ has become semantically meaningless. The term now encompasses everything from a regression model to an autonomous robot, creating confusion in strategic discussions, partner conversations, and product positioning. This taxonomy provides a functional framework based on what the AI actually does, not what technique it uses.”
  • Pope Leo tells priests not to use AI to write homilies or seek likes on TikTok. Not a group of people I would have thought needed to hear this. “‘Like all the muscles in the body, if we do not use them, if we do not move them, they die. The brain needs to be used, so our intelligence must also be exercised a little so as not to lose this capacity,’ Leo said in the closed door meeting, according to a report by Vatican News on Feb. 20.”
Built, Biosphere & Breakthroughs
  • This city turned its rooftops into a climate shield. Niiiice!! Beautiful work, have a look at those pictures for your next solarpunk deck/mood board ;-) “As cities struggle with heat, Zürich offers a masterclass in using vegetation to cool streets, manage stormwater and restore biodiversity.”
  • Amazon deforestation on pace to be the lowest on record, says Brazil. “Near-real-time satellite alerts show Amazon deforestation in Brazil continuing to decline into early 2026, with clearing from August through January falling to its lowest level for that period since 2014. Over the previous 12 months, detected forest loss also dropped to a 2014 low, reinforcing a broader downward trend that is corroborated by official annual data and independent monitoring. Clearing in the neighboring Cerrado savanna has also fallen.”
  • Obsession with growth is destroying nature, 150 countries warn. “‘Unsustainable economic activity and a focus on growth as measured by the gross domestic product, has been a driver of the decline of biodiversity ... and stands in the way of transformative change,’ warns a report by the Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES) published Monday.”
Asides
  • Paediatricians’ blood used to make new treatments for RSV and colds. “Antibodies harvested from the blood of paediatricians are up to 25 times better at protecting against the common respiratory infection RSV than existing antibody therapies, and are now being developed as preventative treatments.” Now do preschool educators.
  • “Viking” was a job description, not a matter of heredity, massive ancient DNA study shows. “Viking-style graves excavated on the United Kingdom's Orkney islands contained individuals with no Scandinavian DNA, whereas some people buried in Scandinavia had Irish and Scottish parents. And several individuals in Norway were buried as Vikings, but their genes identified them as Saami, an Indigenous group genetically closer to East Asians and Siberians than to Europeans. ‘These identities aren't genetic or ethnic, they're social,’ Jarman says. ‘To have backup for that from DNA is powerful.’”
  • Star Wars: Andor’s Tony Gilroy gives interview he couldn’t before. “Yeah, it’s the same shit all the time. Get rid of truth, get rid of a free press, destroy communities, nationalize the businesses, find an arbitrary enemy that you can elevate and false flag them through propaganda. Flood the zone with as much gak and atrocity as you can so that nobody can pay attention to what just happened, and pray that you have an overwhelming majority of sheep that will follow you. It’s just tragically and sadly familiar.”
The next great transformation ⊗ AI and the futures of work
No.391 — Perhaps AI is the paperclip ⊗ The imagination curriculum ⊗ On algorithmic wage discrimination ⊗ Romania and the link between economic growth and high emissions ⊗ Origami to imagine emergency shelters
The next great transformation ⊗ AI and the futures of work

Fascinating piece with which to consider the rise and impacts of AI. When asked about AI, I’ve often said that if it were controlled by states, or at least not by maximalist founders, and took into account what happens to people who lose jobs, the discussion would be entirely different from the “move fast and break people” approach we are currently seeing. Here Jeremy Shapiro provides the perfect lens for considering ways of balancing AI and society. He revisits and explains Karl Polanyi’s concept of the “double movement”—the historical pattern in which market expansion generates a social counterforce demanding protection—and his broader argument that markets must remain embedded in social and political institutions, not the other way around, to avoid destroying the societies that host them.

Writing in 1944, Polanyi traced how the nineteenth-century experiment with self-regulating markets—treating labour, land, and money as pure commodities—produced dislocation so severe it destabilised democracies and cleared the path for fascism. His argument was that when states fail to shield populations from the raw force of market disruption, societies reach for whatever protection remains available, however illiberal. The interwar gold standard illustrated this perfectly. Germany’s commitment to it transmitted the Depression directly into social life; when democratic governments chose austerity over people, political legitimacy collapsed. By contrast, the postwar welfare state rebuilt that legitimacy by cushioning dislocation and redistributing the gains of industrial capitalism broadly enough to sustain social peace.

Shapiro’s argument is that AI represents a structurally similar moment. The technology threatens to decouple productivity from labour—automating not only routine tasks but cognition, judgement, and career progression itself—while concentrating the gains to a small group of firms and regions. This is, in Polanyi’s terms, a disembedding shock: economic activity pulled out of the social institutions that have historically absorbed change. He looks at how the US, Europe, and China are approaching AI and how their strategies address, or not, or badly, Polanyi’s theories.

One of the article’s central points is that the familiar “AI race” framing—who builds the largest models, controls the most compute—entirely misses this dimension. Speed without social protection accelerates backlash; backlash erodes political capacity; and states with weakened legitimacy lose long contests regardless of their technological lead. The real competition, Shapiro concludes, is over which model of social embedding can integrate AI without tearing society apart.

In Polanyian terms, AI is beginning to disembed economic activity from the social institutions that have absorbed change, creating precisely the conditions under which political counter-movements emerge. Such risks are already visible in rising populism, political volatility, and declining trust in institutions across advanced economies. AI may well accelerate the technological and economic trends that are already straining the social fabric. […]

This constraint matters because it creates a structural mismatch. Markets are global, but social protection remains largely national. Firms can shift profits, relocate assets, or threaten exit while maintaining access to consumer markets. Governments seeking to tax AI rents or impose social obligations face an immediate credibility problem. Even well-designed domestic re-embedding strategies risk erosion if firms can arbitrage jurisdictions. […]

For AI governance, this implies a sobering conclusion. If comprehensive international coordination on taxation and social standards remains politically constrained, then re-embedding efforts may not have the tax base they need to redistribute the gains and provide social protection. Success will depend on a mix of partial and fragile coordination among like-minded states, some bloc-level rule-setting, and access-based enforcement mechanisms that link participation in large markets to compliance with social obligations such as taxation. […]

The goal remains social integration and protection from market volatility, but the mechanisms must extend beyond labor markets alone. Embedding must increasingly target income, rather than jobs; firms and platforms, rather than individual workers; and status and participation, rather than employment per se. […]

Polanyi teaches us that markets are powerful only when societies can bear them. When they cannot, markets provoke their own undoing and often in rather spectacular fashion.

AI and the futures of work

Johannes Kleske, writing from a perspective of over fifteen years of tracking AI-and-work discourse, pushes back against the viral “everything is changing” articles that resurface with each new model release. His target is a recent post by AI entrepreneur Matt Shumer, which Kleske reads not as a forecast but as what he calls a “present future”—a story about today dressed up as a prediction about tomorrow. Shumer’s error, in Kleske’s view, is extrapolating a personal experience in one narrow field (coding) into a claim about all work. The same pattern appeared after AlphaGo in 2016, after ChatGPT in 2022, and before that in dozens of preceding cycles. Each time, the predictions failed to account for how complex work actually is. Kleske also invokes Jevons’ paradox—the 19th-century observation that greater efficiency in coal use led to more coal consumption, not less, a pattern that has repeated ever since—to explain why AI tools are making many people work harder, not less.

The second argument is about what this kind of hype does to people. Drawing on L.M. Sacasas’s “Borg Complex” concept, Johannes argues that FOMO-driven narratives trigger reactance: they push people away rather than drawing them in. The attention economy amplifies fear over useful information, and the result is a counter-movement forming against AI precisely when people might benefit from engaging with it. Kleske’s prescription is not to disengage but to approach AI as a normal technology—experimenting without the urgency, comparing it to 1999 and the internet rather than to an imminent takeover. The things that will actually matter, he writes, probably haven’t been built yet.

I don’t want [people to believe “resistance is futile.”] I think AI is changing things, but I want society to shape this transition according to its values. The question I keep asking is, how can we use the best of this technology but with the values we have as a society and the way we want to live in this world? How can people gain more agency in shaping the future, instead of having it dictated to them?

This circles back nicely to the Shapiro piece above. Individuals need to better understand the technology to regain some agency, and societies need the same kind of rekindled resistance to act clearly and with purpose in re-embedding AI, and markets, in society. Not the other way around.

I’m only interested in present futures because you can learn a lot about the present from listening to stories about the future. Just as reading science fiction predicts very little about the future, it reveals what we project into it based on our current problems. […]

I’m convinced that AI is going to change work fundamentally in many places. But it’s going to take much longer, it’s going to be so much weirder, and it’s going to be so much more unexpected than today’s predictions suggest. […]

The intriguing question isn’t what AI can do. It’s what new kinds of work and value emerge once things shift.


§ Some days I feel very, very tired. Like when within an hour I read that family deepfakes help people celebrate and grieve in India, that a judge had to scold Mark Zuckerberg’s team for wearing Meta glasses to their social media trial, or that some people are talking about using millions or even billions of LLM tokens a day without once mentioning energy or electricity, or that WD and Seagate confirmed that hard drives for 2026 are sold out, because hyperscalers are outspending the rest of the world. Nicholas Carr might be right, perhaps AI is the paperclip.

Bostrom’s story [of the paperclip maximizer], I would argue, becomes compelling when viewed not as a thought experiment but as a fable. It’s not really about AIs making paperclips. It’s about people making AIs. Look around. Are we not madly harvesting the world’s resources in a monomaniacal attempt to optimize artificial intelligence? Are we not trapped in an “AI maximizer” scenario?


Futures, Fictions & Fabulations
  • The imagination curriculum. Zoe Scaman’s “reading list for strategists who want to think dangerously.” Excellent, detailed sci-fi recommendations. “They’re not strategy books, but they’ve taught me more about thinking strategically than most of what’s on the business shelf. Because they do the thing we’ve forgotten how to do: question the frame. Follow an assumption past the threshold of what’s comfortable. Imagine that the whole thing could be organised differently.”
  • The future of data centres: how is the industry changing in the AI era? “The demands on the data centre industry are evolving rapidly – so must our understanding of the issues they face. A new series of reports explores the future of the sector, from technological performance, resource use, energy to data centres’ wider role in the community.”
  • Megatrends 2026. “Sitra’s new megatrend review outlines the overall picture of change, and the constraints and the opportunities relevant to Finnish society to offer support for decision-making. The report interprets megatrends from Finland’s perspective through four themes: people and culture, power and politics, nature and resources, and technology and the economy.”
Algorithms, Automations & Augmentations
  • On algorithmic wage discrimination. “Drawing on a multi-year, first-of-its-kind ethnographic study of organizing on-demand workers, this Article examines the historical rupture in wage calculation, coordination, and distribution arising from the logic of informational capitalism: the use of granular data to produce unpredictable, variable, and personalized hourly pay.”
  • AI could mark the end of young people learning on the job—with terrible results. “The arrangement meant that employers had affordable labour, while employees received training and a clear career path. Both sides benefited. But now that bargain is breaking down. AI is automating the grunt work – the repetitive, boring but essential tasks that juniors used to do and learn from. And the consequences are hitting both ends of the workforce. Young workers cannot get a foothold. Older workers are watching the talent pipeline run dry.”
  • Wikipedia volunteers spent years cataloging AI tells. Now there’s a plugin to avoid them. “Tech entrepreneur Siqi Chen released an open source plugin for Anthropic’s Claude Code AI assistant that instructs the AI model to stop writing like an AI model. Called “Humanizer,” the simple prompt plugin feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have listed as chatbot giveaways.”
Built, Biosphere & Breakthroughs
Asides
What technology takes from us ⊗ Private sufficiency, public abundance
Newsletter
No. 390 — A surveillance dragnet (almost) ⊗ Open-Source foresight on global transformation ⊗ The thinking game ⊗ China’s CO2 emissions “flat or falling” for 21 months ⊗ Japan’s most influential architect’s working studio
What technology takes from us—and how to take it back

Rebecca Solnit argues that technology, driven by capitalist and Silicon Valley ideologies, encourages us to prioritise having over doing, which undermines our relationships, our aptitude for solitude, and our sense of self. This relentless focus on efficiency and quantification devalues the rich, difficult experiences that build resilience, connection, and meaning in life. She warns that machines demand we become more like them—reducing human interaction to frictionless exchanges—while pushing digital substitutes for genuine social contact. To resist this dehumanisation, we must cherish the embodied, imperfect realities of being with others and rebuild spaces where democracy, trust, and joy can flourish.

It’s not lost on me that, as much as I completely agree with Solnit, her argument contrasts with my great interest in technology. But it also makes sense: you would have to be completely entranced with Silicon Valley tech not to realise that we have strayed far away from any kind of reasonable balance. Like many things, it’s not a binary but a spectrum.

Here’s a quote by Edward O. Wilson, which you might have read almost as often as Gibson’s unevenly distributed futures.

“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology. And it is terrifically dangerous, and it is now approaching a point of crisis overall.”

This discrepancy is often presented as a problem: that we are not “up to date” enough to keep up with our own technology. But, and Solnit explains it well, our bodies are built for certain ways of doing things, of living, of interacting. We should focus on aligning with that, not on catching up to whatever tech bros have most recently thought up and thrown at us to make a buck (well, billions of bucks).

One last thing. For a little while it was common to say that, against all odds, “the geeks/nerds had ‘won.’” Self-identifying as a geek, I felt that for a while, but we now have to realise that the geeks were (are) broken; why are we falling for their “solutions”? They/we should take time to mend ourselves and meet each other again, not try to “patch” our difficulties with tech.

We are told that machines will become like us, but in many ways they demand we become more like them. To let that happen is to lose something immeasurably valuable. That immeasurability is what makes this struggle difficult, but what cannot be measured can be described or at least evoked and valued. It cannot be boiled down to simple metrics such as efficiency and profitability. […]

I want to praise difficulty, not for its own sake, but because so much of what we want, we get through endeavours that are difficult. The difficulty is why doing something is rewarding; you have accomplished something, exerted effort and skill, stayed with the trouble, tested your limits, realised your intentions – or sometimes failed at all these things, and that too can be important, as can learning to survive failure. […]

There is a sense of belonging that goes deeper than words when we are with people who care about us, and even more so when we are in alignment, whether it’s two people falling into step on a walk or a dozen dancing together or a congregation praying or 10,000 marching together. […]

Coan noted in a recent interview that the normal approach to studying the brain and the mind is to isolate a person. But, as he pointed out, the normal state of being human over the aeons is not isolation; it’s being with others. […]

We need to rebuild or reinvent the ways and places in which we meet; we need to recognise them as the space of democracy, of joy, of connection, of love, of trust.

Private sufficiency, public abundance

“The story of economics has been the story of scarcity, but what would it take to change the narrative?” The Future Observatory Journal “explored that question through a roundtable discussion,” the article is a recap of those exchanges.

The author explores how shifting from a narrative of scarcity to one of abundance—social, material, and natural—can inspire more optimistic and sustainable ways of living. The piece highlights the importance of creating spaces for conversations that challenge dominant economic and social stories, fostering a metamodern worldview focused on conscious evolution. Drawing on examples like Indigenous wisdom and E. O. Wilson’s (yes, him again) biophilia hypothesis, the discussion emphasised our innate connection to nature and the potential for innovation within planetary limits. The idea of private sufficiency combined with public abundance encourages rethinking consumption and resources, suggesting that what humanity has produced can be enough if reused continuously and shared equitably.

Consumerism, capitalism, individualism. I’m starting to realise that dogmatic classical economists are as harmful to society as maximalist techies. If we rediscover what Solnit is highlighting above, then we can re-appreciate living together, in abundance, instead of trying to buy individual abundance in a world of scarcity.

The American Economic Association defines economics as ‘the study of scarcity, the study of how people use resources and respond to incentives’. A common criticism of neoliberal economics is that it creates artificial scarcity by taking goods that were common and commoditising them, which, in turn, leads to hoarding, exploitation and hierarchical social structures. […]

She asserts that they follow a principle of ‘least resistance’, which is that they pursue their existence in ways that use minimum energy and create minimum resistance. In ecosystems, excess consumption has been ‘evolved out’ – in other words, species or ecosystems that thrived while using other resources or less energy would have out-competed those that consumed more. […]

She described how scarcity often underlies much thinking about healthcare – the view being that there are millions of patients requiring treatment, limited resources, and this will only get more difficult in an ageing society. This is effectively a sickness service rather than one that truly embodies health and care. […]

This perhaps provides evidence for an emerging metamodern worldview that is more determined to move on from the antagonising polarities of modernity and postmodernity, towards more unifying narratives that engage with what some refer to as ‘conscious evolution’. Just as biological evolution developed through variation and selection of genes, it is postulated that human consciousness is evolving through variation and selection of memes.


§ With Ring, American consumers built a surveillance dragnet. “It does not take an imagination of any sort to envision this being tweaked to work against suspected criminals, undocumented immigrants, or others deemed ‘suspicious’ by people in the neighborhood. Many of these use cases are how Ring has been used by people on its dystopian ‘Neighbors’ app for years.”

Later in the week, Ring cancelled its partnership with Flock Safety after a surveillance backlash. This is likely not the last time they try something like this, so I’m sharing both.


§ The kingdom of misfits, on slime molds in Prospect Park Ravine, Brooklyn’s last remaining forest (and more generally). ◼ Will fungi thwart the destructive rise of the anthropocene? “The earth’s ecosystem relies on interdependency, as the curators of Fungi: Anarchist Designers reflect in an interdisciplinary show that fuses research and art to centre mushrooms in our daily lives.”


Futures, Fictions & Fabulations
  • 10F Consortium, open-source foresight on global transformation. “The 10F Consortium is a collaborative public service foresight project that brings together 20+ experienced practitioners to map critical global shifts over the next decade. Unlike proprietary consulting reports or institutional forecasts, 10F Consortium operates as an independent network providing open-source analysis of systemic changes reshaping our world. The project produces 10 critical forecasts covering domains from global trade systems to technology governance, climate adaptation to migration patterns.”
  • Living in a culture of futurelessness. “Young people are feeling the effects of a collective backwards cultural gaze, research from Starling and Tapestry has found. Annie Auerbach and Adam Chmielowski explore this ‘culture of futurelessness’ and call on brands and insight leaders to step up.”
  • What companies that excel at strategic foresight do differently. “[They] are able to systematically track both predictable future events and true unknowns across short- and long-term horizons. Based on a survey of 500 organizations, firms with more advanced foresight capabilities report a meaningful performance edge, driven by data-forward methods, continuous signal detection, and an explicit focus on potential upsides to risks—not just downsides.”
Algorithms, Automations & Augmentations
  • The Thinking Game. (YouTube link, free to view.) “Filmed over five years by the award winning team behind AlphaGo, the documentary examines how DeepMind co-founder Demis Hassabis’s extraordinary beginnings shaped his lifelong pursuit of artificial general intelligence. It chronicles the rigorous process of scientific discovery, documenting how the team moved from mastering complex strategy games to solving the 50-year-old "protein folding problem" with AlphaFold - a breakthrough that would win a Nobel Prize.”
  • Where is AI taking us? Eight leading thinkers share their visions. Includes Melanie Mitchell, Gary Marcus, and Helen Toner. “As society wrestles with whether A.I. will lead us into a better future or catastrophic one, Times Opinion turned to eight experts for their predictions on where A.I. may go in the next five years. Listening to them may help us bring out the best and mitigate the worst out of this new technology.”
  • The new Fabio is Claude. Bleh. “The romance industry, always at the vanguard of technological change, is rapidly adapting to A.I. Not everyone is on board.”
Built, Biosphere & Breakthroughs
  • Analysis: China’s CO2 emissions have now been “flat or falling” for 21 months. “Solar power output increased by 43% year-on-year, wind by 14% and nuclear 8%, helping push down coal generation by 1.9%. Energy storage capacity grew by a record 75 gigawatts (GW), well ahead of the rise in peak demand of 55GW.”
  • CO2 turned into starch: China’s new method boosts productivity by 10x. “Researchers at the Tianjin Institute of Industrial Biotechnology have reportedly found a way to synthesize starch directly from carbon dioxide. Achieved using only enzymes and raw materials, this new process, they report, is 10 times more productive than previous attempts. What’s more, the process doesn’t rely on plants or photosynthesis, and could help make industrial-scale manmade starch production commercially viable.”
  • What a successful rainforest recovery program sounds like. “The results revealed that the din of life in forests regrowing under a program that paid landowners to leave pastureland untouched had a lot in common with intact forests in the area, Delgado and colleagues reported recently in Global Change Biology.”
Asides
  • Inside Japan’s most influential architect’s working studio (Tadao Ando). The space! The shelves! The books! The desk! “Documents the daily rhythms of work and the careful, repetitive making of architectural scale models that sit at the center of his practice. The focus is not on finished buildings, but on process. Time spent refining ideas. Returning to the same forms again and again. Letting work unfold slowly.”
  • The Case of the Green Covers “is a risograph-printed zine that documents the history of the ‘Green Penguins,’ ‘a series of hundreds of crime novels published with green covers by the UK publisher Penguin in the 1960s’.” You had me at Penguin covers. Ok, also at risograph. Ok, also at zine.
  • Ambient Videos. Made by Noah Kalina, so everything’s gorgeous. “I started a YouTube channel, where I make ambient videos that are sometimes as long as two hours. They’re meant to be put on your TV and left there. Background. Ambiance. Something quiet in a room while you do something else, or nothing at all. In a way they are just screensavers, but every now and then something unexpected might happen.”
  • Why this tiny robot is a breakthrough for flight. “It is a flying robot no bigger than a paperclip, with wings that flap instead of spin. And it does not drift or wobble like most tiny drones. It darts, flips, and accelerates through the air with the speed and control of a real bumblebee.”
  • Serendipity - Etymology, origin & meaning. Wait, what? “Serendip (also Serendib), attested by 1708 in English, is an old name for Ceylon (modern Sri Lanka), from Arabic Sarandib, from Sanskrit Simhaladvipa ‘Dwelling-Place-of-Lions Island’.” (Via Scope of Work.)

 

“Ambitious, thoughtful, constructive, and dissimilar to most others.
I get a lot of value from Sentiers.”

If this resonates, please consider becoming a supporting member—it keeps this work independent.

Support Sentiers
The future of being human ⊗ For the sake of mutual interdependence
Newsletter
No.389 — The space trash apocalypse ⊗ Jeff Bezos, moral cretin ⊗ The futures cone reimagined ⊗ The Authoritarian Stack ⊗ Local energy networks saving lives ⊗ Tangible media
The future of being human ⊗ For the sake of mutual interdependence

Indy Johar argues that as prediction and optimisation (LLMs) become infrastructure—embedded in pricing, access, ranking, and the allocation of attention—what becomes scarce isn’t computational power but something else entirely: attention that can settle without extraction, relationships that form without accounting, uncertainty that doesn’t collapse into anxiety, and the ability to become, “without being prematurely named, scored, or fixed.” With too much optimisation, legibility becomes the condition through which resources and access are allocated, so people learn to make themselves readable. The hidden cost is that this compresses what can’t be represented without being diminished. Akin to “the map is not the territory,” what isn’t measured is ignored.

This isn’t nostalgia for a pre-digital world, not the analogue trend. Johar proposes a set of categories he calls “pre-legibility zones” and “opacity commons”—public and semi-public spaces designed so that capture isn’t default and identity performance isn’t the price of entry. These are “bounded worlds” where the right to remain partially unknown is treated as a civic affordance, with what he calls “selective legibility”: opacity by default, proportional accountability, consentful revelation. The argument extends to “machine-assisted rewilding,” where technology actively creates space for irreducibility rather than increasing capture. What makes this compelling is that it’s not about retreat—it’s the naming of something important, kept and rewilded for its importance, within the existing world.

To him, the future of being human isn’t the opposite of machine intelligence but its complement—the institutions, environments, and practices that ensure prediction doesn’t become total formatting, that optimisation doesn’t flatten the conditions of meaning, that intelligence doesn’t reduce life to what can be scored.

In a way, it’s kind of Chatham House Rule for life. It also reminded me of Clive Thompson’s piece Rewilding your attention, shared in No.285 over five years ago! Johar’s “practical doctrine” also reminded me of “gevulot” in Hannu Rajaniemi’s Quantum Thief, which allows each person to decide what information about them will be available to others.

Selective legibility is the middle path between two failures: total capture, which corrodes formation and agency, and romantic opacity, which can shelter harm. The aim is not to disappear. The aim is to make life livable: to allow becoming, while being held. […]

It can also mean an anti-optimisation layer: systems that introduce friction where extraction would otherwise be automatic; that detect when environments are becoming too capturing; that enforce norms of non-instrumental interaction; that protect the right to opacity and the right not to be continuously translated into signal. […]

But there is another coupling available: machines that actively create space for irreducibility—systems that reduce capture rather than increase it, that preserve unpriced time, that protect attention as a right, that enable encounter without turning it into data. […]

The invitation is to begin unfurling: to prototype the conditions that allow thicker forms of life to re-enter the everyday; to create spaces where micro-communication can return; to defend the right to opacity as a civic affordance; to design selective legibility as a livable doctrine rather than an abstract principle; to explore machine-assisted stewardship as an institutional stance rather than a moral aspiration.

Achieving independence for the sake of mutual interdependence

It’s pretty uncanny sometimes how articles align. It’s usually on purpose of course, but once in a while it just happens. I saved this piece because it’s an interview with LM Sacasas and for me Sacasas = save to Reader on sight. As soon as I started reading though, the parallel was right there. Johar was proposing a rebalancing of optimisation technology, where it’s a complement to humans, not an extractive overlord. Where we can decide when we are measured, or not, where we regain agency.

In the case of Sacasas, he draws on Ivan Illich’s concept of “convivial” tools—human-scaled technologies that empower rather than disable—to argue for a restrained relationship with technology that preserves human agency and interdependence. Illich, building on Jacques Ellul’s critique of technological society, saw industrial-age institutions as counterproductive once they pass certain thresholds: they create dependencies instead of freedom, outsourcing competencies like navigation (GPS replacing maps) or communal practices like burials to professional classes. He argues that we must ask what is good for humans to do regardless of whether machines can do it better, since building skills and depending on each other creates the threads that weave communities together.

The goal here isn’t rugged individualism but achieving autonomy for the sake of mutual interdependence—communities with the strength to order their lives according to their values. Hannah Arendt’s vision of the world as gift, and her example of the table as ideal technology (bringing people together while maintaining distinction), frames a different orientation: receiving the world with gratitude rather than treating it as a field for engineering solutions.

But my chief problem with the rhetoric of inevitability was that it was deployed by those who wanted to foreclose our thinking and judging. It doesn’t want us to think about whether this would be a good development or not for us. Often, it was assumed that it would be good — the new device, the new efficiency, the new mode of optimization — but good for what and good for whom? Maybe good for the bottom line of a company. Maybe good in discrete ways for some individuals. But many of these tools have not been good for us. […]

But Gay suggested that we are formed by the habits implicit in our economic structures, political structures, and scientific technological structures. As we participate in those structures at a pre-rational level, we are being shaped and formed by them. […]

Tools are not just an expression of our desires, but they form our desires. Tools are not just an expression of our agency, but constrain and empower our agency. […]

One of the trends implicit in the technological structures of modernity is that they isolate us. They make it difficult to form the moral communities of deliberation and practice that can help us slow down, think, and make choices.


§ 003: The space trash apocalypse you haven’t been thinking about. Excellent discussion between Radha Mistry and Tobias Revell, primarily about the Kessler syndrome, which “describes a situation in which the density of objects in low Earth orbit (LEO) becomes so high due to space pollution that collisions between these objects cascade, exponentially increasing the amount of space debris over time.” Tobias wonders if we might not already be in it. Good point. Some crazy people seem to be working on it, since SpaceX “seeks authority from the FCC to launch and operate a constellation of up to one million satellites as orbital data centers.” As I said on Bluesky, “In my opinion, it’s not at all impossible that Melon Husk’s name ends up going down in history not as a real life Tony Stark, but as the guy who caused the Kessler effect that ruins space for everyone.”


§ Jeff Bezos, moral cretin. “No, the thing that has changed is that Jeff Bezos has developed a political agenda. He is on Team Billionaire. Team Billionaire thinks that billionaires are brilliant, wise, and omnicompetent. It can’t stomach leaving journalists in charge of the journalism, because surely the billionaire owner has better instincts and deeper insights. Team Billionaire thinks the public needs to stay in line and respect their betters. Team Billionaire thinks the government should stay on the sidelines (at least until its bailout time, that is).”


Futures, Fictions & Fabulations
  • The futures cone reimagined: A framework for critical and plural futures thinking. “This article critically re-examines the Futures Cone, a foundational but frequently misapplied tool in foresight practice. Often treated as a forecasting method or creative prompt, the Cone is reframed here as a relational and epistemic scaffold that only gains meaning through reflective, participatory processes.” That being said, we don’t talk enough about the future burrito.
  • Foresight 2026 Roland Berger China Annual Trends Report. “This report provides trend analysis and in-depth insights into key industries, such as Automotive, Civil Economics, Consumer Goods and Retail, Health, Energy, Industrial Products and Services, and Technology. Additionally, this year's report delves into several major hot topics, including China's potential unleashed in a new world order, artificial intelligence, Chinese companies' international expansion, transaction and investor services, new quality productive forces, and sustainability, aiming to stimulate thought and provide valuable insights to industry stakeholders.”
  • A Manifesto for Future Cities. “Reflections on future cities point to a deeper issue: cities are not only struggling with what kind of futures they are heading toward, but also with how to consciously move away from paths that no longer serve them and collectively define their future, with well-being emerging as a meaningful—yet hard to operationalize—compass for urban development.”
Algorithms, Automations & Augmentations
  • The Authoritarian Stack. “How tech billionaires are building a post-democratic America — and why Europe is next. … Under the banner of ‘patriotic tech’, this new bloc is building the infrastructure of control—clouds, AI, finance, drones, satellites—an integrated system we call the Authoritarian Stack. It is faster, ideological, and fully privatized: a regime where corporate boards, not public law, set the rules.”
  • International AI Safety Report 2026 | International AI Safety Report. “The second International AI Safety Report, published in February 2026, is the next iteration of the comprehensive review of latest scientific research on the capabilities and risks of general-purpose AI systems. Led by Turing Award winner Yoshua Bengio and authored by over 100 AI experts, the report is backed by over 30 countries and international organisations. It represents the largest global collaboration on AI safety to date.”
  • AI in science research boosts speed, limits scope. “As individual scholars soar through the academic ranks, science as a whole shrinks its curiosity. AI-heavy research covers less topical ground, clusters around the same data-rich problems, and sparks less follow-on engagement between studies.”
Built, Biosphere & Breakthroughs
Asides (archive week I guess?)
Why science fiction can’t predict the future ⊗ The straw, the siphon, and the sieve
Newsletter
No.388 — The Field Guide to Design Futures ⊗ Mozilla recruits partners to take on AI goliaths ⊗ Hamburg combats loneliness with ”culture buddies” ⊗ Art veteran uses Gen Z slang
Why science fiction can’t predict the future (and why that’s a good thing)

When I found it in the Reactor newsletter last week, I almost put this piece by Ken Liu in the futures “link block” below. Good thing I didn’t; it’s well worth featuring, and well worth the read. Sci-fi, technology, retro-futures, history, paths taken and not taken, metaphors. Lots woven in there.

Liu asserts that science fiction fails at prediction but succeeds as mythology. The genre’s abysmal track record—no flying cars, no post-nuclear Terminators, no sentient HAL 9000—doesn’t prevent its concepts from shaping how we talk about technology. Terms like “Big Brother” and “cyberspace” describe contemporary reality, but the real systems bear little resemblance to their fictional origins. Our surveillance state arose through voluntary privacy trades for convenience, cobbled together from tech companies, advertisers, governments, bots, bad laws, and our own imperfections. Orwell crafted a powerful metaphor, not an accurate prediction.

History is a stumble through competing possibilities. Around 1900, 40% of American cars ran on steam, 38% on electricity, 22% on gasoline. Steam seemed reliable, electricity had Edison backing better batteries, while gas cars were loud, dirty, and required dangerous hand-cranking. Hundreds of plausible reasons explain why gas won—cheap Texas oil, failed EV rental models, the electric starter, Henry Ford’s determination—but the actual sequence involved random events, wars, bankruptcies, and cultural shifts that nobody could foresee. We mistake the consequences of winning for its causes, weaving triumphalist narratives that make the present seem inevitable. Science fiction authors face an impossible task: they work before the breakthrough that decides which competing solution wins. They can only guess and construct plausible worlds around their guess, turning effects into causes through character arcs and moral resolution.

This failure becomes the genre’s strength. Science fiction creates myths—thinking machines questioning their makers, immortality through genetic code, autonomous houses—that give us tools to understand a world where technology dominates our evolutionary future. Liu quotes Le Guin: Mary Shelley released Frankenstein’s monster, and nobody has shut him out since. The monster sits in our modern living rooms because myths don’t vanish under scrutiny. Read as prediction, Frankenstein fails, but prophecy was never the point. Silvia Park’s robot children in Luminous offer another variation of the Abdicating Parent archetype. The specific scenario matters less than the framework it provides for understanding the fraught relationship between mortals seeking immortality through generation. Science fiction’s wrongness provides hope—there are no laws dictating the future, and dystopia arrives only if we build it. The metaphors endure long after predictions crumble, becoming vocabulary for making sense of an impossible present and constructing an unimaginable future.

The prospective view, in that moment before the breakthrough, when all the potential solutions are vying for attention, is completely different from the retrospective view, long after the breakthrough technology has transformed the world and secured its own triumphalist narrative. Survivorship bias, confirmation bias, selection bias, hindsight, narrative fallacy, wishful thinking, arrogance… there are countless names for the cognitive biases humans exhibit when we try to tell the story of the past from our place in the present, and we must constantly remind ourselves that the way it is is not the way it has to be. […]

By crafting entertaining stories, authors invent powerful metaphors that shape how we imagine our technological future and understand our technological reality. These metaphors are why science fiction matters. […]

These are all metaphors that allow us to make sense of a world in which the products of our imagination and craft, technology and invention, increasingly dominate not just our own evolutionary future, but the future of the planet as a whole. We live in a world in which the possibility field is growing ever grander, and new myths are needed to make sense of it. […]

These modern myths become part of our vocabulary, the framework and tools with which we make sense of the impossible present and then construct the unimaginable future.

Technology and wealth: the straw, the siphon, and the sieve

Nate Hagens also works with metaphors. In this essay he argues that technology doesn't create wealth (usable energy, organized matter, and the stocks and flows that make life possible, viable, and enjoyable), it extracts it faster. He uses three metaphors to explain how technology functions at scale. The straw represents how technology accelerates drawdown of natural resources, turning stocks into flows. Fracking exemplifies this: we access oil more quickly without finding more of it, getting closer to the slurping sound at the bottom of the milkshake. The siphon describes how gains concentrate as technology scales. Network effects and capital requirements favour early movers and large players, creating chokepoints that allow platform owners to extract value simply by controlling access. The sieve filters wealth away from other species and toward humans, particularly a small subset. The technosphere now outweighs all living biomass, redirecting 40% of Earth’s net primary productivity to human use.

Hagens extends this framework to debt, which functions as social technology that pulls future resources into present consumption while concentrating returns through interest payments. AI amplifies these dynamics in the cognitive realm. Like fossil fuels multiplying physical labour, AI scales pattern recognition and coordination at near-zero marginal cost once trained. This accelerates extraction of attention and decision space, enlarges the concentration of returns to platform owners, and optimises for current metrics without accounting for soil health or ecosystem stability. Hagens doesn’t claim to have solutions, and instead closes with ten questions about scale, responsibility, and what constitutes real wealth. These questions can help us investigate when tools become destabilising, who absorbs costs off balance sheets, and whether we’re borrowing from the future while calling it innovation. His framing suggests speed itself might be a risk variable rather than an unquestioned good.

What would our economy look like if “wealth” meant the continuity of flows rather than the liquidation of stocks? Sunlight, rain, soil fertility, functioning ecosystems – not just this quarter’s output. […]

What would change if we treated speed as a risk variable rather than an unquestioned virtue? What would shift if slower systems weren’t seen as failures, but as systems with time to notice mistakes? […]

But when a technology works, it spreads – especially in a globally-interconnected economy. When it spreads, it scales. And when technology scales across whole economies and decades, its role and impact changes. At the macro scale, technology acts as a set of tools that lets us pull “more” from the world per unit time as an economy and a species. […]

In that sense, AI steepens the same gradients we’ve already been riding, creating more throughput, more concentration, and less time and awareness to notice what’s being lost. […]

At what point does scale change the moral and physical meaning of a tool? When does something that helps at a village level become destabilizing at a planetary one?


§ Stubborn Optimism, tending your inner fire, and why hope is not enough, with Nate Hagens again, this time interviewing the fantastic and inspiring Xiye Bastida. I loved her views on activism, on centering nature, but also on futures. “So that’s one of my theories of change. If it doesn’t exist, I build it and I build it the way I would like to live in the future because I’m practicing the future today.”


Futures, Fictions & Fabulations
  • The Field Guide to Design Futures. Heck of a list of contributors + me ;-). “There is something inherently fascinating yet reassuring about manuals. They promise results as long as you follow steps and recipes that you can easily replicate and apply every time you need them. The Field Guide to Design Futures works on a different premise: you build your own understanding of what Design Futures and futures thinking are and design accordingly by selecting, assembling, scraping, and skimming different entries, voices, and contributions that make up this volume.”
  • What’s between, between? “The exhibition takes Gulf Futurism as its starting point—a term that emerged to describe the unique experience of rapid transformation across the Arabian Peninsula, where hyper-modernization and clashing visual cultures create a distinctive sense of living between multiple temporalities. It captures the dizzying collision of histories with futures, luxury malls alongside desert landscapes, and centuries-old traditions coexisting with cutting-edge technology.”
  • The Future 100: 2026. “Amidst this ‘metamorphic’ current, the desire for human connection remains unmistakable. In the year ahead, human impulse will shape brand strategies, influencing the top marketing trends in 2026, and draw people back to immersive, high-impact experiences that demonstrate the value of authenticity and unlock infinite possibilities.”
Algorithms, Automations & Augmentations
  • Mozilla recruits partners to take on AI goliaths. “The company is putting together ‘a rebel alliance of sorts,’ … The goal is to make AI more trustworthy while offering a counter to massive players like OpenAI and Anthropic. ‘It’s that spirit that a bunch of people are banding together to create something good in the world and take on this thing that threatens us, [i]t’s super corny, but people totally get it.’”
  • How the world lives with AI: findings from a year of global dialogues. “Through seven rounds of deliberation with more than 6,000 people across 70 countries, we’ve built recurring infrastructure to learn how the world actually lives with AI—what people use it for, whether they trust it, and how it is changing their daily lives.”
  • Why India’s plan to make AI companies pay for training data should go global. “A license fee for the use of copyrighted data can compensate creators and help AI companies avoid lengthy legal fights.”
Built, Biosphere & Breakthroughs Asides
  • National Gallery of Art veteran uses Gen Z slang in viral videos. Legend! “Myers and Mary King, the museum’s social media copywriter, wrote a script by pulling words from a spreadsheet they created full of Gen Z jargon. … Luchs speaks five languages: English, French, Italian, and some German and Russian. She approached grasping Gen Z parlance like she was learning another language. King coached Luchs in pronouncing the words.”
  • Thousands of Chinese fishing boats quietly form vast sea barriers. “China quietly mobilized thousands of fishing boats twice in recent weeks to form massive floating barriers of at least 200 miles long, showing a new level of coordination that could give Beijing more ways to impose control in contested seas.”
  • “Doing Is Living” highlights five decades of Ruth Asawa’s biomorphic wire sculptures. “‘I study nature and a lot of these forms come from observing plants,’ Asawa said in a 1995 interview. ‘I really look at nature, and I just do it as I see it. I draw something on paper. And then I am able to take a wire line and go into the air and define the air without stealing it from anyone.’”
The mythology of conscious AI ⊗ The discourse is a Distributed Denial-of-Service attack
Newsletter
No.387 — Research as a form of pattern disruption ⊗ 10 things I learned from burning myself out with AI coding agents ⊗ Kim Stanley Robinson, science fiction maestro and utopian ⊗ What our Blue Planet really looks like

A high proportion of the positive recommendations I get about this newsletter go something like this: “I look forward to your Sentiers each Sunday. It is something I enjoy and find inspiration from over my morning coffee!” (Thanks Todd!) So, make a cup of your favourite coffee, find a comfortable seat, and enjoy the reads, I think both features hit entirely different but important topics to ponder. As always, replies are more than welcome and yes, if you have a moment to share with a friend or colleague, every bit helps.

To expand a bit on the comments linked above, maybe James Hoffmann needs to have some kind of curated list of recommended reads to enjoy coffee with. An entirely self-serving idea, I know. I’d both love to be on the list and read through it.


The mythology of conscious AI

Long, fascinating piece by neuroscientist Anil Seth. First, a slight side trip. A couple of weeks ago, I shared a post where Robin Sloan argued that AGI has already arrived. There was a bit of hubbub in the replies in my inbox. A lot of the discourse had to do with one of two things. The first was using the “historical” understanding of AGI, which has roughly meant human-equivalent and was not what Robin wrote about. The second has to do with language, i.e. what we use the word “intelligence” for. Reading this mythology piece, I realised (finally?) that many of us—possibly most of us, if we stopped to think about it—tend to equate intelligence with consciousness, or at least hold the two so close together in our minds that when thinking about the I in AI we actually have both words in mind. They are, of course, not the same. “Intelligence is the ability to achieve complex goals by flexible means,” while “consciousness, in contrast to intelligence, is mostly about being.”

Back to the piece itself, Seth argues that conscious AI remains highly unlikely because consciousness probably requires biological life, not just computation. He challenges computational functionalism—the assumption that implementing the right algorithms suffices for consciousness—by demonstrating how psychological biases (anthropomorphism, the seductive power of language like “AI hallucinations”), the brain-as-computer metaphor’s limitations, and the existence of non-computational processes (continuous dynamics, stochastic phenomena, electromagnetic effects) all undermine claims that silicon can replicate consciousness. Real brains exhibit deep multiscale integration where individual neurons engage in metabolism and self-maintenance that resist clean separation between function and substrate. One insight: every acknowledged conscious entity is alive, suggesting consciousness connects fundamentally to biological self-regulation and the thermodynamic imperative to resist entropy rather than abstract information processing.

Beyond the scientific argument, Seth offers a cultural critique of Silicon Valley’s conscious AI pursuit. He identifies how financial incentives drive some researchers’ enthusiasm for machine consciousness rather than rigorous evidence, framing the entire enterprise as “techno-rapture”—a Promethean fantasy about transcending biological limits and escaping mortality. This mythology exploits exponential growth rhetoric to create psychological pressure toward believing imminent breakthroughs despite scant evidence. His simulation-versus-instantiation distinction clarifies the stakes: computational models lack the causal powers of what they model, just as simulating digestion doesn’t digest.

I’d draw your attention to something he says in passing, where he identifies a practical implication that might be more pressing than the theoretical debate: “even if we know, or believe, that an AI is not conscious, we still might be unable to resist feeling that it is. Illusions of artificial consciousness might be as impenetrable to our minds as some visual illusions.” Meaning that the appearance of consciousness could prove “good enough” and fundamentally shape human-AI interaction, regardless of the underlying reality.

But here’s where the trouble starts. Inside a brain, there’s no sharp separation between “mindware” and “wetware” as there is between software and hardware in a computer. The more you delve into the intricacies of the biological brain, the more you realize how rich and dynamic it is, compared to the dead sand of silicon. […]

There is, therefore, something of a contradiction lurking for those who invest their dreams and their venture capital into the prospect of uploading their conscious minds into exquisitely detailed simulations of their brains, so that they can exist forever in silicon rapture. If an exquisitely detailed brain model is needed, then you are no more likely to exist in the simulation than a hailstorm is likely to arise inside the computers of the U.K. meteorological office. […]

Evidence that the materiality of the brain matters for its function is evidence against the idea that digital computation is all that counts, which in turn is evidence against computational functionalism. […]

Computational simulations generally lack the causal powers and intrinsic properties of the things being simulated. A simulation of the digestive system does not actually digest anything. A simulation of a rainstorm does not make anything actually wet. If we simulate a living creature, we have not created life.

The discourse is a Distributed Denial-of-Service attack

Great piece by Joan Westenberg, and at the same time an example of part of the problem. What she’s writing about isn’t new; I’ve myself used the DDoS analogy multiple times in trying to describe our current information predicament. But, as she explains, understanding something, and not just having an opinion on it, takes time. Thus, she’s “late” to the issue. However, here “late” actually means a well argued point, a strong position, and a reflection you should take the time to sit with.

Westenberg argues that endless controversies exhaust our collective cognitive capacity through sheer volume, preventing the sustained attention required for deliberative thinking. By the time we marshal resources for careful analysis, the conversation has moved on and we’re already several outrages behind, forcing us into permanent reactive mode. False information spreads faster—70% more likely to be retweeted—precisely because it’s simpler and emotionally compelling, whilst truth requires cognitive work we cannot afford under this constant bombardment.

The discourse transforms understanding into mere positioning, rewarding confidence over competence, gradually rewiring participants into incapable thinkers unable to step back from the flood. Just as you’re not in traffic, you are traffic, we now also “do this shit to ourselves. We are our own botnet.” According to the author, the only viable response is deliberately stepping back to deeply understand one topic rather than frantically positioning on everything, reclaiming the capacity for actual thought over perpetual reaction.

The discourse takes the most important problem of our time and converts it into an infinite series of tribal skirmishes, each of which generates heat and engagement while bringing us no closer to answering any of the actual hard questions. […]

You can have a position on something without understanding it, and you can understand something without having a confident position on it. […]

The philosopher Bertrand Russell remarked that the fundamental cause of trouble in the world is that the stupid are cocksure while the intelligent are full of doubt. […]

When many ideas compete for limited attention, the ideas that are best at capturing attention win, and those that aren’t good at it die out. This creates selection pressure toward attention-grabbing content, which tends to be extreme, emotional, simple, tribal, and visceral. The ideas that survive aren’t the most true or useful. They’re the most viral. […]

But the discourse hates expertise. Or rather, it puts experts in an impossible position. To engage with the discourse, an expert has to compress their nuanced understanding into takes that can compete with the confident nonsense being spouted by random accounts with anime avatars.


Futures, Fictions & Fabulations
  • Research as a form of pattern disruption. “To spot weird signals, you need to go down rabbit holes. Follow your intuition. And remember, pursuing rabbit holes is not always an act of procrastination. Sometimes, it’s simply your mind telling you to follow your curiosity. Weirdness can present itself at any given moment, through any medium.”
  • Three Narratives for the Future of Work. “That is why, when asked whether I am optimistic or worried about the future of work, my answer is deliberately uncomfortable: I refuse the binary. I do not think we should be ‘optimists’ or ‘pessimists.’ We should be prepared.”
  • CES 2026 trends. “Explore VML’s top takeaways from CES 2026 – from AI and humanoids to health spans and wearable tech that’s shaping the future”
Algorithms, Automations & Augmentations
  • 10 things I learned from burning myself out with AI coding agents. “Fifty projects later, I’ll be frank: I have not had this much fun with a computer since I learned BASIC on my Apple II Plus when I was 9 years old. This opinion comes not as an endorsement but as personal experience: I voluntarily undertook this project, and I paid out of pocket for both OpenAI and Anthropic’s premium AI plans.”
  • Anthropic Economic Index report: Economic primitives. “These ‘primitives’—simple, foundational measures of how Claude is used, which we generate by asking Claude specific questions about anonymized Claude.ai and first-party (1P) API transcripts—cover five dimensions relevant to AI’s economic impact: user and AI skills, how complex tasks are, the degree of autonomy afforded to Claude, how successful Claude is, and whether Claude is used for personal, educational, or work purposes.”
  • OpenAI to test ads in ChatGPT as it burns through billions. Enshittification proceeding as expected. “The move represents a reversal for CEO Sam Altman, who in 2024 described advertising in ChatGPT as a ‘last resort’ and expressed concerns that ads could erode user trust, although he did not completely rule out the possibility at the time.”
Built, Biosphere & Breakthroughs Asides
  • This Ocean Map shows what our Blue Planet really looks like. “The world is 71 per cent ocean, but you wouldn’t know it from looking at a standard world map, what’s great about the new Ocean Map is that it encourages us to consider the world from a different perspective, one which reclaims the importance of the ocean on which we all depend.” (I looked for this map after seeing this visualisation of Earth’s surface, which was linked in the Robinson interview above.)
  • Powering change: a visual journey into China’s green transition. “The exhibition showcased aerial photographs of China’s renewable energy landscape—solar farms, wind turbines, and hybrid energy projects—alongside stories of people and communities living amid the country’s massive energy transformation.”
  • Back-scratching bovine leads scientists to reassess intelligence of cows. Missed opportunity by the titler, this should have been “Back-scratching cow leads to head-scratching scientists.” “Scientists have been forced to rethink the intelligence of cattle after an Austrian cow named Veronika displayed an impressive – and until now undocumented – knack for tool use.”
The post-American internet ⊗ Scanning the present through a polycrisis lens
Newsletter
No.386 — AI is changing the physics of collective intelligence ⊗ When the future isn’t somewhere else ⊗ Claude in healthcare and the life sciences ⊗ An enzyme that can break down polyurethane ⊗ 7,000-year-old underwater wall

In this long read, Cory Doctorow uses his colourful, lively, and zinger-filled writing to explain his vision for a “project of building a post-American internet: a project to reduce tech debt, to unlock America’s monopoly trillions and divide them among the world’s entrepreneurs (for whom they represent untold profits), and the world’s technology users (for whom they represent untold savings); all while building resiliency and sovereignty.” He says he is simply hopeful that this scenario can happen, but at the same time paints a portrait that is almost utopian. On the other hand, his opinion of bosses re AI, and of corporations writ large, is bleak af—I don’t necessarily disagree, it just might be a bit too strident to convince some people.

In between those (to me) extremes, Doctorow makes an excellent argument “that we’ve got a new coalition in the War on General Purpose Computers: a coalition that includes the digital rights activists who’ve been on the lines for decades, but also people who want to turn America’s Big Tech trillions into billions for their own economy, and national security hawks who are quite rightly worried about digital sovereignty.”

He identifies multiple vulnerabilities in the current US-dominated internet architecture that the TACO’s chaos has exposed. There’s the legal infrastructure: anticircumvention laws like the DMCA’s Section 1201, which the US Trade Representative forced on the world through trade agreements, making it illegal to jailbreak devices or build interoperable alternatives without manufacturer permission. There’s the physical infrastructure: most transoceanic fibre optic cables make landfall in the US, where the NSA has been tapping them since at least 2006—a “hub-and-spoke” topology that only worked when the world trusted America not to abuse its centrality. And there’s the software infrastructure: the world’s governments and businesses locked into US cloud services that can be weaponized (as Microsoft allegedly did to the ICC) or remotely disabled (as John Deere did to looted Ukrainian tractors).

Doctorow’s solution threads through all three: repeal anticircumvention laws to enable adversarial interoperability and migration tools, then build digital sovereignty on open-source, commons-based software, like the EU’s Eurostack project, that can be collectively maintained, audited, and localized by institutions worldwide. The door is “open a crack,” he argues, and the first country to walk through it becomes the “disenshittification nation” supplying freedom-enhancing tools to the rest of the world. (Go Canada! <dubious emoji />)

I assume you’ve spotted the pattern by now: the US trade representative has forced every one of its trading partners to adopt anticircumvention law, to facilitate the extraction of their own people’s data and money by American firms. But of course, that only raises a further question: Why would every other country in the world agree to let America steal its own people’s money and data, and block its domestic tech sector from making interoperable products that would prevent this theft? […]

This has been a long time coming. Since the post-war settlement, the world has treated the US as a neutral platform, a trustworthy and stable maintainer of critical systems for global interchange, what the political scientists Henry Farrell and Abraham Newman call the “Underground Empire.” But over the past 15 years, the US has systematically shattered global trust in its institutions, a process that only accelerated under Trump. […]

There’s one post-American system that’s easy to imagine. The project to rip out all the cloud connected, backdoored, untrustworthy black boxes that power our institutions, our medical implants, our vehicles and our tractors; and replace it with collectively maintained, open, free, trustworthy, auditable code. […]

Bosses like to tell themselves that they’re in the driver’s seat, but really, they fear that they’re strapped into the back seat playing with a Fisher Price steering wheel. For them, AI is a way to wire the toy steering wheel directly into the company’s drive-train. It’s the realization of the fantasy of a company without workers. […]

Let’s call time on enshittification. Let’s seize the means of computation. Let’s build the drop-in, free, open, auditable alternatives to the services and firmware we rely on.

Related → In The dream of the universal library, Monica Westin examines another early-2000s digital promise that failed to materialize: the universal library. Like Doctorow’s post-American internet, the dream of making every digitized book accessible online was blocked by copyright law—specifically, the 70% of scanned books trapped in legal limbo (under copyright but commercially unavailable). Westin proposes practical licensing reforms similar to what the EU implemented in 2019, rather than overhauling copyright entirely. She highlights a bitter irony: that these books are likely fully accessible to train Google’s LLMs while remaining locked away from human readers. She concludes that “the universal library is near, but it’s up to us to ensure that humans, not just AIs, have a card.”

Scanning the present through a polycrisis lens

Bryan Alexander revisits the concept of a polycrisis, where multiple overlapping crises—such as demographic shifts, climate change, geopolitical tensions, and technological developments—interact and exacerbate one another. He illustrates this with current examples like protests in Iran, immigration challenges in the US and Europe, and the US seizure of Venezuela’s president, showing how these crises intertwine with national power struggles and broader global trends. Alexander highlights how technology, including AI and communication tools like Starlink, plays a significant role in these events, influencing both governments and opposition forces. He concludes that national elites are increasingly struggling to maintain control amid these compounded pressures, which may lead to a more inward, locally focused mindset worldwide.

Demographic changes can drive foreign policy, such as opening borders to immigration or waging wars before the fighting-age population drops below a certain level. […]

I also infer a connection to anxieties about the demographic transition in the right’s doubling down on a fierce masculinity as part of traditional gender roles, with the call for women to have more children. We can add an intra-elite competition as well, as the two leading American political parties fought over immigration policy. Macroeconomic factors also powered this story, as sending nations’ economies offered fewer opportunities, while Americans tended to avoid the hands-on jobs immigrants perform.

Related → At one point, Alexander writes: “One difference from the American case is that the European economy has been struggling.” I’d point you, and him, to this by Gabriel Zucman: The idea of a sclerotic Europe facing an American El Dorado has little basis in fact. “The US produces $81 in gross value per hour worked, but at a particularly high environmental cost. The EU produces $71 per hour, yet with far lower carbon emissions. More leisure time, better health outcomes, greater equality and lower carbon emissions, all with broadly comparable productivity: Europeans can be proud of their development model, which is, on the whole, far more compelling.” 🗄️


§ AI is changing the physics of collective intelligence—how do we respond? “Inside collaborative processes, AI can capture rich, real-time transcripts of discussions; distill arguments, rationales, and assumptions into structured forms; and track who said what, linked to which evidence, with what level of agreement. This makes the tacit more legible.”


Futures, Fictions & Fabulations
  • When the future isn’t somewhere else. “In futures work, we talk about weak signals—early indicators that point to larger systemic shifts. Immigrant communities have not been offering weak signals. They have been offering strong, persistent evidence of a system under strain. The failure here was not a lack of foresight. It was the decision to treat lived experience as anecdotal rather than authoritative.”
  • Motion to redefine VUCA as Violent, Unfair, Confusing, and Absurd. It’s a one-phrase post by Jorge Camacho on LinkedIn, then amended to Dark VUCA. Not much to read, but as I commented, I think it’s very much on point, so there you go, an updated VUCA (volatility, uncertainty, complexity and ambiguity).
  • Eventbrite Social Study 2026: Event trends shaping the year ahead. “Reset to Real captures a cultural shift back to live experiences that feel human, unfiltered, and alive. See what’s driving the next wave of IRL culture.”
Algorithms, Automations & Augmentations
  • Advancing Claude in healthcare and the life sciences. “A complementary set of tools and resources that allow healthcare providers, payers, and consumers to use Claude for medical purposes through HIPAA-ready products. Second, we’re adding new capabilities for life sciences: connecting Claude to more scientific platforms, and helping it provide greater support in areas ranging from clinical trial management to regulatory operations.”
  • Brands say Amazon’s ‘Buy for Me’ is listing products without permission. No shame. “‘Buy For Me’ uses ‘agentic AI capabilities’ to provide third-party websites with shoppers’ encrypted payment and shipping information, according to Amazon. Still, several merchants said that, to shoppers accustomed to scrolling Amazon’s marketplace, the listings can resemble a typical Amazon product page, potentially giving the impression that a brand is selling directly on Amazon, even if the transaction ultimately happens elsewhere.”
  • Microsoft responds to AI data center revolt, vowing to cover full power costs and reject local tax breaks. I’m not holding my breath. “The new plan, announced Tuesday morning in Washington, D.C, includes pledges to pay the company’s full power costs, reject local property tax breaks, replenish more water than it uses, train local workers, and invest in AI education and community programs.”
Built, Biosphere & Breakthroughs Asides
  • 7,000-year-old underwater wall raises questions about ancient engineering. “Scientists have identified a stone wall nearly 400 feet long, lying 30 feet beneath the surface of the Atlantic Ocean. It was built by hunter-gatherers more than 7,000 years ago, though its purpose remains uncertain. Researchers speculate it may be the source of local legends of a city swallowed by the sea.” (Via Kottke.)
  • Thousands of dinosaur footprints discovered in remote Italian Alps. “A wildlife photographer who was exploring a remote pocket of the Italian Alps has discovered thousands of dinosaur footprints preserved in the vertical face of a mountainside.”
  • Sesame Street refresh with David Gallo. “Scenic designer David Gallo takes us behind the scenes of his work refreshing Sesame Street, balancing history with modern storytelling. From relocating Oscar the Grouch to uncovering forgotten set details, Gallo shares how he approached updating an iconic space while preserving its magic.”