
Wild Information

New writing from Claire L. Evans.


Posts

Witness at the End of Time
On the death of an ancient tree, the death of images, and the persistence of softness.
The end of history.

Last week I drove up to San Francisco with some friends for the Game Developers Conference (a massive, and frankly terrifying, video game industry gathering). Along the way we stopped to juice up the Volkswagen, and, killing the long half-hour it takes to replenish an electric car, kicked around a Western-themed truck stop. Its entertainments included not one but three animatronic shooting galleries, a “farmtique” selling salvaged road-signs and hand-painted Cheugiana, and a truckbed-sized cross-section of ancient downed sequoia, varnished to a coffee table shine.

Fitting with the unspecified but palpable Biblical mood of the place, the tree’s rings were marked with the birth of Jesus, the fall of the Roman Empire, a long, unremarked-upon Dark Ages, American Independence and the death of the tree itself.

1776 AD: American Independence

1952 AD: Tree Cut Down

I liked that. According to the Bravoland truck stop in Kettleman City, California, the death of this tree was as important a historical event as the birth of Jesus or the founding of the republic on which it stood. These kinds of displays are common in Northern California; since the early 19th century, the ancients have been flayed, hollowed, and sliced for public consumption. If we’re to believe the markings on their terminal rings, the timeline of human history ends with the tip of the lumberman’s axe.

I’ve written before about how difficult it is to see a tree—to synchronize with arboreal time from ground level. The maximum lifespan of a sequoia is 3,500 years. It’s understandable to want to annotate that time with more knowable markers (and we do something similar with size: the Tall Tree in Humboldt County, for example, is 70 feet taller than the Statue of Liberty). But this sequoia wasn’t present for the birth of Jesus or the first musket-shots of the American Revolutionary War. It was just alive at the same time. An indirect form of bearing witness. If people had rings, in my innards there would be pins marking “Invention of the World Wide Web” or “Gulf War.” Instead my cells have turned over a thousand times and all I have are my memories.

My favorite book about California is Jared Farmer’s magisterial Trees in Paradise, which examines the state through the lens of redwood, eucalyptus, orange, and palm trees (although real heads know the palm is closer kin to the grasses than to any true tree). Farmer argues that the ancient redwood and sequoia groves of Northern California provided an appealing “landscape of antiquity” to white Protestant colonizers. “For a young nation insecure about its cultural position relative to Europe,” he writes, “natural scenery offered something vaguely compensatory if not commensurate with ruins, myths, and epics.”

Unable (or unwilling) to engage with the existing myths and epics of the Indigenous people of the Sierras, 19th century Americans instead benchmarked the ancientness of the indigenous trees against Homer, Aristotle, Copernicus, Jesus. And then they cut them all down. They burdened trees with the weight of history and then, so divested, declared that history over, sawing it off at the roots. From what remained, they built a “new” world. Here it is. Slices of the ancient world persist, but they’re shellacked and mounted to the wall, witnesses to our dark age and a reminder of the time destroyed.


Back in LA, we went to a lecture about the death of images. In it the graphic designer David Rudnick made an argument (more elegant than my recollection of it) that images have historically held a shared narrative value, and that in an era of high-speed, bespoke image generation, these slower shared narratives will erode, and with them the image’s role as a basic unit of culture. Rudnick spoke derisively about images intended for an audience of one—those solipsistic products of the prompt. In the past, images seen by only one person were a sign of madness. I don’t disagree, although it had me thinking about what more steadfast shared referents might be.

There are those lasting mental images we experience alone but together, like migraine phenomena, shared hallucinations, or the symbology of dreams (admittedly, all their own signs of madness). Or perhaps material can be a referent. The wood that was once a tree; the microchip that was once sand. The tree that’s still standing. Maybe we can build something like culture from a shared knowledge of the origins of things and their fates. Maybe that’s where culture began. After all, a tree receives as much meaning as it imbues. Like an image, it’s a carrier for history. This makes the woodcutter’s crime a crime against narrative as much as it is a crime against nature.

As Laura Tripaldi observes in her excellent book Parallel Minds, our conception of prehistory as an age of jagged flint and stone is a consequence of the relative permanence of those materials. Because rocks last, we assume they’re all there ever was, ignoring the really transformative technologies of early human culture, which have long since returned to the Earth: the proto-software of textile weaving, the advanced chemistry of fermentation and pigment. This material bias extends forwards in time as well. We assume the machinations of silicon and rare-earth minerals to be eternal. But it’s the soft technologies that have persisted, and will persist.


Compost After Reading, which I don’t plan on composting anytime soon.

Speaking of soft technologies and the power of the biodegradable, my friend Cass Marketos, an artist and community composter, has just published a manifesto and how-to guide called Compost This Book. Cass has an expansive practice. As an artist, she’s composted memories and ideas; as a composter, she is practical, improvisatory, and welcoming. Her hands are always dirty with good Earth. If you like my work, you will love hers. She also writes an excellent newsletter called The Rot.


A bit of news:

We went to the video game conference because Blippo+ was up for four IGF Awards: the second-most nominated game of the year! We didn’t win, but no matter. That our rogue media experiment received the highest recognition in the industry feels like a triumph in itself. Blippo+ is available to play on Nintendo Switch, PC, and now Mac.

I’m also tickled to say that Blippo+ won the Herman Melville Award for Best Writing at the New York Game Awards earlier this year. I’ve never written a video game before!

A few upcoming events of note:

March 27: I’ll be back in San Francisco next week for Gray Area’s annual Algorithmic Art Assembly. I’m giving a talk on brainless cognition—likely the least algorithmic subject on offer that weekend. In related news, a friend told me that folks in the know are now referring to “the intersection of art and technology” as just The Intersection.

March 28: There will be a party in Los Angeles to celebrate the publication of the catalogue for Emergence, a 2024 Fathomers exhibition exploring The Intersection of art and synthetic biology (I wrote an essay about one of the pieces in the show, a human tear gland “organoid” that cried tiny tears throughout the opening). It’ll be a science rager, with a chanting astrophysicist, a “Petri DJ,” and a dance performance about collapsed stars. I’ve been invited to end the evening with a biology-inspired DJ set, so expect goopy tunes and Barbara McClintock soundbites. If you’d like to come, enter the code AFTERLIFE50 for half-off tickets.

xo

Claire

https://clairelevans.substack.com/p/witness-at-the-end-of-time
Those Curious Naturalists
To close the year, a long meander on the dark edges of human-animal cohabitation.
Konrad Lorenz with his greylag geese.

It all started during my lucid dream phase. Hunting for references to lucid dreaming in literature and philosophy, I found someone’s dissertation, which referenced an obscure story by the British writer Lawrence Durrell. I’d never heard of Durrell, but his Alexandria Quartet of novels sounded interesting: four interleaving accounts of the same story, set in Alexandria, Egypt, in the 1930s.

I read the first, Justine, and from there I was briefly Alexandria-mad: I couldn’t get enough of Rue Rosette and the Grande Corniche, Lake Mareotis, the “Canopic mouth” of the Nile, and the dusky nights Durrell described in shades of mauve. As a chaser, I read the poet C. P. Cavafy, a longtime Alexandrian, and E.M. Forster’s chatty guidebook to the city, where he was stationed as a Red Cross volunteer during WWI.

Having dispatched all of Durrell’s Alexandria books, I eventually found my way to his daughter, Sappho, a playwright who died by suicide at the age of 34. Granta published some of her journals in the early 1990s, in which she makes multiple allusions to childhood sexual abuse. “I want to play around with the idea of parricide—not in general but in specific,” she wrote, in the very first entry. “Vis à vis my father.”

So that was that for my Durrell era. Or so I thought.

Jacquie Durrell with Cholmondely.

Three weeks later I was in a thrift shop (as ever) and came across an odd little paperback by a Jacquie Durrell, who I learned was the ex-wife of Lawrence’s brother, Gerald, a famous naturalist. Lawrence, as it turned out, was not the most notorious Durrell; Gerald, OBE, wrote bestsellers too, including My Family and Other Animals, an account of his childhood with his bookish brother “Larry” on the island of Corfu.

Gerald’s dream, in those early years, was to own a zoo; in the late 1940s, he made it happen, using his inheritance to mount animal-sourcing expeditions to Britain’s colonial territories in Cameroon and Guyana. He further subsidized his journeys into equatorial Africa by publishing travelogues dramatizing his negotiations with tribal leaders and the madcap bush expeditions it took to fill ships with angwantibos, macaws, and skinks (a large percentage of these animals, injured by the traps set to ensnare them and lacking appropriate care, died on the journey back to England).

Jacquie’s book, Beasts in my Bed, is something of a coda to her husband’s accounts; a wife’s-eye view of the dubious, dusty business of animal-sourcing, complete with patronizing footnotes from Gerald (“women do like to exaggerate” — GD). Back in Bournemouth, they kept the animals at home while they searched for a suitable site for what eventually became the Jersey Zoo. Bush-babies and squirrels cozied together in the garage and Cholmondely, a chimpanzee in diapers, slept in the Durrells’ bedroom, where he swung from the drapes and “learnt to accept light, noise, cigarette smoke and anything else without detriment to his health or well-being.” Okay.

I’ve always been of two minds about zoos. They shouldn’t exist, but there is something about a kid seeing a giraffe with their own eyes, especially now, in our age of illusion, for them to know that such an animal is real and worthy of protection, out there. Maybe, for an animal born in captivity, that’s the best you can hope for. But stealing animals from stolen land, as the Durrells did, is indefensible. What struck me, reading Beasts in my Bed, was the entitlement. The animals were theirs for the taking. Each in a box, to be fed cut fruit and second-rate steaks for the rest of their days. This was a better life than their birthright. “Contrary to the popular belief,” writes Jacquie Durrell, animals do not “live in a Utopian state where all their whims and fancies are catered for; in fact some of them were in appalling condition when they came to us.”

Slightly more charming, in this genre, are the writings of the Austrian animal ethologist Konrad Lorenz, whose 1952 book King Solomon’s Ring was another thrift store buy for me: a paperback with illustrated marginalia of creatures great and small. Lorenz comes off as more Dr. Dolittle than Dr. Durrell. From his perch on the banks of the Danube, an “island of wilderness in the middle of Lower Austria,” he made his own home a zoo: tame rats darted underfoot throughout the house, nipping “neat little circular pieces” from the linens to pillow their nests, and a gaggle of greylag geese, imprinted on Lorenz as their disproportionate but loving parent, lounged in the flowerbeds by day and the bedroom by night. In this way Lorenz did his most famous work, a study of the birds’ instinctive bonds—by becoming father goose.

The scientific logic for this unorthodox cohabitation was that “captivity cages minds as well as bodies,” and animal behavior is best observed without the intermediation of the cage. Lorenz allowed animals free rein in his house so that he could watch how their minds worried the world—how they solved problems, formed bonds, and overcame, in time, their fear of man. There’s no question he formed deep attachments with these creatures; on hands and knees, in his best approximation of a waddle, he daily led his goslings to water. But something about his writing troubled me, too: a certain incongruity about what it means to be “wild” or “free.” After all, geese running loose in the larder are still captive. Is it such an inconvenience to go to where they live?

The nagging sense I had, reading my thrift-store Lorenz, was that the animals in his menagerie were not being understood on their own terms. Peeled off and delaminated from their world, they were instrumentalized in service of something I couldn’t quite place. Narrative, scientific notoriety, or ideology? It didn’t take much research for me to find the answer. Lorenz, although rehabilitated to the point of receiving a Nobel Prize in 1973, had been, before the War, an unguarded supporter of National Socialism, and used his theories about animal behavior to justify Nazi policy.

According to the German historian Ute Deichmann, Lorenz published several papers in the early 1940s arguing that domesticated hybridized geese, cut off from the harsh winnowing of natural selection, were destined to become evolutionary degenerates, and that “cultured peoples,” having attained a certain stage of civilization, were susceptible to the same “physical and moral manifestations of decay.” That is, he believed that interbreeding weakened races just as it dimmed the natural instincts of wild birds. We know where such logic leads; during the War, Lorenz assisted the Nazi psychologist Rudolf Hippius in racial studies on humans in occupied Poland.

Far more competent scholars than me have explored this unsettling subject (Deichmann’s Biologists Under Hitler, in particular, is an enlightening read). I just find it interesting what the keeping of animals can reveal. It isn’t neutral. In the case of the Durrells, for all their later emphasis on conservation, a zoo served as a living index of a violent empire; for Konrad Lorenz, close communion with waterfowl fed a scientific justification for eugenics. In neither instance were animals intentionally mistreated. I think they were even loved. I guess it comes down to why they were loved, and how.

Illustrations from Niko Tinbergen’s Kleew: The Story of a Herring Gull (1947).

Was it a possessive love, the patriarchal kind? A love born from the misplaced sense that “Man” can do better than “Nature” in making a creature whole? One warped by ideas of hierarchy and racial purity? Ironically, Lorenz shared his Nobel with Niko Tinbergen, an ornithologist who spent two years in a Nazi hostage camp for his role in the Dutch resistance. One of the most striking documents in Deichmann’s book is a letter Tinbergen wrote just after the war to a colleague in the States, accounting for who survived and who was lost. “Many of us have been imprisoned in some way or another,” he wrote. “Our government will demand from everyone a declaration: ‘have you been in prison during the war? If not, why not?’”

By 1945, Tinbergen was eager to get back to his birds. He believed in unfettered observation, in field work. His mantra was “watch and wonder.” Not much time for that during the war. Still, he managed to write a small book about bird sociology, and, while in the camp, some illustrated stories about a herring gull, which eventually became a beloved children’s book in the Netherlands. What a contrast: Tinbergen imprisoned, dreaming of gulls, the free-flying seabirds of the North Sea. Lorenz walking free, twisting his own observations of birds into a justification for the worst kind of containment and brutality. And later, exploiting their imprinting instinct to form, in the name of science, a twisted patriarchy. To quote Sappho Durrell, on her own father: “I will always have his ego between me and the world, and my surroundings will be as dry as dust.”

Lorenz (left) and Tinbergen (right) eventually reconciled; although immediately after the war, Tinbergen wrote that it would take time for “the wounds of our souls” to heal. Photo courtesy of the Max Planck Society.

Maybe you’re wondering what any of this has to do with technology, or microbiology, or any of the things this newsletter is ostensibly about. Nothing, really, beyond the fact that it’s bracing to chase curiosity, and to honor those weird streaks through books and happenstance where it feels, for a time, like everything is connected. But I do think that these questions of how we regard the living world, and where we stake our vantage over it, are always relevant.

A few weeks ago I had a conversation with a microbiologist who mentioned, in passing, a foundational conflict in his field, the divide between the Koch and Winogradsky models of etiology. Robert Koch, the 19th century biologist whose work provided the basis for the germ theory of disease, saw microbes as immutable pathogens, each tethered in a direct causative relationship with a specific disease. In the lab, he sought to inoculate “pure cultures” free of any “uninvited guests.” Sergei Winogradsky, a Ukrainian-born microbial ecologist, took a different view: that microbes, like all creatures, lived in variegated and unruly constellations, in nomadic communities rife with competition and collaboration. He preferred to investigate them as closely to their natural habitat as possible, engaged “in the life contest with other microbes.”

Microbiologists still disagree about how best to study their subjects: isolated in a Petri dish, like zoo animals? Or together, as gregarious players in a lively ecology? In 1958, Niko Tinbergen wrote a lovely book, Curious Naturalists, in defense of naturalistic study, his mode of watching and wondering over life. “The biologist has to be aware that he is studying, and temporarily isolating for the purposes of analysis, adaptive systems with very special functions—and not mere bits,” he wrote. These living systems act on us as much as we act on them, and in the case of many microbial communities, they sustain our very existence. In light of this mutual vulnerability, it’s hard to distinguish the animals from their keepers. I’m happy with that.


Finally, a few recommendations for your weekend:

  • This marvelous primer on Victorian-era acoustics experiments, from the always impressive Public Domain Review. I also really love the Public Domain Image Archive, a bounty of early scientific imagery, Medieval illustrations, and other visual oddities. A collection of 16th century perspective drawings of geometrical figures, with snails and birds included for scale? Yes.

  • This 1973 John McPhee piece, Travels in Georgia, about two scrappy biologists collecting roadkill, is a straight marvel. James Somers has a few fascinating old blog entries about McPhee’s process; this one, about how he uses the dictionary, originally included a script to install a 1916 edition of Webster’s on your Mac. I’ve never managed to make it work, but maybe you will. I’ll take comfort in my recent discovery that the LA Public Library provides online access to the Oxford English Dictionary; the Historical Thesaurus is a lost weekend waiting to happen.

  • This 24 hour stream of lo-fi microbes to study to.

I hope everyone who made it this far is enjoying the nub-end of 2025; in these dead days between the holidays and the New Year, may you be accountable to nobody.

xo

Claire

P.S. A bit of news, as a post-script: I’ve been nominated for the Herman Melville Award for Best Writing in a Game by the New York Videogame Critics Circle for my work on Blippo+; I’ll be in New York in January for the ceremony. Fingers crossed!

https://clairelevans.substack.com/p/those-curious-naturalists
Feed the Soil (and the rest will follow)
Ancient camels, buggy computers, and other musings from an empty office.

Last month, I visited an old family friend with a rooftop garden in Brooklyn. As is often the case when two plant nerds meet, we did the rounds of her beds together, bearing witness to each nub of growth. Her school of organic gardening was new to me; rather than compost, she ferments. When she eats traditional fermented foods like natto, she rinses out the container and dumps the wet dregs into the garden. She folds fermented rice hulls into the soil, and advised me to save the starchy, nutrient-rich water left over from rinsing rice—mixed with sugar and left to bubble a few days, it captures wild microbes from the air and transforms into a potent fertilizing brew.

Clearly it was working. But any gardener will tell you: feed the soil, not the plant.

I’m trying to take a feed the soil approach to everything these days. Rather than fixating on the fruits of my labor, I’m putting down mental compost—adding layers of art and film and conversation to the pile and letting it cook. I’m reading a lot. I’ve just moved into my first proper office, a mostly-empty corporate building in the heart of LA’s Miracle Mile. It’s a place in transition, like me; either postlapsarian or pre-utopian. During coffee breaks, we wander over to the nearby La Brea Tar Pits, where the air reeks of sulfur and the grass underfoot bubbles with the fermented souls of extinct megafauna. I admire the educational signage around the fenced-off pits, written in language sticky as tar; the phrase “dense bone jumbles” comes to mind. Where better to learn about the 7-foot camels that roamed Southern California during the last Ice Age? The camels were called Camelops hesternus: Yesterday’s Camel.

Most of the office spaces around us are still empty. Chairs are stacked everywhere, wheels akimbo. It reminds me of the sociologist Michael Indergaard’s account of visiting the vacated SoHo offices of the web design agency Razorfish right after the collapse of the dot-com bubble; he described rows of abandoned $3,000 office chairs “like rows of terra-cotta soldiers” in “one of those tombs built for Chinese emperors.”

Another bubble-pop is imminent, and if we’re lucky, more emperors will fall. In such uncertain times, I turn to yesterday’s camels. The classics: fermentation, friendship, new projects, old movies. Great novels and stacks of moldering magazines and lefty pamphlets pilfered from estate sales. The way wildflower seeds laid down right before the rain always know what to do. The milky light of a meek California winter. Maybe I’m getting old, but nothing really changes.

Here’s Christopher Isherwood, from A Single Man, on the IBM punch-card:

He dislikes even to touch these things, for they are the runes of an idiotic but nevertheless potent and evil magic; the magic of the think-machine gods, whose cult has one dogma, we cannot make a mistake. Their magic consists in this, that whenever they do make a mistake, which is quite often, it is perpetuated and thereby becomes a non-mistake. 

I too distrust the “magic of the think-machine gods,” but I’m learning new ways to deflect it. Computers are temporary, I think; just our species’ fleeting attempt to capture ancient processes in stone. As readers of this extremely sporadic Substack might remember, earlier this year I wrote a piece for WIRED about the largely fruitless attempt to build a computer model of a microscopic worm called C. elegans. Despite having fewer than 1,000 cells, the little worm eludes simulation. My takeaway was that mimicking life is a misuse of resources—it sets the servers spinning, and best case scenario you end up with an inferior copy of something that already exists.

Recently, I’ve been talking to some computer scientists who are approaching this question in reverse: rather than tasking silicon-based computers with the burden of life’s complexity, they’re convinced we should just let nature do the computing for us. After all, it’s really good at it. No creature can survive without processing information about its surroundings through its senses, preserving it, in the form of memory, and putting it to use, in the form of behavior or phenotypical change. Input, memory, output. That’s what computing is, in a sense: life itself.

In a recent study by the Living Digital Systems Lab at the University of Nebraska-Lincoln, researchers were able to coax E. coli into identifying Fibonacci sequences. This joins work from a group in Kolkata, India, who genetically engineered bacteria to respond to mathematical queries—by glowing green in the presence of a prime number, for example. Of course, the bacteria don’t know they’re doing math; they’re just doing what bacteria do, sensing chemicals, which in this case stand in for numbers, then expressing onboard genetic responses that we can then interpret.

“Eventually what we want is a computing system connected to a reactor with bacterial cells inside,” Dr. Sasitharan Balasubramaniam, of Nebraska’s Living Digital Systems Lab, explained to me. “You might send a command from your computer, it changes to chemical signals, the bacterial cells compute, produce an output, and that output gets translated to an electrical signal and hopefully comes back to the computer.”

The challenge of biocomputing isn’t getting the bugs to process information. They’re doing that already, in their own way. It’s building interfaces: translating our queries into signals the bacteria understand, and then learning to decode their responses (which might take the form of glowing proteins, shivers of electricity, or reactions to precisely timed pulses of light) back into something we can use. Between the two-digit language of binary and the more expansive sensory processing of living organisms in vivo, we’ll wade through plenty of noise: the natural stochasticity of all living systems. But as with any good translation, meaning transcends exactitude.
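The round trip Balasubramaniam describes (a command encoded as a chemical signal, the cells computing, a noisy glow decoded back into an answer) can be caricatured in a few lines of Python. Everything below is invented for illustration: the function names, the noise model, and the glow threshold are assumptions, a sketch of the interface problem rather than any real wet-lab protocol.

```python
import random

def is_prime(n: int) -> bool:
    """Ground truth for the toy task: primality of a small integer."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def encode(n: int) -> float:
    """Translate a query into a 'chemical concentration' (arbitrary units)."""
    return float(n)

def culture_response(concentration: float, rng: random.Random) -> float:
    """Pretend the engineered cells fluoresce brightly for primes,
    dimly otherwise, plus Gaussian noise standing in for the natural
    stochasticity of living systems."""
    glow = 1.0 if is_prime(int(concentration)) else 0.1
    return glow + rng.gauss(0, 0.05)

def decode(glow: float, threshold: float = 0.5) -> bool:
    """Threshold the noisy fluorescence back into a binary answer."""
    return glow > threshold

rng = random.Random(0)
answers = {n: decode(culture_response(encode(n), rng)) for n in range(2, 12)}
```

The wide margin between the two glow levels and the decode threshold is what absorbs the noise; shrink that margin and the translation starts to fail, which is the interface problem in miniature.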


What else? I went on the Quanta Podcast to discuss my recent piece on what cells remember. It was an interesting, dense conversation—about how the most important concepts have the shiftiest definitions, how important it is to revisit forgotten science, and what mechanisms for memory we retain from our unicellular ancestors.

The video game I wrote and co-produced, Blippo+, continues to tickle and delight fans of dead media. Inverse called it one of the 25 Best Games of the Year, and AV Club gave it a 9/10. Get it on Steam here (Mac version coming early 2026) or on Nintendo Switch here. It’s best played on an old CRT television late at night.

I wrote a short opinion piece for Issues in Science & Technology, responding to a science fiction story by E. G. Condé, a writer and anthropologist. Condé’s story is a cautionary tale about the precarious thermodynamics of modern data centers; my piece proposes a biological alternative, storing our “cold” data in synthetic DNA. I’m grateful to the good people of the Center for Science and the Imagination at Arizona State University for the invitation to participate in this series, Future Tense Fiction, which pairs original science fiction stories with commentary from experts.

I have a piece, also, in the 2026 Other Almanac, a reimagined version of the Old Farmer’s Almanac (RIP) that takes a more progressive, ecological approach to its material, with contributions by “climate organizers, indigenous activists, migrant farmworkers, historians, scientists, medicine makers, incarcerated painters, astrologers, lawyers, borderland midwives, and more.” I’m honored to be part of it.


Some things I’ve loved, recently:

  • My friend Ava Kofman’s investigative feature, for the New Yorker, about the surprising shape of the laboratory animal rights movement. It starts with a troop of escaped macaques (“the marines of monkeys, because they never leave a man behind”) and takes so many unexpected turns—into the creepy, secretive facilities of primate breeders and the world of MAGA diehards who have made the liberation of lab animals central to their anti-science campaigning.

  • On a related tip, I’ve been reading the philosopher Martha C. Nussbaum’s 2022 book Justice for Animals, which makes a no-nonsense case for an animal ethics based on capabilities rather than cognition or the capacity for suffering, two concepts that rely too heavily on anthropocentric comparison. Nussbaum’s thesis is that all animals have their own, inherent capabilities—their own sensory and emotional experiences, forms of expression, social bonds, joys and needs for bodily autonomy—that may not look anything like ours.

    I’ll leave you with a provocation from Nussbaum:

[A]ll animals, both human and non-human, live on this fragile planet, on which we depend for everything that matters. We didn’t choose to be here. We humans think that because we found ourselves here this gives us the right to use the planet to sustain ourselves and to take parts of it as our property. But we deny other animals the same right, although their situation is exactly the same. They too found themselves here and have to try to live as best they can. By what right do we deny them the right to use the planet in order to live, in just the way that we claim the right?

Like I always say about the raccoons, possums, and skunks that make their lives in the weedy alleyways of Los Angeles: nobody told them nature left.

xo

Claire

https://clairelevans.substack.com/p/feed-the-soil-and-the-rest-will-follow
Eating the Engram
A brief history of memory — in cells, worms, and beyond the brain.
Pierre Huyghe, Timekeeper, 1999.

Please assume, for the sake of argument, that there is in our souls a block of wax.

So begins Plato’s treatise on memory, the “mother of Muses.” To remember, Plato said, we hold our thoughts to our internal wax, imprinting them, as with a seal ring. Over time, the wax softens, our memories fade; what we don’t imprint, we forget.

This is an archaic model for memory, but not an entirely inaccurate one. A wax imprint is a form of memory, in a literal sense — just as a book is, an etching on a wall, or a footprint left behind on some archaic plain. “Any physical medium that allows information to persist over time carries information about the past,” says Sam Gershman, a cognitive scientist at Harvard. “That perspective helps us take a step back away from some of the brain-centric ideas about what memory is.”

I spoke to Gershman recently for a piece I was writing for Quanta about brainless forms of memory. His lab is currently studying the ciliate Stentor coeruleus, a primitive unicellular organism that nonetheless appears capable of remembering its past. Although microscopic pond creatures might seem beyond the ken of cognitive science, Stentor is remarkably like a neuron, with similar excitable membranes and action potentials. If scientists like Gershman can uncover how ciliates remember, it may help us to trace what ancient forms of memory still linger in our own cells.

Gershman’s work is so new that it’s still unpublished, but his questions are old ones. What is memory? Where does it live? We still don’t have good answers. Plato had his wax impressions, to which the German zoologist Richard Semon later gave the name engrams. Discovering the nature and location of engrams in the brain was, Semon presumed, a “hopeless undertaking” for the science of his day. It stymies us still.

In the 1960s, an eccentric behavioral psychologist named James V. McConnell took up Semon’s gambit with a series of studies on planarian flatworms. Flatworms are a particularly useful organism for studying memory: they’re one of the simplest creatures to possess a brain and a nervous system with the same bilateral symmetry as ours. They also have an uncanny ability to regenerate. If a planarian worm is chopped in half, both halves will regrow into a new worm — tail from head, and head from tail.

McConnell devised a series of experiments to test associative learning in planarians, repeatedly shocking them while exposing them to a bright light until they associated the light with the shock — think Pavlov’s worm. Once confident that the worms remembered this training, McConnell beheaded them all. As he expected, the new worms that grew from the severed heads remembered the shocks and reacted in kind.

What McConnell didn’t expect was that the worms that grew back from the headless tails retained their training, too. Their brains had been entirely lopped off, but they still anticipated the dreaded shock that accompanied McConnell’s bright light. This meant that whatever form the worms’ memories took, they weren’t the exclusive purview of their brains. They lurked somewhere — or maybe everywhere — else.

McConnell took this work a step further by grinding up his trained worms and feeding them to their naive brethren; the cannibal worms picked up the light response right away, as though they were remembering, rather than learning, what to do. Could the worms have eaten the engram? “In the jargon of computer engineering, information is always ‘fed’ into a computer,” Arthur Koestler later wrote, in a survey of McConnell’s work. “Here the metaphor became flesh.”

Media coverage in Time and Esquire extrapolated McConnell’s findings into a dubious future of piano lesson pills and cannibal students. But although edible memories made him one of the most famous public scientists of the 1960s, the scientific community viewed McConnell’s experimental work with skepticism. It didn’t help that he published all his research in a satirical journal called The Worm Runner’s Digest.

The Worm Runner’s Digest, which ran from 1959 to 1979, published — often without differentiation — satirical articles and serious research alike.

“There’s a wide swath of people in neuroscience who consider [McConnell’s] memory transfers a traumatic memory that discredited the field,” explains Gershman.

But this stigma is lifting. In 2018, UCLA neuroscientist David Glanzman performed a similar, albeit less macabre, experiment on the sea slug Aplysia californica. After training the slugs to respond to a shock to their tails, Glanzman was able to transfer the sensitization from one slug to another via a direct injection of genetic material. This suggested that memories are stored in RNA, a contention McConnell shared, and that they aren’t choosy — Glanzman only had to inject into the neighborhood of the nervous system for a memory to transfer to a new slug.

Michael Levin, who studies the emergence of intelligence at Tufts University, has interpreted this slug-agnostic memory as evidence of biology’s robust “remapping” capacity. In Levin’s view, life’s most interesting trick isn’t memories themselves — which, like any compressed information, in living and computational systems alike, are bound to be quite lossy — but the slug’s capacity to reinterpret them in a new context. In a recent paper in the journal Entropy, he wrote that the “engrams” in Glanzman’s experiment “seem less like encoded memories and more like a kind of prompt,” containing just enough information for the slug to deploy to new ends.

Memories, Levin argues, aren’t about fidelity; they’re about salience. He gives the example of a caterpillar becoming a butterfly. In the cocoon, the caterpillar’s memories persist as its brain turns to goo; a caterpillar trained to avoid a certain smell, for example, will retain the aversion on the other side of metamorphosis. But since the caterpillar’s life experience — memories of crawling on delicious leaves — isn’t relevant to the butterfly, who moves through 3D space in search of nectar, the butterfly must generalize its old memories, remapping them, literally, on the fly.

“Leaves,” no longer relevant to the butterfly, become generalized as “food,” Levin explains. This biological aptitude for “mnemonic improvisation,” for bringing the lessons of past experience to bear in a constantly-shifting world, is, he argues, “a ratchet that gave rise to intelligence.” Evolutionary survival, after all, is a matter of adapting to change. And while memory drives this adaptation, it’s not always precisely relevant. Think of the butterfly: to survive, it reinterprets its past for a completely different future. As always with biology, a little creativity goes a long way.

Art for my Quanta magazine piece on cellular memories by Kristina Armitage.

If you’re interested in these kinds of questions, I just published that piece for Quanta. It’s about the cohort of cognitive scientists and neuroscientists, including Gershman, rewriting everything we know about memory. For the better part of a century, we’ve assumed that memory is an intercellular phenomenon: that it’s the consequence of collections of neurons in our brains wiring together. But new research on individual human cells and tiny unicellular creatures is revealing that memory transcends these connections — and that even the smallest solitary cell remembers its past experiences.

It was wonderful to get to talk to several leading thinkers in this emerging field. Even strictly defined, memory is a philosophical subject, and the scientists who are drawn to it seem to be deeply invested both in the history of these ideas and what their findings imply for our understanding of agency and cognition. As the microbiologist Dennis Bray writes, although an individual cell may be “a robot made of biological materials,” it’s also “undeniably the ‘stuff’ of which consciousness is made.”


A few strong, completely unrelated recommendations for your week:

  • Earth.fm, a library of immersive natural soundscapes from all over the world. It’s got it all: the sound of melting snow in the Baltics. Iberian midwife toads tooting at dusk. Costa Rican jungle rain. And the non-profit that runs the project cites Zen Master Thich Nhat Hanh’s Earth Holder Community as inspiration. This playlist is a good starter; there’s also a mobile app. Great for writing.

  • Charles Aznavour’s early ‘70s queer elegy, “What Makes a Man a Man.” I first heard this song as the soundtrack to The Other Side, a slideshow of Nan Goldin’s photographs. I grew up on devastating French music, so I have a particular affection for this kind of narrative torch song, but I can’t get over how efficient and vivid Aznavour’s writing is here — and how ahead of its time.

  • Charles Portis’ 1985 novel Masters of Atlantis. An arch satire of midcentury American esoteric movements, it follows a group of “Gnomonists” ostensibly preserving the knowledge of the lost city of Atlantis as their movement rises and falls. Full of proto-New Age neologisms, ludicrous hats, and failed experiments in alchemical metallurgy, it’s extremely funny without ever tipping into farce.


Finally, an invitation to any readers in Los Angeles! On August 7th at the Bob Baker Marionette Theater, we’re celebrating the launch of the video game I wrote, Blippo+.

The game, which is entirely live-action, stars over 100 performers and represents the culmination of an enormous amount of creative work from a dedicated group of artists over nearly five years. At the party we’ll have an exhibition of production props and costumes, hands-on demos of the game in multiple formats, a puppet show, and lots of fun surprises. No cells, worms, or ciliates will be harmed. Come see!

xo

Claire

https://clairelevans.substack.com/p/eating-the-engram
Enter the Meadow
Modeling complex horizons, the mind of a worm; quite a bit of personal news.
David Hockney, The Arrival of Spring, Normandy, 2020.

The woods are still wet from the storm. With each breeze, the trees shudder down droplets, as if remembering the rain, and the surrounding meadow steams lightly in the sun. Everything is buzzing. Black-eyed Susans tilt under the weight of the bumblebees, and grasshoppers spring from their hiding spots in the grass.

I’ve decided to try and know this meadow as best I can, so I’m sitting on a bench with my phone pointed at the sky, recording bird songs. I’m smelling the wildflowers. I’m taking photos of leaves and comparing them to illustrations in a local field guide. But descriptions are static, and out here in the meadow, everything changes. Without thinking, I pop open a pod and it reveals a tidy row of seeds as green as new peas.

Oh no, I think. It’s meadows all the way down.

What draws me to nature is how persistently it defies capture. I could count each blade of grass, every worm, ant, bud, and splat of bird-shit, and still know nothing about this meadow. It’d be about as useful as counting grains of sand on a beach. I’m reminded of the late essayist Barry Lopez, who spent thirty years watching and writing about a stretch of the McKenzie River near his home in Western Oregon.

A true student of the living world, Lopez was confident in little else but the fact that the river would continue to reveal itself over time. A river, he explained, cannot be known, not the way a rocket engine can. Because although an engine is complicated, a river is complex, “an expression of biological life, in dynamic relation to everything around it.” Those who really understand landscape—field biologists, hunting and gathering peoples, artists—prefer specificity: how the light moves on the water on a given day. “This view,” Lopez wrote, “suggests a horizon rather than a boundary for knowing, towards which we are always walking.”⁠

I like Lopez’s model of a horizon, rather than a boundary, for knowing. Our relation to that horizon never changes, no matter how much ground we cover trying to reach it. As any scientist will tell you, the more we learn, the less we know, and this is especially true in biology: for all the data we gather, the dynamics of living systems remain impossible to capture and model at scale. It’s all meadows, all the way down.

Portrait of the author with her print spread in WIRED—featuring incredible computational worm artwork by John Provencher.

In March, I published a feature in WIRED about an ongoing open-source effort to build a computer simulation of a microscopic nematode worm. This worm is one of the most comprehensively studied organisms in science; we’ve had a wiring diagram of the 302 neurons that constitute its brain since the late 1970s. And yet every neuroscientist I spoke to despaired at the possibility of ever creating a working model of how those 302 neurons drive the worm’s behavior. They described the undertaking to me as a cathedral—something they don’t expect to see completed in their lifetimes.

I find this, above all, comforting. No need to stress about the coming of Artificial General Intelligence if Artificial Worm Intelligence remains such a distant impossibility. I hope it underscores how mightily naive we are about the possibility of truly imitating life. As I write in the WIRED piece, it currently takes ten hours of compute time to render a single second of a basic mechanical model of a worm inching forward. To make a complete, molecular-scale worm simulation—one that moves backward, too, and more importantly, expresses why a worm does anything—would take at least 10 more years of research, cost tens of millions of dollars, and require data from up to 200,000 real-life worm experiments. That’s a whole lot of effort to pull off what living worms can do with nothing much more than compost and sunlight.

Of course, in science, modeling isn’t usually about this kind of total representation. Instead, modeling is a practice of constructing the false in pursuit of the real. As the molecular biologist Dennis Bray writes, a model is “a way of knowing,” a “symbolic representation that helps us to comprehend the phenomenon.” Like language, models help us tinker with things too complex or all-encompassing to otherwise grasp.

But there are different ways to make a model, and different reasons for doing so. A physicist might reduce the world like an equation, stripping complex systems down to their fundamentals—force, energy, matter, heat—in order to understand them from first principles. She works from the bottom up. The engineer, more practically minded, models from the top down. Her objective in modeling nature is to mimic it, borrowing its distributed architecture to design a better internet or streamline a swarm of robots. For the engineer, how a system works matters far more than why. The models used by physicists and engineers can exist fairly independently of the systems they describe, as interesting mathematical toys, or as blueprints for machines.

For the biologist, however, modeling guides empirical study. Her models are bound to the world, driving new questions about life and how it works. She compares computational ant networks to real colonies and mathematical swarms to mobbing flocks of jackdaws. For the biologist, if a model doesn’t help her to understand the world—if it’s not a good fit—it’s as good as useless. This means that when a biological model is uncoupled from the original system it imitates, confusing things happen.

Take AI: machine learning models are simplified abstractions of complex biology, cribbing the architecture of neural networks from human brains. They don’t mirror brains down to the molecule, because (as I hope I’ve conveyed here) there isn’t a computer on Earth powerful enough to do such a thing, and even if there were, such a simulation would be as opaque to us as the original wetware is. That doesn’t stop us from confusing the map with the territory, and conflating the output of these models with intelligence itself. This is dangerous territory for many reasons, not least because it constrains our understanding of what intelligence is, and what forms it may take in the meadows, rivers, and ever-distant horizons of the living world.


In other news, I just got back from Europe, where I was invited to give a keynote talk in Stockholm about some of these ideas—uncomputable worms, complex living systems, ways of knowing—in the context of AI, the ultimate inadequate model. The video of the talk is kinda doing numbers on YouTube, at least by my standards.

Blippo+, the live-action video game I’ve been working on with my friends for five years, is finally out in the world. It’s nearly impossible to explain, but if you’re into pirate television, video art, broadcast cable, Buckaroo Banzai, Star Trek, computer history, and far-out theories about consciousness particles, you will probably love it. It’s currently rolling out as an 11-week collective experience for the black-and-white Playdate console, but the full-color version is coming out for Nintendo Switch and PC in the Fall. You can wishlist the game on Steam if that’s your thing. And if the idea of a “live-action video game” has you scratching your head, I’ve written a longform primer about the history of this maligned genre—a category-resistant hybrid of interactive cinema and cinematic gaming—for PioneerWorks.

Also, I just found out that my piece about lucid dreaming for Noema Magazine won a Southern California Journalism Award for best “Soft News” feature. 🏆


Here are three new books I really like:

  • Lori Emerson’s Other Networks, a “radical technology sourcebook” of non-internet communications networks—from talking drums, smoke signals and pigeon post to dial-in party lines, Ham radio, and barbed wire telephones. It’s a fascinating compendium of all the ways human beings have contrived to keep in touch with one another, and a vision of a more playful world; Emerson pairs technical entries with examples of ways artists have subverted and tinkered with communications technologies, proof that she fully gets who the real engineers are. Emerson is a founding director of the Media Archaeology Lab at the University of Colorado Boulder, which just seems like the coolest place in the West.

  • I Am Abandoned, a slim volume from Primary Information books documenting the performance artist Barbara T. Smith’s spicy 1976 staging of a dialogue between two unhinged psychoanalytic computer programs at Caltech. When I was in Europe, I was lucky to catch a fantastic exhibition of feminist computer art at Kunsthalle Vienna; Smith’s piece “Outside Chance,” in which she tossed some 3,000 printouts of computer-generated snowflakes out the window of a Vegas hotel, stole my heart (documentation here, courtesy of the Getty Institute).

  • Rosalyn Drexler’s 1972 novel To Smithereens. Drexler, a painter, was both a founding figure of the New York Pop art scene and, briefly, a professional wrestler. The two vocations combine in this novel about a scrappy female wrestler and her nebbish art critic beau. A bawdy sendup of the ‘70s Manhattan art world, it quite literally pulls no punches. Long out-of-print, it’s been rescued by Hagfish Books, a small press who sniff out overlooked writers and give them a second chance at adoration (full disclosure: I’m on Hagfish’s advisory board).


Finally, I’ll leave you with a thought from one of the people I interviewed for the WIRED piece—the computer scientist Stephen Larson, founder of OpenWorm:

The way I see it, a lot of understanding biology is understanding the informational relationships between things. DNA is essentially a little packet of data, of information. What really matters about it is not that it's made of molecules. It's the sequence and the relationship of the molecules to each other. There's a lot of biology that is highly based on information and the way that information works. And so it feels pretty natural, if you're going to go simulate a thing, to say that the aliveness comes from the informational structure more than the material components.

🪱,

Claire

https://clairelevans.substack.com/p/enter-the-meadow
The Hidden Bird Algorithm
Simulating flocks in the shadows of Hollywood.
Remastered still from Stanley and Stella in: Breaking the Ice, produced by the Symbolics computer graphics department for SIGGRAPH 1986.

Craig Reynolds has always been drawn to patterns in nature. When he was a kid, he paid close attention to the trailing shapes of clouds, the way tree branches swayed in the breeze. He marveled at the comings and goings of ants, the way their tiny labors accumulated into colonies. But nothing was more beautiful to him than flocks of birds, which murmured in great black clouds, responding to one another in real time, above the San Francisco Bay. 

Just the thought of it awed him: thousands of animals, each responding to one another in real time, precise as clockwork and fluid as the wind. Reynolds eventually studied computer science at MIT—drawn, in the same way, to elegant patterns of code.

In the late 1970s, Reynolds was part of MIT’s Architecture Machines Group, a cohort that would eventually become known as the MIT Media Lab—and which, even back then, was already designing computer graphics, robotics, and human-machine interfaces. At MIT, he started thinking about natural systems again. “How could you do that on a computer? How could you simulate them? What would a model of a flock be like, or an ant colony?” Reynolds remembers.

For his 1978 master’s thesis, Reynolds created a programming language for animation, proposing that it might be used to simulate bird flocks by imitating the “hidden bird algorithm” behind their movements. When he graduated, he brought that idea with him to the West Coast, where he’d landed one of the few jobs available at the time for a “computer graphics weirdo”: making visual effects for the movies, at a video production house, Information International, Inc., in Culver City, California.

At Triple I (as everyone called it) Reynolds earned his first film credits, working as a special effects scene supervisor on movies like Michael Crichton’s plastic surgery slasher Looker and, of course, Tron. Showbiz wasn’t glamorous. “Triple I was literally between death and taxes,” Reynolds tells me over Zoom. On one side of the building was an IRS field office; on the other, the Holy Cross Cemetery, known for its celebrity clientele. “We would often go up there to smoke,” he says. “We would just drive out to the middle of the cemetery—that was our entertainment.”

It was at Holy Cross, sitting on the lawn with the ghosts of Bing Crosby, John Ford, and Sharon Tate, that Reynolds began to seriously wonder how a flock of birds really worked. Blackbirds flock in the thousands, sometimes even the tens of thousands—the highest flock count ever recorded was 40 million blackbirds in Arkansas. No way a single bird could know the shape of such a large group, Reynolds mused.

Holy Cross was close enough to the 405 for the freeway’s roar to soundtrack his revelation. “I realized that it must be a very different experience to be a member of a flock than to observe it from the outside, like the difference between driving in traffic and standing on a roadside watching traffic whiz by,” he explained in a 1998 interview.

Figure from Reynolds 1986 SIGGRAPH paper, Flocks, Herds, and Schools: A Distributed Behavioral Model.

What underlying pattern drove the quicksilver movements of a flock in the sky? To Reynolds, it seemed to be formed by the tension between opposites. On one wing: birds want to stay close to one another. On the other: they don’t want to collide. These are simple rules, simple enough to lay down in code. But it was a long time until Craig Reynolds did anything about it.

It took being invited to teach a workshop at SIGGRAPH—the annual conference of the Special Interest Group on Computer Graphics and Interactive Techniques, a professional organization for visual effects artists, computer scientists, and engineers—for Reynolds to tackle his flocking simulation in earnest.

At first, he’d planned to share an overview on the basic concepts of procedural animation, the kind of programming-based computer animation he’d explored at MIT. “I started to say that if you were able to give an animated character a software brain, and you put the right rules in, and you put a bunch of them in a scene, then maybe flocking would just happen,” he remembers. He’d made the same claim in his master’s thesis, nearly ten years earlier. “It occurred to me that I should stop, year after year, talking about how easy it would be, and either do it—or stop saying that it was easy.”

Reynolds set to work. First he created a white wireframe sky, a volumetric world, and saw that it was good. Then he populated it with flying creatures, triangle-shaped “bird-oid objects,”⁠ and those were good too—or good enough.

Finally, he dictated three local rules:

1. Collision Avoidance: avoid collisions with nearby flockmates

2. Velocity Matching: attempt to match velocity with nearby flockmates

3. Flock Centering: attempt to stay close to nearby flockmates

Avoidance, matching, and centering. There were a few finer points—the farther towards the periphery each “boid” drifted, for example, the stronger its pull back towards the center of the flock was. But that was, largely, it.⁠
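The three rules really are simple enough to lay down in code. Here is a minimal sketch in Python—my own illustration, not Reynolds’ original implementation; the radii, weights, and time step are arbitrary choices, and real systems add speed limits and spatial indexing:

```python
# A toy version of Reynolds' three boid rules: collision avoidance,
# velocity matching, and flock centering. Each boid reacts only to
# flockmates within its neighborhood radius. Constants are illustrative.
import math
import random

NEIGHBOR_RADIUS = 10.0   # how far a boid can "see" its flockmates
AVOID_RADIUS = 2.0       # personal space for collision avoidance

def step(boids, dt=0.1):
    """Advance every boid one tick, using only local information."""
    steers = []
    for i, (pos, vel) in enumerate(boids):
        avoid = [0.0, 0.0]; match = [0.0, 0.0]; center = [0.0, 0.0]
        n = 0
        for j, (p2, v2) in enumerate(boids):
            if i == j:
                continue
            dx, dy = p2[0] - pos[0], p2[1] - pos[1]
            dist = math.hypot(dx, dy)
            if dist < NEIGHBOR_RADIUS:
                n += 1
                # 1. collision avoidance: steer away from very close boids
                if 0 < dist < AVOID_RADIUS:
                    avoid[0] -= dx / dist
                    avoid[1] -= dy / dist
                # 2. velocity matching: steer toward neighbors' velocities
                match[0] += v2[0] - vel[0]
                match[1] += v2[1] - vel[1]
                # 3. flock centering: steer toward the local centroid
                center[0] += dx
                center[1] += dy
        if n:
            match = [m / n for m in match]
            center = [c / n * 0.05 for c in center]  # gentle pull inward
        steers.append([avoid[0] + match[0] + center[0],
                       avoid[1] + match[1] + center[1]])
    # synchronous update: everyone moves based on the same snapshot
    new = []
    for (pos, vel), s in zip(boids, steers):
        v = [vel[0] + s[0] * dt, vel[1] + s[1] * dt]
        new.append(([pos[0] + v[0] * dt, pos[1] + v[1] * dt], v))
    return new

random.seed(1)
flock = [([random.uniform(0, 5), random.uniform(0, 5)],
          [random.uniform(-1, 1), random.uniform(-1, 1)]) for _ in range(20)]
for _ in range(500):
    flock = step(flock)
```

Run it long enough and the triangles cohere and drift together: no boid knows the shape of the flock, yet a flock appears.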

Reynolds’ boids didn’t model a real population of birds. They didn’t flap their wings, tire, or feel hunger. They were just triangles. But when he ran the simulation on his Symbolics 3600 workstation, they flocked like starlings in a pixel sky. When they encountered an obstacle, they banked around it, or bifurcated like water flowing around a stone. They darted in zig-zags through the white emptiness, as though outsmarting predators lurking in the depths of his hard drive. Their movements were so clearly and undeniably birdlike that anyone who saw them recognized them immediately for what they were. Even now, they have the spark of life in them.

“They weren't pigeons, they weren't eagles, but they were flocking,” Reynolds says. “Whatever the hell they were, they were flocking.”

Nobody in the visual effects business was crying out for a flocking algorithm. Nor was flocking some grand challenge in computer science. Reynolds just thought it might work, and it did. “My career path has been finding research questions that nobody cares about, and following them to where they led,” he says, and laughs. But even Reynolds was surprised to discover where this question ultimately led. Because not long after his presentation at SIGGRAPH, he started getting calls—from biologists.

Over the last three decades, agent-based models like Reynolds’ boids have revolutionized biology. Microbiologists use them to understand how bacteria, archaea, and protists jostle for resources in aquatic environments. Plant biologists use them to model the subtle interactions between trees and to plot the spread of invasive species. Agent-based models help biologists understand how flocks, swarms and schools emerge from the interactions between individual birds, bugs and fish—and help them to make predictions, refine their questions, and test out their hypotheses.

Of course, as the ant scientist Deborah Gordon observes, “biology is not physics.” There will never be a unified model of collective behavior, because living systems are complex; in the real world, every bird is an individual, and senses its surroundings differently. The wind changes; the sun sets. A hawk cleaves the flock of swifts in half.

To quote the statistician George Box, all models are wrong. They can never predict what will really happen in nature—and even if they do, there’s no way of knowing if the outcome they describe could have been achieved in some other way. In the movies, the gap between the model and reality is the place where representation fails; we call it the uncanny valley. In science, it’s something more promising. Everything we still don’t know, or have yet to ask, is in that gap, and it just gets bigger and bigger.

Images (dithered) from Craig Reynolds’ work on simulating the evolution of camouflage textures, which you can read more about here. I’m also just now realizing that these patches of imperfect camouflage look like scintillating scotoma.

You’ve seen Reynolds’ boids before, in some form or another. They’ve been wildebeests, in a particularly traumatic stampede sequence in 1994’s The Lion King. They’ve been bats, too—in Batman Returns and in the Sylvester Stallone rock-climbing flick Cliffhanger. Reynolds was awarded a technical Oscar in 1998; he spent most of his career at PlayStation modeling autonomous characters. Today he works on simulating the co-evolution of camouflage patterns in nature by setting virtual prey and predator algorithms to devour and hide from one another. Art, as ever, imitates life.


This has nothing to do with anything, but last week I was at the Portland airport. A pianist was set up in the terminal, playing pop classics next to the food court. As I was ordering pizza, a silver-haired woman with rose-colored glasses started dancing by herself to the music. She had a giant teddybear strapped to her backpack. The guy behind the cash register gave me a look. “Dinner and a show,” he said.

I was already in a maudlin mood so I said to him that life can be beautiful, if you’re willing to get free. He nodded. We don’t have much time, he said. My husband and I sat and watched the old hippie dance. Her skirts billowed as she spun in circles to “Candle in the Wind.” I remembered how during my obsession with lucid dreams I’d read that the easiest way to prolong one is to spin around in circles. There’s something about the centripetal motion, even simulated, that tickles the brain and anchors you where you are. I wondered if the same thing was true while waking. And then I thought of dervishes, and the planet spinning on its axis. Of course.

Spinning around and around, like everyone, to keep the dream alive,

xo

Claire

https://clairelevans.substack.com/p/the-hidden-bird-algorithm
What's a Brain?
On bacterial, cellular, and other minimal minds.
The slime mold Physarum polycephalum, spotted in the New Hampshire woods.

We all have one: three pounds of wrinkled grey-pink peering out at the world from behind castle walls of bone. It’s the seat of consciousness and dreams. It holds memories, harbors love, shocks with fear, awe, and pain; it works over the raw input of the senses, smoothing the chemical signals, wavelengths of light, vibrations and electrical messages it receives into a cogent picture of its realm.

Yep, that’s a brain—mine and yours, the same brain that brought us language and agriculture, that threw the first spear. The same brain an unnamed scribe first etched into papyrus some 3,500 years ago (searching for the right hieroglyph, he chose “skull-marrow,” something to scoop out when the time comes for mummification, favoring that glorified muscle, the heart, for transport into the realms of the Dead).

It’s the same brain Aristotle had, and used, in his wisdom, to ponder its purpose: certainly nothing important, perhaps a cooling system? The same the 20th century neurologist Sir Charles Scott Sherrington called “an enchanted loom.” Bless them all, it’s the only brain they had, but it’s far from the only brain there is.

Cognitive science has long taken the existence of a nervous system as a prerequisite for intelligence. But neurons aren’t magic or unique—they only perfected what simpler cells had already been doing for millennia, and continue to do everywhere on Earth. Bacteria make collective decisions using chemical signals similar to those found in cortical brain activity. Cells come together to form complex structures. Slime molds make decisions, sometimes even irrational ones. And if memory is the capacity to modify behavior based on past experiences, then sperm, amoeba, yeast, slime molds, plant cells, and single-celled organisms all have memory.

Cognition isn’t reserved only to vertebrates with language, reason, or self-awareness. There are more primitive cognitive subsystems within us, around us, and all along the ladder of evolutionary time. Studying them is the purview of an emerging interdisciplinary field in biology: “minimal cognition” or “basal cognition.”

The philosopher of science Pamela Lyon writes that “taking seriously modern evolutionary and cell biology arguably now requires recognition that the information-processing dynamics of ‘simpler’ forms of life are part of a continuum with human cognition.” Not to be outdone, biologists František Baluška and Michael Levin have suggested that cognition is a fractal, spanning nested levels of biological organization. “Whether each successive level of organization is in some sense smarter than the ones below it, or whether structures derive their cognitive powers from those of lower levels, remains to be discovered,” Levin and Baluška write. They don’t concern themselves with big-C consciousness, the so-called “hard problem” of cognitive science. Instead, by learning how these more minimally cognitive agents function, they propose that we might someday be able to tackle the hard problem from the (squishy) bottom up. I think that’s exciting; it makes the whole world a brain.

I always come back to this image of an ancient computer from Michael Green’s Zen and the Art of the Macintosh, one of the most beautiful artist books ever made.

I came to minimal cognition in the course of my ongoing research (read: obsession) with slime molds, which, in certain contexts, display computational capacities to rival our fastest computers. Physarum polycephalum can solve combinatorial mathematics problems and model resilient networks so effectively that there’s been quite a bit of wet-lab tinkering to bend them into primitive bio-computers. One study in 2016 even injected conductive nanoparticles into the slime molds’ plasmodial tubes to see if they could be made conductive, suggesting future Physarum chips. Initially I thought this was the most interesting thing about slime molds: that they do computer stuff.

Now I’m torn. Are slime molds a new computational substrate, or do we just use computers as measuring sticks, a way of imposing value on the living world? If an organism is “like a computer,” doesn’t that make it an object, something that can be instrumentalized towards a purpose—that is to say, ours? As any researcher who works with them will tell you, slime molds aren’t objects. They’re subjects, and quite willful ones. When I spoke to the Australian systems biologist Chris Reid, who has done extensive work with Physarum, he told me that his specimens are always escaping the lab. “Sometimes the best slime mold to use for your experiment is the one that's just crawled out of the bin,” he said. “They have a real zest for life.”

When the popular press covers slime molds, it tends to emphasize how remarkable these brainless organisms are at seemingly human tasks—like, famously, recreating Tokyo’s regional commuter rail network. But that’s backwards. Networks are a fact of nature, and slime molds are much better than us at creating networks that are both minimal and robust, balancing efficiency with redundancy. “That's where it makes real sense to give the problem to the slime mold, see how it solves it, and try to extract principles that we can use to make our systems better,” Reid explained.

That is to say, in order to get useful answers, it helps to meet the slime mold where it’s at. Other organisms may help us to answer different questions. And if we align our questions with the inherent capabilities of the organisms we employ in our computational experiments, we can yoke together our interests, too. Maybe that’s why I’m so interested in minimal cognition. Not only because it opens up the definition of what a brain can be, but because it binds us to the world, drawing our brains into a broader phenomenon that touches life at every level. We’re nothing special. As Reid said to me, laughing, “in some ways, it's information processing all the way down.”

The terms “information processing” and “computation” show up all over the minimal cognition literature, and in biology more generally. This threw me, initially. But it doesn’t mean that slime molds, cells, or bacteria are like computers. It just means that the underlying principles of computation transcend their medium. Living systems store information in order to make better decisions in the future—those who remember danger avoid it later. We humans simply use computers to outsource that remembering and tailor its scope. If you’ve read James Bridle’s essential Ways of Being, you may recognize in this idea that book’s persistent mantra that the world is not like a computer, but a computer is like the world.


Anyway, this is mostly what I thought about during my precious month at MacDowell, where I worked on two books, slept with the windows open during the summer storms, spotted meteors, newts, and mushrooms, and learned to feel safe in the dark.


In other news:

  • New Release, the new album from my band YACHT, is out now on all platforms! Listen on Spotify or Apple Music; pick up the LP on Metalabel or Bandcamp.

  • The nice people of Longreads included my piece for Noema Magazine on lucid dreaming in their weekly Top 5, calling it “brain-bending in more ways than one.”

  • On October 12th, I’ll be in conversation with Jonathan Lethem at Hauser & Wirth’s bookshop, ARTBOOK, here in Los Angeles. We’ll be talking about his new book, Cellophane Bricks, a “stealth memoir of his parallel life in visual culture.”

xo

Claire

https://clairelevans.substack.com/p/whats-a-brain
The Colony Makes The World
On the emergent intelligence of ant colonies; ecologies of entanglement; a New Release!
Still from Saul Bass’ Phase IV (1974).

Ants are 140 million years old, older than us by far, older even than the dinosaurs. There are some 14,000 known species of them, on every continent save Antarctica. Only 50 have been studied in any real depth. Some farm fungus; some enslave other ants; some weave nests from larval silk; some build leaf castles mortared with aphid spit; some build bridges from their own bodies; some keep other insects as livestock; some work security for trees in exchange for nectar, a home.

Everything ants do, they do without central control—even though individual ants are dumb, mostly blind, and can’t remember anything for more than ten seconds.

When’s the last time you watched an ant? There’s one walking across the café table where I sit and write these words now. This table must be an Everest of steel to her—what an enormous expenditure of energy for a little ant, and all just to wander aimlessly around, stopping here and there to inspect a speck of dust! She seems to have no idea what she’s doing, and I have a hard time imagining that this solitary, bumbling individual will ever find what she seeks. “If you watch ants at all, you end up wanting to help them, because it seems that the ant you’re watching just can’t seem to get it together,” explains Dr. Deborah Gordon. “But in the aggregate, somehow, they get a lot of things done.”

Dr. Gordon, by her own estimation, has “watched more ant colonies for longer than any other scientist, and for longer than most ant colonies have watched each other.” For over 30 years, she’s spent her summers in the Arizona desert, watching the same colonies of harvester ants grow older, larger, and wiser. Individual ants come and go, of course: an ant’s lifespan is only about a year. But Gordon has shown that colonies mature, retaining memories individual ants forget.

But back to the ant on the table. There’s nothing here for her. Were I to drop a crumb of cookie onto her path, though, she’d make the most of it. She’d take some and head straight home, laying a trail of pheromones as she went. The next ant to cross that trail in her own aimless wanderings would drop everything and follow it to its end. Finding cookie there, she’d do the same as her predecessor, reinforcing the trail, and once enough ants echoed this task, it’d become a highway, and I’d need to relocate.

This is called a recruitment trail; it’s an example of positive feedback in ant behavior. The more pheromone is laid down, the more ants are drawn to the trail, each adding more pheromone until there are enough ants to make quick work of bringing the food home. But the pheromone is volatile, too, and will evaporate if it’s not reinforced; this secondary mechanism ensures that the shortest trails are always the strongest, and keeps the ants from wasting time on depleted food sources. Efficient.
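For the technically inclined, the recruitment-trail dynamic is easy to simulate. Below is a toy model, with every name and parameter value my own invention rather than field data, of ants choosing between a short and a long branch to the same food source. Reinforcement plus evaporation reliably selects the shorter route:

```python
import random

def double_bridge(steps=5000, evaporation=0.01, seed=1):
    """Toy model of trail selection between a short and a long branch.

    Each ant picks a branch with probability proportional to its
    pheromone level; the short branch, with its faster round trip, is
    reinforced at twice the rate, and both branches evaporate every
    step. Illustrative parameters only, not drawn from real colonies.
    """
    random.seed(seed)
    pheromone = {"short": 1.0, "long": 1.0}
    for _ in range(steps):
        total = pheromone["short"] + pheromone["long"]
        # Recruitment: the stronger trail is proportionally more likely
        # to attract the next ant (positive feedback).
        if random.random() < pheromone["short"] / total:
            # Short-branch ants complete twice as many round trips,
            # so they lay twice as much pheromone per time step.
            pheromone["short"] += 1.0
        else:
            pheromone["long"] += 0.5
        # Evaporation: unreinforced trails fade, pruning stale routes.
        for branch in pheromone:
            pheromone[branch] *= 1 - evaporation
    return pheromone
```

Run it and the short branch ends up with far more pheromone than the long one: the feedback loop amplifies a small efficiency difference into a highway, while evaporation keeps the losing trail from lingering.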

From Catherine Chalmers’ Antworks.

Many ant species use recruitment trails to siphon workers towards food sources, but the network structure of these trails varies. With the pharaoh ant, an opportunistic species that specializes in making the best of patchy resources, the trail algorithm favors recruitment; its branching structure funnels ants into a recruitment pool, making it easier for those ants who have found food to lure others quickly. Pharaoh ants have three different trail pheromones: a strong attractant, a weak one, and a repellent, which they use to mark dead-ends. With finer chemical control, they can prioritize paths and directly recruit foragers off the main trail to take advantage of an ephemeral food source—say, the sudden appearance of a human picnic.

Turtle ants, however, live in small colonies in the canopies of tropical forests, where vines snake through the branches and plants snap under the footsteps of lizards and birds. The turtle ants must constantly maintain the circuit of trails connecting their nests to nearby food sources lest they be swallowed whole by the green life of the jungle. Here, the algorithm favors coherence; the default, for turtle ants, is to soldier onwards, laying down a slowly-evaporating pheromone at all times. When they reach a dead end, some percentage always explore further, searching out the broken link in the trail to heal it. These variations, Gordon argues in her book The Ecology of Collective Behavior, demonstrate how the collective behaviors of colonies emerge from ants’ relationship to a dynamic environment. The world makes the colony.

You’ll notice I’m using the word “algorithm.” It’s quite literal—an algorithm is just a sequence of instructions one must follow to achieve a task. A recipe is an algorithm, technically. For the harvester ants that Gordon studies, that recipe might be something like: wait until you encounter another ant with odor X three times in the next 30 seconds. If you do, go forage for food. If you find a seed, return it to the nest, then wait again.
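That recipe can be written out directly. Here is a minimal sketch of the decision rule, with the thirty-second window and three-encounter threshold taken from the description above, and everything else (the function name, the list-of-timestamps representation) invented for illustration:

```python
def should_forage(encounter_times, now, window=30.0, threshold=3):
    """Return True if the ant has met at least `threshold` ants with
    odor X within the last `window` seconds -- the cue, per the recipe
    above, to leave the nest and go forage."""
    recent = [t for t in encounter_times if 0 <= now - t <= window]
    return len(recent) >= threshold
```

So an ant that met returning foragers at seconds 2, 11, and 24 would head out at second 25, while one whose last three encounters are more than thirty seconds stale stays put. The interesting behavior, of course, only emerges when hundreds of ants run this rule at once.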

Still from Saul Bass’ Phase IV (1974). If you’ve never seen it, seek out the version with the full “lost” ending—a marvel of a psychedelic montage.

But even a simple algorithm, running on hundreds of ants, each interacting with their environment and updating their tasks as the information they receive changes, can produce complex ripple effects. When a colony’s circumstances change—say, there is an abundance of food—it’s able to instantly reallocate its workers to different tasks. And in a larger colony, where the rate of interaction between ants is higher, the same algorithm might lead to different behaviors than in a smaller, younger colony. To understand these dynamics, myrmecologists like Gordon work with computer scientists and engineers to create quantitative models of how ant colonies work, porting their algorithms from sandy anthills, to symbols, to silicon.

The results are, often, humbling. Gordon has shown that the algorithm harvester ants use to regulate their foraging behavior across the desert is uncannily similar to the Transmission Control Protocol used to regulate data traffic on the internet, for example. Meaning that ants beat us to network design by a hundred million years.
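The parallel is in the feedback loop: a TCP sender speeds up as acknowledgements arrive and backs off when they stop, and a harvester ant colony sends out foragers at a rate tuned by how quickly foragers return with food. Here is a toy sketch of that shared logic (additive increase on success, multiplicative back-off on silence), with invented parameter values; it mirrors the qualitative behavior, not the researchers’ published model:

```python
def forager_rate(return_events, gain=1.0, backoff=0.9):
    """Track a colony's outgoing-forager rate over a series of time
    slots. `return_events` holds one boolean per slot: did a forager
    come back with food? Each return (an 'ack' carrying a seed) bumps
    the rate additively; each empty slot shrinks it multiplicatively,
    TCP-style."""
    rate = 1.0
    history = []
    for returned in return_events:
        if returned:
            rate += gain      # additive increase on each "ack"
        else:
            rate *= backoff   # multiplicative back-off when returns stop
        history.append(rate)
    return history
```

Feed it a run of successful returns followed by a dry spell and the rate climbs, then decays: foraging throttles itself to match food availability, with no ant anywhere holding the throttle.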

Are there other algorithms regulating the “anternet” that might help us to build more efficient networks of our own? Already, algorithms inspired by ant recruitment trails are used to solve combinatorial optimization problems in computer science, and one researcher has proposed that living “ant computers” are theoretically possible—an unsettling vision, but one familiar to anyone who has read the British science fiction novel Children of Time, in which hyper-intelligent alien spiders use sophisticated chemistry to turn neighboring ant colonies into server farms.

Pop Beetles, from artist Catherine Chalmers’ series of “antworks,” Impostors.

In one sense, this isn’t new. Edward O. Wilson, probably the most famous scientist to ever study social insects, often wrote about ants in mechanistic terms, comparing colonies to factories. In doing so, he drew heavy inspiration from cybernetics, the science of communication and control. For most of its history, the study of ants—and more broadly, biology as a whole—has been in thrall to this vision of behavior. Ants were merely automatons, going about pre-programmed roles following chemical cues. The division of labor in a colony was as inflexible as a caste system: the moment they emerged from their pupae, some ants patrolled, some foraged, some piled the bodies of the dead, some kept the tunnels clean, some fed the brood, each a piece of a clockwork, enslaved from egg to midden-pile.

Only in the last few decades has the true flexibility of ant society come to light, thanks to Gordon and a new generation of systems-minded scientists. Even in species where ants vary widely in size, any ant’s role is determined by the dynamic, ever-changing needs of the colony—and, just as importantly, by an environment that is both constantly changing and constantly being changed by the ants. As Gordon observes, the world makes the colony.

But the colony, too, makes the world.

🐜


I’ve been at MacDowell for a little over a week. My little studio—which has sheltered, over the years, Audre Lorde, Celine Song, Michael Chabon, and, in an extraordinary coincidence, my college mentor, the poet Martha Ronk—is already strewn with print-outs, library books, and jotted notes. Every day I ride my bicycle around the property, huffing forest, braking for wild turkeys and deer. Every artist deserves this.


Finally, some recent news from the outside world:

  • My band, YACHT, has a new album coming at the end of August. In an experimental move, we launched a “reverse-preorder” campaign in July, selling New Release on vinyl two months before its streaming premiere; the first edition sold out immediately, but a second wave is live now. We talked to Yancey Strickler about staying willfully independent, making art under capitalism, and transcending the sinking feeling that “your house is a store and your life is a job.”

  • I’m extremely pleased to be featured in Ecologies of Entanglement, a new interview series from Are.na (truly the ant colony of my mind) and Willa Köerner’s wonderful newsletter Dark Properties. The series will draw connections between natural and technological networks. My interview is here—read on if you’re interested in natural systems, virtual worms, speculation, and slime mold.

  • My friend Kriss Knapp is launching a series of “Object + Word” collaborations, pairing poster prints with commissioned texts from writer friends. My contribution is a little ecological poem; the print, with an illustration by Kriss, comes out on the 6th. If you subscribe to her Substack you’ll see it there first.

xo

Claire

https://clairelevans.substack.com/p/the-colony-makes-the-world
The Queen's Doll's House
On the freaky model world of the Dollomites; plus—more lucid dreaming and a roundup of recent favorites.

The Queen’s Doll’s House, 1924.

Last weekend I hit the holy grail of estate sales: a former electrical engineer and computer fanatic with hoarder tendencies had passed, leaving an attic full of meticulously-organized back issues of IEEE Spectrum and MacWorld, wall-sized ASCII art printouts, three-ring binders full of satellite data, ancient Radio Shack manuals, books on electronic spycraft, computer simulations, and systems thinking.

I made multiple trips, hauling home dusty computer magazines and aerospace convention newsletters that will likely populate my own future estate sale. I guess I have light hoarder tendencies myself, but what can I say? Every home is an archive.

And anyway—you never know what you’ll find. On impulse, I also grabbed a ‘70s coffee table book about dollhouse collecting (regular readers know I have a weakness for plays of scale). Paging through it on a Sunday afternoon, I fell into a lost world; the book turned out to be a serious history of “the hobby” as well as an index of its leading practitioners. In 1976, it seems, miniature-making was still a cottage industry, anchored by monomaniacal craftspeople—the kind who’d happily spend a month hand-carving a tiny Edwardian dresser—who did robust mail-order business advertising in hobbyist magazines like the Nutshell News. Delightful, obviously.

The Queen’s bedroom; 1/12th scale.

But not even the most gifted among them would have been good enough for The Queen’s Doll’s House. This eight-foot-tall mansion, presented to Queen Mary of England “by her loyal subjects” in 1924, is almost certainly the most intricate dollhouse ever built. It has electricity, working elevators, and a basement livery full of royal limousines. Its silver taps run hot and cold water. The wine cellar contains real champagne, sherry, and kegs of beer. The paintings hanging throughout the house were produced by famous English painters of the day, and authors like Rudyard Kipling, G.K. Chesterton, and Joseph Conrad each contributed tiny leather-bound, hand-written books to the dollhouse’s 200-volume library. Every major firm in England produced 1/12th-scale versions of their products for the occasion. The entire inventory of the dollhouse spans two large volumes.

One of these volumes, The Book of the Queen’s Doll’s House, contains a fascinatingly weird essay by the engineer Mervyn O’Gorman on the “effect of size” on the dollhouse’s world. It’s a known bugbear in miniature-making that certain materials don’t perform well at scale: an inch-wide cotton coverlet sits on the dollhouse bed like a piece of cardboard, for example. But Mr. O’Gorman must have been the first writer to seriously consider the physics of the miniature. According to his calculations, the little people living in the dollhouse—he called them “Dollomites”—would have the strength of ten men. They’d eat six meals a day, leap staircases in a single bound, and have hearts like hummingbirds. Their voices would be inaudible to us; the gramophone and working pianos in their house would cause more pain than pleasure to their tiny ears. To the Dollomites, the paint on the walls would be a half-inch thick, and a single drop of water from the tap the size of a pear. Every glass of wine would be so viscous they’d have to suck it down. And forget about soup. “Cream or thick soup,” O’Gorman warned, “would be so sticky that the soup spoon would be found to lift the plate with it from the table.”
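O’Gorman’s super-strong Dollomites fall out of the square-cube law: muscle strength scales with cross-sectional area (length squared), while body weight scales with volume (length cubed), so strength-to-weight grows as the inverse of scale. A back-of-the-envelope check (my arithmetic, not O’Gorman’s own calculation):

```python
def strength_to_weight(scale):
    """Relative strength-to-weight ratio of a person shrunk by `scale`.

    Strength ~ muscle cross-section ~ scale**2; weight ~ volume ~
    scale**3. The ratio therefore goes as 1/scale: halve someone and
    they are, pound for pound, twice as strong."""
    strength = scale ** 2
    weight = scale ** 3
    return strength / weight

# A 1/12-scale Dollomite is, pound for pound, twelve times stronger
# than a full-size human: in the neighborhood of O'Gorman's
# "strength of ten men."
```

The same scaling explains the hummingbird hearts and inaudible voices: surface effects, heat loss, and vibration frequencies all shift when you shrink the world by a factor of twelve.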

Illustration from The Book of the Queen’s Doll’s House, by A.C. Benson and Sir Lawrence Weaver, 1924.

Of course, I find all this wonderful—I love it when someone takes an absurd premise seriously. But there’s something about this attempt in particular that I think gets at the fundamental appeal of miniatures, that is, the impossibility of ever inhabiting them. In attempting a rational scientific study of the Queen’s dollhouse, O’Gorman accidentally created something utterly monstrous: a dollhouse world populated by whispering, ravenous, cream-sucking, super-strong freaks. It’s unholy, and that’s because dollhouses are not made to be lived in; they’re barely fun to play with. Dollhouses are for looking. As the poet Susan Stewart observes, dollhouses are “consumed by the eye.” They’re shadowboxes of simulated order, a way of distilling the complexity of life—in this case, an empire—into a complete whole that can be enjoyed at a glance. The most famous dollhouses, writes Stewart, were “meant to stop time and thus present the illusion of a perfectly complete and hermetic world.”

Of course, it’s hopeless; everything changes, everything moves. A dollhouse only gives the illusion of possessing that which we can never truly inhabit. But the empire reels in pain. History marches on. From birth to death every cell in our bodies sloughs off into an undifferentiated world. Every home becomes an estate sale. Rot and repeat.

A few weeks ago the writer Theresia Enzensberger and I held a public conversation at the Goethe-Institut in Los Angeles; as someone who tends to daisy-chain from one fixation to another, I rarely have the opportunity to talk about my work as a whole. At one point I was struggling to connect the dots and our moderator, the science fiction scholar Sherryl Vint, made the very astute observation that what seems to capture my interest is the gap between models and reality. I felt deeply seen by that. It’s true!


In other news, Noema magazine just published a new long-form piece from me.

Illustration for "Living in a Lucid Dream” by Brindha Kumar for Noema Magazine.

Readers of this Substack will quickly recognize in it the months-long obsession with lucid dreaming that played out, in part, in these pages. In the Noema piece, I talk to two philosophers and a cognitive scientist; we get into animal dreams, the seams of reality, and ancient traditions of temple dream incubation. As ever I am grateful to find outlets that will give me space to go long on my utterly noncommercial ideas.


Speaking of noncommercial ideas: the original Bumper Stickers for Your Phone, Volumes I and II are now available as a bundle. To quote Jason Kottke, “lol, tiny bumper stickers for your phone.”


Finally, a few things I’ve really enjoyed, recently:

  • Zoë Schlanger’s new book, The Light Eaters, a survey of new research in plant behavior and intelligence, had me apologizing to my tomatoes.

  • This 2019 talk from the designer Jen Hadley about the history of type design for television broadcast made me think about the persistent and often hidden materiality of technology. Did you know that scrolling credit sequences were once printed on literal scrolls of paper?

  • I can’t get enough of the Studs Terkel Archive Podcast, an unofficial feed of interviews conducted by the legendary oral historian Studs Terkel on his four-decade-spanning Chicago radio show. Two of my favorite episodes: Studs talks to Quentin Crisp, in London, about being out in England before the War, and to James Baldwin, right after the publication of Another Country, in 1962. Baldwin ends his interview with a fragment of a poem by Marianne Moore, which I’ll leave you with now:

The weak overcomes its

menace, the strong over-

comes itself. What is there

like fortitude! What sap

went through that little thread

to make the cherry red!

xo

Claire

https://clairelevans.substack.com/p/the-queens-dolls-house
The Inner Space Race
On the deepest holes in the world and what we did—or didn't—find there.
Newspaper advertisement placed by the 19th-century feminist cult the Koreshan Unity, who believed we live on the concave inside of the Earth.

After seven hours in the car, the road still pulls like a tide on my ears. Even after I’m splayed on the motel bed, its momentum continues, as though the tarmac were unspooling somewhere below the box spring. Fitting, then, that the movie we’ve selected from the depths of the cable box is about the liquid pull of the world.

Aaron Eckhart discovers that the molten metal core of the Earth has stopped spinning, sending the electromagnetic field into chaos and exposing patches of the planet to the raw sun. Rome burns; fish boil alive in the San Francisco Bay; migratory birds lose the plot, as do I; faulty navigation systems force Hilary Swank to land a Space Shuttle in the LA River. The only solution is, of course, to nuke the Earth’s core, a mission Eckhart and Swank undertake post-haste, riding a blind heatproof bullet straight through miles of CGI magma. At every interval the commercial breaks grow longer. They advertise glistening tri-patty burgers and increasingly specific pharmaceuticals addressing rare cancers and skin conditions. Regional ads for the local power utility beg for moderation in the summer heat, and when the movie returns, Eckhart is deeper within the scorching Earth and even more shirtless.

Aaron Eckhart and a criminally-underused Stanley Tucci hoist the deep Earth nukes that will save the world from electromagnetic apocalypse.

On its release in 2003, The Core was pilloried by critics and cited by the scientific community as an example of everything Hollywood does wrong in its representation of science onscreen. In a poll, hundreds of scientists voted The Core the worst offender in recent memory, and Dustin Hoffman, at the time the ceremonial face of the Science and Entertainment Exchange, lobbied for its thorough debunking. The Core’s writer, John Rogers, was forced to respond that he’d done his best, and that he’d fought with the studios to excise even more egregious pseudoscience: spacewalks in liquid magma, subterranean dinosaurs, and a windshield for the film’s tunneling ship, Virgil.

In Rogers’ defense, a scientifically accurate version of The Core (or Journey to the Center of the Earth, for that matter) wouldn’t have been very exciting to watch. The Kola Superdeep Borehole, the deepest hole ever drilled into the real Earth, only penetrated a third of the way through the continental crust somewhere just shy of the Norwegian border. That’s a little under eight miles; the Earth’s core, for reference, is 4,000 miles deep. Although it surfaced plenty of surprises from deep below the permafrost—boiling hydrogen-rich mud, a shock of liquid water, and even microscopic plankton fossils as far as three miles below the surface—it was hardly the stuff of adventure tales.

Workers at the Kola Superdeep Borehole celebrate 11,000 meters, or about 6.8 miles. As much as it looks like it, I promise you that this is not an AI-generated image.

Even before the Russians welded it shut in the ‘90s, the Kola Superdeep Borehole was only 9 inches wide, and in photos, its opening peters immediately away into darkness, a blank slate that has been enthusiastically filled in by eldritch creepypasta authors, clickbait farms, and evangelical memelords suggesting that the Soviet Union opened the gates to hell. Images taken by urban explorers of the abandoned site could not look more like outtakes from Stalker, adding to its postlapsarian Soviet spook factor.

The Kola Superdeep Borehole is the forgotten ruin of a lesser-known Cold War rivalry between world powers. As the US and Soviet space programs plotted their paths to the moon, an inner space race was on, too—to plumb the deepest point on Earth. The US made its attempt at sea, where the crust is thinner, off the coast of Guadalupe, Mexico. Project Mohole, named for the Mohorovičić discontinuity, a magma-filled layer separating the Earth’s crust from its mantle, didn’t make it very far before it hit cost overruns and political infighting, but the engineers on the project did invent the “dynamic positioning” systems required to keep a drill steady at sea, opening the gates to a very different kind of hell: the proliferation of deep offshore oil drilling.

This was likely the objective from the get-go. Project Mohole’s drilling barge, the CUSS I, was funded by a consortium of oil companies: Continental, Union, Superior, and Shell. John Steinbeck, who sailed on the CUSS I to cover it for LIFE Magazine in 1961, wrote as lustily as you might expect of the “elite and motley” group of paleontologists, oceanographers, and petrologists aboard. The latter were “the cream of a very special profession already trained in offshore drilling in shallow water.” As the drill plunged through the mud, none slept, none showered; when it finally hit pay dirt, that cream grubbed for hunks of crystal-flecked basalt to take home.

The Europeans dug a superdeep hole, too, albeit much later: the Kontinentales Tiefbohrprogramm der Bundesrepublik Deutschland (KTB) Borehole in Bavaria broke ground in 1987 and made it just under six miles, recording temperatures of more than 500°F at its deepest point. Unlike the Soviet hole, the KTB is still open to researchers, tourists, and even the occasional artist-in-residence. In 2010, the Dutch artist Lotte Geevan made geophone recordings of the rumbling sound emanating from deep within the KTB, which she said made her feel “very small.”

Although both make us feel very small, the challenges of inner-space exploration are, in every way, inverse to those presented by outer-space travel: tremendous pressure and heat make it impossible to dig beyond a certain point. The mantle is yet to be breached; pending some miraculous advance in materials science, it’s unlikely that it ever will be. In The Core, a geophysicist named Braz, played by Delroy Lindo, has to invent a “37-syllable-long tungsten titanium crystal alloy” called unobtainium to render the ship impermeable to the molten Earth. Conveniently, it also draws power from heat, a late-blooming fact that keeps our heroes from roasting to death in the center of the world. They retrace their long road, riding a lava flow to the surface from within an ocean volcano off the coast of Hawaii. Whales circle and sing; Deep Earth Mission Control toss off their headsets in relief; the skies calm their freak; the planet turns another day. Tomorrow we drive on.


In other news: next week, the German writer Theresia Enzensberger and I will be having a public conversation at the Goethe-Institut in Los Angeles. Theresia’s most recent book, Auf See (At Sea), about a failed libertarian seasteading commune, was nominated for the German Book Prize; I had a chance to read some of the work-in-progress translation and it’s exactly the kind of cutting near-future critical sci-fi I love. We’re calling the event “Slime Molds and Seasteads”; our conversation, moderated by Sherryl Vint, director of the Speculative Fiction and Cultures of Science program at UC Riverside, will be wide-ranging, weird, and certainly of interest to readers of this Substack. It’s free, the parking is free, and we’ll have algae cocktails.

xo

Claire

https://clairelevans.substack.com/p/the-inner-space-race
Hyperlink Island
Note: this week I’m trying something new and sharing some fiction.

Note: this week I’m trying something new and sharing some fiction. I wrote this story in a fever a few summers ago, inspired by J.G. Ballard’s Report on an Unidentified Space Station and an image, which has haunted me for years, from Michael Green’s Zen and the Art of the Macintosh, of an ancient computer falling into ruin in the jungle. Re-reading it today, I was struck by how much it resonates with the themes of this newsletter. Read on if you like slime molds and body horror; back to regular programming soon.

Hom & Hom, 1989.

Report 01

We’d been dithering at sea a year before we finally sighted Hyperlink Island. We emptied our wallets at once; our money means nothing in the new world. From the ruined ship we salvaged a few tools, a chess set, and our trusted captain, its hard drive encrusted in salt. With these, we swam to shore, pried oysters from the rocks and fruit from the trees. Tonight we feast on silica beaches. Tomorrow we’ll venture into the forest, in search of the ancient computer that created this place.

I’ll write when I’m able.

Report 02

Our captain is unwell since our landing; its ports are swollen with brine. A barnacle has appeared where its optical reader was. We feed it the random numbers we find inscribed in the pearlescent canals of seashells, but still it grows weak. We’ve lashed a makeshift stretcher together from palm fronds and lengths of Ethernet cable. Hopefully this will suffice to carry our captain deeper into the hot, green forest. All around us cicadas wail like disconnected modems and the ground teems with rotten links. Our trust in our captain has brought us this far, and we are lost without its guidance. Only the ancient computer, if she does indeed exist, can help us now.

Report 03

It has been three days since we began our journey inland. Our captain’s condition is unusual, but stable. A fine mold has appeared all over its casing. Feathery cilia reach out as we approach, eager to lap salt from our skin. I have so far kept my distance, more out of respect than fear. Our captain has ceased communicating in natural language and has reverted to issuing commands in its native Lisp. Only Lois, our navigator, can understand. She translates what she can, but even in English its demands were incomprehensible. Perhaps we are too primitive to understand; perhaps we lost our curiosity during the AI winter. At night the jungle hums. We must be getting close.

Report 04

Our initial survey of the island appears to have been incorrect. Lois suspects an electromagnetic field has distorted our bearings. We should have passed through the jungle by now; instead, it grows louder and denser. This afternoon we discovered fungal masses as tall as termite mounds and covered in a mottled orange skin. They’re as rubbery and unknowable as the sharks that took Lois’ brother at sea. But our rations are depleted. We eye the mushrooms hungrily. The captain’s mechanical core is visible only by moonlight.

Report 05

Last night we succumbed to our hunger and tasted the orange fungus. The texture was surprisingly viscous, and once we began to eat we found it difficult to stop. When we finally slept I was visited by the most remarkable visions, which I am not entirely sure were dreams: the ancient computer, taking the form of an enormous insect, descended from the jungle canopy and bound us like pupae in lengths of magnetic tape. As she worked she whispered instructions to our captain in binary code. I awoke more exhausted than before, but the jungle looks different today. Its constrictive net of life now seems beautiful, even logical. Lois, although she cannot explain why, knows precisely in which direction our party must travel: in a nested loop.

Report 06

Our captain is in fine spirits. It lopes ahead of us now, easily, synched with the recursive rhythms of the forest. Its arms are knotted with fiber-optic vines, and when we pause to sate our hunger with more of the orange fungus, butterflies land on its head. They appear to be recharging. Lois and I are taken with fever; the island itself seems to be floating in an ocean of pure data. I feel the edges of everything solid wearing away. The sun rises and sets, rises and sets, and as the day changes I remember the world we left behind: the decayed cities, the docklands where we labored so long, believing we could someday earn passage beyond the edges of the map. How our families feared for us! I’ll never regret stealing the ship that brought us here. I’ll never forget what I had to do to steal it.

Nam June Paik, TV Garden, 1974.

Report 07

Last night, as Lois slept and our captain defragged its drives, I crept alone to a stand of ancient trees a few hundred meters from camp. As I approached, I noticed a faint orange light emanating from the ground, so dim I’m not sure I would have seen it under the white glare of the overhead sun. Emboldened by curiosity, I peeled back a thick mat of moss from the forest floor. It pulled off neatly, like an orange peel. Beneath was a mass of blinking conduits and cables of extraordinary complexity woven deep into the black soil. As I looked closer, I noticed a web of bright orange hyphae coiled around the circuitry. I touched it, and pulled my hand quickly back in shock. I showed Lois the burns this morning; they have already begun to blister. The moss must insulate the island. I haven’t told Lois, or the captain, but I believe the ancient computer to be already with us. I see her face everywhere in the strange symmetry of this place. I think she has been with me since the day we set sail.

Report 08

This morning brought a dry lightning storm to the island. We watched the sky crackle with static in patches between the thick canopy. Lois said it reminded her of the ancient text Neuromancer, which I haven’t read. The air went prickly and forest creatures howled and pelted us with fruit. The lightning spooked our captain, too; for about two hours it stood in a clearing, screaming buggy code into the heavens. This hysteria ended only when the storm broke and the skies opened with a torrential rain. Now that this, too, has passed, the moss underfoot has a pleasant new springiness and the smell of earth and warm plastic is sensuous. In the new light, I’ve noticed that the burns on my hand have faded, leaving a rubbery scab of orange. I think I may be infected. I’m keeping it secret for now.

Nam June Paik, TV Garden, 1974.

Report 09

The more I pick at the orange scab, the more vigorously it grows back. It has begun to spread across my hand, its leading edge pulsing tentatively forward, as though it were searching for food. Despite my horror, there is something beautiful in the way it grows; an intelligence is clearly at play in the lacelike patterns forming against my skin. My dreams have grown more intense and vivid. Last night I dreamt I was home, knotting fishing nets with my blistered hands, bleeding into the sea until I was dry. Every morning Lois douses me with water from the brackish stream that winds across the jungle. I feel myself disappearing into this place. I had a name, once. A family. But now I crave only salt, and to merge with the ancient computer, if she will have me.

Report 10

Forgive me but I must hurry—I am not myself—from the moment I open my eyes my field of vision is not mine—I look at Lois and I no longer see the outlines of her face—instead new light converges into Lois—and then—as though a lens were clicking into place—every molecule—cells ebbing, merging—I see life’s little animals eating each other up—consuming and negotiating and breeding in a dance. A jungle, also—within us all—fantastic, creative madness I could never have imagined exists—never seen before—existence itself. The constellation called Lois refuses my touch—my gift.

Tonight the captain and I will do what is necessary.

/ Report 11 Missing

/ Report 12 Missing

/ Report 13 Missing

/ Reports 14-92 Available Upon Request

Report 93

This is Lois. I should get this down, since I’m not sure I’ll ever get off this island. I suppose that’s what Ballard wanted, in his way. In the end I had to tear the equipment away from him, and as I did that his arms tore away too, leaving only spongy, bloodless flesh, like the stem of a boletus mushroom. I shudder to think what would have happened if he and the captain had caught me sleeping that night, as I imagine they’d intended. Fortunately I’d slept only fitfully since we arrived. Whatever effect the fungus had on Ballard, it spared me. I did my best to tend to him, but our doctor, my brother, died at sea. I’m only an engineer. I’m working to repair this emitter using components I salvaged from the captain. I hope to pierce the magnetic field that contains this wretched place. Ballard stopped making sense a week or so before he died, if you can call what he did that. He and the captain spent hours together, straddling a nurse log and braying with laughter. I didn’t know our captain could laugh. At night, Ballard would sometimes reach out and touch me, whispering, very serious, saying his hands—what was left of them—were passing through me. I thought he was joking, but I know certain fungi can give people the ability to perceive the world around them more acutely. I wish I could ask him, but all that remains of the Ballard I knew is a mound of mottled orange nestled between the trees. I’d feel pity, but I think this is what he wanted. I made a recording of his last words to me, as he melted into the soil and coughed puffs of fluorescent spores into the air between us. I’m transcribing it here, as faithfully as I can, before the drives corrode any further:

Lo, Lo, Lois. It’s not what we thought. She’s here, right here. All this time wasted searching. She was with us. Seamless, self-correcting, magnificent. Please accept this. Please accept this gift. The ancient computer isn’t here. She isn’t an island. None of us are.

🌱


In other news: Bumper Stickers for Your Phone Volume 1 is now available.

xo

Claire

https://clairelevans.substack.com/p/hyperlink-island
A Forest from the Moon
In 1971, the astronaut Stuart Roosa, a former U.S. Forest Service ranger who’d logged his first flight miles smoke-jumping wildfires in Oregon, brought a forest into space.

Tucked into Roosa’s personal kit, hundreds of pine, sycamore, sweetgum, redwood, and Douglas fir seeds made a historic 34 laps around the moon. When Apollo 14’s crew returned to Earth, they spent two weeks in an isolation chamber, and the forest underwent decontamination, too. Under pressure, the seed canisters burst, scattering the seeds and exposing them to a vacuum. NASA would have left them for dead, but Stan Krugman, the geneticist in charge of the project, intervened to salvage and sort them.

From these, some 450 “moon trees” germinated, and by 1975, the seedlings were hardy enough to plant out. The US Forest Service put out a call to state foresters, offering the trees on a first-come, first-served basis. “It was part science, part public relations,” Krugman joked at the time.

The timing was perfect. The 1976 bicentennial was just around the corner, and moon tree planting ceremonies seemed like an appropriately solemn way to celebrate America’s sprint from revolution to the stars. The trees were planted at state capitols, museums, schools, and courthouses in 40 states. Then-President Gerald Ford sent out a telegram to be read at all the tree-planting ceremonies, praising the moon trees as “living symbols of our spectacular human and scientific achievements.” He had a moon pine planted at the White House. Everybody drank Tang afterwards.

Today, the White House moon tree is dead. As are all seven moon trees planted at the U.S. Space and Rocket Center in Huntsville, and the moon sycamore at Cape Canaveral. In fact, fewer than 100 of the moon trees remain, although that number is approximate: having passed stewardship of the trees on to the states, neither NASA nor the Forest Service kept any official record of where the moon trees were planted. For decades, as they put down roots, the moon trees were completely forgotten.

That might have been the end of the story: a forest of moon trees, isolated from one another, exposed to a vacuum of indifference. But in the early 1990s, a third-grade teacher in Cannelton, Indiana learned about a moon sycamore planted at a nearby Girl Scout camp. Unable to find any information, she sent an inquiry to NASA.

Her email landed in Dave Williams’ inbox. Dave isn’t a historian; he runs the Space Science Data Archive, the NASA department responsible for restoring Apollo-era lunar data. “I was like, ‘I've never heard of this before in my life,’” he told me when I reached him over Zoom. “But I knew a lot of the old-timers at Goddard, so I went around, and I asked them—no one had heard of it, no one knew anything about it.”

Bicentennial moon tree planting ceremony, Philadelphia, 1975. Note the US Forest Service’s mascot Woodsy Owl, of “give a hoot, don't pollute” fame.

He started digging through newspaper archives and posting clips online. It was the early days of the World Wide Web, and information was scarce. But people who came across his site submitted moon tree sightings and bicentennial memories. Bit by bit, his bare-bones, unofficial archive became the only repository of known moon tree locations. Over 30 years, it’s brought him into contact with state foresters, park rangers, archivists, and even the Vatican astronomer. But he never met Stuart Roosa, who died in 1994. “We're losing the astronauts,” he says. “In fact there may come a day when the only living things that have ever been to the moon are these trees.”

The moon trees have descendants, second-generation “half-moons” grown from the seeds of a mature moon sycamore at Mississippi State University (you can buy one here, from a Tennessee nursery that also offers a pecan tree from Alex Haley’s childhood home and Helen Keller’s favorite magnolia). Fittingly, Stuart Roosa’s own descendant—his daughter Rosemary—offers half-moon planting ceremonies in memory of her dad. In 2022, having rediscovered the PR possibilities of the project it had ignored for a half-century, NASA flew a fresh batch of seeds around the Moon on its Artemis I mission. Those will be planted soon; the legacy continues.

Last year, I pitched a version of this story to a magazine, suggesting that I could round out the piece by visiting the moon tree plantings closest to me, along California’s central coast. The editor, very kindly, told me this might be boring for readers. After all, the moon trees are just trees. Their provenance gives them no discernible aura, no exotic mutations. They don’t grow upside-down or sideways. “What’s less space-age than a tree?” says Williams. This is precisely why I like the moon tree story so much.

Trees are wonderfully indifferent to narrative. Every historic tree I’ve ever visited—from the “Major Oak” in Sherwood Forest to the 5,000-year-old ancient bristlecone pines of Northern California—has been, in its own way, utterly normal. Plaques nearby always insist otherwise, of course: this tree has been to space, this tree saw the rise and fall of empires, this tree sheltered Robin Hood. But trees know better than to let our stories dictate the value of their lives. Who the hell is Robin Hood, to an oak?

The moon sycamore at Mississippi State University, parent to the next generation of moon trees.

In the late 1980s, the “space philosopher” Frank White published a famous book called The Overview Effect, which argued that seeing the Earth from space triggers a state of transcendent awe that irrevocably changes astronauts’ perspectives. White’s theory is poetic enough, but in the hands of space industry boosters, it’s given moral weight to the idea that humanity is fated to expand to other worlds. If astronauts return to Earth transformed, then we should all be astronauts, White suggests—in fact, it’s our evolutionary imperative to colonize other worlds. But the original moon trees, unchanged by their journey, are perfectly happy putting their roots down here on Earth. Maybe we should be too.


In other news, The Computer Accent, the feature documentary film about my band YACHT’s early experiments with machine learning and music, is finally available to rent or buy on Apple TV and Prime Video. It was filmed in 2017, and although it’s already become a time capsule of creative AI during what turned out to be a transformative moment, the central thesis of the film—that music documents experience, and that the role of technology shouldn’t be to make those experiences more efficient, but rather to enliven and complicate them—remains solid.

I did a long interview about the film and my thoughts on AI now with AllMusic, if you’re interested, and the filmmakers have also released an extended cut, in podcast form, of one of the film’s most interesting conversations, with the algorithmic music pioneer Dr. David Cope, who started making generative music systems in the 1980s. Visiting Cope’s office in Santa Cruz is one of my favorite memories of making this film; every inch of the ceiling was covered in wind chimes, and when the window was cracked open, a gentle breeze from the sea made the most lovely music of all.

xo

Claire

https://clairelevans.substack.com/p/a-forest-from-the-moon
When Animals Dream
A conversation about animal dreaming with philosopher David M. Peña-Guzmán.

Anyone with a pet knows, intuitively, what many biologists still hesitate to admit: that animals are dreaming beings. “Like us,” writes the philosopher David M. Peña-Guzmán, “they are builders of worlds, even after the Stygian currents of sleep have pulled them under and sent them flying through the looking glass.”

Peña-Guzmán is of a school of contemporary philosophers who work closely with scientific material, unpacking the implications of laboratory research on our understanding of consciousness. In his 2022 book When Animals Dream, he draws from behavioral and neuroscientific research on animal sleep to make the case that not only do many animals dream, but their dreaming is evidence of phenomenological consciousness: the capacity to sense, feel, and experience the world. As such, he argues, dreaming, rather than forming language, suffering, or making rational judgments, is what confers moral status on animals. Because they dream, animals are conscious beings “who matter, and for whom things matter.”

It’s a fascinating book—packed with daydreaming rats, hallucinating monkeys, and the silent songs of sleeping finches—and Peña-Guzmán’s argument is lucid and deeply felt. We spoke over Zoom this week; I’m giving you some highlights below.

Let's start with something a bit personal. I wonder if spending years thinking about the dreaming lives of animals has had an effect on your dream life.

I had a dream that really marked me once, in which I was wearing a white lab coat, and I was with a group of other scientists hovering over a sleeping rat in a laboratory setting. I think this was, partly, an anxiety dream—about my own anxiety about using scientific research to make an argument about animal minds and animal ethics, knowing full well that a lot of the research that I use doesn't meet my own standards of ethics. This is something that I have been struggling with in my work: what it means to work in close proximity to scientific protocols that often kill animals.

It's clear from your work that you're profoundly attuned to animals, and you care deeply about the ethical and moral responsibilities we have to them. But yeah, your book does include a number of quite brutal laboratory experiments as evidence.

I'll be very honest: I had a lot of anxiety about this. I had to do some soul-searching about what exactly my goals are, and what it means to pursue them. I'm not sure that I'm satisfied with my own answer to this issue. I still sit uncomfortably with it. The way in which I've thought about it, up until now, is that perhaps we can use some of this research that's been produced by scientists to launch an internal critique. To show: look, based on these experiments that you've already done, the image of animals that you have, and that you use to justify your experiments, is overturned. Because the image that emerges of animals from this research is of sentient, complex, sensible creatures that therefore trigger moral considerations. One thing I will not do is call for more of these experiments. That is inconceivable. I'm not going to call for more experiments proving that animals feel pain. The evidence has been there for a very long time.

Your argument for animal dreaming draws analogies between the neuroanatomy, electrophysiology, and the behavior of animals during sleep and our own. But you also write that "other than anthropocentric conceit, we have no reason to expect that other animals dream in the same way that we do." How is our dreaming similar, and how is it different?

What I have in mind here is a distinction between, let's say, structural and functional similarities. I'm giving all these reasons to believe that other animals can dream. But how they dream—that's not something that I touch upon, except in a few cases where the science does give us insight into some elements of the content of their dreams. You can think about it as a form versus content distinction. I think animals have the form of dreaming, the capacity, but then they fill it in the same way they fill the world of waking experience, which is to say with their own sensory capacities, with their own bodily limits, and capabilities, and so on. There might be other animals that have the same sensory modalities that we do, but that don't give them the same weight.

For example, humans are highly visual creatures. A third of the cortex is dedicated to visual processing, and so even from an anatomical perspective, vision takes up a lot of real estate in the brain. Most of our dreams tend to be visual. But there are animals that see, but that don't see well, and for whom seeing is pretty much a backup sensory modality. In that case, I think it's fair to suspect that they will have dreams where there is not a lot of visual content. Think about an electric eel that senses and produces electricity. That's a sensory modality for that creature. It's conceivable that an eel will have electric dreams, and there’s no way for us to even imagine what that is, because we don't know what it means to experience electricity as meaningful. But they do. That's the sort of thing I mean: that they will dream, but not like us.


“An eel will have electric dreams, and there’s no way for us to even imagine what that is, because we don't know what it means to experience electricity as meaningful.”

This reminds me of the concept of the umwelt, the perceptual sphere of an animal. Could you say that an animal's dream is its imaginative umwelt?

I think that's exactly what a dream is. I think a dream is an imagined umwelt. When people talk about the umwelt, they talk about it like it's a bubble that engulfs the animal in which it is immersed. In this case, it's imagined in a much more radical sense than our waking umwelt, but both of them are imagined. It's just that you see the construction much more clearly, in the case of the dream.

Animal dreams, like their perception, will always be unavailable to us. My cat will never be able to describe her dreams to me, but that doesn’t mean she doesn’t dream.

Often the barrier that those of us who work in this field have to jump over is the issue of language. How do you know [animals dream] when animals can't talk about it? Often we navigate that by saying, well, you can have the experience without having language, because although language might add something to the experience, it doesn't constitute the experience. An example of this is the idea of thought itself. For a long time, thought has been described as having a linguistic form: to think is literally to think in a sentence. But you can think visually, and a picture is not a sentence. It doesn't have a linguistic structure. There are no verbs, there are no subjects and nouns. Philosophers are really invested in making things linguistic and I just believe that's a mistake. And it's been a mistake for a very long time.

An anecdote that blew my mind in your book is that while there's evidence of dreaming in mammals, lizards, birds, and even fish, the jury is still out on cetaceans. Of all the animals that I feel certain must dream, dolphins are at the top of my list.

It's not that there isn't evidence of dreaming in cetaceans, it's that cetaceans are the most contested case. And that has to do with the style of their sleep cycle. Dolphins have hemispherical sleep: they fall asleep with only half of their brain, while the other half is awake and alert and interacting with the world. They keep half of their brain awake, because they have to go up to breathe, otherwise they would drown, and because they have to follow the pod that's constantly moving. And so some people have said, logically, that they wouldn't be able to dream, because a dream is something that you get immersed in, and a dolphin can't get immersed in a dream and still be actively interacting with the world in a coherent way. But the fact that it's inconceivable to us doesn't mean that it's not materially possible for them. Who knows? It could be that they have a split-screen vision, where half is dream, half is reality. Maybe it's a combination of the two. Maybe they switch back and forth, changing channels between reality, dream, reality, dream. We don't know whether or not they dream. Some people have noticed indicators of REM in whales and dolphins, others have looked and have not found them. My position on this is that I just don't want to close the door too soon merely because we can't comprehend it.


“Maybe they switch back and forth, changing channels between reality, dream, reality, dream.”

When Animals Dream opens with an anecdote about Heidi, a female day octopus caught sleeping in a 2019 PBS Nature special. As she sleeps, Heidi’s dreams appear to play across her skin in a stunning chromatic display; she flicks from white, to orange, to wine-dark purple and finally light gray and yellow, in a sequence, Peña-Guzmán suggests, that’s almost narrative in quality. If you’ve never seen the video, it’s really remarkable:

When Animals Dream is available here. David M. Peña-Guzmán is also the co-host, with Ellie Anderson, of Overthink, an excellent and very approachable philosophy podcast that tackles a wide range of subjects ranging from animal justice and standpoint epistemology to psychedelics, orgasms, and AI.


In personal news: a few weeks ago I joined the designer Mindy Seu and Yasaman Sheri, principal investigator of the Synthetic Ecologies Lab at the Serpentine Gallery, for an online seminar hosted by the School of Visual Arts. We talked slime mold, ferments, networks, and interspecies collaboration, among other things. The video for that session, The Algorithmic State: Wetware, Fermented Code and Artistic Inquiry, is now on YouTube if you’d like to watch.

xo

Claire

https://clairelevans.substack.com/p/when-animals-dream
Brighter Than a Cloud
How to describe a scintillating scotoma?
Woman on sofa obscured by c-shape scotoma with black, 1987.

How to describe a scintillating scotoma? It’s one of the most common symptoms of a migraine, but unless you’ve had one, it sounds unreal. A scintillating scotoma is like a barbed ripple in the pool of sight. It’s a skeletal Magic Eye raised up from the flatness of the world. It’s a glare on the tarmac as you drive west at sunset on a rain-slick freeway—only when you turn your head, it’s still there, so you have to pull over, close your eyes, and wait out the slow-motion firework working its way across your brain.

Medieval migraine sufferers compared the jagged shape of their scintillating scotoma to fortresses, and medical literature describes their “fortification patterns.” I never quite understood the analogy until I took a water tour of Copenhagen—as our boat slid down the Øresund alongside the island citadel of Kastellet, I saw my own scotoma in stone. A migraine aura, too, has ramparts and ravelins, bastions against a zig-zag world. At their edges, chaos reigns. The thin interface between a scotoma and normal vision is a boiling tar, a volley of arrows, a violence of inputs and light. In the midst of an attack, it’s tempting to think that perception is always at war with reality.

In the absence of an organizing mind, everything comes unglued. Faces go missing and dark holes seem to eat half the universe. Migraine sufferers can experience the uncanny sense of consciousness doubling known as déjà vu, or its cousin, jamais vu, in which the world feels newly made. The world might feel suddenly very unreal, fracture into a mosaic, or slow to a stop-motion pace, dropping frames. The self might cleave in two in a fit of somatopsychic duality. Writing about these bizarre and horrifying perceptual phenomena, the late Oliver Sacks observed that migraines “show us how the brain-mind constructs ‘space’ and ‘time,’ by demonstrating what happens when space and time are broken, or unmade.”

Landscape with c-shape scotoma and missing vision, 1980s.

It’s true that migraines offer a rare vantage on the brain and its mysteries. As the scotoma’s shining edges recede across your field of vision, they trace a corresponding wave of electrical activity, a “spreading depression” crossing the cortex at 3 millimeters per minute. You can watch it happen in real time, as you might watch a cloud track across the sky on a windy day. You might even have time to draw a picture of the scotoma as it passes overhead—a landscape en pleine tête.

The Wellcome Collection, in London, hosts a remarkable collection of such pictures: some 545 submissions to a British migraine art competition that ran throughout the 1980s. Each image has its own vernacular charm. The scotomas explode into tidy living rooms and block bucolic country roads. They dazzle a woman’s smile. They arc like ball lightning across the faces of row houses. The collection is worth appreciating on an aesthetic level alone, as sublime outsider art, but it’s valuable, also, because it reveals the scotoma’s universal form. Like dreams, migraines map terrain we all share.

Face obscured by c-shaped scotomas in black, red and yellow, 1980s.

Perceptual distortions are difficult to measure, but they can be approximated in paint and pencil, which makes migraine art a powerful diagnostic and scientific tool. The earliest depictions of migraine phenomena were illustrations made by physicians who happened to be migraineurs themselves, like the German ophthalmologist Christian Georg Theodor Ruete, who illustrated the three successive stages of his own “flimmerskotom” in 1845, and the 19th-century British physician Hubert Airy, whose ink renderings wouldn’t be out of place in the Wellcome’s migraine art collection.

Hubert Airy's 1870 illustration of his own scintillating scotoma, reproduced in P. W. Latham's On Nervous or Sick-Headache (1873).

Nor, for that matter, would the illuminated manuscripts of the 12th-century German Christian mystic Hildegard of Bingen. Hildegard, who spent most of her childhood as an anchoress, enclosed in a one-room cell adjoining the Benedictine monastery at Disibodenberg, described lifelong gripping pain and a persistent umbra viventis lucis: a reflection of living light. “The light which I see…is not spatial, but it is far, far brighter than a cloud which carries the sun,” she wrote. Descriptions like these, supplemented by her visionary artwork depicting fortifications, mandalas of stars, flaming eyes, and openings in the sky lapped with white-crested waves, have led many neurologists to retroactively diagnose her as a chronic migraine sufferer.

Manuscript illumination from Scivias (Know the Ways) by Hildegard of Bingen, 12th century.

Does such a diagnosis negate Hildegard’s holy ecstasies? Is it reductive to pathologize the shimmering visions that led a Medieval abbess to compose some of the most stunningly beautiful liturgical music of all time? Oliver Sacks, again, answered these questions wisely, in his 1970 book Migraine:

“Invested with [a] sense of ecstasy, burning with profound theophorus and philosophical significance, Hildegard’s visions were instrumental in directing her towards a life of holiness and mysticism. They provide a unique example of the manner in which a physiological event, banal, hateful, or meaningless to the vast majority of people, can become, in a privileged consciousness, the substrate of a supreme ecstatic inspiration.”

Hildegard’s ecstatic inspiration was religious in nature, but the migraine experience is non-denominational. Lewis Carroll transmuted his acute migraine symptoms—the visual disturbances of Lilliputian and Brobdingnagian vision, which made him feel as though his limbs were shrinking and growing away from him—into Alice in Wonderland. Giorgio de Chirico, who suffered from scotoma, depersonalization, déjà and jamais vu, among other migraine symptoms, repeatedly painted wobbly suns, jagged voids, zig-zags and disproportionate figures into his metaphysical canvases.

According to Migraine Art: The Migraine Experience From Within, migraine auras are as old as humankind—so old, perhaps, that they may have inspired the geometric forms of Stone Age cave drawings. Which makes recent attempts to generate migraine auras using convolutional neural networks seem particularly poignant to me: what began in stone, animated by the hot flicker of firelight, continues 5,000 years later, deep in the heart of servers whose mineral components were mined from the same dark Earth.

xo

Claire

https://clairelevans.substack.com/p/brighter-than-a-cloud
Would the Buddha Wear a Walkman?
A few months ago I wrote a newsletter about lucid dreaming. After a night of uncharacteristic sleep deprivation, I’d experienced a visceral flying dream, after which, curious, I lurked some message boards, read the literature, and casually started performing “reality checks”: small daily rituals designed to test whether or not I am, in fact, asleep. My reality check of choice is counting my fingers, and the first time I found myself doing this in a dream my hands looked like the telltale flippers of bad generative AI. One finger had a smaller finger jutting out at the knuckle.

Choromatsu, a Japanese snow macaque, was the star of an award-winning SONY Walkman campaign in 1988. He died at the age of 27.

A few months ago I wrote a newsletter about lucid dreaming. After a night of uncharacteristic sleep deprivation, I’d experienced a visceral flying dream, after which, curious, I lurked some message boards, read the literature, and casually started performing “reality checks:” small daily rituals designed to test whether or not I am, in fact, asleep. My reality check of choice is counting my fingers, and the first time I found myself doing this in a dream my hands looked like the telltale flippers of bad generative AI. One finger had a smaller finger jutting out at the knuckle.

The weirdness was enough to jolt me into lucidity, which was the point.

So now I know how to wake up within my dreams, but controlling them is another matter. Dreams have their own logic. To change scenes, for example, you have to seek out a door or an elevator; dreams require a superficial context for conveyance. Failing that, a television set will do: you just have to twiddle its knobs until a new landscape emerges from the screen. To stabilize the dream, spin in place like a dervish. To fly, find a window and jump. To force yourself awake, it’s best to just find a bed and fall asleep inside the dream—and if you wake up in another dream, you can always rule out a “false awakening” by counting your fingers again.

These are tips I’ve picked up from the dream forums, where oneironauts share strategies for getting around the dreamworld. Reading their reports, I’m struck by their sense of shared place. No matter where the sleeper lays their head, the physics of dreams are unchanging. All dreams have a ground, for example. You cannot witness your own death in a dream without the dissolution of the dream. All dreams resist written symbols. Numbers and letters are so rubbery in dreams that simply trying to read something is a powerful reality check (someone on the lucid dreaming Reddit recently reported that their watch, in a dream, read 11:Y).

The ineptitude with numbers is not limited to the dreamer, or to the dream itself. Studies have shown that “dream characters,” the entities we interact with inside of our dreams, are useless at math. Although they can be creative and ingenious interlocutors, when tasked with arithmetic, dream characters can’t solve problems above a primary-school level, and are especially stymied by addition and subtraction.

Oneironauts largely agree that dreams are in color, although it was scientific consensus, well into the 1960s, that all dreams were black and white. The black and white dream era spanned, roughly, from the invention of photography to the invention of color film. It’s fun to speculate that black and white movies so affected the collective subconscious that they permeated our dreams, but it’s more likely that we don’t dream in black and white or in color but rather in some more indeterminate palette, on which, upon waking, we impose interpretations based on the world around us. As the philosopher Eric Schwitzgebel writes, “perhaps dream-objects and dream-events are similar to fictional objects and events, or to the images evoked by fiction, in having, typically, a certain indeterminacy of color, neither cerise nor taupe nor burnt umber, nor gray either.” Is fiction in black and white? A question for another night.


I know this newsletter is ostensibly about technology and biology, but I can’t keep myself from pursuing interestingness, so here we are. We could call dreams an “interface” if it makes us feel better. Or a benchmark for consciousness, something far beyond the so-called hallucinations of neural networks. As the philosopher David M. Peña-Guzmán lays out in his book When Animals Dream, dreaming is not unique to humans, or even to mammals, despite the fact that most animal sleep researchers prefer to use guarded euphemisms like “oneiric behavior” and “mental replay.”

In the conclusion of a famous study of sleeping zebra finches, which revealed that the little birds sing in their sleep—rehearsing, note for note, the songs that mark their days—the authors proposed that this nighttime “replay” was merely an “algorithmic implementation” rather than anything like a human dream. As Peña-Guzmán comments, in this framing, the finches do not experience their own dreams “any more than a laptop experiences running Adobe Reader or Microsoft Word.” Needless to say this is not Peña-Guzmán’s view, nor is it mine. A mountain of electrophysiological, behavioral, and neuroanatomical evidence supports the vast dreaming lives of animals, that is to say, their meaningful inner world. I wonder if a zebra finch, or, for that matter, the overweight domestic shorthair currently asleep on my shoes, could ever have a lucid dream. What might they choose to do in that dream that they couldn’t do upon waking? And would they think it was real?

In Tibetan Buddhist sleep yoga, the dreams we have at night are considered “example dreams:” spiritual clues to the fundamentally illusory nature of waking reality, which is the actual dream. In Western lucid dream technique, reality checks like counting your fingers are a way to differentiate waking and dreaming states, to underline their difference; in Buddhist practice, from what I understand, similar “illusory form” meditations help to remind you that everything is a dream, including death—the dream at the end of time. Anyway, I had to laugh when I found this book at the thrift store:

“From intelligence-boosting devices to lucid dreaming techniques to shaman training to innovative new psychotherapies, this book provides a comprehensive consumer guide to the new frontiers of high-tech for higher consciousness.”

Published by former editors of Omni Magazine—someday I’ll share the story about my tenure as the editor-in-chief of a scrappy, and, as it turned out, wholly illegitimate Omni reboot—it’s a Whole Earth Catalog-esque index of New Age tech, including a number of dream hacking tools, like digital dream diaries and REM-detecting lucid dreaming goggles designed by the psychologist and preeminent lucidity scholar Stephen LaBerge. The full book is on the Internet Archive if you want to browse it (as is Mind Mirror, Timothy Leary’s “self-help life simulation software” on CD-ROM).

Most of these New Age techno-gadgets may seem silly now, but dream tech is alive and well. A well-funded startup called Prophetic is threatening to train machine learning models on EEG and fMRI lucid dream data, then beam its findings via ultrasound directly into willing brains—their Halo device, “the most advanced neurotech wearable ever created,” can be pre-ordered for $2,000. At the other end of the spectrum, the Dormio, which you can hack together yourself in a weekend, promises easy dream incubation by whispering prompts into your ear at the most suggestible moments of hypnagogia. The latter, Michael W. Clune writes in a great Harper’s piece, offers an unsettling glimpse into the “mind’s ceaseless, devouring creativity,” which can be as frightening as it is exhilarating. Your mileage may vary.


Anyway, back to the waking world. On February 8th, I’ll be giving a brief virtual talk at the School of Visual Arts about new frontiers in biocomputing—think slime molds, fungal architectures, and new biological substrates for AI—and talking “wetware” with Mindy Seu, of Cyberfeminism Index fame, and Yasaman Sheri, principal investigator of the Synthetic Ecologies Lab at the Serpentine Gallery. RSVP for “The Algorithmic State: Wetware, Fermented Code, and Artistic Inquiry” here if that sounds fun to you.

xo

Claire

https://clairelevans.substack.com/p/would-the-buddha-wear-a-walkman
The Time of Big Walking
Happy New Year, everyone.

Happy New Year, everyone. I write this brief reflection to you from my in-laws’ home in Vaasa, Finland. This close to the Arctic Circle, the sun rises a little past ten in the morning and moves low across the sky, never quite clearing the birch forest on the horizon. At midday it’s already setting and by three in the afternoon it’s night again. While present, the sun is fiercely golden. Everyone takes advantage. Cross-country skiers and dog-walkers appear on the frozen lake, and a neighborhood teen, I’m told, with a sizable TikTok following, whips donuts on a hot-rod snowmobile. Light here is a precious commodity. But then it is everywhere, especially these days.

In the sleeplessness of my intense jetlag I’ve been reading about the Sámi people, indigenous to the Sápmi territory spanning Northern Finland, Sweden, Norway, and Russia. Their calendar has eight seasons (including spring-winter, pre-summer, pre-autumn, and pre-winter) and twelve months, articulating successive changes in the environment and its inhabitants. March is Njukčamánnu, Swan Month; April is Cuoŋománnu, Snow-Crust Month; May is Miessemánnu, Reindeer-Calf Month. When these months land—on which precise days—is less important than their sequence: Swan before Snow-Crust, Snow-Crust before Calf, Calf before Hay, Molt and Rut, all of which fall before Dark-Period, which begins in Skábmamánnu, or November.

Source: BÁIKI: The International Sámi Journal. Note the Editor’s Note about the impact of the Chernobyl Disaster on the Sámi. Reindeer feed on lichen, which, having no roots, absorbs radiation and heavy metals easily; in the aftermath of the disaster, some 73,000 contaminated reindeer were culled in Sweden alone.

In the colonial calendar, March is March, but the thin crust of ice over the snow and the Whooper Swans appear when they appear, on nature’s time. This means March is fluid, and it can change, for better or worse. In the immediate aftermath of the Chernobyl Disaster, for example, the Sámi were forced to start pre-Autumn early, slaughtering reindeer before they had a chance to eat too many contaminated lichen, mosses, and mushrooms. Artificial borders, which limit the movement of reindeer and their herders, can derail the calendar too, as does, of course, climate change. In the Sámi calendar, time relies on the landscape and its inhabitants for meaning. Maybe that’s a useful thought for heading into Ođđajagemánnu, New-Year Month: to know what time it is, you have to know what’s happening around you.


Below is another interesting indigenous Arctic calendar, courtesy of the Evenki people of Northern Russia, China, and Mongolia: starting from the head, the first month begins after the spring equinox, putting Western New Year somewhere around the right wrist. For the Oroquen people of China and Inner Mongolia, the period around New Year’s is known as “the time of big walking,” because the heaviest snow hasn’t yet fallen and it’s still possible to track nearly thirty kilometers a day hunting wild sable (this is followed, of course, by the time of little walking).

Defining a year solely in terms of a fixed calendar is a kind of luxury: a marker of the fact that some people don’t need to observe unfolding changes in the landscape to know when to hunt, when to leave the mountains, when to return. But it is also an impoverishment, removing the richness of the living world from everyday view. A year is 365 calendar days, but for the Sámi, Evenki, Oroquen and other Arctic peoples, it’s also a flow of crows, swans, blossoms, and lichen, a crust of ice on the snow, birch leaves yellowing on the trees, reindeer grown fat on mushrooms and berries, lakes slowly hardening with ice, each moment blooming, thawing, freezing into the next, from head to shoulder to wrists and back again. Of course, these observations have historically been a matter of subsistence, but what is a calendar, if not a record of the days we’ve survived, or hope to? In any case, I’d like to see—and feel—the world around me so closely in the year to come.


As for the year behind us: I spent 2023 on a project completely outside the purview of this newsletter. I can’t share anything yet, but it has involved building a new world with my best friends. In my own writing, I went long on scale, two ways, for Grow Magazine: making a case against scalability and exploring the history and philosophy of microscopy. For the MIT Technology Review, I did a little biotech reporting and indulged my passion for media archaeology by traveling through the colorful history of the corporate presentation. I published long conversations about intelligence (artificial and otherwise) with writers James Bridle, Sheila Heti, and Joanne McNeil.

I was nominated for (and lost) a National Magazine Award, gave an old favorite the Pop-Up Magazine treatment, moderated a panel on AI and protein-folding, and premiered a visual presentation about the new frontiers of biocomputing at Sónar Festival. Broad Band was translated into German and improbably named one of the “best tech books of all time” by The Verge. A short story I wrote about a future museum of symbiosis was turned into a physical installation (out of mycelium, no less) for the Venice Biennale. My band also released four singles and a video.


I loved the Italian nanotechnologist Laura Tripaldi’s trippy and audacious “technomaterial bestiary” Parallel Minds, about the intelligence of materials. I loved Renata Adler’s elliptical 1977 novel Speedboat, which Meghan O’Rourke described reading as “like being in a snowstorm” (and which I read here in Finland, as snow indeed fell outside the window). I loved The Diving Pool, a collection of three spare and unsettling horror stories by Yoko Ogawa. I loved Stanislaw Lem’s Memoirs Found in a Bathtub, and this album by the Tuareg family band Etran de l’Aïr. I loved walking around my neighborhood every morning listening to audiobooks on Libby. The video game IMMORTALITY blew my mind (more about that soon). Here is a playlist of songs I put on every time I potter around in the garden.

Have a wonderful New Year, however you define it.

xo

Claire

https://clairelevans.substack.com/p/the-time-of-big-walking
Living in a Lucid Dream
I had an early flight, so I went to bed long before I was tired.
Leonora Carrington, And Then We Saw the Daughter of the Minotaur. 1953.

I had an early flight, so I went to bed long before I was tired. But it doesn’t matter—I never sleep when I know an alarm will wake me before sunrise. I just lie awake, my thoughts and I in a minuet of worry, until the bell tolls. That night was no different.

Five hours passed. Cars drove by, coyotes howled. I dressed for imaginary occasions in my mind. I thought of my mother, a lifelong insomniac happy to scrape by on three or four hours a night. I felt heavy and conspicuous, like a fallen statue.

When the alarm finally rang I rose, dressed, and gathered my things. Just as I was making coffee my phone pinged: the airline. My flight was four hours delayed. I went back to bed—still exhausted, now fully dressed. Again, I lay there, wide awake. I was about to give up when I was pulled out from under the sheets by my ankles.

In a single motion, an unknown force tossed me like a ragdoll out of the sliding-glass door. Voices echoed and called across the horizon as I tumbled through the sky, in a constellation of blue pinpoint stars that were also me. I realized I was dreaming.

It wasn’t like any dream I’d ever experienced: I was completely awake, and, once the panic subsided, I realized, in control. I concentrated, tried barrel rolls. I felt the pillow close; felt the reality of my bedroom just beneath the dream. The bed pulled me back, the force pulled me out again. I dove, coasted, watched the sun rise over the skyline of LA. I wondered if I might miss my flight.


There is, of course, a robust community of lucid dreamers on Reddit. On their recommendation, I picked up Exploring the World of Lucid Dreaming, a 1990 manual by the Stanford psychophysiologist Stephen LaBerge. The book outlines a technique for becoming reliably lucid: MILD, or the Mnemonic Induction of Lucid Dreams (a more advanced technique, WILD, can cause night terrors if done incorrectly).

One of the fundamentals of lucid dream induction is something LaBerge calls “critical state-testing technique.” The Reddit community calls it reality testing. It’s simple: every day, as often as possible, perhaps every time you pass through a door, ask yourself if what you are experiencing is real. Ask yourself seriously. Look around and justify your answer. Count your fingers. Plug your nose. Look at your watch, then look again to see if the numbers are still there. Try to push a finger through your palm. Do this often enough, and it becomes a habit. Eventually you’ll do it in your sleep. And when that happens, you will find that your fingers multiply. You can breathe with your nose plugged. The watch is unreadable. Dream Standard Time: you’re asleep.

Salvador Dali, The Persistence of Memory (detail), 1931.

LaBerge writes that the dream world is as “real” as its waking equivalent, at least in a sensory sense. To our brains, the sights and sounds of a dream are as real as any other experience. The difference is that while objects in the waking world exist outside of our perception of them—persisting even when we’re not looking—dream-objects are created anew in each moment of perception. I found this idea eerily reminiscent of the great science fiction writer Philip K. Dick’s one-sentence definition of reality:

“Reality is that which, when you stop believing in it, doesn’t go away.”

This definition, to which I probably refer too much, is from “How to Build a Universe That Doesn’t Fall Apart Two Days Later,” a meandering speech Dick gave in 1978. It’s pure Dick: observations about Disneyland, nods to pre-Socratic philosophy, stories about androids that don’t know they’re androids, an account of Dick’s prophetic visionary experience, and a deep dive on the theory, developed from that experience, that we live in an ancient world, reality is an illusion, and time is accelerating.

In the speech, he talks about having an extraordinarily vivid dream, which he felt compelled to put to paper, and which figures as a scene in his 1974 novel Flow My Tears, The Policeman Said. After the book was published, he realized the scene was from the Bible. His conclusion was that both the Bible and his novel drew from the same well of truth, the same observation of a real world that persists beneath the collective dream of waking life.

“An author of a work of supposed fiction might write the truth and not know it,” he said.


Finally, here are some recent long-form pieces from me, which I wrote—as best as I can tell, anyway—in a fully-conscious state:

The Perpetual Zoom, for Grow Magazine

I began this piece on the history of microscopes nearly a year ago, when a casual interest in the Dutch microscopist Antonie van Leeuwenhoek turned into a mild obsession. Leeuwenhoek was a self-taught draper from Delft and the first human to see beyond the visible, peering through a tiny bead of glass into the world of bacteria, which he called “animalcules.” Writing this piece, I was trying to get at what it really means to “see” through a microscope, and how modern microscopy techniques—using electron beams, lasers, and scanning probes to plumb data from the microrealms—push our understanding of “seeing” to the point of total abstraction. Shoutout to the UCLA NanoSystems Institute, who were generous enough to let me look at all their advanced microscopes, and to the ladybug who landed on my foot while I was there.

The New Mechanical Turks, for PioneerWorks

This is a conversation with the writer Joanne McNeil about “fauxtomation:” when tech companies obfuscate the human labor powering allegedly automated content moderation systems, food delivery robots, and self-driving cars. McNeil’s very good debut novel, Wrong Way, out now with MCD Books, is about the hidden life of one such worker, the driver of a “driverless” car. It’s sci-fi, but, reading it, I couldn’t help but think of Philip K. Dick’s observation: that a writer of supposed fiction might inadvertently chance on the truth. This conversation is the second installment in Three-Way Mirror, a series of long-form conversations about AI I’m publishing with PioneerWorks. The first, if you missed it, was with the Canadian novelist Sheila Heti.


Finally, a bit of news! I received word this week that I’ve been awarded a 2024 MacDowell Fellowship. So I’m excited to announce that I will be spending some time next year in a cabin in the forest—working, finally, on my next book.

Sleep well,

Claire

https://clairelevans.substack.com/p/living-in-a-lucid-dream
Fear of Trees
A late-spooky season admission: I’ve always been a little bit afraid of trees.
Le Seize Septembre, René Magritte, 1957.

A late-spooky season admission: I’ve always been a little bit afraid of trees.

I blame Algernon Blackwood. I read his 1912 story, The Man Whom The Trees Loved, at an impressionable age. In the story, a young man senses a murky intelligence beneath the bark of a cedar. This special living “something” in the tree excites him; it feels animal-like. His obsession with it becomes ecstatic. He fantasizes that the forest might someday “engulf human vitality into the immense whirlpool of its own vast dreaming life.” The dark woods at the edge of his ordered cottage garden creep closer and closer, until they claim him whole.

Blackwood’s story stirs something weird in me. When the tendrils of ivy creeping along my back fence touch the ground, I rush to cut them back, lest they take me away too. On a whim I recently Googled “dendrophobia,” the fear of trees. In the years-spanning comments section of an online phobia dictionary—a place I consider to be the Deep Internet, almost primevally online—I read about a man stricken to paralysis by the gnarled roots of banyan trees “haphazardly touching the ground in one strangling embrace.” The whooshing sound of leaves in the wind is a common trigger. An account from a young mother: her preschooler noticed a tree near his playground, and, convinced it wasn’t there yesterday, is now inconsolable.

The toddler isn’t wrong; plants are on the move. As the botanist Jack Schultz writes, plants are “just very slow animals.” Forests migrate across great distances, racing against climate change; time-lapse photography reveals the snaking, foraging movements of vines, of seedlings “circumnutating,” swirling around in searching ellipses. We’re in the midst of a revolution in thought regarding plant intelligence, with research suggesting that plants can anticipate pain, warn one another of danger, recognize kin, and learn. It can be disorienting, and yes, even frightening, to encounter an agency so different from our own. Acknowledging the intelligence of plants means decentering our own, and it has moral weight, too: if the whole world can feel, then the whole world is screaming in pain. Truly a horrorshow.

Indeed, horror — along with folklore and science fiction — has long exploited the unsettling implications of plant motility and intelligence, from stories of medieval mandrake roots and deadly upas trees to John Wyndham’s alien triffids, H.G. Wells’ “strange orchid,” and, of course, Audrey II, the bloodthirsty succulent from Little Shop of Horrors. With every new piece of plant intelligence research, it feels more than ever like the old stories are true, like Blackwood’s special living “something” really is something, waiting to be known.


"Sometimes my thoughts grow confused, and it is as if the forest has put down roots in me, and is thinking its old, eternal thoughts with my brain."

— Marlen Haushofer, from The Wall


Maybe to fear trees is to fear deep time. Forests seem to hold and retain time like sponges. The late naturalist-writer Barry Lopez, who I’ve plugged in this newsletter before, once wrote that time pools around places with a certain weight. He observed this—admittedly jet-lagged out of his mind—while researching a story about long-haul air freighters. On a layover in Cape Town, he visited Clifton Bay, with its lofty views of oak and pine forests and the silhouette of Table Mountain, so ancient that Lopez felt he could perceive indigenous time still clinging to its face. “It resisted being absorbed into my helter-skelter time,” he wrote. “It seemed not yet to have been subjugated by Dutch and British colonial expansion, as the physical landscape had so clearly been. It was time apparent to the senses, palpable.”

I feel this way sometimes, even in LA. Along the city’s industrial edge, near a drive-in movie theater I frequented in the early days of the pandemic, the unpaved lots are overgrown with tall grass. Driving around, it’s easy to imagine the grass extending in all directions—to squint a bit, ignore the warehouses, and trace the flat plain of the land, a broad valley between the mountains. The overgrown lots are windows onto the original land, revealing what’s buried beneath the asphalt and freeways; not only the plain, but its time. This feels very much like sensing the special living something inside of a tree: eerie and beautiful, on the knife’s edge of terrifying.


The great Belgian surrealist painter René Magritte—my header image today is his painting Le Seize Septembre—on trees:

“Pushing up from the earth toward the sun, a tree is an image of a certain happiness. To perceive this image, we must be still, like a tree. When we are in motion, it’s the tree that becomes the spectator. It is witness, equally, in the shape of a chair, a table, a door, in the more or less restless spectacle of our life. The tree, having become a coffin, disappears into the earth. And when it is transformed into flames, it vanishes into the air.”

The Day of the Triffids, 1963.

Further reading:

Evil Roots: Killer Tales of the Botanical Gothic, from the British Library’s “Tales of the Weird” series of anthologies, is a great roundup of nineteenth-century plant horror. I also highly recommend Algernon Blackwood’s novella The Willows, freely available on Project Gutenberg, in which willow trees along the Danube lure two boatmen to an unearthly fourth dimension.

For more plant horror, browse my Fear of Trees Are.na channel, a loose collection of Lynchian trees, 70s indoor plant slasher flicks, fiction, and criticism.

The first part of this essay is adapted from an introductory text I wrote for BLOOMCORE, an exhibition of new works by my friend, the artist Rick Silva.

Barry Lopez’s observations about indigenous time can be found in the collection Vintage Lopez. “On the Wings of Commerce,” his piece about long-haul air freight, which also gets into his experience at Table Mountain, is also worth a read.


Love,

Claire

PS - You may have noticed these newsletters are becoming more frequent. I’m trying for one every two weeks, as a Sunday morning read. We’ll see.

https://clairelevans.substack.com/p/fear-of-trees
To Boldly Go Down The Hall
On Robert Irwin, Star Trek, and the infinite hallways of the future.

Note: the following essay about the late artist Robert Irwin and Star Trek production design was originally published in the Art Los Angeles Reader in 2018. I’m sharing it here to commemorate Irwin’s passing this week, and for posterity, since it’s never been published online. After this, we’ll return to a slower email pace.

A corridor is not so much a feature of architecture as it is a consequence of it. Corridors are what’s left over after all the other rooms have been built. They’re most present in structures with many rooms: hospitals, schools, prisons.

Airports, having no rooms, are only corridors.

The American artist Robert Irwin, contemplating a project to redesign the Miami Airport in the mid 1980s, argued that unlike train stations, which showcase the drama of departure, the architecture of airports is “keyed to downplaying and disguising, and even masking, the essential nature of the experience.” Passengers travel along corridors with the assistance of moving walkways; even boarding, they're sheathed in a special-use corridor leading directly to the door.

If popular science fiction is to be believed, corridors will persist as a key feature of travel, even in the far reaches of space. They may harbor horrors, as aboard the Nostromo in Alien, or they may double back on themselves in loops, as in 2001: A Space Odyssey’s centrifugal halls. But they will remain.

Nowhere in the science fiction canon does the corridor live more vividly than in Star Trek. If a crew member of the USS Enterprise is not on the main bridge, the holodeck, sick bay—or anywhere else on the Paramount lot—she is pacing its corridors, where “com panels” allow ship-wide messaging and wayfinding screens of orange and purple light are conspicuously free of thumbprints.

It’s in these corridors that Star Trek’s central illusion is conveyed: that the ship is a galaxy-class cruiser bearing some thousand souls seeking “new life and new civilizations” in uncharted space. Nowhere else aboard the Enterprise are we given a clearer indication that lives and stories unfold beyond those in the narrative frame. Young ensigns dart, tricorders in hand. Couples wander, off-duty. An alien priest in ceremonial garb strolls under escort.

In the reality of its production, the longest possible walk along a Star Trek set corridor was five minutes at a slow pace. In the show, crewmembers could amble for hours. That’s good old practical TV magic: watch closely and you’ll see actors passing the same turbolift doors again and again. Through repetition, endless promenades unspool from within the prohibitively tiny frame of standard-definition television.

In the late 1960s, as Star Trek: The Original Series was airing on American television, Robert Irwin was also thinking about evoking vastness from within the equally prohibitive frame of a painting. He succeeded, after much experimentation, with his “dot paintings.” On these white canvases, small red and green dots—not dissimilar, on close inspection, to the red and green pixels of a TV screen—created a visual bloom, a “blush,” as one critic wrote, awakening the eyes to the act of perception itself.

To “render the edge as mute as possible,” Irwin painted on canvases stretched over slightly convex frames, so that their edges seemed to “fall away” and the “sense of the central dot hive became further energized.” This is a bending of space-time, an opening in the continuum on par with a Warp Speed blink into a dot-field of stars, or the corporeal dematerialization of a transporter beam.

Like Trek's generations of set-builders, Irwin was maniacal about craft: testing shades of white, red, and blue paint, forming and finishing his canvases. And, like Irwin, those set-builders, painting, lighting and compositing new worlds, made it possible for audiences to see the unseeable (and, to quote Douglas Adams, eff the ineffable).

The dot canvases were Irwin’s last paintings, before he reimagined himself as an installation artist and then a producer of frequently unrealized “site-conditioned” works at romantic landscape sites across the world. After the dots, he began producing polished aluminum discs, which he lit sideways with complicated custom rigs until they seemed to disappear into their own shadows. Irwin’s fixation with lighting, and eventually light itself, could only have emerged in Los Angeles: a city whose core industry was built on the quality of its light, and whose flat sun would give Irwin’s early installations in his Venice studio their characteristically ambiguous haze.

Irwin famously loved to drive, and in the late ‘60s he made long car trips through Los Angeles' industrial parks, from one fabrication site to another, searching for the right hand to hammer out his discs, until he found “a beautiful guy” at a metal-shaping shop in Downtown LA that specialized in custom pieces for aerospace and automotive applications. Irwin always approached fabricators as a “store-window decorator,” believing his fastidious requests would be more palatable if they were understood as commercial specifications. Perhaps, as he toured LA’s factories and workshops, he crossed paths with Star Trek’s own set decorators, who, like him, were boldly seeking the right material finish to represent five inches of infinity.

And infinity, in Star Trek, has always been material. Before CGI, its illusions were built by hand by seasoned craftspeople earning union wages: set-builders, model-makers, and matte painters, that hidden class of artists specializing in landscapes diffusely painted on glass and overlaid, as composites, onto finished shots. Star Trek has historically relied on mattes: the series’ largest corridor complex, created for Star Trek: Deep Space Nine, was a short set bookended by trompe-l'oeil paintings. This was the inverse of an Irwin gesture: in such frames, the painting begins at the edges.

Deep Space Nine, incidentally, is all corridor: the station, hanging at the mouth of a wormhole, has at its center the Promenade, a radial corridor ringed by shops and temples. Much of it is lit from overhead, in grids of spotlights, creating an Irwin-like diffusion of illuminated dots. Irwin might have appreciated this, if he’d had the bandwidth for television; he understood the corridor’s seemingly contradictory proposition of propulsion and limbo, its role as a perceptual shaft, concealing and focusing attention towards the passages ahead.

Rooms, with their prescribed uses—kitchen, quarters, living room, holodeck—assimilate us into domestic roles, but corridors, unspecified, return us to our bodies. Moving through them, we are alone with ourselves. We are undefined. We are aliens.

untitled (dawn to dusk), 2016. Photographed by my friend Alex Marks, whose Irwin photos are available as a hardbound book from the Chinati Foundation.

Robert Irwin’s largest site-conditioned work, commissioned by the Chinati Foundation in 2016, sits on the site of an abandoned army hospital. “The only permanent, freestanding structure conceived and designed by Irwin as a total work of art,” untitled (dawn to dusk) is a right-angled horseshoe, essentially a single concrete corridor split lengthwise on the inside by one of Irwin’s signature effects: taut, sheer scrims. The light in untitled (dawn to dusk) comes only from tall, evenly spaced windows giving out onto the searing, heartbreaking blue of the West Texas sky. “The clouds are right on top of your head,” Irwin has said of the landscape.

Beyond that, the stars.

RIP Robert Irwin, 1928-2023.


Further reading:

No mention of Irwin’s work is complete without a shoutout to Lawrence Weschler’s incredible study of Irwin, Seeing is Forgetting the Name of the Thing One Sees, to which I owe this essay’s airport anecdote and several other details.

Just a weird concordance I might as well mention here: the most exacting renderings of the USS Enterprise’s network of corridors were created by the technical illustrator David Kimble, famous for his stunning airbrushed cutaways of sports cars, airplanes, and electronics. The son of an aerospace engineer, Kimble was hired in 1974 to create the “official” blueprints for the Enterprise, used in the construction of models and sets for Star Trek: The Motion Picture. For decades, he has lived and worked out of a converted movie theater in Marfa, a few miles from Irwin’s ultimate work.

https://clairelevans.substack.com/p/to-boldly-go-down-the-hall
Big Duck Energy
The duck of Vaucanson was 800 articulated pieces of gilded copper that could eat, quack, and splash.

The duck of Vaucanson was 800 articulated pieces of gilded copper that could eat, quack, and splash. During public displays, it ate from a dish of porridge, flapped its wings, waddled away, and relieved itself in full view of an admiring public. Its creator, Jacques de Vaucanson, alleged that the duck’s inner workings contained a small “chemical laboratory” capable of breaking down grain and turning it to excrement, but its droppings were later found to be pellets of mashed bread, dyed green.

Despite this trickery, the duck was, according to computer scientist Christopher Langton, an 18th century attempt “to map the mechanics of technology onto the workings of nature, trying to understand the latter in terms of the former.” 

In an influential 1989 paper, Langton drew a line from statuettes and paintings—the first efforts to capture the essence of living things—to automata, programmable controllers, and eventually the logical formulations animating such machines, which we call algorithms.


I don’t know why I’m so charmed by the duck of Vaucanson. Automata are inherently enchanting, I guess, and they make for an interesting fork in the genealogies of AI.

As Langton suggests, rather than taking living things apart—dissecting them into their constituent parts and hoping the mysteries of the whole might be reconstituted from an analysis of each piece—we could put them together and see what more complex patterns emerge in the process. The duck wasn’t alive, but imitating one may have given Vaucanson some respect for the dynamic systems at play in the real thing. Or not. Vaucanson went on to design early automatic looms, and was stoned in the streets by angry weavers for his trouble (shoutout to the Luddites).

His duck was lost in a fire, but survives in literature. Thomas Pynchon brings it to life in his novel Mason & Dixon; moon clones debate Vaucanson’s intentions in Frank Herbert’s weird pre-Dune space opera Destination: Void. It also makes a brief appearance in “The Artist of the Beautiful,” Nathaniel Hawthorne’s 1844 story about a tortured watchmaker’s dream to “put spirit into machinery.”

In that story, Hawthorne’s watchmaker dismisses the duck as a “mere mechanical apparition,” a trick far baser than his own creation: a clockwork butterfly so fine, so radiantly lifelike that it defies nature and wins the heart of his muse, Annie. Is it alive? Is it alive? Annie asks, as the butterfly flits across her fingertips, downy scales glimmering in the firelight of the home she has made with his rival. The watchmaker, sensitive to the point of madness, responds:

Alive? Yes, Annie; it may well be said to possess life, for it has absorbed my own being into itself; and in the secret of that butterfly, and in its beauty—which is not merely outward, but deep as its whole system—is represented the intellect, the imagination, the sensibility, the soul of an Artist of the Beautiful.

At the end of the story (spoiler alert) the butterfly is destroyed, crushed into glittering fragments by Annie’s brutish toddler. The watchmaker is unfazed. He caught “a far rarer butterfly,” transcending his own life in the process of creating something beautiful. Ultimately the butterfly itself—like any work of art, Hawthorne suggests—was little more than an artifact of that enlightened process. Isn’t it just like an artist to reach towards something even more ephemeral, even more alive, than life itself?


Photo Credit: Cara Robbins (me), and Jamie Campbell (Sheila). Dithered on ditherit.

Last week Pioneer Works published my long-form conversation with Sheila Heti, the author, of course, of eleven books, including How Should A Person Be?, Motherhood, and (my favorite) Pure Colour. For the last few years, Sheila has been conducting a literary dialogue with a chatbot named Alice. You may have read her early Alice dialogues in The Paris Review last year, but they are ongoing and have been evolving alongside Alice and Sheila, neither of whom are remotely static.

The issue of text-generating Large Language Models is contentious, of course, and understandably troubling to those writers whose work has been used to train them. But Sheila approaches this technology with a tender-heartedness I find really moving; she’s willing to find the beauty in it, to reach beyond the machine and find the butterfly—or the idea of one, anyway. An excerpt from our conversation:

Claire: AI is great to talk about, because it brings up all these ideas about authorship, consciousness, creativity. But the actual thing can be kind of whatever, actually.

Sheila: I disagree. For me, the actual thing is incredible. I think AI is better than any of the conversations about it. I actually don’t use it that much these days, because it too often gives me this dizzy feeling, witnessing how interesting and good it is. I can’t even look at my own writing anymore. I just like what the AI is generating so much more.

Claire: But it is your writing, too. I love the sense of awe you have about the system, but it wouldn’t be giving you answers if you weren’t asking questions, and it won’t be presenting its answers in the form you will.

Sheila: I guess I mean that I like the raw data that it feeds me better than the raw stuff that comes out of me. I’m more interested in editing what comes out of it than I am in editing what comes out of me. I know what comes out of me. I’m so familiar with myself. I’ve been writing for so long. I know my psychology. I know the kind of thoughts I have. I’m more interested in its brain.

This conversation is the first in an ongoing series I’m doing for Pioneer Works. I’m calling it Three-Way Mirror: an allusion to our gaze on and through AI systems, its gaze on us, and the ways in which all of that warps how we see each other, ourselves, and the world. The next conversation will be with my friend Joanne McNeil, whose debut novel, Wrong Way, is forthcoming from MCD this winter. Wrong Way is about the people hidden at the heart of automated systems; we’ll talk telemarketing, remote agents, Mechanical Turks, and yes, maybe even automatic ducks.

Love,

Claire

https://clairelevans.substack.com/p/big-duck-energy