GeistHaus

Erin Kissane's small internet website

The feed of updates to Erin Kissane's small internet website

23 posts
Feed metadata
Status: active
Last polled: Apr 29, 2026 01:36 UTC
Next poll: Apr 30, 2026 01:36 UTC
Poll interval: 86400s (24 hours)
ETag: W/"a4629-Pk6x/IUv4eaA89MNtrXmTQxRyKE"
Last-Modified: Tue, 18 Nov 2025 14:11:40 GMT
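Incidentally, the ETag and Last-Modified validators above are what make daily polling cheap: a client that sends them back gets a 304 with no body when nothing has changed. Here's a minimal sketch of that conditional GET in Python, assuming the third-party `requests` library and a hypothetical feed URL:

```python
import requests

# Hypothetical URL for this site's RSS feed.
FEED_URL = "https://erinkissane.com/feed.rss"

# Validators captured from the previous poll (shown in the metadata above).
etag = 'W/"a4629-Pk6x/IUv4eaA89MNtrXmTQxRyKE"'
last_modified = "Tue, 18 Nov 2025 14:11:40 GMT"

# Conditional GET: the server answers 304 Not Modified, with no body,
# if the feed hasn't changed since these validators were issued.
resp = requests.get(
    FEED_URL,
    headers={"If-None-Match": etag, "If-Modified-Since": last_modified},
    timeout=30,
)

if resp.status_code == 304:
    print("Feed unchanged; nothing to download.")
else:
    resp.raise_for_status()
    # Store the fresh validators for the next poll, 86400 seconds from now.
    etag = resp.headers.get("ETag", etag)
    last_modified = resp.headers.get("Last-Modified", last_modified)
    print(f"Fetched {len(resp.content)} bytes of updated feed.")
```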

Posts

ATConf

This is a reference page for “People in Protocols,” a talk I gave (remotely) on March 23, 2025 at the first ATmosphere Conference in Seattle. Right now, it contains citations and further reading, but I may expand it later with additional material and/or a written version of the talk. Massive thanks to everyone who ran the conference and let me remote in when I couldn’t come in person.

https://erinkissane.com/atconf
Against Entropy

What I want is a transparent model of the kinds of harms that are likely to fall onto which brains and bodies when someone swings a wrecking ball through the governmental institutions that do work that is tragically flawed but nevertheless necessary to the sustenance of our collective life.

What I mostly have is a surfeit of takes, including some very good specialist analysis and a whole lot of overconfident storytelling, and the anodyne and decontextualized output of major news orgs, and some lists and spreadsheets, which are all fine.

I want something that works differently, though. I want something more like the kind of dashboard Chrisjen Avasarala might summon in The Expanse to understand the ongoing collapse of a system of governance within her remit: High-level information, showing the whole of the board. Expert briefings, sure, and links to deeper analysis, but above all, a structure that conveys not the spectacle, but the potential damage and the inflection points:

  • Without recourse to cynicism, mindreading, or conspiracy, which supposedly outstanding decisions are already known—or nearly known—in advance?
  • If Thing X happens, what is the range of likely results?
  • Most importantly, what can I do?

I would like such a system to model two kinds of action:

  • meaningful obstruction by the people who can catch the hammer-blows before they land, delay them long enough for more people to brace and/or duck (flatten the curve, but it’s the arc of a blunt instrument), and maybe deflect them entirely, and
  • meaningful protection by the people who can save the data, tell the truth about the contours of reality, feed the babies, care for the sick and disabled, run the schools, and demonstrate all the other pragmatics of love.
Obstruction vs. protection

Right now, meaningful obstruction is largely available to members of the legislature and the justice system who can try to stop things that are illegal, unconstitutional, and/or wrong, and to federal workers who can maybe throw sand in some gears. Those people already know what is happening. The thing I want is not for them, but for those of us who can call phones and visit offices and send money to legal efforts. There may also come a point when the level of emergency becomes so intense and so clear that a critical mass of outraged civilians can assemble and wield broader power to stop the destruction of necessary public goods. We will need sensemaking for that, too.

Meaningful protection, though—that’s never enough when you’re a citizen working against the power of your own government, but it’s accessible to everyone. I am going to make my phone calls and thank my representatives for trying to get in between the damage and its targets, and ask them to keep trying, but protection is my beat. And sensemaking is necessary here, too, to a point.

Uncertainty is a creeping void

It’s very hard to think or act when you can’t tell if you’re about to lose your job, have your research killed off, have your healthcare terminated, witness unstoppable crimes, or just experience extended and apparently inescapable moral injury.

The uncertainty of this moment—this intentional multi-system chaos filtered through some combination of opaque algorithms, overwhelming feeds, news orgs with a very high normalcy bias, and professional terrifiers—is going to try to degrade and dissolve us because chaos is highly transmissible across interconnected systems. The thumb-headed global-economy-shorting accelerationist-ass scammers looting our institutions are on the side of entropy. I’m against it!

I am on the side of building conditions for life.

I know that doing the work necessary for sensemaking during that first year of covid helped, because people told us, a lot. I don’t know if it’s possible to make even a clear specification for the kind of system I want, or if that’s really the thing we’re going to need a few weeks or months into the future. In the meantime, I’m continuing to work toward livable networks where we can find and protect and care for each other, because I think that’s necessary. I’m connecting locally.

If the uncertainty is getting to you, I have the plainest and most time-tested advice, which is: Unless or until clarity about the most crucial levers becomes available, do what you can reach. Make the calls. Care for the people you can find. Hold on by hanging onto each other.

Wayfinding for tired people

I’ve found all of these useful at various points in the past couple of weeks:

https://erinkissane.com/against-entropy
XOXO

A wide dark painting from the 15th century, recognizably European in style and content. It depicts a confused welter of sight-hounds, human hunters both mounted and on foot, and deer, all rushing toward a vanishing point in the far back of a dark forest at night. Paolo Uccello’s The Hunt in the Forest, c. 1465–1470

This post annotates a talk I gave on August 24, 2024 at the final XOXO Festival in Portland, Oregon. The talk was about why I left the internet, how the Covid Tracking Project got me back online, and most of all how the work we did at CTP led me to believe that we—the weirdos of internet-making and online life—have to not merely retreat from the big-world social internet, but fix it. You can watch it on YouTube now, and the text as written is below with some margin notes.

Andys Baio and McMillan made the festival and conference such an intensely sweet, thoughtful, and welcoming place for digging all the way down into what the past several years have done to us, and I feel wildly fortunate that they brought me along for the final round.

Since delivering this talk, I’ve launched a tiny studio, wreckage/salvage, to do the things—deep research, deliberate experimentation, careful network thinking—I ended the talk exhorting all of us at XOXO to do.

The talk

In the few places where I had text in my slides that didn’t get echoed in the narration, I’ve included them as text in [brackets].


A few months ago, I went back and rewatched a bunch of XOXO talks, and it turns out that not all, but a lot of them, break down into two categories, which are, roughly:

Here Is a Cool and Meaningful Thing I Did With the Internet

A flower-tossing crop of Lawrence Alma-Tadema's endlessly entertaining celebratory painting, Spring, which is set in a very dreamy version of Ancient Rome. (Yay)

and

Here Is a Horrifying Thing the Internet Did to Me

So this is a crop from Bosch's Hell panel in his The Garden of Earthly Delights triptych and there's a big bird-demon wearing a cauldron for a hat and vases of some kind for boots eating a guy who has birds flying out of his ass and then shitting…people…into a pit…? There's a lot going on. A butt that is part of a tree and is also holding a mirror? Lots of butts, honestly. Evil nuns? A demon raccoon playing a big drum with a sad guy inside it? (Oh no)

And the tension between those two kinds of talks, and these two kinds of experiences, is the central tension of the internet. And especially of the social internet.

Because to live and work online now is to exist in a rolling disaster in which it is increasingly impossible to tell the difference between 4chan-style amateur griefing, inept political destabilization campaigns, and just randos who’ve normalized acting like nightmares.

It’s to be goaded, manipulated, and shadowbanned by systems that hide the people and reality we need and serve us brain-scrambling streams of misinformation, creepy AI, Nazis in wretched abundance, and increasingly few chances to just be human. But a lot of us can’t leave, because these systems have such a stranglehold on our work, our reputations, and our lives.

They have all of this because we gave it to them in exchange for what they offered ten or fifteen years ago. And then, as our societies became more and more enmeshed with and dependent on these systems, the deal got worse and worse for us, and it got harder and harder to escape.

This is an increasingly terrible bargain.

And this is the experience within the empire that brought the world Facebook and YouTube and Instagram and Twitter. The things the social internet does to people outside the US and Canada and Western Europe are orders of magnitude worse.


I started working on the internet straight out of undergrad at the end of the last century.

Manuscript illumination from the Weißenauer Passionale, depicting the illuminator himself, Ruffilus, painting a gigantic R. He has his little paint pots and everything. There is also a one-legged comrade holding his or her giant foot over their head and a blue-headed and -bearded dude growing out of part of the R with a dragony creature coming out of his head. Behold Ruffilus the illuminator and his…colleagues.

This was my office. People forget what it was like before React.

My career in tech and publishing and journalism was completely tangled in the internet, and it gave me so many gifts: Early on, at A List Apart, I got to advocate for web standards and accessibility. Later, at design agencies, I got to help some brilliant people do good and important things. At OpenNews, I got to work with some of the best and kindest data journalists in the world. Sometimes I got to travel around and give talks with my friends.

I got my friends.

But also the work I did was grounded in understanding complex socio-technical systems, and when you do that for a long time, you start to get a feeling for what it looks like when a system is going off the rails. Which meant that it was hard for me to look at the platforms and systems growing up around us and not feel deep unease.

And some of the gross internet things happened to me too, though I got off much more lightly than many. But I was also just getting worse, as a person. Meaner and sharper. And every time the platforms rewarded my moments of aggression or contempt, it felt like something tightening around my brain. So when I got the chance to get off the internet, I took it. I moved to the literal woods, planted a garden, wrote a book, and hung out with my extremely great kid. And when she got older and started school, I started trying to figure out how far from the center of tech I could possibly work and still be able to pay the bills.

For orientation purposes, this was in the very early spring of 2020.

Poster for the movie JAWS but it says 2020 instead of JAWS. There's a scary huge shark menacing a woman swimming blithely along on the surface of the sea. Why is her boob so pointy? (shark noises)

And in the early spring of 2020, my friend Robinson Meyer is the only other person I know who is obsessing about the virus that would become covid. I’m sitting on my kitchen floor in the middle of the night one night texting with Rob about covid numbers because my partner is traveling and I’ve gone feral. And at this point, we know that we have a couple hundred official covid cases in the US. What we don’t know, because the CDC has silently stopped reporting it, is how many tests we’re doing. And if you don’t know how many tests you’re doing, or where you’re doing them, case numbers really don’t tell you all that much.

And I, being me, was like, “Well, we have to assume covid’s everywhere, cause I don’t think we’re really testing enough to say otherwise.”

And Robinson, being a journalist, was like “Huh.” And then he rounded up his friend and fellow Atlantic journalist Alexis Madrigal and they stayed up all night and went to all the websites and called all the state public health departments and got the numbers.

(The answer was that despite the White House and the FDA talking about making millions of tests available, the US had done a couple of thousand tests, total. Which meant we had no idea what was happening.)

Covid Tracking Project logo on a purple background. (CTP logo, designed very rapidly by Jason Santa Maria.)

The COVID Tracking Project (CTP) was a volunteer collective that formed to build on Alexis and Robinson’s initial investigation and continue that data-gathering work for just a few weeks, until the CDC started publishing better data. When Alexis put out an initial call for volunteers on Twitter, I said I could throw in a couple hours a day for about a week or two.

A couple months later we had a real website, an API, several hundred dedicated volunteers, and I was co-running the project with Alexis and ended up doing something closer to 12-to-18-hour days for a year.

Because what we discovered after we’d done the work for a few weeks—expecting every day that the CDC would publish their data and let us get back to our lives—is that the CDC did not have the data. That the wealthiest country in the world was unprepared to count cases or tests at scale, and instead relied on 56 separate state and territorial data systems with their own chronologies and data definitions, whose pipelines often included literal fax machines.

Reveal’s podcast about CTP gets into this in Episode 3, if you want to hear Pence reading those numbers.

The reasonable thing for a volunteer collective to do in the face of this shambles was to stop. Instead, our volunteers put more than 20,000 person-hours into just collecting, maintaining, and reconciling the data as more and more information came online. Because the other thing we learned was that the same federal government that had failed to assemble the data was now relying on our numbers as vital infrastructure. That FEMA was using our numbers to plan testing sites and distribute ventilators. That when Mike Pence did his White House covid briefings, he was often reading our numbers.

As we wound CTP down in the spring of 2021, journalist Rachel Glickhouse assembled a snapshot of what the project accomplished, and I drew heavily on her account in this part of the talk.

And all this time, our numbers were everywhere. In national and international newsrooms and massive data projects and all these local FOX TV affiliates. So we went out every day and got the numbers and kept the beat. By the time we’d wound down, our work had been featured in the Trump Administration’s Opening Up America Again plan in 2020, and the Biden Administration’s Day One covid strategy in 2021. Our data was cited by the CDC, the FDA, the Department of Energy, and the Department of Defense, used in more than 40 Congressional reports, and referenced in lawsuits by the NAACP and the Southern Poverty Law Center.

DJ Patil, a former White House Chief Data Scientist, said, of CTP: “You filled a gap that is essential and I fundamentally believe you and the team saved an incredible number of lives.”

The “I am so grateful” part is what I had written, and was all I intended to say. At XOXO, I added something equally true, which is that it’s a mark of institutional failure that data this important was left to amateurs and volunteers, no matter how devoted.

I am so grateful that we were able to do what we did.

And without the social internet, we’d never have been able to do it. First of all, we’d never have been able to find each other—like, all of us, but even specifically just me. I met Robinson on Twitter. Rob met Alexis on Twitter. But also the range and quality of our randos from the internet were just unparalleled. We had people from big tech and startups and students and artists and academics and even a few actual public health experts.

And Twitter was quite bad then but still usable for a lot of people and that’s where all the reporters hung out and found our data and APIs and explainers. And through the internet, we helped a lot of justifiably frightened people deeply understand the data and use it to make informed decisions about their organizations and communities and families.

Living through the experience of standing up a volunteer data organization that did so much made it blindingly clear to me that the exact networks we’re often having a pretty miserable time with are essential for building the agency and capacity to deal with emergent crises when our institutions fail us.

Which is a thing we are all continuously living through.

[pandemics]

[climate disasters]

[genocide]


The networks are worse now than they were four years ago. So much so that I don’t think we could do CTP now if we needed to. And there’s a pretty strong belief in many quarters that because of corporate predation and human nature, no many-to-many social system can ultimately be anything but corrosive.

“The Dark Forest” is the title of the second novel in Liu Cixin’s Three Body trilogy. I love Maggie’s illustration and summary of the dark forest/cozy web view of the social internet, which pulls in ideas from Yancey Strickler and Venkatesh Rao.

The Dark Forest model, which gained popularity because of Three Body Problem and was adapted for the web I think most elegantly by Maggie Appleton, sort of formalizes this. It posits that the public internet is now so hostile to authentic human sociability that real interaction has to go underground, into private Slacks and Discords and group chats. And this is happening both because the platforms themselves are extractive surveillance monsters and because they’ve incentivized so many predatory, PvP behaviors.

In this light, going underground seems wise and protective.

My fundamental discomfort with that conclusion is that when those of us who have found our people and managed to get even semi-stable retreat into private spaces, we are slamming the doors on all the people who haven’t got there yet. In Dark Forest terms, we’re leaving them up there to get eaten.

And yes, we do need few-to-few connections. We need spaces of trust in which we can acknowledge meaningful disagreements and still pursue our common goals collectively, which is essentially impossible to do in public.

But we still have to find each other.

And again, projects like COVID Tracking—or projects that bulk-bought masks and biked them around New York City, or abortion funds, or the initiatives helping people get their trans kids to safer places or their families out of Gaza—keep demonstrating that we need a big-world social internet to mount collective responses to crisis.

So I think the answer can’t be to cede everything above ground to billionaires and demagogues and the predatory forces they’ve incentivized.

And we can’t keep accepting the increasingly poisonous status quo the social internet…

[(capitalism)]

…has backed us into.

Which leaves us with only one option, which is we fix the fucking networks.

Not a citation to Kuhn, but rather a page of lecture notes about his Structure of Scientific Revolutions (and his “pre-paradigms” specifically) that I’ve been dragging around in my brain for decades, which I realized all at once a few hours after delivering the talk. Brains.

Fortunately, a lot of people have personally had enough and bailed on Twitter or Facebook. And even when they land on other big platforms, it works against the idea that the current state of affairs is inevitable and eternal. There are a lot of projects—some new, some simmering for years—trying to build better networks. I don’t know if any of these systems will be the thing itself, rather than what Thomas Kuhn called pretender technologies—the things that seem like the next paradigm shift, but don’t really go anywhere.

But “pretender” technologies are also laboratories for trying out better futures.

I’m saying this in this room, now, because a lot of us remember working and living in internet systems that were flawed but not systematically designed to burn our time and emotions and safety as fuel.

We all have a role to play in making networks that are genuinely better for all of us, as network builders or writers or artists and game designers and all the other people whose presence makes networks better and richer.

There’s a real crack in the foundations of the current order. And I believe that if we each brought our weird talents and gifts and treated the problem of making better networks as our problem, instead of hoping someone else will figure it out—we would have this thing in the bag.


At around the six-month mark of Covid Tracking, The Atlantic asked for a piece about how we were doing, so after my tracking work was done for the day, I stayed up and wrote something and sent it in, and they were like “Ohhh we meant something…reported,” and I was like “Ahhh, that’s cool but you gotta find someone else to write that.” So this has never actually been seen, but I want to read you a piece of it.

In the project’s first few months, we talked a lot about building a plane as we flew it. Now it feels more like we’re building a bridge as we walk it. Each day’s work is an act of faith in the dark, putting down a little more knowledge to stand on. We still can’t see the other side.

But every day at 3:30 Eastern, a dozen strangers turned colleagues ignore the weirdness of daily life in 2020 and fight their way through the eccentricities of 56 individual multi-tabbed dashboards to fish out the numbers we need. Later, their counterparts in Data Quality and Race and Ethnicity Data and Long-Term Care will log on and do their rounds. Because every day, no matter what the government does or doesn’t do, we count the sick, the very sick, and the dead, because we believe their lives and deaths should count.

And also because there are healthcare workers and epidemiologists and officials all over the country fighting for the living. We do this work in the belief that the millions of data points we assemble will give them the clarity to help us all, somehow, find our way out.

And we’re still not out. But we’re also not back in the summer of 2020, when the hospitals were overflowing and the air in LA was unbreathable because the crematoria were working around the clock. Helping people understand what the pandemic was doing around them was only one tiny piece of what got more people through from those days to now, but it mattered.


I never meant to work in public health data, and I don’t, anymore. It’s too hard on the heart. I’m also not a climate scientist or a human rights expert. But I’m pretty okay at the internet. So I’ve spent most of the past year and a half trying to map out some of the cultural territory of our networks and explain what I find.

I am doing this because I think we need to understand, as concretely and as close to the ground as possible, what has happened to us. And then use that knowledge to try out new ways to be together, purposefully and with a careful eye on the human costs of our experiments.

Because no matter what happens here in November, we all know that in the future, there will be more crises. The next pandemic. The next climate disaster. The next genocide.

So I think we have to fix the networks now, so that when that time comes, we can put all our energy into getting more of us safely across to the other side.

Thank you.

Notes

The paintings and illustrations in the talk are from Lawrence Alma-Tadema’s beautiful and deeply hilarious Spring, Hieronymus Bosch’s Hell panel in his famous triptych, the Passionary of Weissenau (Weißenauer Passionale), an illuminated 12th-century life of the Christian saints known in manuscript-nerd land as Cod. Bodmer 127, and Paolo Uccello’s hypnotic The Hunt in the Forest. The Bodmer manuscript illumination depicts the illuminator himself, and I think it’s especially charming.

The photograph at the end was taken by me and shows the Astoria-Megler Bridge, a steel cantilever through-truss bridge that spans the Columbia River a few miles inland from its egress into the Pacific. I don’t actually love being on that bridge, but I love looking at it from below.

On the subject of the bad bargain we’ve made with the social internet and its entanglement with broader systems of surveillance and hyper-financialization, I super-highly recommend Kieran Healy and Marion Fourcade’s wonderful book, The Ordinal Society, which is so clearly and tightly written that it feels like a beautiful knife; this effect is softened only slightly by the inclusion of so many load-bearing puns. (It’s funny and important, everyone should read it.)

Those curious about The Covid Tracking Project will find lots of documentation here, and the recently updated podcast about the project—and the broader failures of public health data during the pandemic—from Reveal is very good and features a lot of audio from inside the project as we worked in 2020 and 2021.

CTP’s work was also covered well by Bloomberg and the Columbia Journalism Review, and I did interviews (solo and with Alexis) for Tableau, the Annenberg Innovation Lab, and GQ. The project was archived by a great team at UCSF.

https://erinkissane.com/xoxo
Fediverse Governance Drop

Labors of the months of April, May, and June as represented in an illustrated ninth-century manuscript produced in Salzburg from a French original. The fediverse governance processes of moderation, server leadership, and federated diplomacy. (Courtesy Bayerische Staatsbibliothek)

Back in the fall, I wrote about a research project I was diving into with Darius Kazemi. Now, after a few months of prepping and conducting interviews with people who run Mastodon and Hometown servers about how they govern their parts of the network and then many more months of analyzing and writing up what we found, we’re releasing our findings. We found so much.

The main report is a little over 40,000 words and 110 pages long, and includes dozens of excerpts from interviews with our extremely generous research participants, who spoke with us for many hours back in the spring. There’s a lot there. Much of it will be familiar to veteran fedi mods and admins, but I suspect most people will find at least a few things they haven’t encountered before.

In the findings, we get into…

  • Why we think the fediverse’s structure can allow for particularly humane and high-context moderation—and which of the cultural, technical, and financial gaps that our participants identified must be filled before the network can achieve its potential.
  • The interrelated governance configurations that make a server more or less manageable, and the different ways servers in our sample approached those configurations to serve their various communities.
  • The biggest gaps and annoyances in available governance tooling—spoiler, it’s mostly moderation stuff, but it also includes some fascinating things related to shared/coalitional moderation and better communication between servers.
  • What kinds of future threats are most on server operators’ minds, and which things they’re not particularly concerned about.
  • The things that keep volunteer server runners on the fediverse, give them hope, and make them feel excited about possible futures.

And so much other stuff, too.

The two satellite documents we made—Fediverse Governance Opportunities for Funders and Developers (PDF) and the Quick Start Guide to Fediverse Governance Decisions (PDF)—are essentially alternate ways into the knowledge collected in the full findings report. Choose your own adventure!

Content stuff

I wouldn’t have guessed, going in, that we’d end up with the major structural categories we landed on—moderation, server leadership, and federated diplomacy—but after spending so much time eyeball-deep in interview transcripts, I think it’s a pretty reasonable structure for discussing the big picture of governance. (The real gold is of course in the excerpts and summaries from our participants, who continuously challenged and surprised us.)

There are no manifestos to be found here, except in that our participants often eloquently and sometimes passionately express their hopes for the fediverse. There are a lot of assumptions, most of which we’ve tried to be pretty scrupulous about calling out in the text, but anything this chunky contains plentiful grist for both principled disagreement and the other kind. Our aim is to describe and convey the knowledge inherent in fediverse server teams, so we’ve really stuck close to the kinds of problems, risks, needs, and challenges those folks expressed.

I have a lot of sympathy for journalists and other professional explainers who are trying to make sense of new networks without themselves being deep in the networks’ development or maintenance ecosystems. Partly because of that sympathy, I threw in some basic theory about the fediverse in the plainest, least dashing way I could. I think talking about the fediverse as a social component of the open web, with all the joys and horrors that entails, is useful in helping non-fedi people understand that a.) it’s not a platform at all, and b.) that this genuinely does confer benefits for people who want to build communities that interconnect—carefully—with other communities. I don’t think the fediverse is fully equipped for that use, yet, but it’s my hope that the experiences, insights, and recommendations we’ve collected will help the network move in the right direction(s).

Process stuff

Although we intentionally limited our research sample to servers of a given size and with a fairly strong public commitment to intentional governance, our findings reflect a pretty high degree of heterogeneity in the specifics, which I’m really happy about. I don’t myself think that there’s a right way to approach the fediverse—some ways are obviously bad and destructive to people and communities I care about, but there are a lot of paths toward being together in better ways. I also think, even more now than before we did this work, that the relatively subtle differences in the way our participants run medium-sized fedi servers are actually extremely meaningful for the shape of the community they end up hosting.

A brief word about how this came together: Darius and I pitched the project last fall, designed the interviews in the winter, and conducted the interviews over the spring and into early summer. When analysis and writing time arrived, Darius crunched through endless hours of transcript corrections and built out the tooling, legal, and financial findings and recs, and I wrote up the other sections of the report and the introductory analysis. Darius also built a great mini-site for the main report, which you can also read as a PDF if you’re into that.

Gratitude

Huge thanks to our participants, to Katharina Meyer and the other folks at DIIF, to the many kind experts who read early drafts and offered generous suggestions, and to my partner and kid, who cleared a lot of time and offered a lot of support so I could hole up and write yet another giant crunchy thing about the internet.

https://erinkissane.com/fediverse-governance-drop
Untangling Threads

Back in the fall, I wrote a series of posts on a particularly horrific episode in Meta’s past. I hadn’t planned to revisit the topic immediately, but here we are, with Threads federation with the ActivityPub-based fediverse ecosystem becoming an increasingly vivid reality.

My own emotional response to Meta and Threads is so intense that it hasn’t been especially easy to think clearly about the risks and benefits Threads’ federation brings, and to whom. So I’m writing through it in search of understanding, and in hopes of planting some helpful trail markers as I go.

What federation with Threads offers

Doing a good-faith walkthrough of what seem to me to be the strongest arguments for federating with Threads has been challenging but useful. I’m focusing on benefits to people who use networks—rather than benefits to, for instance, protocol reputation—because first- and second-order effects on humans are my thing. (Note: If you’re concerned that I’m insufficiently skeptical about Meta, skip down a bit.)

Finding our people

Back in July, I did some informal research with people who’d left Mastodon and landed on Bluesky, and one of the biggest problems those people voiced was their difficulty in finding people they wanted to follow on Mastodon. Sometimes this was because finding people who were there was complicated for both architectural and UI design reasons; sometimes it was because the people they wanted to hang out with just weren’t on Mastodon, and weren’t going to be.

For people with those concerns, Threads federation is a pretty big step toward being able to maintain an account on Mastodon (or another fediverse service) and still find the people they want to interact with—assuming some of those people are on Threads and not only on Bluesky, Twitter/X, Instagram, and all the other non-ActivityPub-powered systems.

On the flipside, Threads federation gives people on Threads the chance to reconnect with people who left commercial social media for the fediverse—and, if they get disgusted with Meta, to migrate much more easily to a noncommercial, non-surveillance-based network. I’ve written a whole post about the ways in which Mastodon migration specifically is deeply imperfect, but I’m happy to stipulate that it’s meaningfully better than nothing.

I’ll say a little more about common responses to these arguments about the primary benefits of federation with Threads later on, but first, I want to run through some risks and ethical quandaries.

The Threads federation conversations that I’ve seen so far mostly focus on:

  • Meta’s likelihood of destroying the fediverse via “embrace-extend-extinguish,”
  • Meta’s ability to get hold of pre-Threads fediverse (I’ll call it Small Fedi for convenience) users’ data,
  • Threads’ likelihood of fumbling content moderation, and
  • the correct weighting of Meta being terrible vs. connecting with people who use Threads.

These are all useful things to think about, and they’re already being widely discussed, so I’m going to move quickly over that terrain except where I think I can offer detail not discussed as much elsewhere. (The EEE argument I’m going to pass over entirely because it’s functioning in a different arena from my work and it’s already being exhaustively debated elsewhere.)

Unfolding the risk surface

A few panels from an origami instruction set by Jo Nakashima, website linked in the caption. The resulting unicorn isn't Blade Runner-accurate, but it's all made from a single sheet of paper, which I find satisfying. (A tiny piece of Jo Nakashima’s excellent origami unicorn instructions.)

The risks I’ll cover in the rest of this post fall into three categories:

  1. My understanding of who and what Meta is
  2. The open and covert attack vectors that Meta services routinely host
  3. The ethics of contribution to and complicity with Meta’s wider projects

I want to deal with these in order, because the specifics of the first point will, I hope, clarify why I resist generalizing Threads federation conversations to “federating with any commercial or large-scale service.”

Who Meta is

The list of “controversies” Meta’s caused since its founding is long and gruesome, and there are plenty of summaries floating around. I spent several months this year researching and writing about just one episode in the company’s recent history because I find that deep, specific knowledge combined with broader summary helps me make much better decisions than summary alone.

Here’s the tl;dr of what I learned about Meta’s adventures in Myanmar.

Beginning around 2013, Facebook spent years ignoring desperate warnings from experts in Myanmar and around the world, kept its foot on the algorithmic accelerator, and played what the UN called “a determining role” in the genocide of the Rohingya people. A genocide which included mass rape and sexual mutilation, the maiming and murder of thousands of civilians including children and babies, large-scale forced displacement, and torture.

I wrote so much about Meta in Myanmar because I think it’s a common misconception that Meta just kinda didn’t handle content moderation well. What Meta’s leadership actually did was so multifaceted, callous, and avaricious that it was honestly difficult for even me to believe:

Combine all those factors with Meta leadership’s allergy to learning anything suggesting that they should do less-profitable, more considered things to save lives, and you get a machine that monopolized internet connectivity for millions and then flooded Myanmar’s nascent internet with algorithmically accelerated, dehumanizing, violence-inciting messages and rumors (both authentic and farmed) that successfully demonized an ethnicity and left them without meaningful support when Myanmar’s military finally enacted their campaign of genocide.

As of last year—ten years after warnings began to appear in Myanmar and about six years since the peak of the genocide—Meta was still accepting genocidal anti-Rohingya ads, the content of which was actually taken directly from widely studied documents from the United Nations Independent International Fact-Finding Mission on Myanmar, examining the communications that led to the genocide. Meta continues to accept extreme disinformation as advertising all over the world, in contradiction to its own published policies and statements, as Global Witness keeps demonstrating.

I’d be remiss if I failed to mention that according to whistleblower Sophie Zhang’s detailed disclosures, Meta—which is, I want to emphasize, the largest social media company in the world—repeatedly diverted resources away from rooting out fake-page and fake-account networks run by oppressive governments and political parties around the world, including those targeting activists and journalists for imprisonment and murder, while claiming otherwise in public.

Meta’s lead for Threads, Adam Mosseri, was head of Facebook’s News Feed and Interfaces departments during that long, warning-heavy lead-up to the genocide of the Rohingya in Myanmar. After the worst of the violence was over, Mosseri noted on a podcast that he’d lost some sleep over it.

In 2018, shortly after that podcast, Mosseri was made the new head of Instagram, from whence he comes to Threads—which, under his leadership, hosts accounts like the bomb-threat-generating money-making scheme Libs of TikTok and Steve Bannon’s dictatorship fancast, War Room.

Knowing the details of these events—most of which I couldn’t even fit into the very long posts I published—makes it impossible for me to cheerfully accept Meta’s latest attempt to permeate the last few contested spaces on the social internet, because touching their products makes me feel physically ill. 

It’s the difference, maybe, between understanding “plastic pollution” in the abstract vs. having spent pointless hours sifting bucketfuls of microplastics out of the sand of my home coast’s heartbreakingly beautiful and irreparably damaged beaches.

My personal revulsion isn’t an argument, and the vast majority of people who see a link to the Myanmar research won’t ever read it—or Amnesty’s reporting on Meta’s contribution to targeted ethnic violence against Tigrayan people in Ethiopia, or Sophie Zhang’s hair-raising disclosures about Meta’s lack of interest in stopping global covert influence operations, or Human Rights Watch on Meta’s current censorship practices in the Israel-Palestine war.

Nevertheless, I hope it becomes increasingly clear why the line, for some of us, isn’t about “non-commercial” or “non-algorithmic,” but about Meta’s specific record of bloody horrors, and their absolute unwillingness to enact genuinely effective measures to prevent future political manipulation and individual suffering and loss on a global scale.

Less emotionally, I think it’s unwise to assume that an organization that has…

  • demonstrably and continuously made antisocial and sometimes deadly choices on behalf of billions of human beings and
  • allowed its products to be weaponized by covert state-level operations behind multiple genocides and hundreds (thousands? tens of thousands?) of smaller persecutions, all while
  • ducking meaningful oversight,
  • lying about what they do and know, and
  • treating their core extraction machines as fait-accompli inevitabilities that mustn’t be governed except in patently ineffective ways…

…will be a good citizen after adopting a new, interoperable technical structure.

Attack vectors (open)

Some of the attack vectors Threads hosts are open and obvious, but let’s talk about them anyway.

Modern commercial social networks have provided affordances that both enable and reward the kind of targeted public harassment campaigns associated with multi-platform culture-war harassment nodes like Libs of TikTok, who have refined earlier internet mob-justice episodes into a sustainable business model.

These harassment nodes work pretty simply:

  1. Use crowdsourced surveillance juiced by fast social media search to find a target, like a children’s hospital, a schoolteacher, a librarian, or a healthcare worker. (To give you a sense of scale, Libs of TikTok named and targeted two hundred and twenty-two individual employees of schools or education organizations in just the first four months of 2022.)
  2. Use social media to publicize decontextualized statements from targeted individuals, doctored video, lies about policies and actions, and dehumanizing statements calling targeted individuals and groups evil cult members who groom children for sexual abuse, etc.
  3. Sit back while violence-inciting posts, right-wing media appearances, lots and lots of bomb threats, and Substack paychecks roll in, good teachers’ and librarians’ lives get absolutely wrecked, and anti-trans, anti-queer legislation explodes nationally.
  4. Repeat.

As I noted above, Threads currently hosts Libs of TikTok, along with plenty of other culture-war grifters devoted to hunting down private individuals talking about their lives and work and using them to do what Meta calls “real-world harm.”

Maybe none of those vicious assholes will notice that they’re now federating with a network known as a haven for thousands of LGBTQ+ people, anarchists, dissidents, furries, and other people the harassment machines love to target as ragebait.

And maybe none of the harassment nodes will notice that Mastodon is also used by very real predators and CSAM-distributors—the ones mainstream fedi servers defederate from en masse—and use that fact to further mislead their froth-mouthed volunteer harassment corps about the dangers posed by trans and queer people on the fediverse.

Maybe none of that will happen! But if I were in the sights of operators like those, I’d want to get as far from Threads as possible. And I’d take assertions that people who don’t want to federate with Threads are all irrational losers as useful revelations about the character of the people making them.

Attack vectors (covert)

I’ve written a lot about the ways in which I think the fediverse is currently unprepared to deal with the kinds of sophisticated harms Meta currently allows to thrive, and sometimes directly funds. Please forgive me for quoting myself for my own convenience:

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

The idea that no one would make the effort to actually conduct high-effort, resource-intensive information operations across smaller social platforms remains common, but is absolutely false. We’ve seen it happen already, and we’ll see it again, and I’d be shocked if next-generation large-language models weren’t already supercharging those campaigns by reducing required effort.

To believe it can’t or won’t happen on fedi—and that Threads won’t accelerate it by providing easy on-ramps and raising the profile of the fediverse more generally—seems naive at best.

Unfortunately, this isn’t something that simply suspending or blocking Threads will fix. I don’t think any action server admins take is going to prevent it from happening, but I do think the next twelve to eighteen months are a critical moment for building cross-server—and cross-platform—alliances for identifying and rooting out whatever influence networks fedi administrators and existing tooling can detect. (Especially but not only given the explosive potential of the upcoming US Presidential election and, thanks to US hegemony, its disproportionate effect on the rest of the world.)

On the pragmatic side, small-scale fedi would benefit hugely from the kind of training and knowledge about these operations that big commercial platforms possess, so if I were a fedi admin who felt fine about working with Meta, those are the kinds of requests I would be making of my probably quite nice new friends in terrible places.

How much do you want to help Meta?

Meta’s business model centers on owning the dominant forums for online human connection in most of the world and using that dominant position to construct dense webs of data that clients at every level of society will pay a lot of money for so that they can efficiently target their ad/influence campaigns.

Amnesty International has an exceptionally trenchant breakdown of the human-rights damage done by both Meta and Google’s globe-circling surveillance operations in English, French, and Spanish, and I think everyone should read it. In the meantime, I think it’s useful to remember that no matter how harmful the unintended effects of these corporations’ operations—and they’ve been immensely harmful—their corporate intent is to dominate markets and make a lot of money via ad/influence campaigns. Everything else is collateral damage.

(I’m going to blow past Meta’s ability to surveil its users—and non-users—outside of its own products for now, because I don’t have the time to get into it, but it’s still pretty gruesome.)

As moral actors, I think we should reckon with that damage—and fight to force Meta and Google to reckon with it as well—but when we look ahead to things like Threads’ next moves, I think we should keep market domination and behavior-targeted ad and influence campaigns foremost in our minds.

With those assumptions on the table, I want to think for a moment about what happens when posts from, say, Mastodon hit Threads.

Right off the bat, when Threads users follow someone who posts from Mastodon, those Masto-originating posts are going to show up in Threads users’ feeds, which are currently populated with the assistance of Meta’s usual opaque algorithmic machinery.

I find it difficult to imagine a world in which Mastodon posts federated into Threads don’t provide content against which Meta can run ads—and, less simplistically, a world in which Threads’ users’ interactions with Mastodon posts don’t provide behavioral signals that allow Meta to offer their clients more fine-grained targeting data.

Threads isn’t yet running ads-qua-ads, but it launched with a preloaded fleet of “brands” and the promise of being a nice, un-heated space for conversation—which is to say, an explicitly brand-friendly environment. (So far, this has meant no to butts and no to searching for long covid info and yes to accounts devoted to stochastic anti-LGBT terrorism for profit, so perhaps that’s a useful measure of what brands consider safe and neutral.) Perhaps there’s a world in which Threads doesn’t accept ads, but I have difficulty seeing it.

So I think that leaves us with a few things to consider, and I think it’s worth teasing out several entangled but distinct framings at work in our ways of thinking about modern social networks and our complicity in their actions.

Tacit endorsement

A lot of arguments about consumer choice position choice as a form of endorsement. In this framing, having an account on a bad network implies agreement with and maybe even complicity in that network’s leadership and their actions. It boils down to endorsement by association. This comes up a lot in arguments about why people should leave Twitter.

In some formulations of this perspective, having an account on Mastodon and federating with Threads implies no endorsement of Meta’s services; in other formulations, any interconnection with Threads does imply a kind of agreement. I’ll walk through two more ways of looking at these questions that might help reveal the assumptions underlying those opposing conclusions.

Indirect (ad-based) financial support

There’s also a framing of consumer choice as opting into—or out of—being part of the attention merchants’ inventory. (This is the logic of boycotts.)

In this framing, maintaining an active account on a bad network benefits the network’s leaders directly by letting the network profit by selling our attention on to increasingly sophisticated advertising machines meant to influence purchases, political positions, and many kinds of sentiment and belief.

In the conversations I’ve seen, this framing is mostly used to argue that it’s bad to use Meta services directly, but ethically sound to federate with Threads, because doing so doesn’t benefit Meta financially. I think that’s somewhere between a shaky guess and a misapprehension, and there’s a third way to frame our participation in social networks that helps illuminate why.

Social networking as labor

There’s a third perspective that frames what we do within social networks—posting, reading, interacting—as labor. I think it’s reasonably well understood on, say, Mastodon, that networks like X’s and Meta’s rely on people doing that work without really noticing it, as a side effect of trying to [connect with friends or network our way out of permanent precarity or keep up with world events or enjoy celebrity drama or whatever].

What I don’t think we’ve grappled with is the implications of sending our labor out beyond the current small, largely ferociously noncommercial version of the fediverse and into machinery like Meta’s—where that labor becomes, in the most boneheaded formulation, something to smash between ads for flimsy bras and stupid trucks and $30 eco-friendly ear swabs, and in more sophisticated formulations, a way to squeeze yet more behavioral data out of Threads users to sell onward to advertisers.

(Just knowing which Threads users are savvy and interested enough to seek out Mastodon accounts to follow feels like a useful signal for both internal Meta work and for various kinds of advertisers.)

Okay so what

When I started trying to talk about some of the likely technical behaviors we’ll see when Mastodon posts show up inside Threads—by which I mean “they will be part of the ad machine and they will be distributed in Meta’s usual algorithmic ways”—I got a lot of responses that focused on the second framing (financial support via ad impressions). Essentially, most of these responses went, “If we’re not seeing ads ourselves, what’s the problem?”

An honest but ungenerous response is that I don’t want to contribute my labor to the cause of helping Meta wring more ~value from its users—many of whom have no meaningful alternatives—because those users are also human beings just like me and they deserve better networks just as much as I do.

A better response is that when we sit down to figure out what we want from our server administrators and what we’re going to do as individuals, it’s useful to acknowledge externalities as well as direct effects on “our” networks, because ignoring externalities (aka “other people somewhere else”) is precisely how we got the worst parts of our current moment.

On pragmatism

Compared to the social internet as a whole, the existing fediverse is disproportionately populated by people who are demonstrably willing to put their various principles above online connection to friends, family members, and others who aren’t on fedi. That’s not intrinsically good or bad—I think it’s both, in different situations—but it shapes the conversation about trade-offs.

If you think anyone who uses Threads is unredeemable, you’re probably not going to have much sympathy for people who miss their friends who use Threads. More broadly, people who feel lonely on fedi because they can’t find people they care about get characterized as lazy, vapid dopamine addicts in a lot of Mastodon conversations.

I’m not particularly interested in judging anyone’s feelings about this stuff—I am myself all over the record stating that Meta is a human-rights disaster run by callous, venal people who shouldn’t hold any kind of power. But I do believe that survivable futures require that we all have access to better ways to be together online, so I always hope for broad empathy in fedi product design.

As for me—I’m much more of a pragmatist than I was twenty years ago, or even five years ago. And I have first-hand experience with having my labor and the labor of many others toward a meaningful public good—in my case, a volunteer-assembled set of official public data points tracking the covid pandemic—used by terrible people.

When the Trump White House—which had suggested suppressing case counts by stopping testing—used our work in a piece of propaganda about how well they were handling the pandemic, I spent days feeling too physically ill to eat.

Nevertheless, I judged that the importance of keeping the data open and flowing and well contextualized was much greater than the downside of having it used poorly in service of an awful government, because we kept hearing first-hand reports that our work was saving human lives.

The experience was clarifying.

Until now, I haven’t involved myself much in discussions about how Mastodon or other fedi servers should react to Threads’ arrival, mostly because I think the right answer should be really different for different communities, my own revulsion notwithstanding.

For people whose core offline communities are stuck with Meta services, for example—and globally, that is a shit-ton of people—I think there are absolutely reasonable arguments for opening lines of communication with Threads despite Meta’s radioactivity.

For other people, the ethical trade-offs won’t be worth it.

For others still, a set of specific risks that federation with Threads opens up will not only make blocking the domain an obvious choice, but potentially also curtail previously enjoyed liberties across the non-Threads fediverse.

Here’s my point: Everyone makes trade-offs. For some people, the benefits of Threads federation are worth dealing with—or overlooking—Meta’s stomach-churning awfulness. But I do think there are human costs to conflating considered pragmatism with a lack of careful, step-by-step thought.

Practicalities

That was the whole of my sermon.

Here are some things to think about if you’re a fedi user trying to work out what to do, what questions to ask your server admins, and how to manage your own risk.

Ask your admins about policy enforcement

I think this is probably a good time for people who are concerned about federation with Threads to look through their server’s documentation and then ask their administrators about the server’s Threads-federation plan if that isn’t clear in the docs, along with things like…

  • …if the plan is to wait and see, what are the kinds of triggers that would lead to suspension?
  • …how will they handle Threads’ failure to moderate, say, anti-trans posts differently from the way they would handle a Mastodon server’s similar failure?
  • …how will they manage and adjudicate their users’ competing needs, including desires to connect with a specific cultural or geographical community that’s currently stuck on Meta (either by choice or by fiat) vs. concerns about Threads’ choice to host cross-platform harassment operators?

I don’t think the answers to these questions are going to be—or should be—the same for every server on the fediverse. I personally think Meta’s machinery is so implicated in genocide and a million lesser harms that it should be trapped inside a circle of salt forever, but even I recognize that there are billions of people around the world who have no other social internet available. These are the trade-offs.

I also think the first two questions in particular will seem easy to answer honestly, but in reality, they won’t be, because Threads is so big that the perceived costs of defederation will, for many or even most fedi admins, outweigh the benefits of booting a server that protects predators and bad actors.

I would note that most mainstream fedi servers maintain policies that at least claim to ban (open) harassment or hateful content based on gender, gender identity, race or ethnicity, or sexual orientation. On this count, I’d argue that Threads already fails the first principle of the Mastodon Server Covenant:

Active moderation against racism, sexism, homophobia and transphobia. Users must have the confidence that they are joining a safe space, free from white supremacy, anti-semitism and transphobia of other platforms.

Don’t take my word for this failure. Twenty-four civil rights, digital justice and pro-democracy organizations delivered an open letter last summer on Threads’ immediate content moderation…challenges:

…we are observing neo-Nazi rhetoric, election lies, COVID and climate change denialism, and more toxicity. They posted bigoted slurs, election denial, COVID-19 conspiracies, targeted harassment of and denial of trans individuals’ existence, misogyny, and more. Much of the content remains on Threads indicating both gaps in Meta’s Terms of Service and in its enforcement, unsurprising given your long history of inadequate rules and inconsistent enforcement across other Meta properties.

Rather than strengthen your policies, Threads has taken actions doing the opposite, by purposefully not extending Instagram’s fact-checking program to the platform and capitulating to bad actors, and by removing a policy to warn users when they are attempting to follow a serial misinformer. Without clear guardrails against future incitement of violence, it is unclear if Meta is prepared to protect users from high-profile purveyors of election disinformation who violate the platform’s written policies.

For me, there is no ideal world that includes Meta, as the company currently exists. But in the most ideal available world, I think other fedi services would adopt—and publicly announce—a range of policies for dealing with Threads, including their answers to questions like the ones above.

Domain blocking and its limits

If you don’t want to federate with Threads, the obvious solution is to block the whole domain or find yourself a home server that plans to suspend it. Unfortunately, as I’ve learned through personal experience, suspensions and blocks aren’t a completely watertight solution.

In my own case, a few months ago someone from one of the most widely-suspended servers in the fediverse picked one of my more innocuous posts that had been boosted into his view and clowned in my replies in the usual way despite the fact that my home server had long since suspended the troll’s server.

So not only was my post being passed around on a server that should never have seen it, I couldn’t see the resulting trolling—but others whose servers federated with the troll server could. Now imagine that instead of some random edgelord, it had been part of the Libs of TikTok harassment sphere, invisibly-to-me using my post to gin up a brigading swarm. Good times!

Discussions about this loophole have been happening for much longer than I’ve been active on fedi, but every technical conversation I’ve seen about this on Mastodon rapidly reaches such an extreme level of “No, that’s all incorrect, it depends on how each server is configured and which implementations are in play in the following fifty-two ways,” that I’m not going to attempt a technical summary here.

Brook Miles has written about this in greater detail in the context of the “AUTHORIZED_FETCH” Mastodon configuration option—see the “Example—Boosting” section for more.
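
That said, purely for intuition, here is a deliberately toy sketch of the shape of the loophole. It is not Mastodon’s actual implementation, the server names are made up, and all of the real configuration subtleties are omitted; the crux is just that each server consults only its own suspension list when delivering an activity, so a boost from a third server can carry a post to places the author’s server would never send it:

    # Toy model of the boost loophole. A sketch for intuition only, not
    # Mastodon's actual code; all server names are hypothetical.

    class Server:
        def __init__(self, name):
            self.name = name
            self.suspended = set()  # domains this server refuses to talk to
            self.inbox = []         # activities received from other servers

        def deliver(self, activity, recipients):
            # A delivering server checks only ITS OWN suspension list.
            for server in recipients:
                if server.name not in self.suspended:
                    server.inbox.append(activity)

    home = Server("home.example")      # my server
    friend = Server("friend.example")  # a server that follows mine
    troll = Server("troll.example")    # a server my server suspended

    home.suspended.add("troll.example")

    # 1. I post. My server never delivers to troll.example.
    home.deliver("my post", [friend])

    # 2. Someone on friend.example boosts my post. Their server has not
    #    suspended troll.example, so the boost is delivered there too.
    friend.deliver("boost of my post", [home, troll])

    # 3. troll.example now has my post, and any replies it hosts never
    #    reach home.example (it is suspended there), so they are invisible
    #    to me but visible to everyone else federating with troll.example.
    print(troll.inbox)  # ['boost of my post']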

If your personal threat model is centered on not being annoyed by visible taunting, this loophole doesn’t really matter. But if you or your community have had to contend with large-scale online attacks or distributed offline threats, a system that boosts your posts to servers your own server has already suspended, and then hides any ensuing threats from you while leaving them visible to other attackers, is more dangerous than one that simply shows them to you.

Will this be a significant attack vector from Threads, specifically? I don’t know! I know that people who work on both Mastodon and ActivityPub are aware of the problem, but I don’t have any sense of how long it would take for the loophole to be closed in a way that would prevent posts from being boosted around server suspensions and reaching Threads.

In the meantime, I think the nearest thing to reasonably sturdy protection for people on fedi who have good reason to worry about the risk surface Threads federation opens up is probably to either…

  • block Threads and post followers-only or local-only, for fedi services that support it, or
  • operate from a server that federates only with servers that also refuse to federate with Threads—which is a system already controversial within the fediverse because allowlists are less technically open than denylists (a configuration sketch follows this list).
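
For what it’s worth, Mastodon does ship an allowlist-style option, called limited federation mode. Here’s a sketch of what turning it on looks like; the variable name below comes from Mastodon’s server configuration documentation, but details can vary by version, so treat this as illustrative rather than authoritative:

    # In .env.production on a Mastodon server. With this enabled, the
    # server federates only with explicitly allowed domains instead of
    # with everyone except a denylist.
    LIMITED_FEDERATION_MODE=true

The allowed domains themselves are then managed by the server’s admins, which is part of why this model concentrates so much trust in them.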
A note on individual domain blocking

Earlier this week, I got curious about how individual blocks and server suspensions interact, and my fedi research collaborator Darius Kazemi generously ran some tests using servers he runs to confirm our understanding of the way Mastodon users can block domains.

These informal tests show that:

  1. if you individually block a domain, your block will persist even if your server admin suspends and then un-suspends the domain you blocked, and
  2. this is the case whether you block using the “block domain” UI in Mastodon (or Hometown) or upload a domain blocklist using the import tools in Settings.

The upload method also allows you to block a domain even if your home server’s admin has already suspended that domain. And this method—belt plus suspenders, essentially—should provide you with a persistent domain block even if your home server’s admins later change their policy and un-suspend Threads (or any other server that concerns you).
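
For reference, the import file in question is, as far as I can tell, just a plain one-domain-per-line CSV. Something like this (the domains below are placeholders, not recommendations):

    threads.net
    some-other-server.example

Upload it using the import tools in Settings with the domain-block list type selected; the labels vary a bit between versions and forks, but the shape of the file should be the same.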

(If any of the above is wrong, it’s my fault, not Darius’s.)

Human/feeling

This last part is difficult, but I’m keeping it in because it’s true and it’s something I’m wrestling with.

It’s been a wild week or so watching people who I thought hated centralized social networks because of the harm they do giddily celebrating the entry into the fediverse of a vast, surveillance-centric social media conglomerate credibly accused of enabling targeted persecution and mass murder.

Rationally, I understand that the adoption of an indieweb protocol by the biggest social media company in the world feels super exciting to many fedi developers and advocates. And given the tiny scraps of resources a lot of those people have worked with for years on end, it probably feels like a much-needed push toward some kind of financial stability.

What I cannot make sense of is the belief that any particular implementation of open networking is such an obvious, uncomplicated, and overwhelming good that it’s sensible and good to completely set aside the horrors in Meta’s past and present to celebrate exciting internet milestones.

I don’t think the people who are genuinely psyched about Threads on fedi are monsters or fascists, and I don’t think those kinds of characterizations—which show up a lot in my replies—are helping. And I understand that our theories of change just don’t overlap as much as I’d initially hoped.

But for me, knowing what I do about the hundreds of opportunities to reduce actual-dead-kids harm that Meta has repeatedly and explicitly turned down, the most triumphant announcements feel like a celebration on a mass grave.

Other contexts and voices
https://erinkissane.com/untangling-threads
Root & Branch

I took a break after the Meta in Myanmar posts, partly because I was crispy and partly because of heavy family weather. Then, wonderfully, we got a puppy, and so I’ve been “sleeping” like a new parent again, dozy and dazed and bumping into the furniture. The puppy is very good, and the family members have all survived.

But in the peculiar headspace of parents in the hospital and sleep deprivation at home, and the deep heaviness of *gesturing around* events, the things I’ve been writing about all year have been distilling down in my brain.

To recap: Things are weird on the networks. Weird like wyrd, though our fates remain unsettled; weird like wert-, the turning, the winding, the twist.

I think one of the deep weirdnesses is that lots of us know what we don’t want, which is more of whatever thing we’ve been soaking in. But I think many—maybe all?—of the new-school networks with wind in their sails are defined more by what they aren’t than what they are: Not corporate-owned. Not centralized. Not entangled in inescapable surveillance, treadmill algorithms, ad models, billionaire brain injury. In many cases, not governable.

It’s not an untenable beginning, and maybe it’s a necessary phase, like adolescence. But I don’t think it’s sufficient—not if we want to build structures for collaboration and communion instead of blasted landscapes ruled by warlords.

On “growth”

Whenever I write about my desire for safer and better networks that are also widely accessible and inviting, I get comments about how I’m advocating for Mastodon specifically to embrace corporate-style growth at all costs. I think this is a good-faith misunderstanding, so I’ll do something I rarely do now and talk about my life as a person.

In my adolescence and for a while afterward, my ways of relating to people and the world were shaped by the experiences I’d had as a vulnerable, culturally alienated kid—dismissive or cruel authorities, mean kids, all the quotidian ways norms get ground in. My method for self-preservation was shoving back hard enough that most people wouldn’t come around for a second try.

Somewhere in my early 20s, for lots of reasons, something shifted and my eyes rewired. When I looked up again, I could see a lot more of the precariousness and forced competition and material dangers my less-alienated peers had also been dragged through. I started caring a lot more about protection and care, not just for people like me but for the whole messy gaggle.

Personally and professionally, that’s where I’ve stayed. (Prioritizing care and collective survival over allegiance or purity or career impact or whatever simplifies a lot of things, and though I can’t recommend it as a great way to make piles of money, I’ve mostly found ways to make it work.)

So back to growth. To advocate for “Growth at all costs” or “growth for its own sake,” I’d need to have an idea of what would be best for Mastodon or for the fediverse or for any other network or protocol. I don’t, because although I like Mastodon a lot for my own social media, I don’t care professionally about any network, platform, or protocol for itself.

I care about any given platform exactly as much as it provides good online living conditions for all the people. Which means accessibility, genuine ease of use, and protective norms and rules that cut against predatory behaviors and refrain from treating human lives as an attack surface. I think the fediverse is a really interesting attempt at some of that, and I’d like to see it get better.

Taproots

The complexities of managing real-world social platforms are real and wickedly difficult and almost impossible to think about at a systems level with rigor and clarity. I burned months of savings and emotional equilibrium working on the Myanmar incident report because the specifics matter so much, and while none of us can know all of them, most of us don’t know most of them, and I think that’s dangerous.

But there are a couple of root-level network decisions that shape everything that sprouts from and branches into that exhilarating/queasy complexity, and although conversations about new networks tend to circle them obsessively, they tend to get stuck in thought-terminating clichés, locked-in frames, and edicts about how people should or would behave, if only human nature would bend to a more rational shape.

I’m a simple person, but I recognize doctrinal disputes when I see them, and I prefer not to.

Two big root-level things that I think we haven’t properly sorted out:

  • Resources: All networks require a whole lot of time and money to run well—the more meticulously run, the more money and time are required, and this is true whether we’re talking about central conglomerates or distributed networks. If we want to avoid just the most obvious bad incentives, where does that money and time come from?
  • Governance: Who—what people, what kind of people, what structures made of people—should we trust to do the heavy, tricky, fraught work of making and keeping our networks good? How should they work? To whom should they be accountable, and how can “accountability” be redeemed from its dissipated state and turned into something with both teeth and discretion?

Answers in the negative, unfortunately, don’t make good final answers: the set of all things that aren’t “billionaire” or “corporation” is…unwieldy, and not everything it contains is of equal use.

It’s my own view that answers that rely either on parsing out good intent (“good people”) or on what I’ve called elsewhere “load-bearing personalities” are fundamentally inadequate. I’ll grant that intent may matter in the abstract, but I’m 40 or 50 years old and have yet to see persuasive evidence that intent is a fixed thing that’s possible to divine with any accuracy for most people most of the time. And I’ve been a load-bearing personality. It’s not a sustainable path. People break or get warped by power or pressure or trauma. Individual commitment or charisma or good judgment are great contributions to a well structured group effort, but they shouldn’t be used as rafters and beams.

So that leaves me with the more material side of things: What kinds of systems work best? What kinds of processes? Which incentives? I want to go back to the big twiggy mess that is one specific branch of governance—content moderation—to try to make this more concrete.

As below, so above

With the exception of people out on the thin end of libertarian absolutism, most of us entangled in the communal internet prefer to live within societies governed by norms and laws.

For most of us, I think, there are things we don’t want to see, but aren’t interested in forcibly eliminating from all public forums. For me, that includes most advertising, quotidian forms of crap speech and behavior (terrible uncle stuff, rather than genocidal demagogue stuff—but note that even this is a category that shifts based on the speaker’s level of power and authority), porn, and incurious, status-seeking punditry.

Your list is probably different! But I’m talking about the stuff of mute lists and platform norms, however they’re enforced. (I would, myself, be content with an effective way for this stuff to be labeled and thrown off my timeline.)

Other things, most of us want not to exist at all.

There’s a broad consensus that, for example, CSAM shouldn’t merely be hidden from our feeds, but shouldn’t exist, because its existence is an ongoing crime that enacts continuous harm on its innocent victims and because it perpetuates new child exploitation: it’s immensely damaging in the present and it endangers the future lives of very real children.

What else? Terrorist and Nazi-esque white supremacist recruitment and propaganda material, probably—it meets the criteria of being both damaging in the present and dangerous to the future.

Even these simplest extremes quickly branch into dizzying human complexity in practice—viz the right-wing demonization of queer and trans existence as grooming, pedophilia, dangerous to children, or the application of anti-terrorism laws to all protest and dissent. But for just a moment, I’ll focus on good-faith applications.

Whom do we trust to define, identify, and remove this stuff?

Which technologies and processes are appropriate for identifying it? What are the risks and trade-offs of those technologies and processes?

What relationships with law enforcement are appropriate?

To whom should the people doing this work be accountable, and how?

To be clear, the fact that plenty of people on the fediverse are happy to trade the industrial-grade trust & safety teams of the big platforms for “literally one random pseudonymous person who vibes okay,” says a lot about the platforms we’ve experienced so far. I’m not here to oppose informal and anarchic governance systems and choices! I lean that way myself. But I want to better understand how they—and other extant governance models across the fediverse—work and when they succeed and where they fail, especially at scale, amidst substantial cultural differences, and against professionalized adversaries bent on obliterating trust, distributing harmful material, surveilling dissenters, or disseminating propaganda.

Digging in

I spent a lot of this year trying to understand and write about the current landscape on this site. Now it’s time to work out more sustainable ways to contribute, and I’m pleased to finally be able to say that thanks to support from the Digital Infrastructure Insights Fund, Darius Kazemi (of Hometown and Run your own social) and I will be spending the first half of 2024 working on exactly that research, with the goal of turning everything we learn into public, accessible knowledge for people who build, run, and care about new networks and platforms.

Here’s our project page at DIIF, and Darius’s post, which has a lot more details than this one.

I’ll be writing a lot as we get moving on the work, and I’m looking forward to that with slightly distressing ferocity, probably because I don’t know any better way to hear what I’ve been listening to than to write.

https://erinkissane.com/root-branch
Meta in Myanmar (full series)

Between July and October of this year, I did a lot of reading and writing about the role of Meta and Facebook—and the internet more broadly—in the genocide of the Rohingya people in Myanmar. The posts below are what emerged from that work.

The format is a bit idiosyncratic, but what I’ve tried to produce here is ultimately a longform cultural-technical incident report. It’s written for people working on and thinking about (and using and wrestling with) new social networks and systems. I’m a big believer in each person contributing in ways that accord with their own skills. I’m a writer and researcher and community nerd, rather than a developer, so this is my contribution.

More than anything, I hope it helps.

Meta in Myanmar, Part I: The Setup (September 28, 2023, 10,900 words)

Myanmar got the internet late and all at once, and mostly via Meta. A brisk pass through Myanmar’s early experience coming online and all the benefits—and, increasingly, troubles—connectivity brought, especially to the Rohingya ethnic minority, which was targeted by massive, highly organized hate campaigns.

Something I didn’t know going in is how many people warned Meta, and in how much detail, and for how many years. This post captures as many of those warnings as I could fit in.

Meta in Myanmar, Part II: The Crisis (September 30, 2023, 10,200 words)

Instead of heeding the warnings that continued to pour in from Myanmar, Meta doubled down on connectivity—and rolled out a program that razed Myanmar’s online news ecosystem and replaced it with inflammatory clickbait. What happened after that was the worst thing that people can do to one another.

Also: more of the details of the total collapse of content moderation and the systematic gaming of algorithmic acceleration to boost violence-inciting and genocidal messages.

Meta in Myanmar, Part III: The Inside View (October 6, 2023, 12,500 words)

Using whistleblower disclosures and interviews, this post looks at what Meta knew (so much) and when (for a long time) and how they handled inbound information suggesting that Facebook was being used to do harm (they shoved it to the margins).

This post introduces an element of the Myanmar tragedy that turns out to have echoes all over the planet, which is the coordinated covert influence campaigns that have both secretly and openly parasitized Facebook to wreak havoc.

I also get into a specific and I think illustrative way that Meta continues to deceive politicians and media organizations about their terrible content moderation performance, and look at their record in Myanmar in the years after the Rohingya genocide.

Meta in Myanmar, Part IV: Only Connect (October 13, 2023, 8,600 words)

Starting with the recommendations of Burmese civil-society organizations and individuals plus the concerns of trust and safety practitioners who’ve studied large-scale hate campaigns and influence operations, I look at a handful of the threats that I think cross over from centralized platforms to rapidly growing new-school decentralized and federated networks like Mastodon/the fediverse and Bluesky—in potentially very dangerous ways.

It may be tempting to take this last substantial piece as the one to read if you don’t have much time, but I would recommend picking literally any of the others instead—my concluding remarks here are not intended to stand alone.

Meta Meta (September 28, 2023, 2,000 words)

I also wrote a short post about my approach, language, citations, and corrections. That brings the total word count to about 44,000.

Acknowledgements

Above all, all my thanks go to the people of the Myanmar Internet Project and its constituent organizations.

Thanks additionally to the various individuals on the backchannel whom I won’t name but hugely appreciate, to Adrianna Tan and Dr. Fancypants, Esq., to all the folks on Mastodon who helped me find answers to questions, and to the many people who wrote in with thoughts, corrections, and dozens of typos. All mistakes are extremely mine.

Many thanks also to the friends and strangers who helped me find information, asked about the work, read it, and helped it find readers in the world. Writing and publishing something like this as an independent writer and researcher is weird and challenging, especially in a moment when our networks are in disarray and lots of us are just trying to figure out where our next job will come from.

Without your help, this would have just disappeared, and I’m grateful to every person who reads it and/or passes it along.

“Thanks” is a deeply inadequate thing to say to my partner, Peter Richardson, who read multiple drafts of everything and supported me through some challenging days in my 40,000-words-in-two-weeks publishing schedule, and especially the months of fairly ghastly work that preceded it. But as ever, thank you, Peter.

https://erinkissane.com/meta-in-myanmar-full-series
Meta in Myanmar, Part IV: Only Connect

The Atlantic Council’s report on the looming challenges of scaling trust and safety on the web opens with this statement:

That which occurs offline will occur online.

I think the reverse is also true: That which occurs online will occur offline.

Our networks don’t create harms, but they reveal, scale, and refine them, making it easier to destabilize societies and destroy human beings. The more densely the internet is woven into our lives and societies, the more powerful the feedback loop becomes.

In this way, our networks—and specifically, the most vulnerable and least-heard people inhabiting them—have served as a very big lab for gain-of-function research by malicious actors.

And as the first three posts in this series make clear, you don’t have to be online at all to experience the internet’s knock-on harms—there’s no opt-out when internet-fueled violence sweeps through and leaves villages razed and humans traumatically displaced or dead. (And the further you are from the centers of tech-industry power—geographically, demographically, culturally—the less likely it is that the social internet’s principal powers will do anything to plan for, prevent, or attempt to repair the ways their products hurt you.)

I think that’s the thing to keep in the center while trying to sort out everything else.

In the previous 30,000 words of this series, I’ve tried to offer a careful accounting of the knowable facts of Myanmar’s experience with Meta. Here’s the argument I outlined toward the end of Part II:

  1. Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
  2. After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also [in Part II].)
  3. With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed [in Part II] and in Part III.)
  4. Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)

I still think that’s right. But this story’s many devils are in the details, and getting at least some of the details down in public was the whole point of this very long exercise.

Here at the end of it, it’s tempting to package up a tidy set of anti-Meta action items and call it a day, but there’s nothing tidy about this story, or about what I think I’ve learned working on it. What I’m going to do instead is try to illuminate some facets of the problem, suggest some directions for mitigations, rotate the problem, and repeat.

The allure of the do-over

After my first month of part-time research on Meta in Myanmar, I was absorbed in the work and roughed up by the awfulness of what I was learning and, frankly, incandescently furious with Meta’s leadership. But sometime after I read Faine Greenwood’s posts—and reread Craig Mod’s essay for the first time since 2016—I started to get scared, for reasons I couldn’t even pin down right away. Like, wake-up-at-3am scared.

At first, I thought I was just worried that the new platforms and networks coming into being would also be vulnerable to the kind of coordinated abuse that Myanmar experienced. And I am worried about that and will explain at great length later in this post. But it wasn’t just that.

Craig’s essay about his 2015 fieldwork with farmers in Myanmar captures something real about the exhilarating possibilities of a reboot:

…there is a wild and distinct freedom to the feeling of working in places like this. It is what intoxicates these consultants. You have seen and lived within a future, and believe—must believe—you can help bring some better version of it to light here. A place like Myanmar is a wireless mulligan. A chance to get things right in a way that we couldn’t or can’t now in our incumbent-laden latticeworks back home.

This rings bells not only because I remember my own early-internet spells of optimism—which were pretty far in the rearview by 2016—but because I recognize a much more recent feeling, which is the way it felt to come back online last fall, as the network nodes on Mastodon were starting to really light up.

I’d been mostly off social media since 2018, with a special exception for covid-data work in 2020 and 2021. But in the fall and winter of 2022, the potential of the fediverse was crackling in ways it hadn’t been in my previous Mastodon experiences in 2017 and 2018. If you’ve been in a room where things are happening, you’ll recognize that feeling forever, and last fall, it really felt like some big chunks of the status quo had changed state and gone suddenly malleable.

I also believe that the window for significant change in our networks doesn’t open all that often and doesn’t usually stay open for long.

So like any self-respecting moth, when I saw it happening on Mastodon, I dropped everything I’d been doing and flew straight into the porch light and I’ve been thinking and writing toward these ideas since.

Then I did the Myanmar project. By the time I got to the end of the research, I recognized myself in the accounts of the tech folks at the beginning of Myanmar’s internet story, so hopeful about the chance to escape the difficulties and disappointments of the recent past.

And I want to be clear: There’s nothing wrong with feeling hopeful or optimistic about something new, as long as you don’t let yourself defend that feeling by rejecting the possibility that the exact things that fill you with hope can also be turned into weapons. (I’ll be more realistic—will be turned into weapons, if you succeed at drawing a mass user base and don’t skill up and load in peacekeeping expertise at the same time.)

A lot of people have written and spoken about the unusual naivety of Burmese Facebook users, and how that made them vulnerable, but I think Meta itself was also dangerously naive—and worked very hard to stay that way as long as possible. And still largely adopts the posture of the naive tech kid who just wants to make good.

It’s an act now, though, to be clear. They know. There are some good people working themselves to shreds at Meta, but the company’s still out doing PR tapdancing while people in Ethiopia and India and (still) Myanmar suffer.

When I first realized how bad Meta’s actions in Myanmar actually were, it felt important to try to pull all the threads together in a way that might be useful to my colleagues and peers who are trying in various ways to make the world better by making the internet better. I thought I would end by saying, “Look, here’s what Meta did in Myanmar, so let’s get everyone the fuck off of Meta’s services into better and safer places.”

If your immediate response to this idea is that I’m undervaluing your decentralized network of choice, stick with me for a minute.

I’ve landed somewhere more complicated, because although I think Meta’s been a disaster, I’m not confident that there are sustainable better places for the vast majority of people to go. Not yet. Not without a lot more work.

We’re already all living through a series of rolling apocalypses, local and otherwise. Many of us in the west haven’t experienced the full force of them yet—we experience the wildfire smoke, the heat, the rising tide of authoritarianism, and rollbacks of legal rights. Some of us have had to flee. Most of us haven’t lost our homes, or our lives. Nevertheless, these realities chip away at our possible futures. I was born at about 330 PPM; my daughter was born at nearly 400.

The internet in Myanmar was born at a few seconds to midnight. Our new platforms and tools for global connection have been born into a moment in which the worst and most powerful bad actors, both political and commercial, are already prepared to exploit every vulnerability.

We don’t get a do-over planet. We won’t get a do-over network.

Instead, we have to work with the internet we made and find a way to rebuild and fortify it to support the much larger projects of repair—political, cultural, environmental—that are required for our survival.

I think those are the stakes, or I’d be doing something else with my time.

What “better” requires

I wrestled a lot with the right way to talk about this and how much to lean on my own opinions vs. the voices of Myanmar’s own civil society organizations and the opinions of whistleblowers and trust and safety experts.

The process of doing this series has altered my own sense of what the internet is and does, so everything I write after is necessarily going to at least partially emerge from what I’ve learned this summer and fall.

I’ve ended up taking the same approach to this post as I did with the previous three, synthesizing and connecting information from people with highly specific expertise and only sometimes drawing from my own experience and work.

If you’re not super interested in decentralized and federated networks, you probably want to skip down a few sections.

If you’d prefer to get straight to the primary references, here they are:

Notes and recommendations from people who were on the ground in Myanmar, and are still working on the problems the country faces:

Two docs related to large-scale threats. The federation-focused “Annex Five” of the big Atlantic Council report, Scaling Trust on the Web. The whole report is worth careful reading, and this annex feels crucial to me, even though I don’t agree with every word.

I’m also including Camille François’ foundational 2019 paper on disinformation threats, because it opens up important ideas.

I think there are many things Meta and Twitter and YouTube (and TikTok and so on) got and get horrifically wrong, but the trust and safety teams in those companies learned things that everyone interested in better networks needs to know.

Three deep dives with Facebook whistleblowers.

…otherwise, here’s what’s worrying me.

1. Adversaries follow the herd

Realistically, a ton of people are going to stay on centralized platforms, which are going to continue to fight very large-scale adversaries. (And realistically, those networks are going to keep ignoring as much as they can for as long as they can—which especially means that outside the US and Western Europe, they’re going to ignore a lot of damage until they’re regulated or threatened with regulation. Especially companies like Google/YouTube, whose complicity in situations like the one in Myanmar has been partially overlooked because Meta’s is so striking.)

But a lot of people are also trying new networks, and as they do, spammers and scammers and griefers will follow, in increasingly large numbers. So will the much more sophisticated people—and pro-level organizations—dedicated to manipulating opinion; targeting, doxxing, and discrediting individuals and organizations; distributing ultra-harmful material; and sowing division among their own adversaries. And these aren’t people who will be deterred by inconvenience.

In her super-informative interview on the Brown Bag podcast from the ICT4Peace Foundation, Myanmar researcher Victoire Rio mentions two things that I think are vital to this facet of the problem: One is that as Myanmar’s resistance moved off of Facebook and onto Telegram for security reasons after the coup, the junta followed suit and weaponized Telegram as a crowdsourced doxxing tool that has resulted in hundreds of arrests—Rio calls it “the Gestapo on steroids.”

This brings us to the next thing, which is commonly understood in industrial-grade trust and safety circles but, I think, less so on newer networks, which have mostly experienced old-school adversaries: basic scammers and spammers, distributors of illegal and horrible content, and garden-variety amateur Nazis and trolls. Those blunter and less sophisticated harms are still quite bad, but the more sophisticated threats that are common on the big centralized platforms are considerably more difficult to identify and root out. And if the people running new networks don’t realize that what we’re seeing right now are the starter levels, they’re going to be way behind the ball when better organized adversaries arrive.

2. Modern adversaries are heavy on resources and time

Myanmar has a population of about 51 million people, and in the years before the coup, it already had an internal adversary in the military that ran a professionalized, Russia-trained online propaganda and deception operation that maxed out at about 700 people, working in shifts to manipulate the online landscape and shout down opposing points of view. It’s hard to imagine that this force has lessened now that the genocidaires are running the country.

Russia’s adversarial operations roll much deeper, and aren’t limited to the well-known, now allegedly disbanded Internet Research Agency.

And although Russia is the best-known adversary in most US and Western European conversations I’ve been in, it’s very far from being the only one. Here’s disinfo and digital rights researcher Camille François, warning about the association of online disinformation with “the Russian playbook”:

Russia is neither the most prominent nor the only actor using manipulative behaviors on social media. This framing ignores that other actors have abundantly used these techniques, and often before Russia. Iran’s broadcaster (IRIB), for instance, maintains vast networks of fake accounts impersonating journalists and activists to amplify its views on American social media platforms, and it has been doing so since at least 2012.

What’s more, this kind of work isn’t the exclusive domain of governments. A vast market of for-hire manipulation proliferates around the globe, from Indian public relations firms running fake newspaper pages to defend Qatar’s interests ahead of the World Cup and Israeli lobbying groups running influence campaigns with fake pages targeting audiences in Africa.

This chimes with what Sophie Zhang reported about fake-Page networks on Facebook in 2019—they’re a genuinely global phenomenon, and they’re bigger, more powerful, and more diverse in both intent and tactics than most people suspect.

I think it’s easy to imagine that these heavy-duty threats focus only on the big, centralized services, but an in-depth analysis of just one operation, Secondary Infektion, shows that it operated across at least 300 websites and platforms ranging from Facebook, Reddit, and YouTube (and WordPress, Medium, and Quora) to literally hundreds of other sites and forums.

These adversaries will take advantage of decentralized social networks. Believing otherwise requires a naivety I hope we’ll come to recognize as dangerous.

3. No algorithms ≠ no trouble

Federated networks like Mastodon, which eschews algorithmic acceleration, offer fewer incentives for some kinds of adversarial actors—and that’s very good. But fewer isn’t none.

Here’s what Lai and Roth have to say about networks without built-in algorithmic recommendation surfaces:

The lack of algorithmic recommendations means there’s less of an attack surface for inauthentic engagement and behavioral manipulation. While Mastodon has introduced a version of a “trending topics” list—the true battlefield of Twitter manipulation campaigns, where individual posts and behaviors are aggregated into a prominent, platform-wide driver of attention—such features tend to rely on aggregation of local (rather than global or federated) activity, which removes much of the incentive for engaging in large-scale spam. There’s not really a point to trying to juice the metrics on a Mastodon post or spam a hashtag, because there’s no algorithmic reward of attention for doing so…

These disincentives for manipulation have their limits, though. Some of the most successful disinformation campaigns on social media, like the IRA’s use of fake accounts, relied less on spam and more on the careful curation of individual “high-value” accounts—with uptake of their content being driven by organic sharing, rather than algorithmic amplification. Disinformation is just as much a community problem as it is a technological one (i.e., people share content they’re interested in or get emotionally activated by, which sometimes originates from troll farms)—which can’t be mitigated just by eliminating the algorithmic drivers of virality.

Learning in bloody detail about how thoroughly Meta’s acceleration machine overran all of its attempts to suppress undesirable results has made me want to treat algorithmic virality like a nuclear power source: Maybe it’s good in some circumstances, but if we aren’t prepared to do industrial-grade harm-prevention work and not just halfhearted cleanup, we should not be fucking with it, at all.

But, of course, we already are. Lemmy uses algorithmic recommendations. Bluesky has subscribable, user-built feeds that aren’t opaque and monolithic in the way that, say, Facebook’s are—but they’re still juicing the network’s dynamics, and the platform hasn’t even federated yet.

I think it’s an open question how much running fully transparent, subscribable algorithmic feeds that are controlled by users mitigates the harm recommendation systems can do. I think I have a more positive view of AT Protocol than maybe 90% of fediverse advocates—which is to say, I feel neutral and like it’s probably too early to know much—but I’d be lying if I said I’m not nervous about what will happen when the people behind large-scale covert influence networks get to build and promote their own algo feeds using any identity they choose.

4. The benefits and limits of defederation

Another characteristic of fediverse (by which I mean “ActivityPub-based servers, mostly interoperable”) networks is the ability for both individual users and whole instances to defederate from each other. The ability to “wall off” instances hosting obvious bad actors and clearly harmful content offers ways for good-faith instance administrators to sharply reduce certain kinds of damage.

It also means, of course, that instances can get false-flagged by adversaries who make accounts on target groups’ instances and post abuse in order to get those instances mass-defederated, as was reportedly happening in early 2023 with Ukrainian servers. I’m inclined to think that this may be a relatively niche threat, but I’m not the right person to evaluate that.

A related threat that was expressed to me by someone who’s been working on the ground in Myanmar for years is that authoritarian governments will corral their citizens on instances/servers that they control, permitting both surveillance and government-friendly moderation of propaganda.

Given the tremendous success of many government-affiliated groups in creating (and, when disrupted, rebuilding) huge fake-Page networks on Facebook, I’d also expect to see harmless-looking instances pop up that are actually controlled by covert influence campaigns and/or organizations that intend to use them to surveil and target activists, journalists, and others who oppose them.

And again, these aren’t wild speculations: Myanmar’s genocidal military turned out to be running many popular, innocuous-looking Facebook Pages (“Let’s Laugh Casually Together,” etc.) and has demonstrated the ability to switch tactics to keep up with both platforms and the Burmese resistance after the coup. It seems bizarre to me to assume that equivalent bad actors won’t work out related ways to take advantage of federated networks.

5. Content removal at mass scale is failing

The simple version of this idea is that content moderation at mass scale can’t be done well, full stop. I tend to think that we haven’t tried a lot of things that would help—not at scale, at least. But I would agree that doing content moderation in old-internet ways on the modern internet at mass scale doesn’t cut it.

Specifically, I think it’s increasingly clear that doing content moderation as a sideline or an afterthought, instead of building safety and integrity work into the heart of product design, is a recipe for failure. In Myanmar, Facebook’s engagement-focused algorithms easily outpaced—and often still defeat—Meta’s attempts to squash the hateful and violence-inciting messages they circulated.

Organizations and activists out of Myanmar are calling on social networks and platforms to build human-rights assessments not merely into their trust and safety work, but into changes to their core product design. Including, specifically, a recommendation to get product teams into direct contact with the people in the most vulnerable places:

Social media companies should increase exposure of their product teams to different user realities, and where possible, facilitate direct engagement with civil society in countries facing high risk of human rights abuse.

Building societal threat assessments into product design decisions is something that I think could move the needle much more efficiently than trying to just stuff more humans into the gaps.

Content moderation that focuses only on messages or accounts, rather than the actors behind them, also comes up short. The Myanmar Internet Project’s report highlights Meta’s failure—as late as 2022—to keep known bad actors involved in the Rohingya genocide off Facebook, despite its big takedowns and rules nominally preventing the military and the extremists of Ma Ba Tha from using Facebook to distribute their propaganda:

…most, if not all, of the key stakeholders in the anti-Rohingya campaign continue to maintain a presence on Facebook and to leverage Facebook and other platforms for influence. As we repeatedly warned the platforms, the bulk of the harmful content we face comes from a handful of actors, who have been consistently violating Terms of Services and Community Standards.

The Myanmar Internet Project recommends that social media companies “rethink their moderation approach to more effectively deter and—where warranted—restrict actors with a track record of violating their rules and terms of services, including by enforcing sanctions and restrictions at an actor and not account level, and by developing better strategies to detect and remove accounts of actors under bans.”

This is…going to be complicated on federated networks, even if I set aside the massive question of how federated networks will moderate messages originating outside their instances that require language and culture expertise they lack.

I think Jon Camfield’s post on future trust and safety threats to federated networks is good on a lot of this stuff.

I’ll focus here on Mastodon because it’s big and it’s been federated for years. Getting rid of obvious, known bad actors at the instance level is something Mastodon excels at—viz the full-scale quarantine of Gab. If you’re on a well-moderated, mainstream instance, a ton of truly horrific stuff is going to be excised from your experience on Mastodon because the bad instances get shitcanned. And because there’s no central “public square” to contest on Mastodon, with all the corporations-censoring-political-speech-at-scale issues those huge ~public squares raise, many instance admins feel free to use a pretty heavy hand in throwing openly awful individuals and instances out of the pool.

But imagine a sophisticated adversary with a sustained interest in running a network of both covert and overt accounts on Mastodon, and things rapidly get more complicated.

Lai and Roth weigh in on this issue, noting that the fediverse currently lacks capability and capacity for tracking bad actors through time in a structured way, and also doesn’t presently have much in the way of infrastructure for collaborative actor-level threat analysis:

First, actor-level analysis requires time-consuming and labor-intensive tracking and documentation. Differentiating between a commercially motivated spammer and a state-backed troll farm often requires extensive research, extending far beyond activity on one platform or website. The already unsustainable economics of fediverse moderation seem unlikely to be able to accommodate this kind of specialized investigation.

Second, even if you assume moderators can, and do, find accounts engaged in this type of manipulation—and understand their actions and motivations with sufficient granularity to target their activity—the burden of continually monitoring them is overwhelming. Perhaps more than anything else, disinformation campaigns demonstrate the “persistent” in “advanced persistent threat”: a single disinformation campaign, like China-based Spamouflage Dragon, can be responsible for tens or even hundreds of thousands of fake accounts per month, flooding the zone with low-quality content. The moderation tools built into platforms like Mastodon do not offer appropriate targeting mechanisms or remediations to moderators that could help them keep pace with this volume of activity.… Without these capabilities to automate enforcement based on long-term adversarial understanding, the unit economics of manipulation are skewed firmly in favor of bad actors, not defenders.

There are people thinking about and working on this stuff on the fediverse side—individuals and nascent institutions both—and that gives me hope.

There’s also the perhaps even greater challenge of working across instances—and ideally, across platforms—to identify and root out persistent threats. Lai and Roth again:

From an analytic perspective, it can be challenging, if not impossible, to recognize individual accounts or posts as connected to a disinformation campaign in the absence of cross-platform awareness of related conduct. The largest platforms—chiefly, Meta, Google, and Twitter (pre-acquisition)—regularly shared information, including specific indicators of compromise tied to particular campaigns, with other companies in the ecosystem in furtherance of collective security. Information sharing among platform teams represents a critical way to build this awareness—and to take advantage of gaps in adversaries’ operational security to detect additional deceptive accounts and campaigns.… Federated moderation makes this kind of cross-platform collaboration difficult.

I predict that many advocates of federated and decentralized networks will believe that Lai and Roth are overstating these gaps in safety capabilities, but I hope more developers, instance administrators, and especially funders, will take this as an opportunity to prioritize scaled-up tooling and institution-building.

Edited to add, October 16, 2023: Independent Federated Trust and Safety (IFTAS), an organization working on supporting and improving trust and safety on federated networks, just released their Moderator Needs Assessment results, highlighting needs for financial, legal, technical, and cultural support.

Meta’s fatal flaw

I think if you ask people why Meta failed to keep itself from being weaponized in Myanmar, they’ll tell you about optimizing for engagement and ravenously, heedlessly pursuing expansion and profits and continuously fucking up every part of content moderation.

I think those things are all correct, but there’s something else, too, though “heedless” nods toward it: As a company determined to connect the world at all costs, Meta failed, spectacularly, over and over, to make the connections that mattered, between their own machinery and the people it hurt.

So I think there are two linked things Meta could have done to prevent so much damage, which are to listen out for people in trouble and meaningfully correct course.

“Listening out” is from Ursula Le Guin, who said it in a 2015 interview with Choire Sicha that has never left my mind. She was speaking about the challenge of working while raising children while her partner taught:

…it worked out great, but it took full collaboration between him and me. See, I cannot write when I’m responsible for a child. They are full-time occupations for me. Either you’re listening out for the kids or you’re writing. So I wrote when the kids went to bed. I wrote between nine and midnight those years.

This passage is always with me because the only time I’m not listening out, at least a little bit, is when my kid is completely away from the house at school. Even when she’s sleeping, I’m half-concentrating on whatever I’m doing and…listening out. I can’t wear earplugs at night or I hallucinate her calling for me in my sleep. This is not rational! But it’s hardwired. Presumably this will lessen once she leaves for college or goes to sea or whatever, but I’m not sure it will.

So listening out is meaningful to me for embarrassingly, viscerally human reasons. Which makes it not something a serious person puts into an essay about the worst things the internet can do. I’m putting it here anyway because it cuts to the thing I think everyone who works on large-scale social networks and tools needs to wire into our brainstems.

In Myanmar and in Sophie Zhang’s disclosures about the company’s refusal to prioritize the elimination of covert influence networks, Meta demonstrated not just an unwillingness to listen to warnings, but a powerful commitment to not permitting itself to understand or act on information about the dangers it was worsening around the world.

It’s impossible for me to read the Haugen and Zhang disclosures and not think of the same patterns of dismissing and hiding dangerous knowledge that we’ve seen from tobacco companies (convicted of racketeering and decades-long patterns of deception over tobacco’s dangers), oil companies (being sued by the state of California over decades-long patterns of deception over their contributions to climate change), or the Sacklers (who pled guilty to charges based on a decade-long pattern of deception over their contribution to the opioid epidemic).

But you don’t have to be a villain to succumb to the temptation to push away inconvenient knowledge. It often takes nothing more than being idealistic or working hard for little (or no) pay to believe that the good your work does necessarily outweighs its potential harms—and that especially if you’re offering it for free, any trouble people get into is their own fault. They should have done their own research, after all.

And if some people are warning about safety problems on an open source network where the developers and admins are trying their best, maybe they should just go somewhere else, right? Or maybe they’re just exaggerating, which is the claim I saw the most on Mastodon when the Stanford Internet Observatory published its report on CSAM on the fediverse.

“We’re all volunteers doing our best,” might be true, but it isn’t an escape. And although, wonderfully, I’m not running a gigantic open-source network, I do know a few things about large-scale volunteer work with high stakes. Good intentions are not a free pass.

We can’t have it both ways. Either people making and freely distributing tools and systems have some responsibility for their potential harms, or they don’t. If Meta is on the hook, so are people working in open technology. Even nice people with good intentions.

So: Listening out. Listening out for signals that we’re steering into the shoals. Listening out like it’s our own children at the sharp end of the worst things our platforms can do.

The warnings about Myanmar came from academics and digital rights people. They came, above all, from Myanmar, nearly 8,000 miles from Palo Alto. Twenty hours on a plane. Too far to matter, for too many years.


The civil society people who issued many of the warnings to Meta have clear thoughts about the way to avoid recapitulating Meta’s disastrous structural callousness during the years leading up to the genocide of the Rohingya. Several of those recommendations involve diligent, involved, hyper-specific listening to people on the ground about not only content moderation problems, but also dangers in the core functionality of social products themselves.

Nadah Feteih and Elodie Vialle’s recent piece in Tech Policy Press, “Centering Community Voices: How Tech Companies Can Better Engage with Civil Society Organizations” offers a really strong introduction to what that kind of consultative process might be like for big platforms. I think it also offers about a dozen immediately useful clues about how smaller, more distributed, and newer networks might proceed as well.

But let’s get a little more operational.

“Do better” requires material support

It’s impossible to talk about any of this without talking about the resource problem in open source and federated networks—most of the sector is critically underfunded and built on gift labor, which has shaping effects on who can contribute, who gets listened to, and what gets done.

It would be unrealistic bordering on goofy to expect everyone who contributes to projects like Mastodon and Lemmy or runs a small instance on a federated network to independently develop in-depth human-rights expertise. It’s just about as unrealistic to expect even lead developers who are actively concerned about safety to have the resources and expertise to arrange close consultation with relevant experts in digital rights, disinformation, and complex, culturally specific issues globally.

There are many possible remedies to the problems and gaps I’ve tried to sketch above, but the one I’ve been daydreaming about a lot is the development of dedicated, cross-cutting, collaborative institutions that work not only within the realm of trust and safety as it’s constituted on centralized platforms, but also on hands-on research that brings the needs and voices of vulnerable people and groups into the heart of design work on protocols, apps, and tooling.

Maintainers and admins all over the networks are at various kinds of breaking points. Relatively few have the time and energy to push through year after year of precariousness and keep the wheels on out of sheer cussedness. And load-bearing personalities are not, I think, a great way to run a stable and secure network.

Put another way, rapidly growing, dramatically underfunded networks characterized by overtaxed small moderation crews and underpowered safety tooling present a massive attack surface. Believing that the same kinds of forces that undermined the internet in Myanmar won’t be able to weaponize federated networks because the nodes are smaller is a category error—most of the advantages of decentralized networks can be turned to adversaries’ advantage almost immediately.

Flinging money indiscriminately isn’t a cure, but without financial support that extends beyond near-subsistence for a few people, it’s very hard to imagine free and open networks being able to skill up in time to handle the kinds of threats and harms I looked at in the first three posts of this series.

The problem may look different for venture-funded projects like Bluesky, but I don’t know. I think in a just world, the new CTO of Mastodon wouldn’t be working full-time for free.

I also think that in that just world, philanthropic organizations with interests in the safety of new networks would press for and then amply fund collective, collaborative work across protocols and projects, because regardless of my own concerns and preferences, everyone who uses any of the new generation of networks and platforms deserves to be safe.

We all deserve places to be together online that are, at minimum, not inimical to offline life.

So what if you’re not a technologist, but you nevertheless care about this stuff? Unsurprisingly, I have thoughts.

Everything, everywhere, all at once

The inescapable downside of not relying on centralized networks to fix things is that there’s no single entity to try to pressure. The upside is that we can all work toward the same goals—better, safer, freer networks—from wherever we are. And we can work toward holding both centralized and new-school networks accountable, too.

If you live someplace with at least semi-democratic representation in government, you may be able to accomplish a lot by sending things like Amnesty International’s advocacy report and maybe even this series to your representatives, where there’s a chance their staffers will read them and be able to mount a more effective response to technological and corporate failings.

If you have an account on a federated network, you can learn about the policies and plans of your own instance administrators—and you can press them (I would recommend politely) about their plans for handling big future threats like covert influence networks, organized but distributed hate campaigns, actor-level threats, and other threats we’ve seen on centralized networks and can expect to see on decentralized ones.

And if you have time or energy or money to spare, you can throw your support (material or otherwise) behind collaborative institutions that seek to reduce societal harms.

On Meta itself

It’s my hope that the 30,000-odd words of context, evidence, and explanation in parts 1–3 of this series speak for themselves.

I’m sure some people, presumably including some who’ve worked for Meta or still do, will read all of those words and decide that Meta had no responsibility for its actions and failures to act in Myanmar. I don’t think I have enough common ground with those readers to try to discuss anything.

There are quite clearly people at Meta who have tried to fix things. A common thread across internal accounts is that Facebook’s culture of pushing dangerous knowledge away from its center crushes many employees who try to protect users and societies. In cases like Sophie Zhang’s, Meta’s refusal to understand and act on what its own employees had uncovered is clearly a factor in employee health breakdowns.

And the whistleblower disclosures from the past few years make it clear that many people over many years were trying to flag, prevent, and diagnose harm. And to be fair, I’m sure lots of horrible things were prevented. But it’s impossible to read Frances Haugen’s disclosures or Sophie Zhang’s story and believe that the company is doing everything it can, except in the sense that it seems unable to conceive of meaningfully redesigning its products—and rearranging its budgets—to stop hurting people.

It’s also impossible for me to read anything Meta says on the record without thinking about the deceptive, blatant, borderline contemptuous runarounds it’s been doing for years over its content moderation performance. (That’s in Part III, if you missed it.)

Back in 2018, Adam Mosseri, who during the genocide was in charge of News Feed—a major “recommendation surface” on which Facebook’s algorithms boosted genocidal anti-Rohingya messages in Myanmar—said that he’d lost some sleep over what had happened.

The lost sleep apparently didn’t amount to much in the way of product-design changes, considering that Global Witness found Facebook doing pretty much the exact same things with the same kinds of messages three years later.

But let’s look at what Mosseri actually said:

There is false news, not only on Facebook but in general in Myanmar. But there are no, as far as we can tell, third-party fact-checking organizations with which we can partner, which means that we need to rely instead on other methods of addressing some of these issues. We would look heavily, actually, for bad actors and things like whether or not they’re violating our terms of service or community standards to try and use those levers to try and address the proliferation of some problematic content. We also try to rely on the community and be as effective as we can at changing incentives around things like click-bait or sensational headlines, which correlate, but aren’t the same as false news.

Those are all examples of how we’re trying to take the issue seriously, but we lose some sleep over this. I mean, real-world harm and what’s happening on the ground in that part of the world is actually one of the most concerning things for us and something that we talk about on a regular basis. Specifically, about how we might be able to do more and be more effective, and more quickly.

This is in 2018, so six years after Myanmar’s digital-rights and civil-society organizations started contacting Meta to tell them about the organized hate campaigns on Facebook in Myanmar, which Meta appears to have ignored, because all those organized campaigns were still running through the peak of the Rohingya genocide in 2016 and 2017.

This interview also happens several years after Meta started relying on members of those same Burmese organizations to report content—because, if you remember from the earlier posts in this series, they hadn’t actually translated the Community Standards or the reporting interface itself. Or hired enough Burmese-speaking moderators to handle a country bigger than a cruise ship. It’s also interesting that Mosseri reported that Meta couldn’t find any “third-party fact-checking organizations” given that MIDO, which was one of the organizations reporting content problems to them, actually ran its own fact-checking operation.

And the incentives on the click-bait Mosseri mentions? That would be the market for fake and sensationalist news that Meta created by rolling out Instant Articles, which directly funded the development of Burmese-language clickfarms, and which pretty much destroyed Myanmar’s online media landscape back in 2016.

Mosseri and his colleagues talked about it on a regular basis, though.

I was going to let myself be snarky here and note that being in charge of News Feed during a genocide the UN Human Rights Council linked to Facebook doesn’t seem to have slowed Mosseri down personally, either. He’s the guy in charge of Meta’s latest social platform, Threads, after all.

But maybe it goes toward explaining why Threads refuses to allow users to search for potentially controversial topics, including the effects of an ongoing pandemic. This choice is being widely criticized as a failure to let people discuss important things. It feels to me like more of an admission that Meta doesn’t think it can do the work of content moderation, so it’s designing the product to avoid the biggest dangers.

It’s a clumsy choice, certainly. And it’s weird, after a decade of social media platforms charging in with no recognition that they’re making things worse. But if the alternative is returning to the same old unwinnable fight, maybe just not going there is the right call. (I expect that it won’t last.)

The Rohingya are still waiting

The Rohingya are people, not lessons. Nearly a million of them have spent at least six years in Bangladeshi camps that make up the densest refugee settlement on earth. Underfunded, underfed, and prevented from working, Rohingya people in the camps are vulnerable to climate-change-worsened weather, monsoon flooding, disease, fire, and gang violence. The pandemic has concentrated the already intense restrictions and difficulties of life in these camps.

If you have money, Global Giving’s Rohingya Refugee Relief Fund will get it into the hands of people who can use it.

The Canadian documentary Wandering: A Rohingya Story provides an intimate look at life in Kutupalong, the largest of the refugee camps. It’s beautifully and lovingly made.

Screenshot from the film Wandering: A Rohingya Story, showing a Rohingya mother smoothing her hands over her daughter’s laughing face.

From Wandering: A Rohingya Story. This mother and her daughter destroyed me.

Back in post-coup Myanmar, hundreds of thousands of people are risking their lives resisting the junta’s brutal oppression. Mutual Aid Myanmar is supporting their work. James C. Scott (yes) is on their board.

In the wake of the coup, the National Unity Government—the shadow-government wing of the Burmese resistance against the junta—has officially recognized the wrongs done to the Rohingya, and committed itself to dramatic change, should the resistance prevail:

The National Unity Government recognises the Rohingya people as an integral part of Myanmar and as nationals. We acknowledge with great shame the exclusionary and discriminatory policies, practices, and rhetoric that were long directed against the Rohingya and other religious and ethnic minorities. These words and actions laid the ground for military atrocities, and the impunity that followed them emboldened the military’s leaders to commit countrywide crimes at the helm of an illegal junta.

Acting on our ‘Policy Position on the Rohingya in Rakhine State’, the National Unity Government is committed to creating the conditions needed to bring the Rohingya and other displaced communities home in voluntary, safe, dignified, and sustainable ways.

We are also committed to social change and to the complete overhaul of discriminatory laws in consultation with minority communities and their representatives. A Rohingya leader now serves as Deputy Minister of Human Rights to ensure that Rohingya perspectives support the development of government policies and programs and legislative reform.

From the refugee camps, Rohingya youth activists are working to build solidarity between the Rohingya people and the mostly ethnically Bamar people in Myanmar who, until the coup, allowed themselves to believe the Tatmadaw’s messages casting the Rohingya as existential threats. Others, maybe more understandably, remain wary of the NUG’s claims that the Rohingya will be welcomed back home.


In Part II of this series, I tried to explain—clearly but at speed—how the 2016 and 2017 attacks on the Rohingya, which accelerated into full-scale ethnic cleansing and genocide, began when Rohingya insurgents carried out attacks on Burmese security forces and committed atrocities against civilians, after decades of worsening repression and deprivation in Myanmar’s Rakhine State by the Burmese government.

This week, the social media feed of one of the young Rohingya activists featured in a story I linked above is filled with photographs from Gaza, where two million people live inside fences and walls, and whose hospitals and schools and places of worship are being bombed by the Israeli military because of Hamas’s horrific attacks on Israeli civilians, after decades of worsening repression and deprivation in Gaza and the West Bank by the Israeli government.

We don’t need the internet to make our world a hell. But I don’t think we should forgive ourselves for letting our technology make the world worse.

I want to make our technologies into better tools for the many, many people devoted to building the kinds of human-level solidarity and connection that can get more of us through our present disasters to life on the other side.

https://erinkissane.com/meta-in-myanmar-part-iv-only-connect
Meta in Myanmar, Part III: The Inside View

“Well, Congressman, I view our responsibility as not just building services that people like to use, but making sure that those services are also good for people and good for society overall.” — Mark Zuckerberg, 2018

In the previous two posts in this series, I did a long but briskly paced early history of Meta and the internet in Myanmar—and the hateful and dehumanizing speech that came with it—and then looked at what an outside-the-company view could reveal about Meta’s role in the genocide of the Rohingya in 2016 and 2017.

In this post, I’ll look at what two whistleblowers and a crucial newspaper investigation reveal about what was happening inside Meta at the time. Specifically, the disclosed information:

  • gives us a quantitative view of Meta’s content moderation performance—which, in turn, highlights a deceptive PR routine Meta uses when questioned about moderation;
  • clarifies what Meta knew about the effects of its algorithmic recommendation systems; and
  • reveals a parasitic takeover of the Facebook platform by covert influence campaigns around the world—including in Myanmar.

Before we get into that, a brief personal note. There are few ways to be in the world that I enjoy less than “breathless conspiratorial.” That rhetorical mode muddies the water when people most need clarity and generates an emotional charge that works against effective decision-making. I really don’t like it. So it’s been unnerving to synthesize a lot of mostly public information and come up with results that wouldn’t look completely out of place in one of those overwrought threads.

I don’t know what to do with that except to be forthright but not dramatic, and to treat my readers’ endocrine systems with respect by avoiding needless flourishes. But the story is just rough and many people in it do bad things. (You can read my meta-post about terminology and sourcing if you want to see me agonize over the minutiae.)

Content warnings for the post: The whole series is about genocide and hate speech. There are no graphic descriptions or images, and this post includes no slurs or specific examples of hateful and inciting messages, but still. (And there’s a fairly unpleasant photograph of a spider at about the 40% mark.)

The disclosures

When Frances Haugen, a former product manager on Meta’s Civic Integrity team, disclosed a ton of internal Meta docs to the SEC—and several media outlets—in 2021, I didn’t really pay attention. I was pandemic-tired and I didn’t think there’d be much in there that I didn’t know. I was wrong!

Frances Haugen’s disclosures are of generational importance, especially if you’re willing to dig down past the US-centric headlines. Haugen has stated that she came forward because of things outside the US—Myanmar and its horrific echo years later in Ethiopia, specifically, and the likelihood that it would all just keep happening. So it makes sense that the docs she disclosed would be highly relevant, which they are.

There are eight disclosures in the bundle of information Haugen delivered via lawyers to the SEC, and each is about one specific way Meta “misled investors and the public.” Each disclosure takes the form of a letter (which probably has a special legal name I don’t know) and a huge stack of primary documents. The majority of those documents—internal posts, memos, emails, comments—haven’t yet been made public, but the letters themselves include excerpts, and subsequent media coverage and straightforward doc dumps have revealed a little bit more. When I cite the disclosures, I’ll point to the place where you can read the longest chunk of primary text—often that’s just the little excerpts in the letters, but sometimes we have a whole—albeit redacted—document to look at.

In the fourth post in the series, I’ll say a little about what the documents reveal about Meta’s culture. Here, I’ll just note that many people inside Meta clearly tried to make things better. How well that worked is another question.

Before continuing, I think it’s only fair to note that the disclosures we see in public are necessarily those that run counter to Meta’s public statements, because otherwise there would be no need to disclose them. And because we’re only getting excerpts, there’s obviously a ton of context missing—including, presumably, dissenting internal views. I’m not interested in making a handwavey case based on one or two people inside a company making wild statements. So I’m only emphasizing points that are supported in multiple, specific excerpts.

Let’s start with content moderation and what the disclosures have to say about it.

How much dangerous stuff gets taken down?

We don’t know how much “objectionable content” is actually on Facebook—or on Instagram, or Twitter, or any other big platform. The companies running those platforms don’t know the exact numbers either, but what they do have are reasonably accurate estimates. We know they have estimates because sampling and human-powered data classification is how you train the AI classifiers required to do content-based moderation—removing posts and comments—at mass scale. And that process necessarily lets you estimate from your samples roughly how much of a given kind of problem you’re seeing. (This is pretty common knowledge, but it’s also confirmed in an internal doc I quote below.)
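To make the mechanics concrete, here’s a minimal sketch of that estimation exercise. Everything in it is illustrative: `posts` and `label_fn` are hypothetical stand-ins for a platform’s content store and its human raters, not anything from Meta’s actual tooling.

```python
import math
import random

def estimate_prevalence(posts, label_fn, sample_size=10_000):
    """Estimate the share of posts that violate a policy by labeling
    a uniform random sample -- the same sampling-and-labeling exercise
    that produces training data for moderation classifiers."""
    sample = random.sample(posts, sample_size)
    violating = sum(1 for post in sample if label_fn(post))
    p = violating / sample_size
    # 95% confidence interval via the normal approximation
    margin = 1.96 * math.sqrt(p * (1 - p) / sample_size)
    return p, margin
```

The point is just that any company doing sampling and labeling at this scale gets a prevalence estimate more or less for free: the same labeled sample that trains the classifier tells you roughly how common the problem is.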

The platforms aren’t sharing those estimates with us because no one’s forcing them to. And probably also because, based on what we’ve seen from the disclosures, the numbers are quite bad. So I want to look at how bad they are, or recently were, on Facebook. Alongside that, I want to point out the most common way Meta distracts reporters and governing bodies from its terrible stats, because I think it’s a very useful thing to be able to spot.

One of Frances Haugen’s SEC disclosure letters is about Meta’s failures to moderate hate speech. It’s helpfully titled, “Facebook misled investors and the public about ‘transparency’ reports boasting proactive removal of over 90% of identified hate speech when internal records show that ‘as little as 3-5% of hate’ speech is actually removed.”1

Here’s the excerpt from the internal Meta document from which that “3–5%” figure is drawn:

…we’re deleting less than 5% of all of the hate speech posted to Facebook. This is actually an optimistic estimate—previous (and more rigorous) iterations of this estimation exercise have put it closer to 3%, and on V&I [violence and incitement] we’re deleting somewhere around 0.6%…we miss 95% of violating hate speech.2

Here’s another quote from a different memo excerpted in the same disclosure letter:

[W]e do not … have a model that captures even a majority of integrity harms, particularly in sensitive areas … We only take action against approximately 2% of the hate speech on the platform. Recent estimates suggest that unless there is a major change in strategy, it will be very difficult to improve this beyond 10-20% in the short-medium term.3

Another estimate from a third internal document:

We seem to be having a small impact in many language-country pairs on Hate Speech and Borderline Hate, probably ~3% … We are likely having little (if any) impact on violence.4

Here’s a fourth one, specific to a study about Facebook in Afghanistan, which I include to help contextualize the global numbers:

While Hate Speech is consistently ranked as one of the top abuse categories in the Afghanistan market, the action rate for Hate Speech is worryingly low at 0.23 per cent.5

I don’t think these figures need a ton of commentary, honestly. I would agree that removing less than a quarter of a percent of hate speech is indeed “worryingly low,” as is removing 0.6% of violence and incitement messages. I think removing even 5% of hate speech—the highest number cited in the disclosures—is objectively terrible performance, and I think most people outside of the tech industry would agree with that. Which is presumably why Meta has put a ton of work into muddying the waters around content moderation.

So back to that SEC letter with the long name. It points something out, which is that Meta has long claimed that Facebook “proactively” detects between 95% (in 2020, globally) and 98% (in Myanmar, in 2021) of all the posts it removes because they’re hate speech—before users even see them.

At a glance, this looks good. Ninety-five percent is a lot! But since we know from the disclosed material that based on internal estimates the takedown rates for hate speech are at or below 5%, what’s going on here?

Here’s what Meta is actually saying: Sure, they might identify and remove only a tiny fraction of dangerous and hateful speech on Facebook, but of that tiny fraction, their AI classifiers catch about 95–98% before users report it. That’s literally the whole game, here.

So…the most generous number from the disclosed memos has Meta removing 5% of hate speech on Facebook. That would mean that for every 2,000 hateful posts or comments, Meta removes about 100: 95 automatically and 5 via user reports. In this example, 1,900 of the original 2,000 messages remain up and circulating. So based on the generous 5% removal rate, their AI systems nailed…4.75% of hate speech. That’s the level of performance they’re bragging about.
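If it helps, here’s that arithmetic spelled out, using only the figures already cited above—the roughly 5% overall removal rate from the disclosures and Meta’s roughly 95% “proactive” claim:

```python
# Worked example using the most generous disclosed figures.
total_hate_posts = 2_000

removal_rate = 0.05     # share of hate speech removed at all (disclosures)
proactive_share = 0.95  # share of removals flagged by AI first (Meta's claim)

removed = total_hate_posts * removal_rate          # 100 posts
removed_by_ai = removed * proactive_share          # 95 posts
removed_by_report = removed - removed_by_ai        # 5 posts
still_up = total_hate_posts - removed              # 1,900 posts

ai_catch_rate = removed_by_ai / total_hate_posts   # 0.0475, i.e. 4.75%
```

The “95% proactive” number describes only the removed sliver. The denominator that matters is all hate speech, and against that denominator the AI catches under 5%.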

You don’t need to take my word for any of this—Wired ran a critique breaking it down in 2021 and Ranking Digital Rights has a strongly worded post about what Meta claims in public vs. what the leaked documents reveal to be true, including this content moderation math runaround.

Meta does this particular routine all the time.

The shell game

Here’s Mark Zuckerberg on April 10th, 2018, answering a question in front of the Senate’s Commerce and Judiciary committees. He says that hate speech is really hard to find automatically and then pivots to something that he says is a real success, which is “terrorist propaganda,” which he simplifies immediately to “ISIS and Al Qaida content.” But that stuff? No problem:

Contrast [hate speech], for example, with an area like finding terrorist propaganda, which we’ve actually been very successful at deploying A.I. tools on already. Today, as we sit here, 99 percent of the ISIS and Al Qaida content that we take down on Facebook, our A.I. systems flag before any human sees it. So that’s a success in terms of rolling out A.I. tools that can proactively police and enforce safety across the community.6

So that’s 99% of…the unknown percentage of this kind of content that’s actually removed.

Zuckerberg actually tries to do the same thing the next day, April 11th, before the House Energy and Commerce Committee, but he whiffs the maneuver:

…we’re getting good in certain areas. One of the areas that I mentioned earlier was terrorist content, for example, where we now have A.I. systems that can identify and—and take down 99 percent of the al-Qaeda and ISIS-related content in our system before someone—a human even flags it to us. I think we need to do more of that.7

The version Zuckerberg says right there, on April 11th, is what I’m pretty sure most people think Meta means when they go into this stuff—but as stated, it’s a lie.

No one in those hearings presses Zuckerberg on those numbers—and when Meta repeats the move in 2020, plenty of reporters fall into the trap and make untrue claims favorable to Meta:

…between its AI systems and its human content moderators, Facebook says it’s detecting and removing 95% of hate content before anyone sees it. —Fast Company

About 95 percent of hate speech on Facebook gets caught by algorithms before anyone can report it… —Ars Technica

Facebook said it took action on 22.1 million pieces of hate speech content to its platform globally last quarter and about 6.5 million pieces of hate speech content on Instagram. On both platforms, it says about 95% of that hate speech was proactively identified and stopped by artificial intelligence. —Axios

The company said it now finds and eliminates about 95% of the hate speech violations using automated software systems before a user ever reports them… —Bloomberg

This is all not just wrong but wildly wrong if you have the internal numbers in front of you.

I’m hitting this point so hard not because I want to point out ~corporate hypocrisy~ or whatever, but because this deceptive runaround is consequential for two reasons: The first is that it provides instructive context about how to interpret Meta’s public statements. The second is that it actually says extremely dire things about Meta’s only hope for content-based moderation at scale, which is their AI-based classifiers.

Here’s Zuckerberg saying as much to a congressional committee:

…one thing that I think is important to understand overall is just the sheer volume of content on Facebook makes it so that we can’t—no amount of people that we can hire will be enough to review all of the content.… We need to rely on and build sophisticated A.I. tools that can help us flag certain content.8

This statement is kinda disingenuous in a couple of ways, but the central point is true: the scale of these platforms makes human review incredibly difficult. And Meta’s reasonable-sounding explanation is that this means they have to focus on AI. But by their own internal estimates, Meta’s AI classifiers are only identifying something in the range of 4.75% of hate speech on Facebook, and often considerably less. That seems like a dire stat for the thing you’re putting forward to Congress as your best hope!

The same disclosed internal memo that told us Meta was deleting between 3% and 5% of hate speech had this to say about the potential of AI classifiers to handle mass-scale content removals:

[O]ur current approach of grabbing a hundred thousand pieces of content, paying people to label them as Hate or Not Hate, training a classifier, and using it to automatically delete content at 95% precision is just never going to make much of a dent.9
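That “95% precision” phrasing is easy to misread as coverage, so here’s the distinction in miniature. Precision is the share of deleted content that really violates; recall is the share of violating content that gets deleted. The numbers below are invented, chosen only to line up with the ~5% figure above:

```python
# Toy numbers illustrating precision vs. recall.
actual_hate = 100_000      # hypothetical violating posts on the platform
auto_deleted = 5_000       # posts the classifier deletes
correctly_deleted = 4_750  # deletions that really were hate speech

precision = correctly_deleted / auto_deleted  # 0.95   (what gets cited)
recall = correctly_deleted / actual_hate      # 0.0475 (the "dent")
```

A classifier can hold 95% precision indefinitely while its recall—the only number that measures the dent—stays under 5%.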

Getting content moderation to work for even extreme and widely reviled categories of speech is obviously genuinely difficult, so I want to be extra clear about a foundational piece of my argument.

Responsibility for the machine

I think that if you make a machine and hand it out for free to everyone in the world, you’re at least partially responsible for the harm that the machine does.

“It’s very difficult” perhaps doesn’t carry as much weight when you’re pulling in $40 billion in annual profits, as Meta did in 2021. Is it forty billion dollars difficult?

Also, even if you say, “but it’s very difficult to make the machine safer!” I don’t think that reduces your responsibility so much as it makes you look shortsighted and bad at machines.

Beyond the bare fact of difficulty, though, I think the more the harm a machine does deviates from what people might expect of a machine that looks like this, the more responsibility you bear: If you offer everyone in the world a grenade, I think that’s bad, but it also won’t be surprising when people who take the grenade get hurt or hurt someone else. But when you offer everyone a cute little robot assistant that turns out to be easily repurposed as a rocket launcher, I think that falls into another category.

Especially if you see that people are using your cute little robot assistant to murder thousands of people and elect not to disarm it because that would make it a little less cute.

This brings us to the algorithms.

“Core product mechanics”

Screencap of an internal Meta document titled “Facebook and responsibility,” with a header image of the Bart Simpson writing-on-the-chalkboard meme, in which the writing on the board reads, “Facebook is responsible for ranking and recommendations!”

From a screencapped version of “Facebook and responsibility,” one of the disclosed internal documents.

In the second post in this series, I quoted people in Myanmar who were trying to cope with an overwhelming flood of hateful and violence-inciting messages. It felt obvious on the ground that the worst, most dangerous posts were getting the most juice.

Thanks to the Haugen disclosures, we can confirm that this was also understood inside Meta.

The disclosed documents largely come from the years shortly after the peak of the genocide of the Rohingya in Myanmar. They’re almost all retrospective, so I think they’re highly applicable to the period I’ve been looking at.

In 2019, a Meta employee wrote a memo called “What is Collateral damage.” It included these statements (my emphasis):

“We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook and the family of apps are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.

If integrity takes a hands-off stance for these problems, whether for technical (precision) or philosophical reasons, then the net result is that Facebook, taken as a whole, will be actively (if not necessarily consciously) promoting these types of activities. The mechanics of our platform are not neutral.”10

If you work in tech or if you’ve been following mainstream press accounts about Meta over the years, you presumably already know this, but I think it’s useful to establish this piece of the internal conversation.

Here’s a long breakdown from 2020 about the specific parts of the platform that actively put “unconnected content”—messages that aren’t from friends or Groups people subscribe to—in front of Facebook users. It comes from an internal post called “Facebook and responsibility” (my emphasis):

Facebook is most active in delivering content to users on recommendation surfaces like “Pages you may like,” “Groups you should join,” and suggested videos on Watch. These are surfaces where Facebook delivers unconnected content. Users don’t opt-in to these experiences by following other users or Pages. Instead, Facebook is actively presenting these experiences…

News Feed ranking is another way Facebook becomes actively involved in these harmful experiences. Of course users also play an active role in determining the content they are connected to through feed, by choosing who to friend and follow. Still, when and whether a user sees a piece of content is also partly determined by the ranking scores our algorithms assign, which are ultimately under our control. This means, according to ethicists, Facebook is always at least partially responsible for any harmful experiences on News Feed.

This doesn’t owe to any flaw with our News Feed ranking system, it’s just inherent to the process of ranking. To rank items in Feed, we assign scores to all the content available to a user and then present the highest-scoring content first. Most feed ranking scores are determined by relevance models. If the content is determined to be an integrity harm, the score is also determined by some additional ranking machinery to demote it lower than it would have appeared given its score. Crucially, all of these algorithms produce a single score; a score Facebook assigns. Thus, there is no such thing as inaction on Feed. We can only choose to take different kinds of actions.11
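Taking the memo at its word, the ranking it describes reduces to something like the sketch below. The demotion factor and field names are my invention; the structure—one score per item, demotion folded into that same score, highest score shown first—is what the memo describes:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    relevance: float      # score from relevance models
    integrity_harm: bool  # flagged by integrity classifiers

DEMOTION_FACTOR = 0.5     # hypothetical; the real demotion machinery is opaque

def feed_score(post: Post) -> float:
    # Every post gets exactly one score, and the platform assigns it;
    # demoting is an action, and so is declining to demote.
    score = post.relevance
    if post.integrity_harm:
        score *= DEMOTION_FACTOR
    return score

def rank_feed(posts: list[Post]) -> list[Post]:
    # Highest-scoring content is presented first.
    return sorted(posts, key=feed_score, reverse=True)
```

Which is the memo’s point: because every item on Feed carries a score the platform assigned, “there is no such thing as inaction on Feed.”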

The next few quotes will apply directly to US concerns, but they’re clearly broadly applicable to the 90% of Facebook users who are outside the US and Canada, and whose disinfo concerns receive vastly fewer resources.

This one is from an internal Meta doc from November 5, 2020:

Not only do we not do something about combustible election misinformation in comments, we amplify them and give them broader distribution.12

When Meta staff tried to take the measure of their own recommendation systems’ behavior, they found that the systems led a fresh, newly made account into disinfo-infested waters very quickly:

After a small number of high quality/verified conservative interest follows… within just one day Page recommendations had already devolved towards polarizing content.

Although the account set out to follow conservative political news and humor content generally, and began by following verified/high quality conservative pages, Page recommendations began to include conspiracy recommendations after only 2 days (it took <1 week to get a QAnon recommendation!)

Group recommendations were slightly slower to follow suit - it took 1 week for in-feed GYSJ recommendations to become fully political/right-leaning, and just over 1 week to begin receiving conspiracy recommendations.13

The same document reveals that several of the Pages and Groups Facebook’s systems recommend to its test user show multiple signs of association with “coordinated inauthentic behavior,” aka foreign and domestic covert influence campaigns, which we’ll get to very soon.

Before that, I want to offer just one example of algorithmic malpractice from Myanmar.

Flower speech

Panzagar campaign illustration depicting a Burmese girl with thanaka on her cheek and illustrated flowers coming from her mouth, flowing toward speech bubbles labeled with the names of Burmese towns and cities.

Back in 2014, Burmese organizations including MIDO and Yangon-based tech accelerator Phandeeyar collaborated on a carefully calibrated counter-speech project called Panzagar (flower speech). The campaign—which was designed to be delivered in person, in printed materials, and online—encouraged ordinary Burmese citizens to push back on hate speech in Myanmar.

Later that year, Meta, which had just been implicated in the deadly communal violence in Mandalay, joined with the Burmese orgs to turn their imagery into digital Facebook stickers that users could apply to posts calling for things like the annihilation of the Rohingya people. The stickers depict cute cartoon characters, several of which offer admonishments like, “Don’t be the source of a fire,” “Think before you share,” “Don’t you be spawning hate,” and “Let it go buddy!”

The campaign was widely and approvingly covered by western organizations and media outlets, and Meta got a lot of praise for its involvement.

But according to members of the Burmese civil society coalition behind the campaign, it turned out that the Panzagar Facebook stickers—which were explicitly designed as counterspeech—“carried significant weight in their distribution algorithm,” so anyone who used them to counter hateful and violent messages inadvertently helped those messages gain wider distribution.14
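We don’t know the actual weights involved, but the failure mode is easy to sketch. If a distribution algorithm treats every interaction as a positive signal, then attaching a counterspeech sticker to a hateful post is, mechanically, just more engagement. All numbers and signal names below are invented for illustration:

```python
# Hypothetical engagement-weighted distribution scoring.
ENGAGEMENT_WEIGHTS = {
    "like": 1.0,
    "comment": 2.0,
    "share": 3.0,
    "sticker": 2.5,  # "carried significant weight," per the coalition
}

def distribution_score(engagements: dict[str, int]) -> float:
    # Every interaction, including counterspeech, raises the score.
    return sum(ENGAGEMENT_WEIGHTS[kind] * count
               for kind, count in engagements.items())

hateful_post = {"like": 40, "share": 10, "sticker": 25}
print(distribution_score(hateful_post))  # 132.5 -- stickers added 62.5
```

Under weights like these, the 25 counterspeech stickers contribute nearly half of the hateful post’s distribution score—exactly the backfire the coalition described.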

I mention the Panzagar incident not only because it’s such a head-smacking example of Meta favoring cosmetic, PR-friendly tweaks over meaningful redress, or because it reveals plain incompetence in the face of already-serious violence, but also because it gets to what I see as a genuinely foundational problem with Meta in Myanmar.

Even when the company was finally (repeatedly) forced to take notice of the dangers it was contributing to, actions that could actually have made a difference—like rolling out new programs only after local consultation and adaptation, scaling up culturally and linguistically competent human moderation teams in tandem with increasing uptake, and above all, altering the design of the product to stop amplifying the most charged messages—remained not just undone, but unthinkable because they were outside the company’s understanding of what the product’s design should take into consideration.

This refusal to connect core product design with accelerating global safety problems means that attempts at prevention and repair are relegated to window-dressing—or become actively counterproductive, as in the case of the Panzagar stickers, which absorbed the energy and efforts of local Burmese civil society groups and turned them into something that made the situation worse.

In a 2018 interview with Frontline about problems with Facebook, Meta’s former Chief Security Officer, Alex Stamos, returns again and again to the idea that security work properly happens at the product design level. Toward the end of the interview, he gets very clear:

Stamos: I think there was a structural problem here in that the people who were dealing with the downsides were all working together over kind of in the corner, right, so you had the safety and security teams, tight-knit teams that deal with all the bad outcomes, and we didn’t really have a relationship with the people who are actually designing the product.

Interviewer: You did not have a relationship?

Stamos: Not like we should have, right? It became clear—one of the things that became very clear after the election was that the problems that we knew about and were dealing with before were not making it back into how these products are designed and implemented.15

Meta’s content moderation was a disaster in Myanmar—and around the world—not only because it was treated and staffed like an afterthought, but because it was competing against Facebook’s core machinery.

And just as the house always wins, the core machinery of a mass-scale product built to boost engagement always defeats retroactive and peripheral attempts at cleanup.

This is especially true once organized commercial and nation-state actors figured out how to take over that machinery with large-scale fake Page networks boosted by fake engagement, which brings us to a less-discussed revelation: By the mid-2010s, Facebook had effectively become the equivalent of a botnet in the hands of any group, governmental or commercial, that could summon the will and resources to exploit it.

A lot of people did, including, predictably, some of the worst people in the world.

Meta’s zombie networks

Ophiocordyceps formicarum observed at the Mushroom Research Centre, Chiang Mai, Thailand; Steve Axford (CC BY-SA 3.0)

Content warning: The NYT article I link to below is important, but it includes photographs of mishandled bodies, including those of children. If you prefer not to see those, a “reader view” or equivalent may remove the images. (Sarah Sentilles’ 2018 article on which kinds of bodies US newspapers put on display may be of interest.)

In 2018, the New York Times published a front-page account of what really happened on Facebook in Myanmar, which is that beginning around 2013, Myanmar’s military, the Tatmadaw, set up a dedicated, ultra-secret anti-Rohingya hatefarm spread across military bases in which up to 700 staffers worked in shifts to manufacture the appearance of overwhelming support for the genocide the same military then carried out.16

When the NYT did their investigation in 2018, all those fake Pages were still up.

Here’s how it worked: First, the military set up a sprawling network of fake accounts and Pages on Facebook. The fake accounts and Pages focused on innocuous subjects like beauty, entertainment, and humor. These Pages were called things like, “Beauty and Classic,” “Down for Anything,” “You Female Teachers,” “We Love Myanmar,” and “Let’s Laugh Casually.” Then military staffers, some trained by Russian propaganda specialists, spent years tending the Pages and gradually building up followers.17

Then, using this array of long-nurtured fake Pages—and Groups, and accounts—the Tatmadaw’s propagandists used everything they’d learned about Facebook’s algorithms to post and boost viral messages that cast Rohingya people as part of a global Islamic threat, and as the perpetrators of a never-ending stream of atrocities. The Times reports:

Troll accounts run by the military helped spread the content, shout down critics and fuel arguments between commenters to rile people up. Often, they posted sham photos of corpses that they said were evidence of Rohingya-perpetrated massacres…18

That the Tatmadaw was capable of such a sophisticated operation shouldn’t have come as a surprise. Longtime Myanmar digital rights and technology researcher Victoire Rio notes that the Tatmadaw had been openly sending its officers to study in Russia since 2001, was “among the first adopters of the Facebook platform in Myanmar” and launched “a dedicated curriculum as part of its Defense Service Academy Information Warfare training.”19

What these messages did

I don’t have the access required to sort out which specific messages originated from extremist religious networks vs. which were produced by military operations, but I’ve seen a lot of the posts and comments central to these overlapping campaigns in the UN documents and human rights reports.

They do some very specific things:

  • They dehumanize the Rohingya: The Facebook messages speak of the Rohingya as invasive species that outbreed Buddhists and Myanmar’s real ethnic groups. There are a lot of bestiality images.
  • They present the Rohingya as inhumane, as sexual predators, and as an immediate threat: There are a lot of graphic photos of mangled bodies from around the world, most of them presented as Buddhist victims of Muslim killers—usually Rohingya. There are a lot of posts about Rohingya men raping, forcibly marrying, beating, and murdering Buddhist women. One post that got passed around a lot includes a graphic photo of a woman tortured and murdered by a Mexican cartel, presented as a Buddhist woman in Myanmar murdered by the Rohingya.
  • They connect the Rohingya to the “global Islamic threat”: There’s a lot of equating Rohingya people with ISIS terrorists and assigning them group responsibility for real attacks and atrocities by distant Islamic terror organizations.

Ultimately, all of these moves flow into demands for violence. The messages call incessantly and graphically for mass killings, beatings, and forced deportations. They call not for punishment, but annihilation.

This is, literally, textbook preparation for genocide, and I want to take a moment to look at how it works.

Helen Fein is the author of several definitive books on genocide, a co-founder and first president of the International Association of Genocide Scholars, and the founder of the Institute for the Study of Genocide. I think her description of the ways genocidaires legitimize their attacks holds up extremely well despite having been published 30 years ago. Here, she classifies a specific kind of rhetoric as one of the defining characteristics of genocide:

Is there evidence of an ideology, myth, or an articulated social goal which enjoins or justifies the destruction of the victim? Besides the above, observe religious traditions of contempt and collective defamation, stereotypes, and derogatory metaphor indicating the victim is inferior, subhuman (animals, insects, germs, viruses) or superhuman (Satanic, omnipotent), or other signs that the victims were pre-defined as alien, outside the universe of obligation of the perpetrator, subhuman or dehumanized, or the enemy—i.e., the victim needs to be eliminated in order that we may live (Them or Us).20

It’s also necessary for genocidaires to make claims—often supported by manufactured evidence—that the targeted group itself is the true danger, often by projecting genocidal intent onto the group that will be attacked.

Adam Jones, the guy who wrote a widely used textbook on genocide, puts it this way:

One justifies genocidal designs by imputing such designs to perceived opponents. The Tutsis/Croatians/Jews/Bolsheviks must be killed because they harbor intentions to kill us, and will do so if they are not stopped/prevented/annihilated. Before they are killed, they are brutalized, debased, and dehumanized—turning them into something approaching “subhumans” or “animals” and, by a circular logic, justifying their extermination.21

So before their annihilation, the target group is presented as outcast, subhuman, vermin, but also themselves genocidal—a mortal threat. And afterward, the extraordinary cruelties characteristic of genocide reassure those committing the atrocities that their victims aren’t actually people.

The Tatmadaw committed atrocities in Myanmar. I touched on them in Part II and I’m not going to detail them here. But the figuratively dehumanizing rhetoric I described in parts one and two can’t be separated from the literally dehumanizing things the Tatmadaw did to the humans they maimed and traumatized and killed. Especially now that it’s clear that the military was behind much of the rhetoric as well as the violent actions that rhetoric worked to justify.

In some cases, even the methods match up: The military’s campaign of intense and systematic sexual violence toward and mutilation of women and girls, combined with the concurrent mass murder of children and babies, feels inextricably connected to the rhetoric that cast the Rohingya as both a sexual and reproductive threat who endanger the safety of Buddhist women and outbreed the ethnicities that belong in Myanmar.

Genocidal communications are an inextricable part of a system that turns “ethnic tensions” into mass death. When we see that the Tatmadaw was literally the operator of covert hate and dehumanization propaganda networks on Facebook, I think the most rational way to understand those networks is as an integral part of the genocidal campaign.


After the New York Times article went live, Meta did two big takedowns. Nearly four million people were following the fake Pages identified by either the NYT or by Meta in follow-up investigations. (Meta had previously removed the Tatmadaw’s own official Pages and accounts and 46 “news and opinion” Pages that turned out to be covertly operated by the military—those Pages were followed by nearly 12 million people.)

So given these revelations and disclosures, here’s my question: Does the deliberate, adversarial use of Facebook by Myanmar’s military as a platform for disinformation and propaganda take any of the heat off of Meta? After all, a sovereign country’s military is a significant adversary.

But here’s the thing—Alex Stamos, Facebook’s Chief Security Officer, had been trying since 2016 to get Meta’s management and executives to acknowledge and meaningfully address the fact that Facebook was being used as host for both commercial and state-sponsored covert influence ops around the world. Including in the only place where it was likely to get the company into really hot water: the United States.

“Oh fuck”

On December 16, 2016, Facebook’s newish Chief Security Officer, Alex Stamos—who now runs Stanford’s Internet Observatory—rang Meta’s biggest alarm bells by calling an emergency meeting with Mark Zuckerberg and other top-level Meta executives.

In that meeting, documented in Sheera Frenkel and Cecilia Kang’s book, An Ugly Truth, Stamos handed out a summary outlining the Russian capabilities. It read:

We assess with moderate to high confidence that Russian state-sponsored actors are using Facebook in an attempt to influence the broader political discourse via the deliberate spread of questionable news articles, the spread of information from data breaches intended to discredit, and actively engaging with journalists to spread said stolen information.22

“Oh fuck, how did we miss this?” Zuckerberg responded.

This whole section draws information from Frenkel and Kang’s reporting in An Ugly Truth. For context, they’re both New York Times reporters—Frenkel previously covered Facebook and other tech companies for BuzzFeed News, and Kang did the same at the Washington Post.

Stamos’ team had also uncovered “a huge network of false news sites on Facebook” posting and cross-promoting sensationalist bullshit, much of it political disinformation, along with examples of governmental propaganda operations from Indonesia, Turkey, and other nation-state actors. And the team had recommendations on what to do about it.

Frenkel and Kang paraphrase Stamos’ message to Zuckerberg (my emphasis):

Facebook needed to go on the offensive. It should no longer merely monitor and analyze cyber operations; the company had to gear up for battle. But to do so required a radical change in culture and structure. Russia’s incursions were missed because departments across Facebook hadn’t communicated and because no one had taken the time to think like Vladimir Putin.23

Those changes in culture and structure didn’t happen. Stamos began to realize that to Meta’s executives, his work uncovering the foreign influence networks, and his choice to bring them to the executives’ attention, were both unwelcome and deeply inconvenient.

All through the spring and summer of 2017, instead of retooling to fight the massive international category of abuse Stamos and his colleagues had uncovered, Facebook played hot potato with the information about the ops Russia had already run.

On September 21, 2017, while the Tatmadaw’s genocidal “clearance operations” were approaching their completion, Mark Zuckerberg finally spoke publicly about the Russian influence campaign for the first time.24

In the intervening months, the massive covert influence networks operating in Myanmar ground along, unnoticed.

Thanks to Sophie Zhang, a data scientist who spent two years at Facebook fighting to get networks like the Tatmadaw’s removed, we know quite a lot about why.

What Sophie Zhang found

In 2018, Facebook hired a data scientist named Sophie Zhang and assigned her to a new team working on fake engagement—and specifically on “scripted inauthentic activity,” or bot-driven fake likes and shares.

Within her first year on the team, Zhang began finding examples of bot-driven engagement being used for political messages in both Brazil and India ahead of their national elections. Then she found something that concerned her a lot more. Karen Hao of the MIT Technology Review writes:

The administrator for the Facebook page of the Honduran president, Juan Orlando Hernández, had created hundreds of pages with fake names and profile pictures to look just like users—and was using them to flood the president’s posts with likes, comments, and shares. (Facebook bars users from making multiple profiles but doesn’t apply the same restriction to pages, which are usually meant for businesses and public figures.)

The activity didn’t count as scripted, but the effect was the same. Not only could it mislead the casual observer into believing Hernández was more well-liked and popular than he was, but it was also boosting his posts higher up in people’s newsfeeds. For a politician whose 2017 reelection victory was widely believed to be fraudulent, the brazenness—and implications—were alarming.25

When Zhang brought her discovery back to the teams working on Pages Integrity and News Feed Integrity, both refused to act, either to stop fake Pages from being created, or to keep the fake engagement signals the fake Pages generate from making posts go viral.

But Zhang kept at it, and after a year, Meta finally removed the Honduran network. The very next day, Zhang reported a network of fake Pages in Albania. The Guardian’s Julia Carrie Wong explains what came next:

In August, she discovered and filed escalations for suspicious networks in Azerbaijan, Mexico, Argentina and Italy. Throughout the autumn and winter she added networks in the Philippines, Afghanistan, South Korea, Bolivia, Ecuador, Iraq, Tunisia, Turkey, Taiwan, Paraguay, El Salvador, India, the Dominican Republic, Indonesia, Ukraine, Poland and Mongolia.26

For a much more recent example of Meta refusing to remove fake-Page networks and coordinated influence campaigns connected to high-profile accounts, see this September 27, 2023 Washington Post article on how Meta knowingly let covert coordinated influence campaigns run loose on Facebook because they were run by the Indian army.

According to Zhang, Meta eventually established a policy against “inauthentic behavior,” but didn’t enforce it, and rejected Zhang’s proposal to punish repeat fake-Page creators by banning their personal accounts because of policy staff’s “discomfort with taking action against people connected to high-profile accounts.”27

Zhang discovered that even when she took initiative to track down covert influence campaigns, the teams who could take action to remove them didn’t—not without persistent “lobbying.” So Zhang tried harder. Here’s Karen Hao again:

She was called upon repeatedly to help handle emergencies and praised for her work, which she was told was valued and important.

But despite her repeated attempts to push for more resources, leadership cited different priorities. They also dismissed Zhang’s suggestions for a more sustainable solution, such as suspending or otherwise penalizing politicians who were repeat offenders. It left her to face a never-ending firehose: The manipulation networks she took down quickly came back, often only hours or days later. “It increasingly felt like I was trying to empty the ocean with a colander,” she says.28

Julia Carrie Wong’s Guardian piece reveals something interesting about Zhang’s reporting chain, which is that Meta’s Vice President of Integrity, Guy Rosen, was one of the people giving her the hardest pushback.

Remember Internet.org, also known as Free Basics, aka Meta’s push to dominate global internet use in all those countries it would go on to “deprioritize” and generally ignore?

Guy Rosen, Meta’s then-newish VP of Integrity, is the guy who previously ran Internet.org. He came to lead Integrity directly from being VP of Growth. Before getting acquihired by Meta, Rosen co-founded a company The Information describes as “a startup that analyzed what people did on their smartphones.”29

Meta bought that startup in 2013, nominally because it would help Internet.org. In a very on-the-nose development, Rosen’s company’s supposedly privacy-protecting VPN software allowed Meta to collect huge amounts of data—so much that Apple booted the app from its store.

So that’s Facebook’s VP of Integrity.

“We simply didn’t care enough to stop them”

In the Guardian, Julia Carrie Wong reports that in the fall of 2019, Zhang discovered that the Honduras network was back up, and she couldn’t get Meta’s Threat Intelligence team to deal with it. That December, she posted an internal memo about it. Rosen responded:

Facebook had “moved slower than we’d like because of prioritization” on the Honduras case, Rosen wrote. “It’s a bummer that it’s back and I’m excited to learn from this and better understand what we need to do systematically,” he added. But he also chastised her for making a public [public as in within Facebook —EK] complaint, saying: “My concern is that threads like this can undermine the people that get up in the morning and do their absolute best to try to figure out how to spend the finite time and energy we all have and put their heart and soul into it.”[31]

You can go read the MIT Technology Review piece and the Guardian piece, which have far more managerial quotes than I can fit in here, all of them hair-raising.

In a private follow-up conversation (still in December, 2019), Zhang alerted Rosen that she’d been told that the Facebook Threat Intelligence team would only prioritize fake networks affecting “the US/western Europe and foreign adversaries such as Russia/Iran/etc.”

Rosen told her that he agreed with those priorities. Zhang pushed back (my emphasis):

I get that the US/western Europe/etc is important, but for a company with effectively unlimited resources, I don’t understand why this cannot get on the roadmap for anyone … A strategic response manager told me that the world outside the US/Europe was basically like the wild west with me as the part-time dictator in my spare time. He considered that to be a positive development because to his knowledge it wasn’t covered by anyone before he learned of the work I was doing.

Rosen replied, “I wish resources were unlimited.”30

I’ll quote Wong’s next passage in full: “At the time, the company was about to report annual operating profits of $23.9bn on $70.7bn in revenue. It had $54.86bn in cash on hand.”

In early 2020, Zhang’s managers told her she was all done tracking down influence networks—it was time she got back to hunting and erasing “vanity likes” from bots instead.

But Zhang believed that if she stopped, no one else would hunt down big, potentially consequential covert influence networks. So she kept doing at least some of it, including advocating for action on an inauthentic Azerbaijan network that appeared to be connected to the country’s ruling party. In an internal group, she wrote that “Unfortunately, Facebook has become complicit by inaction in this authoritarian crackdown.”

Although we conclusively tied this network to elements of the government in early February, and have compiled extensive evidence of its violating nature, the effective decision was made not to prioritize it, effectively turning a blind eye.31

After those messages, Threat Intelligence decided to act on the network after all.

Then Meta fired Zhang for poor performance.

On her way out the door, Zhang posted a long exit memo—7,800 words—describing what she’d seen. Meta deleted it, so Zhang put up a password-protected version on her own website so her colleagues could see it. So Meta got Zhang’s entire website taken down and her domain deactivated. Eventually Meta faced enough employee pressure that it put an edited version back up on its internal site.32

Shortly thereafter, someone leaked the memo to Buzzfeed News.

In the memo, Zhang wrote:

I’ve found multiple blatant attempts by foreign national governments to abuse our platform on vast scales to mislead their own citizenry, and caused international news on multiple occasions. I have personally made decisions that affected national presidents without oversight, and taken action to enforce against so many prominent politicians globally that I’ve lost count.33

And: “[T]he truth was, we simply didn’t care enough to stop them.”

On her final day at Meta, Zhang left notes for her colleagues, tallying suspicious accounts involved in political influence campaigns that needed to be investigated:

There were 200 suspicious accounts still boosting a politician in Bolivia, she recorded; 100 in Ecuador, 500 in Brazil, 700 in Ukraine, 1,700 in Iraq, 4,000 in India and more than 10,000 in Mexico.34

“With all due respect”

Zhang’s work at Facebook happened after the wrangling over Russian influence ops that Alex Stamos’ team found. And after the genocide in Myanmar. And after Mark Zuckerberg did his press-and-government tour about how hard Meta tried and how much better they’d do after Myanmar.35

It was an entire calendar year after the New York Times found the Tatmadaw’s genocide-fueling fake-Page hatefarm that Guy Rosen, Facebook’s VP of Integrity, told Sophie Zhang that the only coordinated fake networks Facebook would take down were the ones that affected the US, Western Europe, and “foreign adversaries.”36

In response to Zhang’s disclosures, Rosen later hopped onto Twitter to deliver his personal assessment of the networks Zhang found and couldn’t get removed:

With all due respect, what she’s described is fake likes—which we routinely remove using automated detection. Like any team in the industry or government, we prioritize stopping the most urgent and harmful threats globally. Fake likes is not one of them.

One of Frances Haugen’s disclosures includes an internal memo that summarizes Meta’s actual, non-Twitter-snark awareness of the way Facebook has been hollowed out for routine use by covert influence campaigns:

We frequently observe highly-coordinated, intentional activity on the FOAS [Family of Apps and Services] by problematic actors, including states, foreign actors, and actors with a record of criminal, violent or hateful behaviour, aimed at promoting social violence, promoting hate, exacerbating ethnic and other societal cleavages, and/or delegitimizing social institutions through misinformation. This is particularly prevalent—and problematic—in At Risk Countries and Contexts.37

So, they knew.

Because of Haugen’s disclosures, we also know that in 2020, for the category, “Remove, reduce, inform/measure misinformation on FB Apps, Includes Community Review and Matching”—so, that’s moderation targeting misinformation specifically—only 13% of the total budget went to the non-US countries that provide more than 90% of Facebook’s user base and include all of those At Risk Countries. The other 87% of the budget was reserved for the 10% of Facebook users who live in the United States.38
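
To make the disparity concrete, here’s the back-of-the-envelope arithmetic on those shares. This is my own calculation from the rounded figures above, not a number from the disclosures:

    # Per-user misinformation-moderation spend, from the rounded shares above
    us_budget, us_users = 0.87, 0.10        # 87% of budget, ~10% of users
    intl_budget, intl_users = 0.13, 0.90    # 13% of budget, ~90% of users

    us_per_user = us_budget / us_users          # 8.7
    intl_per_user = intl_budget / intl_users    # ~0.14

    print(us_per_user / intl_per_user)          # ~60: roughly 60x the spend per US user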

In case any of this seems disconnected from the main thread of what happened in Myanmar, here’s what (formerly Myanmar-based) researcher Victoire Rio had to say about covert coordinated influence networks in her extremely good 2020 case study about the role of social media in Myanmar’s violence:

Bad actors spend months—if not years—building networks of online assets, including accounts, pages and groups, that allow them to manipulate the conversation. These inauthentic presences continue to present a major risk in places like Myanmar and are responsible for the overwhelming majority of problematic content.39

Note that Rio says that these inauthentic networks—the exact things Sophie Zhang chased down until she got fired for it—continued to present a major risk in 2020.

It’s time to skip ahead.

Let’s go to Myanmar in 2021, four years after the peak of the genocide. After everything I’ve dealt with in this whole painfully long series so far, it would be fair to assume that Meta would be prioritizing getting everything right in Myanmar. Especially after the coup.

Meta in Myanmar, again (2021)

If you want to understand the coup and its aftermath, you might read the Council on Foreign Relations explainer, an AFAICT competent NYT backgrounder, the Crisis Group’s wonk-grade explainer, or the remarkable first-person “Chronicle of a Coup” blog series at Tea Circle, in which a pseudonymous westerner relates his experience of life in Myanmar during and after the coup. Human Rights Watch is reporting on the junta’s abuses, which they consider crimes against humanity.

In 2021, the Tatmadaw deposed Myanmar’s democratically elected government and transferred the leadership of the country to the military’s Commander-in-Chief. Since then, the military has turned the machines of surveillance, administrative repression, torture, and murder that it refined on the Rohingya and other ethnic minorities onto Myanmar’s Buddhist ethnic Bamar majority.

Also in 2021, Facebook’s director of policy for APAC Emerging Countries, Rafael Frankel, told the Associated Press that Facebook had now “built a dedicated team of over 100 Burmese speakers.”

This “dedicated team” is, presumably, the group of contract workers employed by the Accenture-run “Project Honey Badger” team in Malaysia.40 (Which, Jesus.)

In October of 2021, the Associated Press took a look at how that’s working out on Facebook in Myanmar. Right away, they found threatening and violent posts:

One 2 1/2 minute video posted on Oct. 24 of a supporter of the military calling for violence against opposition groups has garnered over 56,000 views.

“So starting from now, we are the god of death for all (of them),” the man says in Burmese while looking into the camera. “Come tomorrow and let’s see if you are real men or gays.”

One account posts the home address of a military defector and a photo of his wife. Another post from Oct. 29 includes a photo of soldiers leading bound and blindfolded men down a dirt path. The Burmese caption reads, “Don’t catch them alive.”41

That’s where content moderation stood in 2021. What about the algorithmic side of things? Is Facebook still boosting dangerous messages in Myanmar?

In the spring of 2021, Global Witness analysts made a clean Facebook account with no history and searched for တပ်မ​တော်—“Tatmadaw.” They opened the top page in the results, a military fan page, and found no posts that broke Facebook’s new, stricter rules. Then they hit the “like” button, which caused a pop-up with “related pages” to appear. The team then opened the first five recommended pages.

Here’s what they found:

Three of the five top page recommendations that Facebook’s algorithm suggested contained content posted after the coup that violated Facebook’s policies. One of the other pages had content that violated Facebook’s community standards but that was posted before the coup and therefore isn’t included in this article.

Specifically, they found messages that included:

  • Incitement to violence
  • Content that glorifies the suffering or humiliation of others
  • Misinformation that can lead to physical harm42

As well as several kinds of posts that violated Facebook’s new and more specific policies on Myanmar.

So not only were the violent, violence-promoting posts still showing up in Myanmar four years after the atrocities in Rakhine State—and after the Tatmadaw turned the full machinery of its violence onto opposition members of Myanmar’s Buddhist ethnic majority—but Facebook was still funneling users directly into them after even the lightest engagement with anodyne pro-military content.

This is in 2021, with Meta throwing vastly more resources at the problem than it ever did during the period leading up to and including the genocide of the Rohingya people. Its algorithms are still making active recommendations, precisely as outlined in the Meta memos in Haugen’s disclosures.

By any reasonable measure, I think this is a failure.

Meta didn’t respond to requests for comment from Global Witness, but when the Guardian and AP picked up the story, Meta got back to them with…this:

Our teams continue to closely monitor the situation in Myanmar in real-time and take action on any posts, Pages or Groups that break our rules. We proactively detect 99 percent of the hate speech removed from Facebook in Myanmar, and our ban of the Tatmadaw and repeated disruption of coordinated inauthentic behavior has made it harder for people to misuse our services to spread harm.43

One more time: This statement says nothing about how much hate speech is removed. It’s pure misdirection.
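
Here’s a tiny worked example of how both things can be true at once. The 5% removal rate is the top of the “as little as 3-5%” range from the Whistleblower Aid disclosures cited throughout this series; the post volume is invented for illustration:

    # Hypothetical: "99% proactive detection" alongside a ~5% removal rate
    total_hate_posts = 10_000               # invented volume
    removed = total_hate_posts * 5 // 100   # 5% actually removed -> 500
    proactive = removed * 99 // 100         # 99% of *removals* flagged proactively -> 495

    print(removed, proactive)               # 500, 495
    print(proactive / total_hate_posts)     # 0.0495 -> under 5% of the actual hate speech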


Internal Meta memos highlight ways to use Facebook’s algorithmic machinery to sharply reduce the spread of what they called “high-harm misinfo.” For those potentially harmful topics, you “hard demote” (aka “push down” or “don’t show”) reshared posts that were originally made by someone who isn’t friended or followed by the viewer. (Frances Haugen talks about this in interviews as “cutting the reshare chain.”)

And this method works. In Myanmar, “reshare depth demotion” reduced “viral inflammatory prevalence” by 25% and cut “photo misinformation” almost in half.
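
The memos describe the rule but not the ranking math, so here’s a minimal sketch of what reshare depth demotion might look like inside a feed ranker. Every name and the demotion multiplier here is invented for illustration; Meta’s actual ranking stack is not public:

    from dataclasses import dataclass

    HARD_DEMOTE = 0.1  # hypothetical multiplier standing in for "hard demote"

    @dataclass
    class Post:
        original_author: str
        reshare_depth: int   # 0 = original post, 1+ = a reshare (of a reshare...)
        base_score: float    # whatever the rest of the ranking stack produced

    def rank_score(post: Post, viewer_follows: set) -> float:
        """Cut the reshare chain: a reshared post whose original author
        the viewer doesn't follow loses most of its distribution."""
        if post.reshare_depth > 0 and post.original_author not in viewer_follows:
            return post.base_score * HARD_DEMOTE
        return post.base_score

    # A reshare originating with a stranger drops from 1.0 to 0.1:
    print(rank_score(Post("stranger", 2, 1.0), {"a_friend"}))

Note that nothing in the rule inspects the content itself; it keys only on how far a post has traveled from its original author.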

In a reasonable world, I think Meta would have decided to broaden use of this method and work on refining it to make it even more effective. What they did, though, was decide to roll it back within Myanmar as soon as the elections were over.44

The same SEC disclosure I just cited also notes that Facebook’s AI “classifier” for Burmese hate speech didn’t seem to be maintained or in use—and that algorithmic recommendations were still shuttling people toward violent, hateful messages that violated Facebook’s Community Standards.

So that’s how the algorithms were going. How about the military’s covert influence campaign?

Reuters reported in late 2021 that:

As Myanmar’s military seeks to put down protest on the streets, a parallel battle is playing out on social media, with the junta using fake accounts to denounce opponents and press its message that it seized power to save the nation from election fraud…

The Reuters reporters explain that the military has assigned thousands of soldiers to wage “information combat” in what appears to be an expanded, distributed version of its earlier secret propaganda ops:

“Soldiers are asked to create several fake accounts and are given content segments and talking points that they have to post,” said Captain Nyi Thuta, who defected from the army to join rebel forces at the end of February. “They also monitor activity online and join (anti-coup) online groups to track them.” 45

(We know this because Reuters journalists got hold of a high-placed defector from the Tatmadaw’s propaganda wing.)

When asked for comment, Facebook’s regional Director of Public Policy told Reuters that Meta “‘proactively’ detected almost 98 percent of the hate speech removed from its platform in Myanmar.”

“Wasting our lives under tarpaulin”

The Rohingya people forced to flee Myanmar have scattered across the region, but the overwhelming majority of those who fled in 2017 ended up in the Cox’s Bazar region of Bangladesh.

The camps are beyond overcrowded, and they make everyone who lives in them vulnerable to the region’s seasonal flooding, to worsening climate impacts, and to waves of disease. This year, the refugees’ food aid was just cut from the equivalent of $12 a month to $8 a month, because the international community is focused elsewhere.46

The complex geopolitical situation surrounding post-coup Myanmar—in which many western and Asian countries condemn the situation in Myanmar, but don’t act lest they push the Myanmar junta further toward China—seems likely to ensure a long, bloody conflict, with no relief in sight for the Rohingya.47

The UN estimates that more than 960,000 Rohingya refugees now live in refugee camps in Bangladesh. More than half are children, few of whom have had much education at all since coming to the camps six years ago. The UN further estimates that the refugees needed about $70.5 million for education in 2022, of which 1.6% was actually funded.48

Amnesty International spoke with Mohamed Junaid, a 23-year-old Rohingya volunteer math and chemistry teacher, who is also a refugee. He told Amnesty:

Though there were many restrictions in Myanmar, we could still do school until matriculation at least. But in the camps our children cannot do anything. We are wasting our lives under tarpaulin.49

In their report, “The Social Atrocity,” Amnesty wrote that in 2020, seven Rohingya youth organizations based in the refugee camps made a formal application to Meta’s Director of Human Rights. They requested that, given its role in the crises that led to their expulsion from Myanmar, Meta provide just one million dollars in funding to support a teacher-training initiative within the camps—a way to give the refugee children a chance at an education that might someday serve them in the outside world.

Meta got back to the Rohingya youth organizations in 2021, a year in which the company cleared $39.3B in profits:

Unfortunately, after discussing with our teams, this specific proposal is not something that we’re able to support. As I think we noted in our call, Facebook doesn’t directly engage in philanthropic activities.


In 2022, Global Witness came back for one more look at Meta’s operations in Myanmar, this time with eight examples of real hate speech aimed at the Rohingya—actual posts from the period of the genocide, all taken from the UN Human Rights Council findings I’ve been linking to so frequently in this series. They submitted these real-life examples of hate speech to Meta as Burmese-language Facebook advertisements.

Meta accepted all eight ads.50

The final post in this series, Part IV, will be up in about a week. Thank you for reading.


  1. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  2. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  3. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “Demoting on Integrity Signals.”↩︎

  4. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “A first look at the minimum integrity holdout.”↩︎

  5. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated. The quoted part is cited to an internal Meta document called “Afghanistan Hate Speech analysis.”↩︎

  6. “Transcript of Mark Zuckerberg’s Senate hearing,” The Washington Post (which got the transcript via Bloomberg Government), April 10, 2018.↩︎

  7. “Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎

  8. “Transcript of Zuckerberg’s Appearance Before House Committee,” The Washington Post (which got the transcript via Bloomberg Government), April 11, 2018.↩︎

  9. “Facebook Misled Investors and the Public About ‘Transparency’ Reports Boasting Proactive Removal of Over 90% of Identified Hate Speech When Internal Records Show That ‘As Little As 3-5% of Hate’ Speech Is Actually Removed,” Whistleblower Aid, undated.↩︎

  10. “Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated; “Facebook Wrestles With the Features It Used to Define Social Networking,” The New York Times, Oct. 25, 2021. This memo hasn’t been made public even in a redacted form, which is frustrating, but the SEC disclosure and NYT article cited here both contain overlapping but not redundant excerpts from which I was able to reconstruct this slightly longer quote.↩︎

  11. “Facebook and responsibility,” internal Facebook memo, authorship redacted, March 9, 2020, archived at Document Cloud as a series of images.↩︎

  12. “Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated. (Date of the quoted internal memo comes from The Atlantic.)↩︎

  13. “Facebook Misled Investors and the Public About Its Role Perpetuating Misinformation and Violent Extremism Relating to the 2020 Election and January 6th Insurrection,” Whistleblower Aid, undated.↩︎

  14. “Facebook and the Rohingya Crisis,” Myanmar Internet Project, September 29, 2022. This document is offline right now at the Myanmar Internet Project site, so I’ve used Document Cloud to archive a copy of a PDF version a project affiliate provided.↩︎

  15. Full interview with Alex Stamos filmed for The Facebook Dilemma, Frontline, October, 2018.↩︎

  16. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎

  17. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018; “Removing Myanmar Military Officials From Facebook,” Meta takedown notice, August 28, 2018.↩︎

  18. “A Genocide Incited on Facebook, With Posts From Myanmar’s Military,” Paul Mozur, The New York Times, October 15, 2018.↩︎

  19. “The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  20. “Genocide: A Sociological Perspective,” Helen Fein, Current Sociology, Vol.38, No.1 (Spring 1990), p. 1-126; republished in Genocide: An Anthropological Reader, ed. Alexander Laban Hinton, Blackwell Publishers, 2002, and this quotation appears on p. 84 of that edition.↩︎

  21. Genocide: A Comprehensive Introduction, Adam Jones, Routledge, 2006, p. 267.↩︎

  22. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  23. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  24. “Read Mark Zuckerberg’s full remarks on Russian ads that impacted the 2016 elections,” CNBC News, September 21, 2017.↩︎

  25. “She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  26. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  27. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  28. “She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  29. “The Guy at the Center of Facebook’s Misinformation Mess,” Sylvia Varnham O’Regan, The Information, June 18, 2021.↩︎

  30. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  31. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  32. “She Risked Everything to Expose Facebook. Now She’s Telling Her Story,” Karen Hao, MIT Technology Review, July 29, 2021.↩︎

  33. “‘I Have Blood on My Hands’: A Whistleblower Says Facebook Ignored Global Political Manipulation,” Craig Silverman, Ryan Mac, Pranav Dixit, BuzzFeed News, September 14, 2020.↩︎

  34. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  35. “The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  36. “How Facebook Let Fake Engagement Distort Global Politics: A Whistleblower’s Account,” Julia Carrie Wong, The Guardian, April 12, 2021.↩︎

  37. “Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎

  38. “Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated. This is a single-source statement, but it’s a budget figure, not an opinion, so I’ve used it.↩︎

  39. “The Role of Social Media in Fomenting Violence: Myanmar,” Victoire Rio, Policy Brief No. 78, Toda Peace Institute, June 2020.↩︎

  40. “Zuckerberg Was Called Out Over Myanmar Violence. Here’s His Apology.” Kevin Roose and Paul Mozur, The New York Times, April 9, 2018.↩︎

  41. “Hate Speech in Myanmar Continues to Thrive on Facebook,” Sam McNeil, Victoria Milko, The Associated Press, November 17, 2021.↩︎

  42. “Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎

  43. “Algorithm of Harm: Facebook Amplified Myanmar Military Propaganda Following Coup,” Global Witness, June 23, 2021.↩︎

  44. “Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎

  45. “‘Information Combat’: Inside the Fight for Myanmar’s soul,” Fanny Potkin, Wa Lone, Reuters, November 1, 2021.↩︎

  46. “Rohingya Refugees Face Hunger and Loss of Hope After Latest Ration Cuts,” Christine Pirovolakis, UNHCR, the UN Refugee Agency, July 19, 2023.↩︎

  47. “Is Myanmar the Frontline of a New Cold War?,” Ye Myo Hein and Lucas Myers, Foreign Affairs, June 19, 2023.↩︎

  48. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022; the education funding estimates come from “Bangladesh: Rohingya Refugee Crisis Joint Response Plan 2022,” OCHA Financial Tracking Service, 2022, cited by Amnesty.↩︎

  49. “Facebook Approves Adverts Containing Hate Speech Inciting Violence and Genocide Against the Rohingya,” Global Witness, March 20, 2022.↩︎

  50. “Facebook Misled Investors and the Public About Bringing ‘the World Closer Together’ Where It Relegates International Users and Promotes Global Division and Ethnic Violence,” Whistleblower Aid, undated.↩︎

https://erinkissane.com/meta-in-myanmar-part-iii-the-inside-view
Meta in Myanmar, Part II: The Crisis

This is the second post in a series on what Meta did in Myanmar and what the broader technology community can learn from it. It will make a lot more sense if you read the first post—these first two are especially tightly linked and best understood as a single story. There’s also a meta-post with things like terminology notes, sourcing information, and a corrections changelog.

But in case you haven’t read Part I, or in case you don’t remember all billion words of it…

Let’s recap

In the years leading up to the worst violence against the Rohingya people, a surge of explicit calls for the violent annihilation of the Rohingya ethnic minority flares up across Myanmar—in speeches by military officers and political party members, in Buddhist temples, in YouTube videos, through anonymous Bluetooth-transmitted messages in cafes, and, of course, on Facebook.

What makes Facebook special, though, is that it’s everywhere. It’s on every phone, which is in just about every home. Under ultra-rigid military control, the Burmese have long relied on unofficial information—rumors—to get by. And now the country’s come online extremely quickly, even in farming villages that aren’t yet wired for electricity.

For context, Meta’s 2015 global profits hit $3.7 billion and the company jumps 200 places in the Forbes 500 ranking. “2015 was a great year for Facebook,” Mark Zuckerberg announces. “We continue to invest in better serving our community, building our business, and connecting the world.”

And into all the phones held in all the hands of all these people who are absolutely delighted to connect and learn and better understand the world around them, Facebook is distributing and accelerating professional-grade hatred and disinformation whipped up in part by the extremist wing of Myanmar’s widely beloved Buddhist religious establishment.

It’s a very bad setup.

The dangers rising in Myanmar in the mid-2010s aren’t only clear in hindsight: For years, Burmese and western civil society experts, digital rights advocates, tech folks—even Myanmar’s own government—have been warning Meta that Facebook is fueling a slide toward genocide. In 2012 and 2014, waves of—sometimes state-supported—communal violence occur; the Burmese government even directly connects unchecked incitement on Facebook to one of the riots and blocks the site to stop the violence.

Meta has responded by getting local Burmese groups to help it translate its rules and reporting flow, but there’s no one to deal with the reports. For years, Meta employs a total of one Burmese-speaking moderator for this country of 50M+ people—a total that rises to four by the end of 2015.

This brings us to 2016, when Meta doubles down on connection.

The next billion

In 2013, Mark Zuckerberg announces the launch of Facebook’s new global internet-expansion initiative, Internet.org. Facebook will lead the program with six other for-profit technology companies: two semiconductor companies, two handset makers, a telecom, and Opera. There’s a launch video, too, with lots of very global humans doing celebratory human things set to pensive piano notes with a JFK speech about world peace playing over it.1

Alongside the big announcement, Zuckerberg posts a memo about his plans, titled “Is Connectivity a Human Right?” Facebook’s whole deal, he writes, is to make the world more open and connected:

But as we started thinking about connecting the next 5 billion people, we realized something important: the vast majority of people in the world don’t have any access to the internet.

The problem, according to Zuckerberg, is that data plans are too costly—which comes down to missing infrastructure. His memo then makes a brief detour through economics, explaining that internet access == no more zero-sum resources == global prosperity and happiness:

Before the internet and the knowledge economy, our economy was primarily industrial and resource-based. Many dynamics of resource-based economies are zero sum. For example, if you own an oil field, then I can’t also own that same oil field. This incentivizes those with resources to hoard rather than share them. But a knowledge economy is different and encourages worldwide prosperity. It’s not zero sum. If you know something, that doesn’t stop me from knowing it too. In fact, the more things we all know, the better ideas, products and services we can all offer and the better all of our lives will be.

And in Zuckerberg’s account, Facebook is really doing the work, putting in the resources required to open all of these benefits to everyone:

Since the internet is so fundamental, we believe everyone should have access and we’re investing a significant amount of our energy and resources into making this happen. Facebook has already invested more than $1 billion to connect people in the developing world over the past few years, and we plan to do more.2

As various boondoggles have recently demonstrated, social media executives are not necessarily brilliant people, but neither is Mark Zuckerberg a hayseed. What his new “Next Billion” initiative to “connect the world” will do is build and reinforce monopolistic structures that give underdeveloped countries not real “internet access” but…mostly just Facebook, stripped down and zero-rated so that using it doesn’t rack up data charges.

The Internet.org initiative debuts to enthusiastic coverage in the US tech press and many mainstream outlets.3 The New York Times contributes a more skeptical perspective:

[Social media] companies have little choice but to look overseas for growth. More than half of Americans already use Facebook at least once a month, for instance, and usage in the rest of the developed world is similarly heavy. There is nearly one active cellphone for every person on earth, making expansion a challenge for carriers and phone makers.

Poorer countries in Asia, Africa and Latin America present the biggest opportunity to reach new customers—if companies can figure out how to get people there online at low cost.4

In June of 2013, Facebook had 1.1 billion monthly active users, only 198 million of which were in the US. As I write this post in 2023, the number of monthly active users is up to 3 billion, only 270 million of which are in the US. So usage numbers in the US have only risen 36% in ten years, while monthly active users everywhere else went up 188%.5 By 2022, 55% of all social media use was in Asia.6

Whenever you read about Meta’s work “connecting the world,” I think it’s good to keep those figures in mind.

But just because the growth was happening globally didn’t mean that Meta was attending to what its subsidized access was doing outside the US and Western Europe.

In An Ugly Truth, their 2021 book about Meta’s inner workings, New York Times reporters Sheera Frenkel and Cecilia Kang write that no one at Meta was responsible for assessing cultural and political dynamics as new communities came online, or even for tracking whether they had linguistically and culturally competent moderators to support each new country.

A Meta employee who worked on the Next One Billion initiative couldn’t remember anyone “directly questioning Mark or Sheryl about whether there were safeguards in place or raising something that would qualify as a concern or warning for how Facebook would integrate into non-American cultures.”7

In 2015, Internet.org rebrands as Free Basics after the initiative attracts broad criticism for working against net neutrality—it’s a PR move that foreshadows the big rebrand from Facebook to Meta shortly after Frances Haugen delivers her trove of internal documents to the SEC in 2021.8

In 2016, it’s time to roll out Free Basics in Myanmar, alongside a stripped-down version of Facebook called Facebook Flex that lets people view text for free and then pay for image and video data.9 Facebook is already super-popular in Myanmar for reasons covered in the previous post, but when Myanmar’s largest telecom, MPT, launches Free Basics and Facebook Flex, Facebook’s Myanmar monthly active user count more than doubles from a little over 7 million users in 2015 to at least 15 million in 2017. (Several US media sources say 30 million, though I don’t think I believe them.)10

But I want to be clear—for a ton of people across Myanmar, getting even a barebones internet was life-changingly great.

“Before, I just had to watch the clouds”

In early 2017, journalist Doug Bock Clark interviewed people in Myanmar—including MIDO cofounder Nay Phone Latt—about the internet for Wired.

Clark quotes a farmer who cultivates the tea plantation his family has worked for generations in Shan State:

I have always lived in the same town with about 900 people, which is in a very beautiful forest but also very isolated. When I was a child, we lived in wooden houses and used candles at night, and the mountain footpaths were too small even for oxcarts. For a long time, life didn’t change.

In 2014, the tea farmer’s town got a cell tower, and in 2016 a local NGO demonstrated an app that offered weather forecasts, market prices, and more. That really changed things:

Being able to know the weather in advance is amazing—before, I just had to watch the clouds! And the market information is very important. Before, we would sell our products to the brokers for very low prices, because we had no idea they sold them for higher prices in the city. But in the app I can see what the prices are in the big towns, so I don’t get cheated…

This brings me back to Craig Mod’s essay about his ethnographic work in rural Myanmar that I quoted from a lot in Part I of this series. If you haven’t read Craig’s writeup, do yourself a favor and take a look—the photos alone are incredible. It’s archived on his site as well, if the Atlantic gives you a paywall.

Here, Mod is talking about internet use with a group of farmers: “The lead farmer mentions Facebook and the others fall in. Facebook! Yes yes! They use Facebook every day. They feel that spending data on Facebook is a worthwhile investment.”

One of the farmers wants to show Mod a post, and Mod and his colleagues speculate while the post loads:

Earlier, he said to us, lelthamar asit—Like any real farmer, I know the land. And so we wonder: What will he show us? A new farming technique? News about the upcoming election? Analysis on its impact on farmers? He shows us: A cow with five legs. He laughs. Amazing, no? Have you ever seen such a thing?11

It’s a charming story. But it’s hard not to feel a little ill, reading back from the perspective of 2023.

In the middle of a video podcast interview, Frances Haugen relates a story in the context of Meta trying to make tooling for reporting misinformation:

And one of our researchers said, you know, that sounds really obvious. Like that sounds like it would be a thing that would work. Except for when we went in and did interviews in India, people are coming online so fast that when we talk to people with master’s degrees… They say things like, why would someone put something fake on the Internet? That sounds like a lot of work.12

This anecdote is meant to point to the relative naiveté of Indian Facebook users, but honestly I recognize the near-universal humanity of the idea—that all of that manufacturing would just be too much work for regular people to do! It’s the argument against conspiracies in general. For those of us whose brains haven’t been ruined by the internet, it’s reasonable to think that regular people just wouldn’t go to all that trouble.

As it happens, in Myanmar and lots of other places, it’s not only regular people doing the work of disinformation and incitement, and we’ll get to that later. But regular people across Myanmar are reading all these anti-Rohingya messages and looking at the images and watching the videos, and…a lot of them are buying it.

“Everyone knows they’re terrorists”

This brings me back to Faine Greenwood’s essay that I also quoted from a lot in the previous post, and specifically to Greenwood’s “honest-to-god Thomas Friedman moment” in a Burmese cab back in 2013:

The driver was a charming young Burmese man who spoke good English, and we chatted about the usual things for a bit: the weather (sticky), how I liked Yangon (quite a bit, hungry dogs aside), and my opinion on Burmese food (I’m a fan).

Then he asked me what I was in town for, and I told him that I’d come to write about the Internet. “Oh, yes, I’ve got a Facebook account now,” he said, with great enthusiasm. “It is very interesting. Learning a lot. I didn’t know about all the bad things the Bengalis had been doing.” 

“Bad things?” I asked, though I knew what he was going to say next. 

“Killing Buddhists, stealing their land. There’s pictures on Facebook. Everyone knows they’re terrorists,” he replied. 

“Oh, fuck,” I thought.13

Greenwood’s story closely parallels one Matt Schissler tells reporters Sheera Frenkel and Cecilia Kang for An Ugly Truth. (Schissler is one of the people delivering dire warnings to Meta in Part I of this series.)

In Schissler’s story, it’s also 2013, and he’s starting to see some really hair-raising stuff. His Buddhist friends start relating their conspiracy theories about the Rohingya and showing him “grainy cell phone photos of bodies they said were of Buddhist monks killed by Muslims.” They’re telling him ISIS fighters are on their way to Myanmar.

This narrative is even coming from a journalist friend, who calls to warn Schissler of a Muslim plot to attack the country. The journalist shows him a video as proof:

Schissler could tell that the video was obviously edited, dubbed over in Burmese with threatening language. “He was a person who should have known better, and he was just falling for, believing, all this stuff.”14

It’s miserably hot in Myanmar when Craig Mod is there in 2016—steam-broiling even in the shade, and the heat shows up a lot in Mod’s notes. His piece ends with a grace note about a weather forecast:

Farmer Number Fifteen loves the famous Myanmar weatherman U Tun Lwin, now follows him on Facebook. I hunt U Tun Lwin down, follow him too, in solidarity, although I’m pretty sure I know what tomorrow’s weather will be.15

When I reread Mod’s essay about halfway through my research for this series, my eye caught on that name: U Tun Lwin. I’d just seen it somewhere.

It was in the findings report of the United Nations Human Rights Council’s Independent International Fact-Finding Mission on Myanmar (just “the UN Mission” in the rest of this post).

It was there because in the fall of 2016, about a year after Craig was in Myanmar and as a wave of extreme state violence against the Rohingya is kicking off, there’s this Facebook post. The UN Mission reports that “Dr. Tun Lwin, a well-known meteorologist with over 1.5 million followers on Facebook, called on the Myanmar people to be united to secure the ‘west gate.’” (The “west gate” is the border with Bangladesh, and this is a reference to the idea that the Rohingya are actually all illegal “Bengali” immigrants.)

Myanmar, Tun Lwin continued in his post, “does not tolerate invaders,” and its people must be alert “now that there is a common enemy.” As of August 2018, when the UN Mission published their report, Tun Lwin’s post was still up on Facebook. It had 47,000 reactions, over 830 comments, and nearly 10,000 shares. In the comments, people called the existence of the Rohingya in Rakhine State a “Muslim invasion” and demanded that the Rohingya be uprooted and eradicated.16

The longest civil war

I need to say a little bit about the Tatmadaw, for reasons that will almost immediately become clear.

Sai Latt, whom I quoted several times in Part I, has published a chapter from his dissertation detailing how the Tatmadaw forces destitute teenagers and crime suspects into the military, socializes them into an ethnofascist order, and exploits them to enrich their superior officers. It’s no explanation for the Tatmadaw’s actions, just another wrenching piece of context.

Tatmadaw (literally “grand army”) is the umbrella term for Myanmar’s armed forces—it includes the army, navy, and air force, but a Tatmadaw officer also oversees the national police force. There’s a ton of history I have to elide, but the two crucial things to know are that Tatmadaw generals have been running Myanmar (or heavily influencing its government) since the country gained independence, and the military’s been at war with multiple ethnic armed groups throughout Myanmar since just after the end of WWII.17

These conflicts—by some accountings, the longest-running civil war in the world—have been marked by the Tatmadaw’s intense violence against civilians. The UN Mission findings report that I cite throughout this series includes detailed accounts of Tatmadaw atrocities targeting civilian members of ethnic minorities in Kachin and Shan States. Human Rights Watch and many other organizations have detailed Tatmadaw brutalities focusing on ethnic minorities in Karen State and elsewhere in Myanmar.18

Information about these conflicts and atrocities was readily available in English throughout Meta’s expansion into the region. I include this brief and inadequate history to explain that it was not difficult, in this period, to learn what the Tatmadaw really was, and what they were capable of doing to civilians.

Which brings us, finally, to what happened to the Rohingya in 2016 and 2017.

Clearance operations

Content warning for these next two sections: I’m going to be brief and avoid graphic descriptions, but these are atrocities, including the torture, rape, and murder of adults and children.

2016 was supposed to be the first year in Myanmar’s new story. In the landmark 2015 general elections, Aung San Suu Kyi’s party wins a supermajority and takes office in the spring of 2016. This is a huge deal—obviously most of all within Myanmar, but also internationally, because it looks like Myanmar’s moving closer to operating as a true democracy. But the Rohingya are excluded from the vote, and from a national peace conference held that summer to try to establish a ceasefire between the Tatmadaw and armed ethnic minority groups.19

The approximately 140,000 Rohingya people displaced in the 2012 violence are at this point largely still living in IDP (internally displaced person) camps and deprived of the necessities of life, and the Myanmar government has continued tightening—or eliminating—the already nearly impossible paths to citizenship and a more normal life for the Rohingya as a whole.20

The violence has continued, as well. According to a 2016 US State Department “Atrocities Prevention Report,” the Rohingya also continued to experience extremist mob attacks, alongside governmental abuses including “torture, unlawful arrest and detention, restricted movement, restrictions on religious practice, and discrimination in employment and access to social services.”21

This is all background for what happens next.

I give this accounting not to be shocking or emotionally manipulative, but because I don’t think we can assess and rationally discuss Meta’s responsibilities—in Myanmar and elsewhere—unless we allow ourselves to understand what happened to the human beings who took the damage.

In October of 2016, a Rohingya insurgent group, the Arakan Rohingya Salvation Army (ARSA), attacks Burmese posts on the Myanmar-Bangladesh border, killing nine border officers and four Burmese soldiers. The Tatmadaw respond with what they called “clearance operations,” nominally aimed at the insurgents but in fact broadly targeting all Rohingya people.22

A 2016 report from Amnesty International—and, later, the UN Human Rights Council’s Independent Fact-Finding Mission on Myanmar—documents the Tatmadaw’s actions, including the indiscriminate rape and murder of Rohingya civilians, the arbitrary arrests of hundreds of Rohingya men including the elderly, forced starvation, and the destruction of Rohingya villages.23 Tens of thousands of Rohingya flee over the border to Bangladesh.24

Through the winter of 2016 and into 2017, bursts of violence continue—Tatmadaw officers beating Rohingya civilians, Buddhist mobs in Rakhine State attacking Rohingya people, Rohingya militants killing people they saw as betrayers. Uneasy times.

Then, on the morning of August 25th, 2017, ARSA fighters mount crude, largely unsuccessful attacks on about 30 Burmese security posts.25 Simultaneously, according to an Amnesty investigation, ARSA fighters murder at least 99 Hindu civilians, including women and children, in two villages in Northern Rakhine State.26 (Despite the mass-scale horrors that would follow, this act was, by any measure, an atrocity.)

And after that, everything really goes to hell.

In response to the ARSA attacks, the Tatmadaw begins its second wave of clearance operations and begins, in Amnesty International’s words, systematically attacking “the entire Rohingya population in villages across northern Rakhine State.”27

Accelerating genocide

I’ve worked with atrocity documentation before. I still don’t know a right way to approach what comes next. I do know that the people who document incidents of communal and state violence for organizations like Médecins Sans Frontières and the UN Human Rights Council use precise, economical language. Spend enough time with their meticulous tables and figures and the precision itself begins to feel like rage.

Based on their extensive and intimate survey work with refugees who escaped to Bangladesh, Médecins Sans Frontières estimates that in a single month between August 25th and September 24th of 2017, about 11,000 Rohingya die in Myanmar, including 1,700 children. Of these, about 8,000 people are violently killed, including about 1,200 children under the age of five.28

MSF notes that “the rates of mortality captured here are likely to be underestimates, as the data does not account for those people who have not yet been able to flee Myanmar, or for families who were killed in their entirety.”

The UN Mission’s report notes that in attacks on Rohingya villages, women and children, including infants, are “specifically targeted.”29 According to MSF, most of the murdered children under five are shot or burned, but they note that about 7% are beaten to death.30

In what Amnesty International calls “a relentless and systematic campaign,” the Tatmadaw publicly rape hundreds—almost certainly thousands—of Rohingya women and girls, many of whom they also mutilate. They indiscriminately arrest and torture Rohingya men and boys as “terrorists.” They push whole communities into starvation by burning their markets and blocking access to their farms. They burn hundreds of Rohingya villages to the ground.31

Over the ensuing weeks, more than 700,000 people (“more than 702,000 people,” Amnesty writes, “including children”) flee to squalid, overcrowded, climate-vulnerable refugee camps in Bangladesh.32 That’s more than 80% of the Rohingya previously living in Rakhine State.

The UN Mission’s findings report comes out about a year later.

I’ve cited it a lot already in this post and the previous one. The document runs to 444 pages, opens with a detailed background for the 2017 crisis and then becomes a catalog of thousands of collective and individual incidents of the Tatmadaw’s systematic torture, rape, and murder of members of the Rohingya—and, to a lesser but still horrific extent, of other ethnic minorities across Myanmar. The scale and level of detail are beyond anything else I’ve encountered; accounts of mutilations, violations, and the murder of children in front of their parents go on page after page after page. My honest advice is that you don’t read it.33

Classifying incidents of violence as genocide is a lengthy, fraught, and uneven process. The UN High Commissioner for Human Rights calls the events in Myanmar “a textbook example of ethnic cleansing.”34 The International Court of Justice is currently hearing a case against Myanmar brought under the international Genocide Convention.35 The US State Department officially classifies the events in Myanmar as a genocide, as do many genocide scholars and institutions. In this series, I follow the usage of the United States Holocaust Memorial Museum in Washington, DC, in whose work I have complete confidence.36

But…Facebook?

If you’ve read this far, then first, thank you. Maybe get a drink of water or something.

Second, I think you may be—probably should be—wondering how many of the things I’ve just related can be connected to something as relatively inconsequential as Facebook posts.

I want to do a tiny summary and then preview some arguments that I won’t really be able to dig into until the end of this post and especially in the next one, when I finally get into the documents and investigations that show what was happening under the hood of Meta’s content recommendation engines.

The escalation from relatively isolated incidents of anti-Rohingya violence pre-2012 into the two big waves of attacks that year, the semi-communal semi-state violence in 2016, and the full-on Tatmadaw-led genocide in 2017 was accompanied by an overwhelming rise in Facebook-mediated disinformation and violence-inciting messages.

And as I’ve tried to show and will keep illustrating with examples, these messages built intense anti-Rohingya beliefs and fears throughout Myanmar’s mainstream Buddhist culture. Those beliefs and fears quite clearly led to direct incidents of communal (non-state) violence.

Determining whether those beliefs also constituted even a partial manufactured mainstream consent to the Tatmadaw’s actions in 2016 and 2017 is both out of my lane and honestly maybe unknowable, given the impossibility of untangling what was known by whom, and when. What I think I can say is that they ran in exact parallel to the Tatmadaw’s genocidal operations.

The overwhelming volume and velocity of this hate campaign would not have been possible without Meta, which did four main things to enable it:

  1. Meta bought and maneuvered its way into the center of Myanmar’s online life and then inhabited that position with a recklessness that was impervious to warnings by western technologists, journalists, and people at every level of Burmese society. (This is most of Part I.)
  2. After the 2012 violence, Meta mounted a content moderation response so inadequate that it would be laughable if it hadn’t been deadly. (Discussed in Part I and also below.)
  3. With its recommendation algorithms and financial incentive programs, Meta devastated Myanmar’s new and fragile online information sphere and turned thousands of carefully laid sparks into flamethrowers. (Discussed below and in Part III.)
  4. Despite its awareness of similar covert influence campaigns based on “inauthentic behavior”—aka fake likes, comments, and Pages—Meta allowed an enormous and highly influential covert influence operation to thrive on Burmese-language Facebook throughout the run-up to the peak of the 2016 and 2017 “ethnic cleansing,” and beyond. (Part III.)

The lines of this argument have all been drawn by better informed people than me. Amnesty International’s 2022 report, “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” directly implicates Meta in the genocidal propaganda campaigns and furor that led up to the Tatmadaw’s atrocities in Rakhine State. The viral acceleration of dehumanizing and violent posts in 2017, Amnesty writes, made those messages “appear ubiquitous on Burmese-language Facebook, creating a sense that everyone in Myanmar shared these views, helping to build a shared sense of urgency in finding a ‘solution’ to the ‘Bengali problem’ and ultimately building support for the military’s 2017 ‘clearance operations’.”37

And as I noted in the intro to the first post in this series, the UN Mission’s own lead investigator stated that Facebook played “a determining role” in the violence.38

But again, I think it’s reasonable and important to ask whether that can really be possible, and to look carefully at the evidence.

On one hand it seems obvious that Meta was indeed negligent about expanding content moderation, and deeply misguided in continuing to expand into Myanmar without stemming the tide of genocidal messages that experts had been warning them about since at least 2012. Meta’s behavior, after all those years of warnings, is hard to describe as anything but callous.

But does any of that make them responsible for what the Tatmadaw did?

Let’s start with the content moderation problem. Which means that we have to look at some of the actual content Meta allowed to circulate on Burmese-language Facebook during the waves of violence in 2016 and 2017.

Rumors and lies

Content warning: Hate speech, ethnic slurs.

On September 12, 2017, during the peak of the Tatmadaw’s genocidal attacks on the Rohingya, the Institute for War and Peace Reporting released an update on their two-year project in Myanmar with a dozen-odd local journalists and monitors who tracked and reported on hate speech and incitement to violence.

The post is called “How Social Media Spurred Myanmar’s Latest Violence,” and it’s written by IWPR’s regional director, Alan Davis. It’s both cringey—Davis starts with a dig at how backward and superstitious the Buddhist establishment is—and obviously rooted in real moral anguish at having failed to prevent the disaster. Much of the meat of the post is focused on Facebook, and Davis’s observations are sharp (emphasis mine):

The vast majority of hate speech was on social media, particularly Facebook.… while not all hate speech was anti-Muslim or anti-Rohingya, the overwhelming majority certainly was. Much was juvenile and just plain nasty, while a good deal was insidious and seemed to be increasingly organised. A lot of it was also smart and it was clear a great deal of time and energy had gone into some of the postings. 

Over time, we saw the hate speech becoming more targeted and militaristic. Wild allegations spread, including claims of Islamic State (IS) flags flying over mosques in Yangon where munitions were being stored, of thwarted plots to blow up the 2,500 year-old Shwedagon Pagoda in Yangon and supposed cases of Islamic agents smuggling themselves across the border.  

…we felt a clear sense that in the absence of any kind of political leadership that a darkening and deepening vacuum that would ultimately result in a violent reckoning.… Most importantly, we warned that rumours and lies peddled and left unchecked might end up creating their own reality.39

On October 30, 2017, just after the full-scale ethnic cleansing began, Sitagu Sayadaw, a Buddhist monk and one of the most respected religious leaders in Myanmar, gave a sermon to an audience of soldiers—and to the rest of the country, via a Facebook livestream. His sermon featured a passage from the Mahavamsa in which monks comfort a Buddhist king consumed by guilt after leading a war in which millions died:

“Don’t worry, your Highness. Not a single one of those you killed was Buddhist. They didn’t follow the Buddhist teachings and therefore they did not know what was good or bad. Not knowing good or bad is the nature of animals. Out of over five hundred thousand you killed, only one and a half were worth to be humans. Therefore it is a small sin and does not deserve your worry.”40

The UN Mission’s report includes many other examples of religious, governmental, and military figures comparing Rohingya people to fleas, weeds, and animals—and in some cases, making explicit reference to the necessity of emulating both the Holocaust and the United States’ bombing of Hiroshima and Nagasaki.41

The report also includes specific examples of the kinds of dehumanizing and inciting posts and comments going around Facebook in 2017. I’m only going to include a few, but I think it’s important to be clear about what Meta let circulate, months into a full-on ethnic cleansing operation:

  • In early 2017, a Burmese “patriot” posts a graphic video of police beating citizens in another country, with the comment: “Watch this video. The kicks and the beatings are very brutal. I watch the video and feel that it is not enough. In the future […] Bengali disgusting race of Kalar terrorists who sneaked into our country by boat, need to be beaten like that. We need to beat them until we are satisfied.” (The post was still up on Facebook in July 2018.)
  • A widely shared August 2017 post: “…the international community all condemned the actions of the Khoe Win Bengali [“Bengali that sneaked in”] terrorists. So, in this just war, to avenge the deaths of the ethnic people who got beheaded, and the policemen who got hacked into pieces, we are asking the Tatmadaw to turn these terrorists into powder and not leave any piece of them behind.”
  • Another post: “Accusations of genocide are unfounded, because those that the Myanmar army is killing are not people, but animals. We won’t go to hell for killing these creatures that are not worth to be humans.”
  • Another post: “If the (Myanmar) army is killing them, we Myanmar people can accept that… current killing of the Kalar is not enough, we need to kill more!”42

Let’s look at some quantifiable data on the volume of extremist posts during the period—we don’t have much, because only Meta really knows, but we do have a couple of windows into the way things escalated.

By 2016, data analyst Raymond Serrato, who eventually goes to work for the Office of the United Nations High Commissioner for Human Rights, has been studying social media in Myanmar for a couple of years. So when the Tatmadaw’s clearance operations swing into action in 2016, he’s already watching what’s happening in a big (55,000-member) Facebook group run by Ma Ba Tha supporters—a “hangout for Buddhist patriots,” as Serrato describes it.43

What Serrato sees in this group is posting volume rising through the late summer of 2017, before the Arakan Rohingya Salvation Army attacks, then spiking hard immediately after the attacks, as the Tatmadaw begins its concentrated genocidal operation against the Rohingya.

Ray Serrato's graph of Facebook posts in the extremist Group, showing enormous spikes followed by long-term elevation of posting rates beginning in August 2016.

Visualization by Raymond Serrato.

Serrato’s research is limited in scope—he’s only using the Groups API—but his snapshot of how hardline nationalist post volume went through the roof in 2017 clearly runs alongside the qualitative reports from Burmese and western observers—and victims.
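As a side note for the data-curious: the underlying analysis is simple enough to sketch. Something like the following minimal Python is all it takes to surface the kind of weekly spike Serrato’s graph shows. The timestamps here are hypothetical, since his dataset isn’t public and the kind of Groups API access he relied on has since been restricted.

```python
from collections import Counter
from datetime import datetime

def weekly_post_volume(timestamps):
    """Tally posts per ISO week from ISO-8601 timestamp strings."""
    weeks = Counter()
    for ts in timestamps:
        year, week, _ = datetime.fromisoformat(ts).isocalendar()
        weeks[(year, week)] += 1
    return sorted(weeks.items())

# Hypothetical sample: a trickle before August 25, 2017, a burst after.
posts = [
    "2017-08-20T09:15:00", "2017-08-22T17:40:00",
    "2017-08-26T10:02:00", "2017-08-26T11:40:00",
    "2017-08-27T08:05:00", "2017-08-27T21:13:00",
]
for (year, week), count in weekly_post_volume(posts):
    print(f"{year}-W{week:02d}: {count} posts")
```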

What Meta did about it

Across the first-person narratives from Burmese and western tech and civil society people, there’s a thread of increasingly intense frustration—bleeding into desperation—among the people who tried, over and over, to get individual pieces of dehumanizing propaganda, graphic disinformation, and calls to violence removed from Facebook by reporting them to Facebook.

They report posts and never hear anything. They report posts that clearly call for violence and eventually hear back that they’re not against Facebook’s Community Standards. This is also true of the Rohingya refugees Amnesty International interviews in Bangladesh—they, too, report posts demonizing and threatening their communities, and it doesn’t help.44

Writing on behalf of the Burmese and western people in the private Facebook group with Facebook employees, Htaike Htaike Aung and Victoire Rio summarize the situation in 2016, during the first wave of “clearance operations”:

…Facebook was unequipped to proactively address risk concerns. They relied nearly exclusively on us, as local partners, to point them to problematic content. Upon receiving our escalations…they would typically address the copy we escalated but take no further steps to remove duplicate copies or address the systemic policy or enforcement gaps that these escalations brought to light.… We kept asking for more points of contact, better escalation protocols, and interlocutors with knowledge of the language and context who could make decisions on the violations without requiring the need for translators and further delays. We got none of that.45

And as we now know, Meta’s fleet of Burmese-speaking contractors had grown to a total of four at the end of 2015. According to Reuters, in 2018, Meta had about 60 people reviewing reported content from Myanmar via the Accenture-run “Project Honey Badger” contract operation in Kuala Lumpur, plus three more in Dublin, to monitor Myanmar’s approximately 18 million Facebook users.46 So in 2016 and 2017, Meta has somewhere between 4 and 63-ish Burmese speakers monitoring hate speech and violence-inciting messages in Myanmar. And zero of them, incidentally, in Myanmar itself.
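To make the scale of that concrete, it’s worth doing the division. A quick back-of-the-envelope using the figures above:

```python
users = 18_000_000    # approximate Facebook users in Myanmar (Reuters, 2018)
reviewers_2015 = 4    # Burmese-speaking reviewers at the end of 2015
reviewers_2018 = 63   # Kuala Lumpur contractors plus Dublin, 2018

print(f"{users // reviewers_2015:,} users per reviewer in 2015")  # 4,500,000
print(f"{users // reviewers_2018:,} users per reviewer by 2018")  # 285,714
```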

I don’t know how many content reviewers Meta employed globally in 2016 and 2017, so we have to skip ahead to get an estimate. In his 2018 appearance before the US House Energy and Commerce Committee, Mark Zuckerberg is asked by Rep. Pete Olson of Texas whether Meta employs about 27,000 people. Zuckerberg says yes.

OLSON: I’ve also been told that about 20,000 of those people, including contractors, do work on data security. Is that correct?

ZUCKERBERG: Yes. The 27,000 number is full time employees. And the security and content review includes contractors, of which there are tens of thousands. Or will be. Will be by the time that we hire those.47

There are several remarkable things about this exchange, including that when Rep. Olson afterward sums up, incorrectly, that this means that more than half of Meta’s employees deal with “security practices,” Zuckerberg doesn’t correct him, but I’ll just emphasize that Meta is claiming to have (or be hiring!) tens of thousands of contractors to work on security and content review, in 2018. And for Myanmar, where by 2018 the genocide of the Rohingya has already peaked, they’ve managed to assemble about 63.

As it turns out, even the United Nations’ own Mission, acting in an official capacity, can’t get Facebook to remove posts explicitly calling for the murder of a human rights defender.

“Don’t leave him alive”

Both the UN Mission’s findings and Amnesty International’s big report tell the story of this person—an international aid worker targeted for his alleged cooperation with the Mission. He’s unnamed in the UN report; Amnesty calls him “Michael.”

Here’s how it happens: “Michael” does an interview with a local journalist in Myanmar about the situation he’d observed in Rakhine State, and the interview goes viral on Facebook.

The response by anti-Rohingya extremists is immediate and intense: The most dangerous Facebook post made about Michael features a picture of his opened passport and describes him as a “Muslim” and “national traitor.” The comments on the Facebook post call for Michael’s murder: “If this animal is still around, find him and kill him. There needs to be government officials in NGOs.” “He is a Muslim. Muslims are dogs and need to be shot.” “Don’t leave him alive. Remove his whole race. Time is ticking.”48

Strangers start recognizing Michael from the viral posts, and warning him that he’s in danger. The threats expand to include his family.

The UN Mission team investigating the attacks on the Rohingya knows Michael. They get involved, reporting the post with the photo of Michael’s passport in it to Facebook four times. Each time, they get the same response: the post had been reviewed and “doesn’t go against one of [Facebook’s] specific Community Standards.”49

By this point, the post has been shared more than 1,000 times, and many others have appeared. Michael’s friends and colleagues in Myanmar and in the US are reporting everything they can find—some posts get deleted, but hundreds more appear, “like a game of whack-a-mole.”50

The UN team escalates and emails an official Facebook email account; no one responds. At this point, the team tells Michael that it’s time to get out of Myanmar—it’s too dangerous to stay.

Several weeks later, the UN Mission is finally able to get Facebook to take down the original post, and only with the help of a contact at Facebook. And copies of the post keep circulating on Facebook.

The Mission team write that they encountered “many similar cases where individuals, usually human rights defenders or journalists, become the target of an online hate campaign that incites or threatens violence.”

In their briefing document about the many attempts to get Facebook to stop fueling the violence in Myanmar, Htaike Htaike Aung and Victoire Rio write:

Despite the escalating risks, we did not see much progress over that period, and Facebook was just as unequipped to deal with the escalation of anti-Rohingya rhetoric and violence in August 2017 as they had been in 2016.… Ultimately, it was still down to us, as local partners, to warn them. We simply couldn’t cope with the scale.51

Meta’s active harms: the incentives

In a 2016 interview, Burmese civil-society and digital-literacy activists Htaike Htaike Aung and Phyu Phyu Thi speak about the work their organization, MIDO, is doing to counter hate speech and misinformation. Which is a lot: They’re doing digital literacy and media literacy training, they’ve built more than 60 digital literacy centers throughout Myanmar, they monitor online hate speech, and they run a “Real or Not” fact-checking page for Burmese users.52

Even so, Myanmar’s civil society organizations and under-resourced activists simply can’t keep pace with what’s happening online—not without action on Meta’s part to sharply reduce and deviralize genocidal content at the product-design level.

There were—and are—ways for Meta to change its inner machinery to reduce or eliminate the harms it does. But in 2016, the company actually does something that makes the situation much worse.

In addition to continuing to algorithmically accelerate extremist messages, Meta introduces a new program that takes a wrecking ball to Myanmar’s online media landscape: Instant Articles.

If you’re from North America or Europe, you probably know Instant Articles as one of the ways Meta persuaded media organizations to publish their work directly on Facebook, ostensibly in exchange for fast loading and shared ad revenue.

Instant Articles was kind of a bust for actual media organizations, but in many places, including in Myanmar, it became a way for clickfarms to make a lot of money—ten times the average Burmese salary—by producing and propagating super-sensationalist fake news.

“In a country where Facebook is synonymous with the internet,” the MIT Technology Review’s Karen Hao writes, “the low-grade content overwhelmed other information sources.”53

The result for Myanmar’s millions of Facebook users is an explosive decompression of its online information sphere. In 2015, before Instant Articles expands to Myanmar, 6 out of 10 websites getting the most engagement on Facebook in Myanmar are “legitimate” media organizations. A year after Instant Articles hits the country, legitimate publishers make up only 2 of the top 10 publishers on Facebook. By 2018, the number of legit publishers on the list is zero—all 10 are fake news.54

This is the online landscape in place in 2016 and 2017.

Then there are the algorithms.

“People saw the vilest content the most”

When he speaks to Amnesty International about his experience being targeted on Facebook, Michael (who was in Myanmar 2013–2018) also talks about what Facebook’s News Feed looked like in Myanmar in more general terms:

“The vitriol against the Rohingya was unbelievable online—the amount of it, the violence of it. It was overwhelming. There was just so much. That spilled over into everyday life…

The news feed in general [was significant]—seeing a mountain of hatred and disinformation being levelled [against the Rohingya], as a Burmese person seeing that, I mean, that’s all that was on people’s news feeds in Myanmar at the time. It reinforced the idea that these people were all terrorists not deserving of rights. This mountain of misinformation definitely contributed [to the outbreak of violence].”

And elsewhere in the same interview:

“The fact that the comments with the most reactions got priority in terms of what you saw first was big—if someone posted something hate-filled or inflammatory it would be promoted the most—people saw the vilest content the most. I remember the angry reactions seemed to get the highest engagement. Nobody who was promoting peace or calm was getting seen in the news feed at all.”55
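What Michael is describing is engagement-based ranking: order comments by total reactions, and whatever provokes the strongest response rises to the top. Here’s a minimal sketch of that dynamic; the comments, counts, and scoring are illustrative assumptions, not Meta’s actual formula.

```python
# Hypothetical comments with reaction counts. The scoring below is the
# simplest possible version of engagement ranking: more reactions, higher
# placement, with anger counted as engagement like anything else.
comments = [
    {"text": "measured appeal for calm", "likes": 8,  "angry": 1},
    {"text": "inflammatory rumor",       "likes": 12, "angry": 240},
    {"text": "factual correction",       "likes": 3,  "angry": 0},
]

def engagement_score(comment):
    # Count every reaction equally; outrage is just more engagement.
    return comment["likes"] + comment["angry"]

for c in sorted(comments, key=engagement_score, reverse=True):
    print(f"{engagement_score(c):>4}  {c['text']}")
```

Rank by raw engagement and the inflammatory rumor wins every time, while the appeal for calm sinks toward the bottom of the feed.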

So let’s remember: by 2016, active observers of social media—and Facebook in particular—have a pretty good sense of what makes things go viral. And clearly there are organized groups in Myanmar—Ma Ba Tha’s hardline monks, for one—who are super skilled at getting a lot of eyes on their anti-Rohingya Facebook content.

But the big, super-frustrating problem with trying to understand Facebook’s effects through accounts like these is that they only describe what can be deduced from the network’s exterior surfaces—what people see, what they report, what happens afterward. I believe these accounts—I especially trust the statements from Burmese people working on the ground—but they’re all coming from outside Facebook’s machinery.

Which is why we’re incredibly lucky to get, just a few years later, an inside view of what was really happening—and what Meta knew about it as it happened.

Next up: Part III: The Inside View.


  1. “Technology Leaders Launch Partnership to Make Internet Access Available to All,” Facebook.com, August 20, 2013, archived at Archive.org. The promotional video is “Every one of us,” Internet.org, August 20, 2013.↩︎

  2. “Is Connectivity A Human Right?” Mark Zuckerberg, Facebook.com, August 20, 2013 (the memo is undated, so I’m taking the date from contemporary reports and other launch documents).↩︎

  3. “Facebook And 6 Phone Companies Launch Internet.org To Bring Affordable Access To Everyone,” Josh Constine, TechCrunch, August 20, 2013; “Facebook’s internet.org initiative aims to connect ‘the next 5 billion people’,” Stuart Dredge, The Guardian, August 21, 2013; “Facebook project aims to connect global poor,” Al Jazeera America, August 21, 2013.↩︎

  4. “Facebook Leads an Effort to Lower Barriers to Internet Access,” Vindu Goel, The New York Times, August 20, 2013.↩︎

  5. Meta Earnings Presentation, Q2 2023, July 26, 2023 (date from associated press release); “Facebook’s Q2: Monthly Users Up 21% YOY To 1.15B, Dailies Up 27% To 699M, Mobile Monthlies Up 51% To 819M,” TechCrunch, July 24, 2013. (The actual earnings presentation deck from the 2013 call doesn’t seem to be online except as a few screencaps here and there, which is irritating.)↩︎

  6. “Distribution of Worldwide Social Media Users in 2022, by Region,” Statista, 2022.↩︎

  7. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  8. “What Happened to Facebook’s Grand Plan to Wire the World?” Jessi Hempel, Wired, May 17, 2018; “Facebook is changing its name to Meta as it focuses on the virtual world,” Elizabeth Dwoskin, The Washington Post, October 28, 2021. (That should be a paywall-free WaPo link, but it doesn’t always work.)↩︎

  9. “Myanmar’s MPT launches Facebook’s Free Basics,” Joseph Waring, Mobile World Live, June 7, 2016.↩︎

  10. “Hatebook: Why Facebook is losing the war on hate speech in Myanmar,” Reuters, August 15, 2018. (You may see bigger numbers elsewhere—in a 2017 New York Times article, Kevin Roose claims that Facebook had 30 million users in Myanmar in 2017. Roose doesn’t cite his sources, but the same range he uses, from two million to more than 30 million, shows up in The Atlantic and CBS News. I don’t think there’s any way this number can be right, but Meta doesn’t disclose this information.)↩︎

  11. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  12. “Facebook is Worse than You Think: Whistleblower Reveals All | Frances Haugen x Rich Roll,” The Rich Roll Podcast, September 7, 2023. This is a little outside my usual sourcing zone—Roll is a vegan athlete…influencer, I gather?—but Haugen does a lot of interviews, and sometimes the least formal ones turn up the most interesting statements. The context for the bit I quote comes in around 8:30 and the quote is at 9:19.↩︎

  13. “Facebook Destroys Everything: Part 1,” Faine Greenwood, August 8, 2023.↩︎

  14. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021.↩︎

  15. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  16. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018—the report landing page includes summaries, metadata, and infographics. Content warnings apply throughout; this is atrocity material.↩︎

  17. “Ethnic Insurgencies and Peacemaking in Myanmar,” Tin Maung Maung Than, The Newsletter of the International Institute for Asian Studies, No. 66, Winter 2013.↩︎

  18. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018; “They Came and Destroyed Our Village Again: The Plight of Internally Displaced Persons in Karen State,” Human Rights Watch, June 9, 2005.↩︎

  19. “Civil War in Myanmar,” the Center for Preventive Action at the Council on Foreign Relations, publish date not provided; updated April 25, 2023.↩︎

  20. The Burmese Labyrinth: A History of the Rohingya Tragedy, Carlos Sardiña Galache, Verso, 2020. The “140,000” figure is drawn from “One year on: Displacement in Rakhine state, Myanmar,” a briefing note from the UN Human Rights Council published June 7, 2013.↩︎

  21. “Atrocities Prevention Report: Targeting of and Attacks on Members of Religious Groups in the Middle East and Burma,” US Department of State, March 17, 2016.↩︎

  22. “Myanmar: Security Forces Target Rohingya During Vicious Rakhine Scorched-Earth Campaign,” Amnesty International, December 19, 2016.↩︎

  23. “Myanmar: Security Forces Target Rohingya During Vicious Rakhine Scorched-Earth Campaign,” Amnesty International, December 19, 2016; “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  24. “21,000 Rohingya Muslims Flee to Bangladesh to Escape Persecution in Myanmar,” Ludovica Iaccino, The International Business Times, December 6, 2016.↩︎

  25. “Rohingya Crisis: Finding out the Truth about Arsa Militants,” Jonathan Head, BBC, October 11, 2017.↩︎

  26. “Myanmar: New evidence reveals Rohingya armed group massacred scores in Rakhine State,” Amnesty International, May 22, 2018.↩︎

  27. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  28. “Rohingya Crisis—A Summary of Findings from Six Pooled Surveys,” Médecins Sans Frontières, December 9, 2017.↩︎

  29. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  30. “Rohingya Crisis—A Summary of Findings from Six Pooled Surveys,” Médecins Sans Frontières, December 9, 2017.↩︎

  31. “Crimes Against Humanity in Myanmar,” Amnesty International, May 15, 2019 (dated PDF version).↩︎

  32. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  33. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  34. “UN Human Rights Chief Points to ‘Textbook Example of Ethnic Cleansing’ in Myanmar,” UN News, September 11, 2017.↩︎

  35. “World Court Rejects Myanmar Objections to Genocide Case,” Human Rights Watch, July 22, 2022.↩︎

  36. “Genocide, Crimes Against Humanity and Ethnic Cleansing of Rohingya in Burma,” Antony Blinken, US Department of State, March 21, 2022; “Country Case Studies: Burma,” United States Holocaust Memorial Museum, undated resource.↩︎

  37. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  38. “U.N. investigators cite Facebook role in Myanmar crisis,” Reuters, March 12, 2018.↩︎

  39. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  40. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  41. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  42. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018.↩︎

  43. “How Facebook and Google Fund Global Misinformation,” Karen Hao, The MIT Technology Review, November 20, 2021; “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  44. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021; “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  45. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  46. “Hatebook: Why Facebook is losing the war on hate speech in Myanmar,” Reuters, August 15, 2018.↩︎

  47. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  48. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  49. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  50. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  51. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  52. “‘If It’s on the Internet It Must Be Right’: an Interview With Myanmar ICT for Development Organisation on the Use of the Internet and Social Media in Myanmar,” Rainer Einzenberger, Advances in Southeast Asian Studies (ASEAS), formerly the Austrian Journal of South-East Asian Studies, December 30, 2016.↩︎

  53. “How Facebook and Google Fund Global Misinformation,” Karen Hao, The MIT Technology Review, November 20, 2021. Karen Hao is so good on all of this, btw. One of the best.↩︎

  54. “How Facebook and Google Fund Global Misinformation,” Karen Hao, The MIT Technology Review, November 20, 2021; “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  55. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

https://erinkissane.com/meta-in-myanmar-part-ii-the-crisis
Meta in Myanmar, Part I: The Setup

“Technology is like a bomb in Myanmar.” —Kyaw Kyaw, frontman of Burmese punk band Rebel Riot 1

Back in early July, I started working on a quick series of posts about online structures of refuge and exposure. In a draft of what I meant to be the second post in the series, I tried to write a tight two or three paragraphs about the role Meta played in the genocide of the Rohingya people in Myanmar, and why it made me dubious about Threads. Over time, those two or three paragraphs turned into a long summary, then a detailed timeline, then an unholy hybrid of blog post and research paper.

What I learned in the process was so starkly awful that I finally set the whole series aside for a while until I could do a lot more reading and write something more substantial. Nearly three months later, I’m ready to share my notes.

Here’s a necessary personal disclosure: I’ve never trusted Facebook, mostly because I’ve been around tech for a long time and everything I’ve ever learned about the company looked like a red flag. Like, I’m on the record swearing about it.

But once I started to really dig in, what I learned was so much gnarlier and grosser and more devastating than what I’d assumed. The harms Meta passively and actively fueled destroyed or ended hundreds of thousands of lives that might have been yours or mine, but for accidents of birth. I say “hundreds of thousands” because “millions” sounds unbelievable, but by the end of my research I came to believe that the actual number is very, very large.

To make sense of it, I had to go back, reset my assumptions, and try to build up a detailed, factual understanding of what happened in this one tiny slice of the world’s experience with Meta. The risks and harms in Myanmar—and their connection to Meta’s platform—are meticulously documented. And if you’re willing to spend time in the documents, it’s not that hard to piece together what happened.

I started down this path—in this series, on this site, over this whole year—because I want to help make better technologies and systems in service of a better world. And I think the only way to make better things is to thoroughly understand what’s happened so far. Put another way, I want to base decisions on transparently sourced facts and cautiously reasoned analysis, not on assumptions or vibes—mine or anyone else’s.

What I want to promise you, my imaginary reader, is that I’ve approached this with as much care and precision as I can. I cite a lot of documentation from humanitarian organizations and many well-sourced media reports, and also a bunch of internal Meta documentation. What I’m after is maybe something like a cultural-technical incident report. I hope it helps.

This is the first of four posts in the series. Thank you for reading.

I’ve put notes about terms (Meta vs. Facebook, Myanmar vs. Burma, etc.) and sources and warnings in a separate meta-post. If you’re someone who likes to know about that kind of thing, you might want to pop that post open in a tab. If you spot typos or inaccuracies, I’m reachable at erin at incisive.nu and appreciate all bug reports.

Content warning: I’m marking some sections and some cited sources with individual warnings, but this is a story with a genocide at its heart, so be aware of that going in. I’ve also included a small selection of hateful messages in this post and throughout the series so that it’s clear what we’re talking about when we talk about “hate speech.” Some contain words that, used in specific contexts, constitute slurs. I put some notes about all of that in the meta post as well.

This is a story about tech

Even if you never read any further, know this: Facebook played what the lead investigator on the UN Human Rights Council’s Independent International Fact-Finding Mission on Myanmar (hereafter just “the UN Mission”) called a “determining role” in the bloody emergence of what would become the genocide of the Rohingya people in Myanmar.2

From far away, I think Meta’s role in the Rohingya crisis can feel blurry and debatable—it was content moderation fuckups, right? In a country they weren’t paying much attention to? Unethical and probably negligent, but come on, what tech company isn’t, at some point?

Plus, Meta has popped up in the press and before Congress to admit that they fucked up and have concrete plans to do better. They lost sleep over it, said Adam Mosseri, the person in charge of Facebook’s News Feed, back in March of 2018.

By that point in 2018, Myanmar’s military had murdered thousands of Rohingya people, including babies and children, and beaten, raped, tortured, starved, and imprisoned thousands more. About three-quarters of a million Rohingya had fled Myanmar to live in huge, disease-infested refugee camps in Bangladesh.

And Meta?

By that point, Meta had been receiving detailed and increasingly desperate warnings about Facebook’s role as an accelerant of genocidal propaganda in Myanmar for six years.


My big hope for the internet is that we handle the shit we need to handle to make sturdier, less poisoned/poisonous ways to connect and collaborate in the gnarly-looking decades ahead. I think that’s not just possible, but that it’s our responsibility to work toward it. Also, I’m pretty sure that despite the best intentions and the most transparent processes, we risk doing enormous harm if we don’t learn from the past. (Maybe even if we do.)

This series is for anyone who, like me—and despite everything not good about the tech world—has found themselves periodically heartened and sustained by open technology projects, online communities, and ways of being together even when we’re far apart.

And the thing I want you, and all of us, to remember about the sudden flowering of the internet in Myanmar in the 2010s is that in the beginning, it was incredibly welcome and so filled with hope.

Welcome to the internet

After decades of crushing state repression, a more democratic regime came into power in Myanmar in 2011 and gradually relaxed restrictions on the internet, and on speech more generally. Journalist and tech researcher Faine Greenwood (they/them) is working in Southeast Asia at the time, and gets caught up in the spirit of the moment, despite their skepticism about the benefits of the internet:

I’d connected with a Myanmar NGO dedicated to digital inclusion, and through them, I got a chance to meet and interview a number of brilliant and extremely online Burmese people, all of them brimming with long-suppressed, almost giddy, optimism about their country’s technological future.

It was hard for me not to share their enthusiasm, their massive relief at finally getting out from under the jackboot of a military regime that had tried to lock them away from the rest of their world for as long as they could remember. I came away from speaking with them with a warm, happy feeling about how online communication maybe, just maybe, really did have the power to unfuck the world.3

And online communication was coming in fast, as the price of SIM cards, which had been controlled by the ruling junta, dropped from the equivalent of $2,000 USD in 2009, to $250 in 2012, to $1.50 in 2014.4

Mobile adoption explodes, from less than one quarter of a percent of the population in 2011 to more than 90% in 2017.5 And smartphone use and internet uptake spike with the mobile revolution instead of dragging along behind—a 2017 Bloomberg article does some flavor reporting to contextualize those numbers:

Thiri Thant Mon, owner of a small investment bank in the city, says she still remembers how magazines from the outside world used to arrive weeks late because censors needed time to comb through them.

“Suddenly because we’re on internet,” she said, “people realize what the rest of the world looks like. Now it’s like everybody on the street is talking about Trump. A few years ago, nobody knew what was happening in the next town.”6

In 2015, writer and photographer Craig Mod is working with a team doing internet ethnography in rural Myanmar on behalf of an organization that designs and builds hardware—including modern farming implements—to improve the lives of farmers, who make up the majority of Myanmar’s populace. (Disclosure: Craig and I have halfass-known each other for going on a couple of decades, and I’m a longtime fan of his work.)

Mod’s essay about his work is clearly the product of a sensitive eye and a nerd’s delight in the way new technology twines up with the unevenly distributed realities of daily life:

The village still lacks electricity although they’ve pooled funds and a dozen newly planted metal power poles dot the fields, waiting to be wired up. Through our interpreter I ask, Where do you charge? Farmer Number Ten points to a car battery hanging in the corner onto which familiar USB wires are spliced.

The tech is all over the place, “cheap but capable,” and Craig wonders if the one-smartphone-per-human result would make Nicholas Negroponte happy. There are no iPhones, no credit cards, no data plans: “Everyone buys top-up from top-up shops, scratches off complex serial numbers printed in a small font, types them with special network codes into their phone dialers in a way that feels steampunkish, like they’re divining data. They feel each megabyte.”7


This is the point in the conversation when it becomes impossible not to talk about Facebook. Because in Myanmar, even back in 2012 and 2013, being online meant being on Facebook.

I’d read about Facebook’s superdominance in Amnesty International’s reporting, in long stories from Reuters and Wired and the NYT and the Guardian, in books, even in UN reports. But Mod’s notes bring to life both Facebook’s pervasiveness and the way Burmese people actually used it:

We ask about apps. The farmer uses [chat app] Viber and Facebook. He says he chats with a few friends on Facebook but mainly people he doesn’t know. Most of his Facebook friends are strangers. He tells us his brother installed the app for him, and set up his account. He doesn’t know the email address that was used. He gets most of his news from Facebook. The election looms and he loves the political updates.

Farmer Number Ten tells us he used to use radio for news but no more. He says he hasn’t turned the radio on in years. Other news apps—like one called TZ—use too much data. He’s data conscious. He uses Facebook mostly at night when the internet is fastest, and cheapest.

And speaking with the proprietor of a cellphone shop:

Facebook is the most popular app, he says. Nine out of ten people who come into the shop want Facebook. Ten months ago SIM prices dropped, data prices dropped, interest in Facebook jumped. I take note. Only half the people who come into the shop already have a Facebook account. The other half don’t know how to make one. I do that for them, he says. I am the account maker.

And what about other apps? He mentions a news app called TZ. Once popular, now less so. He brushes his hand aside and says it’s too data hungry. Everyone is data sensitive he says and reiterates: Facebook. Nobody needs a special app for their interests. Just search for your interest on Facebook. Facebook is the Internet.8

But how did Facebook get to be the internet?

In the early years, there’s a stripped-down version of the app kicking around that uses less data than competitors’. But also, when Myanmar’s government opens up to two foreign telecom services, the stronger of the two, Norway’s Telenor, “zero-rates” Facebook.9 Essentially, zero-rating is a selective subsidy: customers aren’t charged for the data they use on some parts of the internet, but are charged for everything else. For all of Telenor’s customers, using Facebook is free.
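In billing terms, zero-rating is just a carve-out in the carrier’s metering logic. A minimal sketch, with a hypothetical host list and tariff rather than Telenor’s actual values:

```python
# Destinations the carrier has agreed to subsidize (illustrative).
ZERO_RATED_HOSTS = {"facebook.com", "fbcdn.net"}
PRICE_PER_MB = 0.01  # hypothetical metered tariff, in USD

def data_charge(host: str, megabytes: float) -> float:
    """Meter all traffic except traffic to zero-rated hosts."""
    if host in ZERO_RATED_HOSTS:
        return 0.0
    return megabytes * PRICE_PER_MB

print(data_charge("facebook.com", 50.0))      # 0.0 — Facebook is "free"
print(data_charge("example-news.com", 50.0))  # 0.5 — the open web costs money
```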

I want to briefly flip ahead to something Meta whistleblower Frances Haugen says in a 2021 interview:

Facebook bought the privilege of being the internet for the majority of languages in the world. It subsidized people’s use of its own platform, and said “Hey, you can use anything you want, but you’ll have to pay for that—or you can use our platform for free.” As a result, for the majority of languages in the world, 80 or 90% of all the content that exists in those languages is on Facebook.10

That this explosion of connectivity presented dangers as well as freedoms is immediately clear to civil society orgs in Myanmar, for two main reasons: The first is the state of comparative innocence with which the vast majority of Burmese people approach the internet. The second is that the political situation in Myanmar is a powderkeg at best.

There’s a third thing, though most people don’t know it yet, which is that Meta’s decision-making about Myanmar reflects no willingness to adjust for the first two. To get to that, we need to start at the beginning.

The blogger and the monk

A tiny canned history of modern Myanmar might go like this: In a series of East-India-Company–entangled conflicts, the British Empire took control of Myanmar, then called Burma. From the beginning of the wars in 1824 to 1948, Britain ruled Burma, with a big rupture during WWII when Japan invaded Burma.

Thant Myint-U’s The River of Lost Footsteps: Histories of Burma (FSG, 2006) is a very readable history of Myanmar.

After the war, the Burmese government the British left behind was too weak to withstand the combination of vigorous civil conflict with ethnic minorities on the frontiers and a 1962 coup by military leaders.

Beginning in 1962, Burma—renamed Myanmar in English by the ruling junta, but the two names come from the same root—is under incredibly tight military rule. Beginning in 2011, the junta relaxes their grip, and Myanmar begins a precarious transition to a more democratic form of government. Note: All through this entire period, Myanmar’s military, the Tatmadaw, is also waging a 70-year civil war with armed insurgencies associated with several different ethnic minorities.

Starting the year after Myanmar’s first quasi-democratic general election in 2010, the Burmese government begins granting mass amnesties to the country’s many political prisoners. Hundreds of people—journalists, activists, artists, religious leaders, and many more—are released over the next few years.

In the mass amnesty granted in January of 2012, two of the many prisoners released will go on to found organizations that play major and opposing roles in escalating crises of communal and military violence that Myanmar’s entry to the internet will fuel.

One is Nay Phone Latt, an early blogger and digital rights activist, who was jailed in 2008, basically for blogging about the 2007 Saffron Revolution demonstrations. Nay Phone Latt will go on to co-found MIDO, an organization dedicated to helping Myanmar’s ordinary citizens reap the benefits of the newly available internet. Over the next several years, MIDO will come up repeatedly in the intense struggle against online hate campaigns, especially on Facebook.11

The second is Ashin Wirathu, a Buddhist monk jailed in 2003 for sermons inciting violence against Myanmar’s Muslim communities.12 Wirathu will go on to digitize Myanmar’s hardline extremist Buddhist movement, which will play a major role in the coming waves of anti-Muslim violence—also especially on Facebook.


In 2013, Faine Greenwood goes back to Myanmar to write about the country’s first Internet Freedom Forum, “a gathering dedicated to helping Myanmar’s people take advantage of the new, liberated Internet.” Faine writes about the heady vibes:

Nay Phone Latt spoke at the conference, and so did a number of the other brilliant young Burmese tech enthusiasts I’d met before. The mood was still buoyantly optimistic as we circulated from one Post-It note-filled brainstorming session to the next, as we drank tea, discussed Internet freedom regulations and online privacy.

And yet, I could detect a slight edge in the air, a certain trepidation that had grown, mutated into new forms, in the few months since I’d been away […] During the conference, we talked about how hateful talk about the Rohingya was starting to pop up on Facebook, about how it was casting an ominous shadow over the good things about helping more people get online.13

The edge in the air and the ominous shadow aren’t just vague feelings—they’re connected to a surge in communal violence in the same year as that critical mass amnesty, and to a parallel rise in online hate speech.

MIDO co-founder and program director Htaike Htaike Aung spoke about the online dynamics she and her colleagues encountered in the early days:

Unlike in countries where people gradually got used to the Internet and learnt how to find good content, thus learning what is bad content, for Myanmar this hype went straight up and people did not have the time to reflect on what the Internet is actually about. This perception can be summarized in the phrase: “Okay, if it’s on the Internet it must be right”. This really is dangerous, particularly if there are people who are using the Internet for the wrong agenda and propaganda.14

In 2014, Nay Phone Latt explains what he’s seeing and why he’s worried about it:

When we advocate for free speech, reducing hate speech is included.… Speech calling for hitting or killing someone is hate speech, and can spread hate among people and is a risk for society… It is the wrong use of freedom of speech. I am worried about that because it is not only spreading on social media but also by some writers and [Buddhist] monks who are spreading hate speech publicly. […]

If people hate each other, a place will not be safe to live. I worry about that most for our society. In some places, although they are not fighting, hate exists within their heart because they have poured poison into their heart for a long time [through hate speech]. It can explode in anytime.15

So what were Htaike Htaike Aung and Nay Phone Latt and their colleagues seeing that made them so worried? Here are some Facebook comments from all the way back in 2011, sparked by the BBC’s audacity in…referring to Rohingya people as an ethnic group that exists in Myanmar at all. The BBC’s offending infographic had been up for a year before the BBC’s Burmese Facebook page was flooded with comments:

Kick out all Muslim Kalar [Rohingyas, South Asians/Indians] from Burma. If this doesn’t work, then kill them to death. It’s time for Arakan to unite with each other.

Don’t assume that I won’t sharpen my knife. I am ready to make it sharp for the sake of protecting our nation, religion and races against those Bengali cheaters.

F@#$-ing Kalar, we will slap your face with shoes and cut your heads. Don’t criticize the god with little of what you know. We will set you on fire to death and turn the mosques into wholesales/retail pork markets…16

“Kalar,” when used to refer to Rohingya people and other Muslims in Myanmar, is a slur with implications of ugliness, dark skin, and/or foreign ancestry.

The New Mandala article about the controversy includes memes mocking BBC journalist Anna Jones using vintage 4chan trollface visuals, if that helps situate things for you.

So this is the landscape already in place before Nay Phone Latt was released from prison in 2012. And things don’t get any better as Facebook/internet adoption spikes.

We know now that behind the scenes, Burmese activists and organizations—and concerned westerners—spent this period desperately trying to get Facebook to act on the rapidly rising surge of anti-Rohingya hate speech. But neither those efforts nor public educational/digital literacy campaigns can keep up with what’s happening online.

The Rohingya

I need to make a brief detour, about the Rohingya. Here’s Médecins Sans Frontières, from their reference page about the Rohingya refugee crisis:

The Rohingya are a stateless ethnic group, the majority of whom are Muslim, who have lived for centuries in the majority Buddhist Myanmar, mostly in the country’s north, in Rakhine state. However, Myanmar authorities contest this; they claim the Rohingya are Bengali immigrants who came to Myanmar in the 20th century.

Described by the United Nations in 2013 as one of the most persecuted minorities in the world, the Rohingya are denied citizenship under Myanmar law. 17

I’ve relied heavily on Carlos Sardiña Galache’s The Burmese Labyrinth (Verso, 2020) and Thant Myint-U’s The Hidden History of Burma (WW Norton, 2019) to build out my own understanding of the Rohingya crisis and the history leading up to it, and I highly recommend both books.

It’s hard to overstate how contested those basic facts are within Myanmar itself, where successive governments have rejected any recognition of Rohingya existence, let alone legitimacy, referring to them instead as illegal “Bengali” immigrants.18 (The Rohingya Culture Center in Chicago has published a concise timeline of Rohingya history, and it makes a good, quick orientation.19)

Going deeper, things get immediately, bogglingly complicated. My background is inadequate to the task of explaining the entangled ethnic, religious, and political histories of Myanmar in even just Rakhine (formerly Arakan) State, where the Rohingya crisis takes place. What you need to know to understand this series is mostly just that Rohingya people do in fact exist and have long lived under crushing restrictions. Also that Myanmar’s Buddhist political mainstream has long been concerned with racial purity and security. And openly so: the official motto of the Ministry of Immigration and Population is: “The earth will not swallow a race to extinction but another race will.”20

The rumors and the killings (2012)

In this section, I draw heavily from the findings of the UN Independent International Fact-Finding Mission on Myanmar. I want to put a caution here in the main text that the findings document is an analysis of atrocities, so heads up, it’s very rough reading.

Content warning: Violence against men, women, and children.

On the 28th of May, 2012, a Buddhist Rakhine woman called Ma Thida Htwe is killed in a village in Rakhine State. The next day, a newspaper reports that she was raped and murdered by “kalars” and calls the killing “the worst homicide case in Myanmar.” Almost immediately, graphic photos of her body begin circulating online. 21

The findings of the UN Mission note that although Ma Thida Htwe was clearly murdered, both the rape allegation and the ethnic origin of the suspected culprits remain in doubt, and that, “In the following days and weeks, it was mainly the rape allegation, more than the murder, which was used to incite violence and hatred against the Rohingya.”22

For the people who’ve been working to whip up anti-Rohingya hatred and violence, the crime against Ma Thida Htwe is a perfect opportunity—and they use it, immediately, to suggest that it’s merely the leading edge of a coming wave of attacks by Rohingya terrorists. Four days after Ma Thida Htwe’s death, on June 1st, Zaw Htay, the spokesman of the President of Myanmar, posts a statement on Facebook warning that “Rohingya terrorists” are “crossing the border into Myanmar with weapons.” He goes on:

We don’t want to hear any humanitarian or human rights excuses. We don’t want to hear your moral superiority, or so-called peace and loving kindness.… Our ethnic people are in constant fear in their own land. I feel very bitter about this. This is our country. This is our land.23

On June 3rd, five days after the Rakhine woman’s murder, a nationalist Buddhist group hands out leaflets in a village in southern Rakhine State stating that Muslims are assaulting Buddhist women. A bus carrying ten Rohingya men attempts to pass through the village that same day, and the villagers haul the men out, beat them to death, and destroy the bus. A few days later, Rohingya people in a nearby town riot, killing Buddhist villagers and burning homes. 24

The information landscape in Myanmar is so unstable that accounts of any given incident conflict, often in major ways. Sai Latt’s post I link to refers to nine Rohingya men and a woman—other accounts refer to ten Muslim men and one Buddhist killed by mistake. I’m using the UN Mission’s carefully investigated accounts as the closest thing to ground truths, but few details—especially in media accounts—should be taken as absolute facts.

In an article published on the academic regional analysis site New Mandala only seven days after the Rohingya men are killed in Rakhine State, Burmese PhD candidate Sai Latt writes:

What actually triggered public anger? It may have been racial profiling by the ethnic Arakan news agency, Narinjara… When the rape incident took place, the agency published news identifying the accused with their Islamic faith. In its Burmese language news, the incident was presented as if Muslims [read aliens] were threatening local people, and now they have raped and brutally killed a woman. The words—Muslims, Kalar, Islam—were repeatedly used in its news reports. The news spread and people started talking about it in terms of Kalar and “Lu Myo Char [i.e. different race/people or alien] insulting our woman”.

Interestingly, Naranjara and Facebook users started talking about the accused rapists as Kalars even when the backgrounds of the accused were still unclear.25

In the wave of military and communal violence that follows, members of both Rakhine (the dominant Buddhist ethnicity in Rakhine State) and Rohingya communities commit murder and arson, but the brunt of the attacks falls on the Rohingya. The UN Mission’s findings describe house-burning, looting, “extrajudicial and indiscriminate killings, including of women, children and elderly people,” and the “mass arbitrary arrests” and torture of Rohingya people by soldiers and police. More than 100,000 people, most of them Rohingya, are forced from their homes.26

The next wave of violence comes in October and shows evidence of coordinated planning. According to Human Rights Watch, the fall attacks were “organized, incited, and committed by local Arakanese political party operatives, the Buddhist monkhood, and ordinary Arakanese, at times directly supported by state security forces.” The attacks included mass killings of Rohingya men, women, and children, whose villages and homes were burned to the ground, sometimes in simultaneous attacks across geographically distant communities.

In the worst attack that October, police and soldiers in Myanmar’s military, the Tatmadaw, preemptively confiscate sticks and other rudimentary weapons from Rohingya villagers, then stand by and watch while a Rakhine mob murders at least 70 Rohingya people over the course of a single day.

Human Rights Watch reports that 28 children are hacked to death in the attack, including 13 children under the age of 5.27

Every spark is more likely to turn into a fire

According to many of Amnesty International’s interviews with Rohingya refugees, the 2012 episodes of communal violence mark the turning point of Myanmar’s slide into intense anti-Rohingya rhetoric, persecution, and, ultimately, genocide. Mohamed Ayas, a Rohingya schoolteacher and refugee, puts it this way:

We used to live together peacefully alongside the other ethnic groups in Myanmar. Their intentions were good to the Rohingya, but the government was against us. The public used to follow their religious leaders, so when the religious leaders and government started spreading hate speech on Facebook, the minds of the people changed.28

In a short 2013 documentary, journalist and filmmaker Aela Callan interviews Richard Horsey, a Myanmar-based political analyst with decades of experience in the country and (now, at least) advisor to The International Crisis Group.

Horsey’s take on the situation in Myanmar accords with the Amnesty interviews:

We’ve seen violence in previous decades by Buddhist populations against Muslim populations, but what’s new is that this information is readily available and transmissible. People are using Facebook and mobile phones, and so you get a much greater resonation every time there’s an issue. Every time there’s a spark, it’s much more likely to turn into a fire.29

In the months between the June and October waves of violence in Myanmar, dehumanizing and violence-inciting anti-Rohingya messages continue to circulate on Burmese-language Facebook. Western observers who fly in to conduct investigations emphasize town-hall meetings and pamphlets spreading hateful and violent rhetoric, but—while noting that it was also surging online—write off the internet as a major influence on the events because internet access remains unusual in rural Rakhine State.30

The very early and very strong concerns of Burmese observers and activists I’ve cited above—Htaike Htaike Aung, Nay Phone Latt, and Sai Latt—about hate speech on Facebook specifically suggest to me that it’s possible that western observers may have underestimated the role of secondhand transmission of internet-circulated ideas, including printed copies of internet propaganda, which are cited as a vector in an interview with another MIDO co-founder, Phyu Phyu Thi. (This exact dynamic—the printed internet—shows up in Sheera Frenkel’s 2016 BuzzFeed News article, which cites “print magazines called Facebook and The Internet.”)31

In any case, online and off, hardline Buddhist monks, members of the government, and what appear to be ordinary people are all echoing a few distinct ideas: They claim that the Rohingya are not a real ethnic group, but illegal “Bengali” immigrants; subhuman animals who “outbreed” Buddhists; indistinguishable from terrorists; an immediate danger to the sexual purity and safety of Buddhist women and to Buddhist Myanmar as a whole. 32

Sai Latt also cites a Facebook account for exiled Burmese film director Cho Tu Zal with 5,000 friends and more than 2,000 subscribers as a major source of anti-Rohingya propaganda—this is in fact the same guy who coordinated the Facebook campaign against the BBC in 2011.

Sai Latt writes in June 2012 that the anti-Rohingya hate campaign is “a public and transnational movement orchestrated openly on social media websites,” which includes thousands of people from the Burmese diaspora:

The campaign is so severe that such comments and postings have been littered on thousands of Facebook walls and pages. Thousands of Burmese-speaking Facebook users are exposed to them every single day. Many Facebook groups such as “Kalar Beheading Gang” appeared one after another. While it is possible to report abuse to Facebook, there were few Burmese Muslim Facebook users who would report to Facebook. Even when they did successfully, new groups keep appearing frequently.

That Facebook Page, “Kalar Beheading Gang,” shows up in the international press as well. It features “an illustration of a grim reaper with an Islamic symbol on its robe on a blood-spattered background,” and a stream of graphic photos presented as evidence of Rohingya atrocities. By the time the Hindustan Times article goes to press on June 14, 2012, the Page already has more than 500 likes.33

When Sai Latt says that what he’s seeing on Facebook is an orchestrated movement, he’s right. Remember Wirathu, the Buddhist monk released in the same amnesty as MIDO’s Nay Phone Latt? Soon after his release from prison, he’s one of the people doing the orchestrating.

“A faster way to spread the messages”

“We are being raped in every town, being sexually harassed in every town, being ganged up on and bullied in every town.… In every town, there is a crude and savage Muslim majority.” —Wirathu, 2013 34

Ashin Wirathu is very, very good at Facebook. He spoke with BuzzFeed News about his history on the platform:

[Wirathu’s] first account was small, he said, and almost immediately deleted by Facebook moderators who wrote that it violated their community standards. The second had 5,000 friends and grew so quickly he could no longer accept new requests. So he started a new page and hired two full-time employees who now update the site hourly.

“I have a Facebook account with 190,000 followers and a news Facebook page. The internet and Facebook are very useful and important to spread my messages,” he said.

On the dozens of Facebook pages he runs out of a dedicated office, Wirathu has called for the boycott of Muslim businesses, and for Muslims to be expelled from Myanmar.35

In a 2013 video interview with the Global Post, Wirathu speaks placidly about the possibility of interfaith problem-solving and multi-ethnic peace—and then, with no transition or change of tone, about Muslims “devouring the Burmese people, destroying Buddhist and Buddhist order, forcefully taking action to establish Myanmar as an Islamic country, and forcefully implementing them.” To these absolute fantasies—Muslims make up maybe five percent of Myanmar’s population at this time—Wirathu adds that, “Muslims are like the African carp. They breed quickly and they are very violent and they eat their own kind.”36

In 2013, Time puts Wirathu on the cover of its international editions as “The Face of Buddhist Terror.” Time reporter Hannah Beech quotes him in the cover story explaining that 90% of Muslims in Myanmar are “radical bad people,” and that Barack Obama is “tainted by black Muslim blood.”37

International perception matters to Myanmar’s Buddhist establishment. In Callan’s documentary, you can see protesting monks carrying a printed banner denouncing Beech, with her photograph crossed out.

Cover of Time's international editions for July 1, 2013. The photo shows Ashin Wirathu, a Burmese man wearing the crimson robe of a Buddhist monk. His head is shaved and his expression is thoughtful. The cover text reads The Face of Buddhist Terror, in progressively large type, and underneath, How militant monks are fueling anti-Muslim violence in Asia, by Hannah Beech.

Photo by Adam Dean/Panos

It would be hard to overstate Wirathu’s influence in Myanmar during the years leading up to the beginning of the Rohingya genocide. One NGO writes that, “It is noteworthy that almost every major outbreak of communal violence since October 2012 in Rakhine state has been preceded by a 969-sponsored preaching tour in the area, usually by [Wirathu] himself.”38 In fact, the same thing is true of the 2012 violence, though from a little further away: Wirathu led a rally in Mandalay in September of 2012 in support of a proposal by then-President Thein Sein to deport the Rohingya en masse. The worst of the violence in Rakhine State took place the very next month.39

I think it’s useful to note that the 969/Ma Ba Tha movement Wirathu is associated with is also a force for practical and immediate good among Myanmar’s Buddhist majority—they run or support lots of Buddhist religious education for children, legal aid, donation drives, and relief campaigns. And these things, along with the high esteem in which monks are generally held in Myanmar, make them seem exceptionally credible.40

Wirathu himself is beloved not only for his incendiary sermons that “defend” Buddhism by demonizing Muslims, but also for his direct involvement in community support. If the project of supporting and defending Buddhism is extended to eliminating the Rohingya, maybe those two kinds of action blur. In Callan’s documentary, Wirathu holds office hours to help ordinary community members solve problems and disputes. His attention flicks constantly between the petitioners kneeling in front of him and the smartphone cradled in his hand.

A few years later, Wirathu—by then running a Facebook empire with nearly 200,000 followers, dozens of Pages, and a full-time staff—sums up the situation: “If the internet had not come to [Myanmar], not many people would know my opinion and messages like now,” Wirathu told BuzzFeed News, adding that he had always written books and delivered sermons but that the “internet is a faster way to spread the messages.”41

A quick aside

I don’t want to spend much time on my process, but I do want to make a note about the provenance of a document that would otherwise have unclear sourcing.

Several accounts of Meta’s role in Myanmar’s persecution of the Rohingya include partial lists of the many warnings people on the ground in Burma—and concerned international human-rights watchers—delivered to Meta executives between 2012 and 2018. Taking on the implications of that litany of warnings and missed chances is what gave me the first taste of the sinking dread that has characterized my experience of this project.

Because of citations in Amnesty International’s big Meta/Myanmar report, I knew there was a “briefing deck” floating around somewhere that had more details, but the closest reference I could find led to a dead URL that Archive.org didn’t have, and emailing around didn’t turn anything up.

Eventually, I found a version of it on a French timeline-making site in the form of a heavily annotated timeline, and cross-referenced that version with tweets from Htaike Htaike Aung and Victoire Rio, the women credited in the Amnesty report with making the deck. The URL they link to in the tweets is the dead one, but the preview image in their tweets is an exact match for the timeline I found. I cite this timeline a lot below.

Edited Oct. 16 2023 to add: The original material is now back online in one form at the Myanmar Internet Project website, and someone affiliated with the project also passed on a PDF form with attached case studies, which I’ve archived via Document Cloud: “Facebook and the Rohingya Crisis.”

The warnings

The timeline showing Burmese civil-society and digital-rights leaders' attempts to warn Meta about what it was enabling in Myanmar.The timeline showing Burmese civil-society and digital-rights leaders' attempts to warn Meta about what it was enabling in Myanmar.

Screenshot of the timeline discussed above.

All the way back in November of 2012, Htaike Htaike Aung, MIDO’s program director, speaks with Meta’s Director of Global Public Policy and Policy Director for Europe about the proliferation of hate speech on Facebook at a roundtable in Azerbaijan. Eleven months later, in October 2013, Htaike Htaike Aung brings up her concerns again in the context of “rising inter-communal tensions” at a roundtable discussion in Indonesia attended by three Meta policy executives including the Director of Global Public Policy.42

At the same time, activists and researchers from MIDO and Yangon-based tech-accelerator Phandeeyar “followed up over email to ask for ways to get Facebook to review problematic content and address emergency escalations.” Facebook didn’t respond. MIDO and Phandeeyar staff start doing “targeted research on hate speech.”43

In November of 2013, Aela Callan (whose documentary work I’ve cited) meets with Facebook’s VP of Communications and Public Policy, Eliot Schrage, to warn about anti-Rohingya hate speech—and the fake accounts promoting it—on Facebook.44 After her visit, Meta puts Callan in touch with Internet.org—the global initiative that would eventually cement Facebook’s hegemony in Myanmar, among other places—and with Facebook’s bullying-focused “Compassion Team,” but not with anyone inside Facebook who could actually help.45

In March 2014, four months before deadly, Facebook-juiced riots break out in Mandalay, Htaike Htaike Aung travels to Menlo Park with Aela Callan after RightsCon Silicon Valley. The two women meet with members of Meta’s Compassion Team to try once again to get Facebook to pay attention to the threat its service poses in Myanmar.46

In the same month, Susan Benesch, head of the Dangerous Speech Project, coordinates a briefing call for Meta, attended by half a dozen Meta employees, including Arturo Bejar, one of Facebook’s heads of engineering.

On the call, Matt Schissler, a Myanmar-based human-rights specialist, delivers a grim assessment describing dehumanizing messages, faked photos, and misinformation spreading on Facebook. In An Ugly Truth: Inside Facebook’s Battle for Domination, their 2021 book about Facebook’s dysfunction, reporters Sheera Frenkel and Cecilia Kang describe Schissler’s presentation:

Toward the end of the meeting, Schissler gave a stark recounting of how Facebook was hosting dangerous Islamophobia. He detailed the dehumanizing and disturbing language people were using in posts and the doctored photos and misinformation being spread widely.47

Reuters notes that one of the examples Schissler gives Meta is a Burmese Facebook Page called, “We will genocide all of the Muslims and feed them to the dogs.”48

None of this seems to get through to the Meta employees on the line, who are interested in…cyberbullying. Frenkel and Kang write that the Meta employees on the call “believed that the same set of tools they used to stop a high school senior from intimidating an incoming freshman could be used to stop Buddhist monks in Myanmar.”49

Aela Callan later tells Wired that hate speech seemed to be a “low priority” for Facebook, and that the situation in Myanmar, “was seen as a connectivity opportunity rather than a big pressing problem.”50

Then comes Mandalay.

“Facebook said nothing”

I don’t have space to tell the whole story of the 2014 Mandalay violence here—Timothy McLaughlin’s Wired article is good on it—but it has all the elements civil society watchdogs have been worried about: Two innocent Muslim men are falsely accused of raping a Buddhist woman, sensationalist news coverage explodes, and Ashin Wirathu exploits the story to further his cause. Riots break out, and all kinds of wild shit starts circulating on Facebook.

This is exactly the kind of situation Htaike Htaike Aung, Callan, Schissler, and others had been warning Meta about—but when it actually arrives, no one at Meta picks up the phone.

As the violence in Mandalay worsens, the head of Deloitte in Myanmar gets an urgent call. Zaw Htay, the official spokesman of Myanmar’s president, needs help reaching Meta. The Deloitte executive works into the night trying to get someone to respond, but never gets through. On the third day of rioting, the government gets tired of waiting and blocks access to Facebook in Mandalay. The block works—the riots die down. And the Deloitte guy’s inbox suddenly fills with emails from Meta staffers wanting to know why Facebook has been blocked.51

The government wasn’t the only group trying to get through. A few months before, a handful of Facebook employees—including members of policy, legal, and comms teams—had formed a private Facebook group, to allow some of the civil-society experts from Myanmar to flag problems to Facebook staff directly.

With false reports and calls to violence beginning to spread on Facebook in Mandalay, Burmese activists and westerners in the private group try to alert Facebook. They get no response, even after the violence turns deadly. A member of the group later tells Frenkel and Kang:

When it came to answering our messages about the riots, Facebook said nothing. When it came to the internet being shut down, and people losing Facebook, suddenly, they are returning messages right away. It showed where their priorities are.52

A couple of weeks after the violence in Mandalay in 2014, Facebook’s Director of Policy for the Asia-Pacific region, Mia Garlick, travels to Myanmar for the first time. In a panel discussion, Garlick discusses Meta’s policies and promises to…speed up the translation of Facebook’s Community Standards (its basic content rules) into Burmese. That single piece of translation work—Facebook’s main offering in response to internationally significant ethnic violence about which the company has been warned for two years—takes Meta fourteen months to accomplish and relies, in the end, on the Phandeeyar folks in the private Facebook group.53

Also in 2014, Meta agrees to localize their reporting tool for hate speech and other objectionable content. Meta employees work with MIDO and other Burmese civil-society people to translate and fine-tune the tool, and they launch it by the end of 2014.

There’s just one problem: As the Burmese civil society people in the private Facebook group finally learn, Facebook has a single Burmese-speaking moderator—a contractor based in Dublin—to review everything that comes in. The Burmese-language reporting tool is, as Htaike Htaike Aung and Victoire Rio put it in their timeline, “a road to nowhere.”54

Yet more warnings

Heading into 2015, the warnings keep coming:

February 2015: Susan Benesch, founder and director of the Dangerous Speech Project at the Berkman Klein Center (that’s the Berkman Center as was, for my fellow olds) gives a presentation called “The Dangerous Side of Language” during Compassion Day at Facebook. According to this legal filing for a class-action suit against Meta on behalf of the Rohingya, the presentation explains “how anti-Rohingya speech is being disseminated by Facebook.”55

March 2015: Matt Schissler travels to Menlo Park to meet with Facebook employees in person. Reuters reports that he “gave a talk at Facebook’s California headquarters about new media, particularly Facebook, and anti-Muslim violence in Myanmar. More than a dozen Facebook employees attended, he said.” Frenkel and Kang describe his presentation as documenting “the seriousness of what was happening in Myanmar: hate speech on Facebook was leading to real-world violence in the country, and it was getting people killed.”56

After his meeting at Meta, Schissler has lunch with a few Meta employees, and one of them asks him whether he thinks Facebook could contribute to a genocide in Myanmar, to which Schissler responds, “Absolutely.” Afterwards, Schissler tells Frenkel and Kang, one Facebook employee loudly remarks, “He can’t be serious. That’s just not possible.”57

May 2015: Another expert comes to Menlo Park to warn Facebook about the dangerous dynamics it was feeding in Myanmar: David Madden, founder of the Yangon-based technology accelerator Phandeeyar. In an interview with Amnesty International, Madden reports that “those of us who were working on these issues in Myanmar had a sense that people in Facebook didn’t appreciate the nature of the political situation in the country.”58

In his meeting with Meta, Madden discusses specific examples of dangerous content and draws a direct analogy between radio news in Rwanda and Meta’s role in Myanmar, “meaning it would be the platform through which hate speech was spread and incitements to violence were made.”59

PBS Frontline interviewed David Madden in 2018 for its documentary, The Facebook Dilemma; that interview is a good introduction to the dynamics in play.

September 2015: Mia Garlick, Facebook’s Director of Public Policy, Asia-Pacific, comes to Myanmar to finally launch Facebook’s Burmese-language Community Standards. Phandeeyar puts together a group of “15+ civil-society leaders from across Myanmar” to brief Garlick “on specific incidents and actors.”60

In an interview with Amnesty International, Victoire Rio reports that several of these leaders specifically tell Garlick that Facebook’s Community Standards aren’t being enforced in Myanmar.61

Which, of course they aren’t: Reuters reports that in 2015, Facebook employed a total of two (2) Burmese-speaking moderators, expanding to four (4) by the year’s end.62


To sum up a little bit: by the end of 2015, Meta knew—as much as any organization can be said to know—that both international civil-society experts and the government of Myanmar believed Facebook had a significant role in the 2014 Mandalay riots.

And they’d been warned, over and over, that multiple dedicated civil-society and human-rights organizations believed that Facebook was worsening ethnic conflict.

They’d been shown example after example of dehumanizing posts and comments calling for mass murder, even explicitly calling for genocide. And David Madden had told Meta staff to their faces that Facebook might well play the role in Myanmar that radio played in Rwanda. Nothing was subtle.

After all that, Meta decided not to dramatically scale up moderation capacity, not to permanently ban the known worst actors, and not to make fundamental product-design changes to reliably deviralize posts inciting hatred and violence.

Instead, in 2016, it was time to get way more people in Myanmar onto Facebook.

That’s next, in Part II: The Crisis.


  1. Weaponizing Social Media: The Rohingya Crisis, CBS News, February 26, 2018 (YouTube lists the date when the documentary was uploaded, not when it was initially broadcast). Also available at the CBS News site, where you will have to watch multiple ads for Pfizer. Content warning: First-person accounts of atrocities.↩︎

  2. “U.N. Investigators Cite Facebook Role in Myanmar Crisis,” Reuters, March 12, 2018.↩︎

  3. “Facebook Destroys Everything: Part 1,” Faine Greenwood, August 8, 2023.↩︎

  4. “Myanmar Cuts the Price of a SIM Card by 99%,” Sam Petulla, Quartz, April 3, 2013.↩︎

  5. “Digital 2011: Myanmar,” Simon Kemp, Datareportal, December 28, 2011; “Digital 2017: Myanmar,” Simon Kemp, Datareportal, February 1, 2017, both cited in Amnesty International’s report, “The Social Atrocity,” cited elsewhere.↩︎

  6. “The Unprecedented Explosion of Smartphones in Myanmar,” Philip Heijmans with assistance from Jason Clenfield, Grace Huang, Masumi Suga, and Mai Ngoc Chau, Bloomberg, July 31, 2012.↩︎

  7. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  8. “The Facebook-Loving Farmers of Myanmar,” Craig Mod, The Atlantic, January 21, 2016.↩︎

  9. “Telenor Brings Zero-Rated Wikipedia and Facebook Access to Myanmar,” LIRNEasia, November 3, 2014.↩︎

  10. “Frances Haugen DW News Interview,” DW Global Media Forum, November 10, 2021 (Haugen’s sentence begins at about 7:39 in the video).↩︎

  11. “Imprisoned Burmese blogger Nay Phone Latt to receive top PEN honor,” PEN America, April 14, 2010; “‘I Am Really Worried About Our Country’s Future’,” Jessica Mudditt, Mizzima, September 21, 2014 (archived at Burma Link).↩︎

  12. “‘It Only Takes One Terrorist’: The Buddhist Monk Who Reviles Myanmar’s Muslims,” Marella Oppenheim, The Guardian, May 12, 2017.↩︎

  13. “Facebook Destroys Everything: Part 1,” Faine Greenwood, August 8, 2023.↩︎

  14. “‘If It’s on the Internet It Must Be Right’: An Interview With Myanmar ICT for Development Organisation on the Use of the Internet and Social Media in Myanmar,” Rainer Einzenberger, Advances in Southeast Asian Studies (ASEAS), formerly the Austrian Journal of South-East Asian Studies, December 30, 2016.↩︎

  15. “‘Hate Speech Pours Poison Into the Heart’,” San Yamin Aung, The Irrawaddy, April 9, 2014.↩︎

  16. “BBC under fire on Rohingyas,” Sai Latt, New Mandala, November 3, 2011.↩︎

  17. “The Rohingya: Persecuted Across Time and Place,” Médecins Sans Frontières, resource undated.↩︎

  18. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 460 — the report landing page includes summaries, metadata, and infographics. Content warnings apply throughout.↩︎

  19. “History of the Rohingya,” Rohingya Culture Center, resource undated.↩︎

  20. “‘An Open Prison without End’: Myanmar’s Mass Detention of Rohingya in Rakhine State,” Human Rights Watch, October 8, 2020.↩︎

  21. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 704. Content warnings apply throughout.↩︎

  22. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 625. Content warnings apply throughout.↩︎

  23. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 705. Content warnings apply throughout.↩︎

  24. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 627. The New Light of Myanmar article cited in the findings is archived at https://www.burmalibrary.org/docs13/NLM2012-06-05.pdf. Content warnings apply throughout.↩︎

  25. “Intolerance, Islam and the Internet in Burma,” Sai Latt, New Mandala, June 10, 2012.↩︎

  26. “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018, paragraph 628. Content warnings apply throughout.↩︎

  27. “‘All You Can Do is Pray’: Crimes Against Humanity and Ethnic Cleansing of Rohingya Muslims in Burma’s Arakan State,” Human Rights Watch, April 2013 (PDF version). Content warnings apply throughout.↩︎

  28. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  29. Myanmar: Freedom from Hate, Aela Callan, Al Jazeera English, September 5, 2013.↩︎

  30. “Symposium on Myanmar and International Indifference: The Rohingya Genocide — Warning Signs, International Inaction, and Missteps,” Matthew Smith, OpinioJuris, August 29, 2022; “‘All You Can Do is Pray’: Crimes Against Humanity and Ethnic Cleansing of Rohingya Muslims in Burma’s Arakan State,” Human Rights Watch, April 2013 (PDF version); “Myanmar Conflict Alert: Preventing Communal Bloodshed and Building Better Relations,” International Crisis Group, June 12, 2012.↩︎

  31. “‘If It’s on the Internet It Must Be Right’: An Interview With Myanmar ICT for Development Organisation on the Use of the Internet and Social Media in Myanmar,” Rainer Einzenberger, Advances in Southeast Asian Studies (ASEAS), formerly the Austrian Journal of South-East Asian Studies, December 30, 2016; “This Is What Happens When Millions Of People Suddenly Get The Internet,” Sheera Frenkel, BuzzFeed News, November 20, 2016.↩︎

  32. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022; “Report of the Detailed Findings of the Independent International Fact-Finding Mission on Myanmar,” United Nations Human Rights Council, September 17, 2018 (many paragraphs throughout the report).↩︎

  33. “Web users vent rage over Myanmar unrest,” Hindustan Times/AFP, June 14, 2012; “How the Word ‘Kalar’ is a Depressing Indictment of Myanmar Society,” Myanmar Mix, April 22, 2020.↩︎

  34. “Buddhist Monk Uses Racism and Rumours to Spread Hatred in Burma,” Kate Hodal, The Guardian, April 18, 2013.↩︎

  35. “This Is What Happens When Millions Of People Suddenly Get The Internet,” Sheera Frenkel, BuzzFeed News, November 20, 2016.↩︎

  36. “A Burmese Journey Q&A With Ashin Wirathu,” Tin Aung Kyaw, Global Post, June 2013 (archived at the Rohingya Video News YouTube channel); note: the YouTube comment section is now filled with people cheering for Wirathu’s anti-Islamic positions. (Related article about this interview, from which I draw the “African carp” translation.)↩︎

  37. “The Face of Buddhist Terror,” Hannah Beech, Time, July 1, 2013.↩︎

  38. “Hidden Hands Behind Communal Violence in Myanmar: Case Study of the Mandalay Riots, Justice Trust Policy Report,” Justice Trust, March 2015 (archived at Burma Library).↩︎

  39. “‘It Only Takes One Terrorist’: The Buddhist Monk Who Reviles Myanmar’s Muslims,” Marella Oppenheim, The Guardian, May 12, 2017.↩︎

  40. “Misunderstanding Myanmar’s Ma Ba Tha,” Matthew J Walton, Asia Times, June 9, 2017.↩︎

  41. “This Is What Happens When Millions Of People Suddenly Get The Internet,” Sheera Frenkel, BuzzFeed News, November 20, 2016.↩︎

  42. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022, p. 51; “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  43. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  44. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  45. “Hatebook: Why Facebook Is Losing the War on Hate Speech in Myanmar,” Reuters, August 15, 2018. This piece is part of Reuters’ extraordinary, Pulitzer-winning coverage of the Rohingya genocide, Myanmar Burning, which landed two Burmese Reuters reporters in prison in Myanmar for documenting a mass killing of Rohingya people; both men were released in 2019. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  46. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  47. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021—I read the ebook edition, so I have no page numbers to cite, but this comes from Chapter Nine: “Think Before You Share.”↩︎

  48. Weaponizing Social Media: The Rohingya Crisis, CBS News, February 26, 2018 (YouTube lists the date when the documentary was uploaded, not when it was initially broadcast). Also available at the CBS News site, where you will have to watch multiple ads for Pfizer. Content warning: First-person accounts of atrocities.↩︎

  49. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  50. “Hatebook: Why Facebook Is Losing the War on Hate Speech in Myanmar,” Reuters, August 15, 2018; “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar,” Timothy McLaughlin, Wired, July 6, 2018.↩︎

  51. “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar,” Timothy McLaughlin, Wired, July 6, 2018.↩︎

  52. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  53. “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar,” Timothy McLaughlin, Wired, July 6, 2018; An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  54. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022; “How Facebook’s Rise Fueled Chaos and Confusion in Myanmar,” Timothy McLaughlin, Wired, July 6, 2018; An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  55. “Class-action complaint and demand for jury trial,” filed in the Superior Court of the State of California for the County of San Mateo by Rafey S. Balabanian and Richard Fields of Edelson PC and Fields PLLC, The Guardian, December 6, 2021. (I got the date from Reuters.)↩︎

  56. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  57. An Ugly Truth: Inside Facebook’s Battle for Domination, Sheera Frenkel and Cecilia Kang, HarperCollins, July 13, 2021 (Chapter Nine: “Think Before You Share”).↩︎

  58. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  59. The Facebook Dilemma, Frontline, October 2018; the full interview with David Madden is available as well.↩︎

  60. “Rohingya and Facebook,” Htaike Htaike Aung, Victoire Rio, possibly others, August 2022.↩︎

  61. “The Social Atrocity: Meta and the Right to Remedy for the Rohingya,” Amnesty International, September 29, 2022.↩︎

  62. “Hatebook: Why Facebook Is Losing the War on Hate Speech in Myanmar,” Reuters, August 15, 2018.↩︎

https://erinkissane.com/meta-in-myanmar-part-i-the-setup
Meta Meta

I wrote so many posts that my posts needed a post. Sorry about that.

All four posts are now up: Part I, Part II, Part III, and Part IV.

The series this note accompanies is a work of synthesis, in which I attempt to collect up the hundreds of pieces of the overall story of Meta’s actions (and inactions) in Myanmar before and after the peak of the genocide of the Rohingya people in 2016 and 2017, and make them intelligible to a readership of English-speaking people who want to make or use an internet that doesn’t replicate these dynamics.

If you spot typos or inaccuracies, I’m reachable at erin at incisive.nu and appreciate all bug reports. Someday I may even fix up the font loading on this website.

Limitations (mine)

I’m the furthest thing from a scholar of Burmese history or politics, and although my background has included working with atrocity materials in other contexts, I’m also not a genocide scholar. My aim with this series is to give mostly-western makers and users of social technology a sense of one US-based technology company’s role in what happened to just one group of people in just one place over a very limited time range. As enormously long as this series is, I’ve left out vastly more than I’ve put in, especially about the complexities of both colonial and post-colonial violence and repression in Myanmar.

If you want to get a sense of Myanmar’s history both generally and as it relates to oppression of the Rohingya and other ethnic minorities, I really like Carlos Sardiña Galache’s The Burmese Labyrinth (Verso, 2020) and Thant Myint-U’s The Hidden History of Burma (WW Norton, 2019).

Content warnings

This is a series about a US technology company’s role in a genocide, so it describes the events leading up to and constituting the genocide.

I’ve tried to be really careful about what makes it into the text, and I haven’t included any graphic descriptions or images—or any images of traumatized fleeing refugees, for that matter.

Within the text of my posts, I warn readers before getting into particularly rough sections dealing with events in Myanmar. But I do say, in general terms, what happened, which was about as bad as anything can be. The sources I cite range from high-level and abstract to literal catalogs of atrocities—and even some of the regular newspaper reporting includes images that, if you look at them closely, may haunt you. If you decide to get into the source documentation and you don’t have experience working with atrocity material, you might want to keep the Tetris intervention handy.

I also quote examples of anti-Rohingya and anti-Islamic hate speech throughout the series, some of which include terms that qualify as slurs and some of which are just horrible things to say.

Terms

It’s been really hard to figure out what to call things in a story about Meta’s actions in Myanmar—or should that be Facebook’s actions in Burma?

When I talk about the platform, I say “Facebook,” but when I talk about the company that runs Facebook, I say “Meta,” no matter what they were called at the point in history I’m looking at. There’s a good argument to be made for just calling them Facebook all the way through.

Is it silly to talk about Meta in 2013, when the company was still called Facebook? Probably! But so are corporate rebranding exercises. Also, we do this with humans all the time, and I’m pretty committed to calling people and entities what they ask to be called.

Let this be my tiny part in making sure that the construct called Meta can’t escape responsibility for what the construct called Facebook did.

In American English, Myanmar is pronounced Myanma with a long a—the r is a little relic of British colonial history.

When I talk about Myanmar, I say “Myanmar,” not “Burma,” because that’s what the country’s government calls the country in English and what most other countries and international bodies use. There are some linguistic complexities and tangled history, and many people from Myanmar still use “Burma” in English-language text. As a white American, I think using colonial-period names in my work on principle would be a little weird, so I’m erring on the side of “use the name the entity requests.”

My own government refuses to write “Myanmar” in official documents because the Burmese government that switched from using “Burma” to “Myanmar” as the official English name was a military government that took control of the country in a coup in 1962. Given the United States’ history of supporting both military governments and coups elsewhere in the world, I’m…not going to say anything else about that.

Things get trickier, though, in referring to people from Myanmar, and to the most commonly spoken language. Technically, “Myanma” is the official term for people who live in Myanmar, inclusive of both the Bamar ethnic majority and the officially recognized ethnic minorities, but in all my reading for this series, I encountered that exact usage about three times, total. Similarly, the most commonly spoken language in Myanmar is officially “the Myanmar language,” but in practice, the usage is vanishingly rare, so I use “Burmese” in both cases.

More place and ethnicity names: The city formerly known as Rangoon is now Yangon. Arakan State and the Arakan ethnicity are now Rakhine State and the Rakhine ethnicity, but my sources are all over the place, so some quoted material still uses the old terms.

Personal names: Burmese-language personal names can be tricky for western readers—there are multiple formulations of many names, and a person’s name isn’t divided into given name and surname/family name. I’ve given the most common formulation of Burmese people’s names, and provide them in full each time they come up, because shortening them to a faux-surname would be weird.

The “Ashin” in “Ashin Wirathu” is an honorific used for monks and certain other honored figures, so I’ve dropped it after the first reference. In quotations and cited sources, you’ll also see “U Wirathu,” and “U” here is an honorific use of “uncle.” (Relatedly, the “Daw” in “Daw Aung San Suu Kyi” is the “auntie” honorific.)

Sources and citations

I’ve cited sources very heavily in this series because I want to provide a highly explorable web of information, and because I think the subject demands attention to sourcing. And because all of this stuff came from other people and they deserve credit. I’ve done a lot of plain old hyperlinking as well, but the footnotes in each post provide the most complete set of references. I use a roughly Chicago-style citation method, but with titles first instead of authors because a lot of my references lack designated authors and also it’s my website and I can do whatever I want.

I quote from a lot of US- and UK-based media sources. When I do, I lean on in-depth investigative reporting rather than more general explainery coverage. I also rely heavily on the work of the UN Human Rights Council’s Independent International Fact-Finding Mission on Myanmar, and on reporting and research from Amnesty International, Human Rights Watch, and Médecins Sans Frontières.

A sidenote: One of the things I did to get up to speed for this series was watch a lot of video, including interviews with physically scarred Rohingya women and children in refugee camps, as well as footage of incidents of violence in Rakhine State. I also read detailed reports on specific incidents in Myanmar, because I wanted to make sure that I understood the nature of the evidence before making claims. I’m not going to discuss any of that except to say that what I saw and read provided enough evidence for me to feel very secure citing the sources I’ve cited.

Especially in the third and fourth posts in this series, I use a lot of material from internal Meta documents disclosed by whistleblowers. Meta responds to all whistleblower disclosures with blanket denials, and sometimes frankly absurd statements meant to act, I think, as radar chaff. Having read a bunch of these responses, I don’t think they actually add anything at all to the conversation, so I rarely quote from them, but I do link to some in footnotes.

Also, I’ve included views from as many Burmese civil-society and digital-rights people as I could cram in, along with first-hand perspectives from several western journalists and tech people who spent time in Myanmar. I’ve tried to rely on local, Burmese-speaking sources for ground truths about the interaction of the internet with communal and state violence, and include western perspectives for backup and a sense of the vibes.

The Rohingya

I deal with pieces of the history of the Rohingya at various points throughout the series, so here I will confine myself to terminology and the basic facts of Rohingya existence.

As far as I can tell from listening to and watching interviews with native Rohingya speakers, the “ing” in “Rohingya” is pronounced by Rohingya people as it is in “bring,” sometimes with a tiny tap on the g, as in “Long Island” (stereotypical), but subtler. “Rohinja” is a Burmese-language pronunciation adopted into British English.

The most deeply researched accounts I’ve read about the history of the Rohingya in Rakhine State agree that there have been population flows across what is now the border between Myanmar and Bangladesh for centuries, and that the pre-2016 Rohingya population in Rakhine has roots reflecting both longstanding local populations and periodic inflows from what is now Bangladesh. These inflows included people “originally” from what is now Rakhine State in Myanmar, who’d been forced to migrate north and later returned, as well as people “originally” from the area now known as Bangladesh.

Claims that the Rohingya are “illegal Bengali immigrants” were widely accepted in Myanmar before the 2021 coup, but they’re best understood as political maneuvering intended to demonize an ethnic group, not statements of fact.

While we’re on the subject, it’s my opinion that borders are not a coherent way of describing ancestry and belonging, or of parceling out the obligation to treat people like humans.

Changelog

Friday, September 29: Fixed some typos in this post, expanded content warnings in this post and Part I to explicitly include hate speech and ethnic slurs, and refined the language in the section in this post that deals with using “Burma” vs. “Myanmar.” I’ve also reorganized some of the reading recommendations in this post and tried to make explicit some of the many things that I’m necessarily leaving out. Thank you to readers for raising flags about these issues.

Saturday, September 30: Posted Part II, fixed a couple of typos throughout and one inexplicable word substitution in Part II (a random insert of “Malaysian,” which shouldn’t be in the post at all). Thank you to everyone who reported!

Friday, October 6: Posted Part III. Fixed several typos in the other two parts and changed some ambiguous phrasing to clarify Arturo Bejar’s position in the Facebook org chart. Thank you to all reporters of errors! I’m sure I’ve made several more.

Friday, October 13: Posted Part IV. Fixed yet more typos.

Monday, October 16: Posted the series home and acknowledgements post. Fixed yet more typos, probably.

https://erinkissane.com/meta-meta
Mastodon Is Easy and Fun Except When It Isn’t

Ed. note: This post is from July 2023. It circulates every month or so on social networks and a lot of people think it’s new, probably because I run dates at the bottom of each post instead of just under the title. Until I get around to changing that, this is your temporal location-finder.

After my last long post, I got into some frustrating conversations, among them one in which an open-source guy repeatedly scoffed at the idea of being able to learn anything useful from people on other, less ideologically correct networks. Instead of telling him to go fuck himself, I went to talk about fedi experiences with people on the very impure Bluesky, where I had seen people casually talking about Mastodon being confusing and weird.

My purpose in gathering this informal, conversational feedback is to bring voices into the “how should Mastodon be” conversation that don’t otherwise get much attention—which I do because I hope it will help designers and developers and community leaders who genuinely want Mastodon to work for more kinds of people refine their understanding of the problem space.

what I did

I posted a question on Bluesky (link requires a login until the site comes out of closed beta) for people who had tried/used Mastodon and bounced off, asking what had led them to slow down or leave. I got about 500 replies, which I pulled out of the API as a JSON file by tweaking a bash script a nice stranger wrote up on the spot when I asked about JSON export, and then extracted just the content of the replies themselves, with no names/usernames, IDs, or other metadata attached. Then I dumped everything into a spreadsheet, spent an hour or so figuring out what kind of summary categories made sense, and then spent a few more hours rapidly categorizing up to two reasons for each response that contained at least one thing I could identify as a reason. (I used to do things like this at a very large scale professionally, so I’m reasonably good at it and also aware that this is super-subjective work.)
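If you want to see the shape of that extraction step in one place, here’s a minimal sketch in Python rather than bash. It isn’t the stranger’s script: the AT-URI is a made-up placeholder, and it assumes the current public Bluesky AppView endpoint, which wasn’t available without a login back in the closed-beta days.

```python
# Minimal sketch: fetch a Bluesky post's reply tree and keep only the
# text of each reply, with no names, usernames, IDs, or other metadata.
import csv
import requests

APPVIEW = "https://public.api.bsky.app/xrpc/app.bsky.feed.getPostThread"
POST_URI = "at://did:plc:example/app.bsky.feed.post/3xyz"  # hypothetical AT-URI

def collect_reply_text(node, out):
    """Recursively walk the nested reply tree under a thread node."""
    for reply in node.get("replies") or []:
        text = reply.get("post", {}).get("record", {}).get("text", "")
        if text:
            out.append(text)
        collect_reply_text(reply, out)

resp = requests.get(APPVIEW, params={"uri": POST_URI, "depth": 1000}, timeout=30)
resp.raise_for_status()

replies = []
collect_reply_text(resp.json()["thread"], replies)

# One reply per row, ready for the spreadsheet-and-categorize step.
with open("replies.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["reply_text"])
    writer.writerows([r] for r in replies)
```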

None of this is lab-conditions research—sorry, I meant NONE OF THIS IS LAB-CONDITIONS RESEARCH—and I hope it’s obvious that there are shaping factors at every step: I’m asking the question of people who found their way to Bluesky, which requires extra motivation during a closed beta; I heard only from people who saw my question and were motivated to answer it; I manually processed and categorized the responses.

I didn’t agonize over any of this, because my goal here isn’t to plonk down a big pristine block of research, but to offer a conversational glimpse into what real humans—who were motivated to try not one, but at least two alternatives to Twitter—actually report about their unsatisfactory experiences on Mastodon.

Lastly, I’ve intentionally done this work in a way that will, I hope, prove illegible and hostile to summary in media reports. It’s not for generalist reporters, it’s for the people doing the work of network and community building.

A note on my approach to the ~data and numbers: It would be very easy to drop a bunch of precise-looking numbers here, but that would, I think, misrepresent the work: If I say that I found at least one categorizable reason in 347 individual replies, that’s true, but it sounds reassuringly sciency. The truth is more like “of the roughly 500 replies I got, about 350 offered reasons I could easily parse out.” So that’s the kind of language I’ll be using. Also, I feel like quoting short excerpts from people’s public responses is fine, but sharing out the dataset, such as it is, would be weird for several reasons, even though people with a Bluesky login can follow the same steps I did, if they want.

got yelled at, felt bad

The most common—but usually not the only—response, cited as a primary or secondary reason in about 75 replies, had to do with feeling unwelcome, being scolded, and getting lectured. Some people mentioned that they tried Mastodon during a rush of people out of Twitter and got what they perceived as a hostile response.

About half of the people whose primary or secondary reasons fit into this category talked about content warnings, and most of those responses pointed to what they perceived as unreasonable—or in several cases anti-trans or racist—expectations for content warnings. Several mentioned that they got scolded for insufficient content warnings by people who weren’t on their instance. Others said that their fear of unintentionally breaking CW expectations or other unwritten rules of fedi made them too anxious to post, or made posting feel like work.

Excerpts:

  • Feels like you need to have memorized robert’s rules of the internet to post, and the way apparently cherished longtimers get hostile to new people
  • i wanted to post about anti-trans legislation, but the non-US people would immediately complain that US politics needed to be CWed because it “wasn’t relevant”
  • I don’t know where all the many rules for posting are documented for each instance, you definitely aren’t presented them in the account creation flow, and it seems like you have to learn them by getting bitched at
  • Constantly being told I was somewhat dim because I didn’t understand how to do things or what the unwritten rules were.
  • I posted a request for accounts to follow, the usual sort of thing, who do you like, who is interesting, etc. What I got was a series of TED Talks about how people like me were everything that was wrong with social media.
  • sooooooo much anxiety around posting. i was constantly second-guessing what needed to be hidden behind a CW
  • the fact that even on a science server, we were being badgered to put bug + reptile stuff behind a CW when many of our online presences are literally built around making these maligned animals seem cool and friendly was the last straw for me

What I take from this: There obviously are unwelcoming, scoldy people on Mastodon, because those people are everywhere. I think some of the scolding—and less hostile but sometimes overwhelming rules/norms explanation—is harder to deal with on Mastodon than other places because the people doing the scolding/explaining believe they have the true network norms on their side. Realistically, cross-instance attempts to push people to CW non-extreme content are a no-go at scale and punish the most sensitive and anxious new users the most. Within most instances, more explicit rules presented in visible and friendly ways would probably help a lot.

In my experience, building cultural norms into the tooling is much more effective and less alienating than chiding. The norm of using alt-text for images would be best supported by having official and third-party tools prompt for missing alt-text—and offer contextual help for what makes good alt text—right in the image upload feature. Similarly, instances with unusual CW norms would probably benefit from having cues built into their instance’s implementation of the core Mastodon software so that posters could easily see a list of desired CWs (and rationales) from the posting interface itself, though that wouldn’t help those using third-party apps. The culture side of onboarding is also an area that can benefit from some automation, as with bots on Slack or Discord that do onboarding via DM and taggable bots that explain core concepts on demand.
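To make the alt-text example concrete, here’s a hypothetical sketch of a prompt-in-the-tooling guard. It isn’t a feature of any real client, and the instance URL and token below are placeholders, but the description field really is how Mastodon’s media API carries alt text.

```python
# Hypothetical client-side guard: refuse to upload an image without alt
# text, and explain why, instead of letting the scolding happen after the fact.
import requests

INSTANCE = "https://example.social"   # placeholder instance
TOKEN = "placeholder-access-token"    # placeholder OAuth token

def upload_with_alt_text(image_path, description):
    if not description or not description.strip():
        # The norm lives in the tool: fail early, with contextual help.
        raise ValueError(
            f"{image_path}: please add alt text. Good alt text briefly "
            "describes what's in the image and why it matters to the post."
        )
    with open(image_path, "rb") as f:
        resp = requests.post(
            f"{INSTANCE}/api/v2/media",
            headers={"Authorization": f"Bearer {TOKEN}"},
            files={"file": f},
            data={"description": description},  # Mastodon's alt-text field
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["id"]  # media ID to attach to a status
```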

couldn’t find people or interests, people didn’t stay

A cluster of related reasons came in at #2, poor discoverability/difficulty finding people and topics to follow, #4, missing specific interests or communities/could only find tech, and #6, felt empty/never got momentum. I am treating each group as distinct because I think they’re about subtly but importantly different things, but if I combined them, they’d easily be the largest group of all.

(It’s probably a measure of the overall technical/UX sophistication of the responding group that several people explicitly referred to “discoverability.”)

People in the “poor discoverability” group wrote about frustration with Mastodon features: how hard it was to find people and topics they wanted to follow, including friends they believed to already be on Mastodon. As secondary reasons, they frequently said they were confused or put off by the difficulty of the cross-server following process. Several people wrote about how much they missed the positive aspects of having an algorithm help bring new voices and ideas into their feeds, including those that they wouldn’t have discovered on their own, but had come to greatly value. Another group wrote about limited or non-functional search as a blocker for finding people, and also for locating topics—especially news events or specialist conversations.

The “missing specific interests or communities” group wrote about not finding lasting community—that the people and communities they valued most on Twitter either didn’t make it to Mastodon at all, or didn’t stick, or they couldn’t find them, leaving their social world still largely concentrated on Twitter even when they themselves made the move. Several also noted that tech conversations were easy to find on Mastodon, but other interests were much less so.

The “felt empty” group made an effort to get onto Mastodon, and in some cases even brought people over with them, but found themselves mostly talking into a void after a few weeks when their friends bailed for networks that better met their needs.

Excerpts:

  • For me, it was that Mastodon seemed to actively discourage discoverability. One of the things I loved most about Twitter was the way it could throw things in front of me that I never would have even thought to go look for on my own.
  • I feel like every time I try to follow a conversation there back to learn more about the poster I end up in a weirdly alien space, like the grocery store on the other side of town that’s laid out backwards
  • It seemed like it needed to pick a crowd, rather than discover new ones. Fewer chances at serendipity.
  • I also remember trying to follow instructions people posted about “simple” ways to migrate over your Twitter follows/Lists, & none of them really worked for me, & I got frustrated at how much time I was spending just trying to get things set up there so I wasn’t completely starting from scratch
  • Mastodon was too isolating. And the rules made me feel like the worst poster.
  • Quote-replies from good people giving funny/great information is how I decide are important follows.
  • Discoverability/self promo is limited & typing out 6 hashtags is annoying. # being in the actual posts clutter things (unlike cohost/insta).
  • Difficulty in finding new follows was high up for me. But even once I got that figured out, it was a pain to add new people to follow if they weren’t on my instance.
  • finding people you want to follow is hard enough. Adding in the fact that if you joined the wrong server you might never find them? Made it seem not worth the trouble.
  • I couldn’t really figure out how to find people and who was seeing what I posted; I was never sure if I had full visibility into that
  • the chief problem was an inability to find a) my friends from Twitter who were already there and b) new friends who had similar interests, both due to the bad search function
  • Just didn’t seem active enough to feel worth learning all the ins and outs.

What I take from this: Mastodon would be much friendlier and easier to use for more people if there were obvious, easy ways to follow friends of friends (without the copy-paste-search-follow dance). Beyond making that easier, Mastodon could highlight it during onboarding.

Making it easy to search for and find and follow people—those who haven’t opted out of being found—would also be a tremendous help in letting people rebuild their networks not just when coming from elsewhere, but in the not-that-rare case of instances crashing, shutting down, or being defederated into oblivion, especially since automatic migration doesn’t always work as intended.
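For what it’s worth, the raw machinery for this already exists at the API level; it’s the path to it that’s rough. Here’s a sketch of the copy-paste-search-follow dance reduced to two real Mastodon endpoints, with a placeholder instance and token.

```python
# Sketch of the dance as two API calls: resolve a remote handle via
# search, then follow the resolved account from your home instance.
import requests

INSTANCE = "https://example.social"  # placeholder home instance
HEADERS = {"Authorization": "Bearer placeholder-access-token"}

def follow_remote(handle):
    """handle looks like '@someone@other.server'."""
    search = requests.get(
        f"{INSTANCE}/api/v2/search",
        headers=HEADERS,
        params={"q": handle, "type": "accounts", "resolve": "true"},
        timeout=30,
    )
    search.raise_for_status()
    accounts = search.json()["accounts"]
    if not accounts:
        raise LookupError(f"couldn't resolve {handle} from {INSTANCE}")
    # Follow the account by the ID *your* server assigns it.
    account_id = accounts[0]["id"]
    follow = requests.post(
        f"{INSTANCE}/api/v1/accounts/{account_id}/follow",
        headers=HEADERS,
        timeout=30,
    )
    follow.raise_for_status()
    return follow.json()  # a Relationship object
```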

Missing replies also feed into this problem, by encouraging duplicate responses instead of helping people find their way into interesting conversations and notes—a social pattern that several people mentioned as something they prize on more conversationally fluent networks.

too confusing, too much work, too intimidating

The next big cluster includes group #3, too confusing/too much work getting started, group #5, felt siloed/federation worked badly, and group #7, instance selection was too hard/intimidating.

A lot of people in the responding group found the process of picking an instance, signing up, and getting set up genuinely confusing. Others understood how to do it, but found it to be too time-consuming, or too much work for an uncertain return on investment. A couple of people had so many technical errors getting signed up to their first instance that they gave up. Several mentioned that they were so flooded with tips, guides, and instructions for doing Mastodon right that it seemed even more confusing.

Many found the idea and practice of federation to be confusing, offputting, or hostile; they cited difficulties in selecting the “right” instance and shared stories about ending up on an obviously wrong one and then losing their posts or having migration technically fail when they moved. Several explicitly used the words “silo” or “siloed” to describe how they felt trying to find people who shared their interests and also, I think crucially, people who didn’t share special interests, but who would be interesting to follow anyway. (This is obviously intimately tied to discoverability.)

Several brought up patchwork federation and unexpected or capricious defederation. Side conversations sprang up over how difficult people found it to pigeonhole themselves into one interest or, conversely, manage multiple accounts for multiple facets of their lives.

Excerpts:

  • My Twitter friends joined various Mastodon servers that didn’t talk to each other and I gave up on trying to figure it out.
  • I’m tech savvy and have found mastodon simply opaque. I’ve set up 4 accounts, each on a different server, and don’t know how to amalgamate all the people I’m following everywhere (assuming all those servers federate with each other).
  • It was the thing where people had to make whole twitter threads just to explain how to sign up
  • the federation model is a mess and it’s impossible to use. i’ve been using computers all day every day since the 90s and mastodon makes me question whether i’m actually good at them
  • discovered I was on some kind of different continent from my friends, and could not follow them, nor they me. Immediately felt frustration and disgust and never looked back.
  • I was told picking a server didn’t matter. Then it turned out it actually mattered a great deal for discoverability. Then I’m told ‘migrating is easy’, which is just a straight up lie.
  • Just 100 tiny points of friction for little return

What I take from this: I agree with these people, and I think all fedi projects meant for a broad audience should focus on fixing these problems.

too serious, too boring, anti-fun

People in this category talked about a seriousness that precluded shitposting or goofiness, and a perceived pressure to stay on topic and be earnest at all times.

  • It felt like the LinkedIn version of Twitter - just didn’t have any fun there
  • It feels overly earnest and humorless — I don’t consider myself a particularly weird or ironic poster but I want some of those people around saying funny stuff, you know?
  • And in the occasional moments where I do feel like being a little silly & humorous, I want to be in a crowd that will accept that side of me rather than expecting a constant performance of seriousness!
  • it just didn’t have as much fun or joy as early Twitter and Bluesky
  • ultimately, I just bounced off of the culture, because it wasn’t banter-y and fun. It feels too much like eating your vegetables.

What I take from this: Honestly, I think this is the most obvious culture clash category and is less something that needs to be directly addressed and more something that will ease with both growth and improved discoverability, which will help people with compatible social styles find each other. I think the other piece of this is probably the idea of organizing people into interest-based instances, which I think is fundamentally flawed, but that’s a subject for another time.

complicated high-stakes decisions

There’s a meta conversation that is probably unavoidable, and that I’d rather have head-on than in side conversations. It’s about what we should let people have, and it shapes the discourse (and product decisions) about features like quote posts, search, and custom feeds/algorithms—things that are potentially central in addressing some of the problems people raised in their replies to my question on Bluesky.

Broadly speaking, in the landscape around and outside of the big corporate networks, there are two schools of thought about these kinds of potentially double-edged features.

The first, which I’ll call Health First, prefers to omit the features and affordances that are associated with known or potential antisocial uses. So: no quote-posts or search because they increase the attack surface afforded to griefers and nurture the viral dynamics that drive us all into a sick frenzy elsewhere. No custom algorithms because algorithms have been implemented on especially Facebook and YouTube in ways that have had massive and deeply tragic effects, including literal genocide affecting a million adults and children in Myanmar whose lives are no less real than yours or mine.

The second, which I’ll call Own Your Experience, states that people, not software, are responsible for networked harms, and places the burden of responsible use on the individual and the cultural mechanisms through which prosocial behavior is encouraged and antisocial behavior is throttled. So: yes to quote-posts and search and custom feeds, and just block or defederate anyone using them to do already banned things, like harassment or abuse or the kind of speech that, given the right conditions, ignites genocide.

A thing I think about all the time is the research showing that people would literally rather self-administer painful electrical shocks than be bored. You can make the most virtuous and intentionally non-harmful network in the world, but if it doesn’t feel alive, most people will pick something worse instead.

At their simplest, I don’t like either of these positions, though they both get some things right. The Own Your Experience school doesn’t really grapple with the genuinely terrifying dynamics of mass-scale complex systems. And I don’t think the Health First school has come to terms with the fact that in a non-authoritarian society, you can’t make people choose networks that feel like eating their vegetables over the ones that feel like candy stores. Even most people who consciously seek out ethically solid options for their online lives aren’t going to tolerate feeling isolated from most of their peers and communities, which is what happens when a network stays super niche.

From where I stand, there are no obvious or easy answers…which means that people trying to make better online spaces and tools must deal with a lot of difficult, controversial answers.

If I had to pick a way forward, I’d probably define a target like, “precisely calibrated and thoughtfully defanged implementations of double-edged affordances, grounded in user research and discussions with specialists in disinformation, extremist organizing, professional-grade abuse, emerging international norms in trust & safety, and algorithmic toxicity.”

If that sounds like the opposite of fun DIY goofing around on the cozy internet, it is. Doing human networks at mass scale isn’t a baby game, as the moral brine shrimp in charge of the big networks keep demonstrating. Running online communities comes with all kinds of legal and ethical obligations, and fediverse systems are currently on the back foot with some of the most important ones (PDF).

this post is too long, time to stop

Right now, Mastodon is an immense achievement—a janky open-source project with great intentions that has overcome highly unfavorable odds to get to this point and is experiencing both growing pains and pressure to define its future. If I were Eugen Rochko, I would die of stress.

I don’t know if Mastodon can grapple with the complexities of mass scale. Lots of people would prefer it didn’t—staying smaller and lower-profile makes it friendly to amateur experimentation and also a lot safer for people who need to evade various kinds of persecution. But if Mastodon and other fedi projects do take on the mass scale, their developers must consider the needs of people who aren’t already converts. That starts by asking a lot of questions and then listening closely and receptively to the answers you receive.

https://erinkissane.com/mastodon-is-easy-and-fun-except-when-it-isnt
Notes From a Mastodon Migration

I changed servers on Mastodon last week and I learned a lot and I have opinions.

To begin with, the best time to move from one home server to another (“migrate instances”) on Mastodon is never. The second-best time is as soon as possible, because the longer you wait, the more you stand to lose. But until you’ve been on Mastodon for a while, you aren’t going to know how well you get along with your server’s mod policy and administrators, and how your server’s reputation will affect your experience. It’s not ideal! But it’s what we have right now.

As a freshly moved person, here’s what I’ve learned, what I wish I’d known in advance, and what I hope the makers of federated systems will change. (But first, thank you to all the friends and strangers who advised me on my choice of new server. You-all are great.)

what you can move

If both your old and new home servers are running the most up-to-date version of Mastodon, you will probably be able to automatically bring over your followers and manually export and then upload your following lists, mutes and blocks, and bookmarks.

You’ll manage all of the above by exporting data from your current account, creating a new account on a new server, and then following a short but slightly hair-raising series of steps to move everything over. I very highly recommend using Nuztalgia’s excellent server migration guide to help you through this process.
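
If you want to sanity-check the export files before uploading them, a few lines of scripting go a long way. Here’s a minimal Python sketch, assuming the usual export file names (following_accounts.csv and blocked_accounts.csv) and that the account address sits in the first column; check your own export first, because the formats vary across Mastodon versions.

    # A quick sanity check on Mastodon's exported CSVs before upload.
    # Assumes the usual export file names and that the account address
    # (user@instance) is in the first column; adjust if yours differ.
    import csv

    def read_handles(path: str) -> set:
        """Collect account addresses from an export CSV, skipping any
        header row (header cells don't contain an @)."""
        handles = set()
        with open(path, newline="", encoding="utf-8") as f:
            for row in csv.reader(f):
                if row and "@" in row[0]:
                    handles.add(row[0].strip())
        return handles

    following = read_handles("following_accounts.csv")
    blocked = read_handles("blocked_accounts.csv")

    print(f"{len(following)} follows, {len(blocked)} blocks")
    # Anyone in both lists probably signals a stale entry worth fixing
    # before you import anything into the new account.
    print("in both lists:", sorted(following & blocked))

Nothing in the sketch talks to a server; it only reads files you’ve already exported.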

what you can’t move

Most guidance on moving servers includes a few warnings about what won’t move, but leaves out lots more, I think mostly because they seem too obvious to mention if you’ve already internalized the Mastodon way. I don’t think they’re too obvious to mention, and some of them surprised me very much despite my many, many years in tech.

Here’s my best crack at a list:

  • Your username: Your old one will no longer work, and your old profile will link to your new one.
  • All notifications associated with your old account: If people reply to your old threads, you won’t know it unless you open those threads and look for new replies with your eyeballs. If people mention your old username in new posts, you won’t see those, and they won’t get any notifications that they’re mentioning a decommissioned account, so there can be whole conversations in which people think they’re engaging with you that you never see.
  • Your posts: On Mastodon, when you move, you leave your old posts behind at their original URLs. (This isn’t true with all fediverse software—the promising but still small Calckey/Firefish can port in posts from old accounts.) You can export them to a local archive, but you can’t upload them to your new server. But—warning klaxon—when you export your posts, you’re literally ONLY GETTING YOUR OWN POSTS. Which means that you will never again have access to…
  • Your inbound (pseudo-)DMs and follow-locked replies: The local archive you export from your old account will not include anything people sent to you. This makes perfect sense to plenty of software developers—so much that it’s not mentioned in any of the migration docs I’ve seen—but seems to me like a massive oversight that implies a deeply peculiar understanding of what sociability and conversations are. (It may be that if you go in and bookmark your incoming messages, they show up in your bookmarks in your export, but I haven’t tested this, and it would still break the integrity of your conversations.)
  • Your followers’ lists: If any of your followers made a list (like “people I follow who post flowers” or whatever) and put you on it, you won’t be on it after you move. (Your old dead username will be. Your new live username won’t.) I know this only because someone who follows me noticed I’d gone missing from a list.

So…yes, you can move your account. The process isn’t that difficult. But even if it works well, which it doesn’t always, you lose a lot—more than I think is reasonable to ask of people who just want to hang out with their friends.

why this matters

If it weren’t so difficult to understand how to choose a server to begin with, the downsides of migration would sting less, but it is so hard to know if you’ve found the right (for many varied values of right) server until you’re already settled in—by which time you’ve built up posts and conversations you may not be delighted to lose.

Mastodon-the-company recently tried to partially solve this by nudging new people to Mastodon.social, the big server they run, but that big server is blocked by many small servers, and has a reputation for being middling on moderation, though I think that’s been improving. But this plan rests on the idea that people can get accustomed to Mastodon on Mastodon.social and then migrate to a place they like better. Which makes sense, because migration is at the center of what Mastodon promises.

The biggest selling point of federated networks like Mastodon and the broader fediverse—and, soon, Bluesky!—is that if you don’t like the way your instance is being run, you can move your account without the huge penalty of starting over. Theoretically, this means that unlike Twitter or Facebook or Instagram (or Post or Cohost or…) accounts, Mastodon accounts are much less vulnerable to the predations of negligent corporations, billionaires with bad personalities, and server admins who decide to incorporate invasive surveillance and ad technologies.

In practice, this means you’re trading being vulnerable to the whims of centralized corporate services and rich weirdos for…being vulnerable to the whims of whatever rando spins up a server, unless migration is speedy, comprehensive, and safe. That’s why it matters that migration is both clunky and surprisingly lossy.

where to, then?

Getting account portability right—in the technical architecture sense—is a heated topic across both fediverse/ActivityPub and Bluesky/AT Protocol conversations. Rather than weighing in on anyone’s shouty GitHub issues, I’m posting here—both because I’m not immersed in the protocol-level work and because I think the various system-specific conversations need cross-currents.

Here are some human/cultural things I hope orgs working on federated systems will do:

  • Prioritize, support, and fund both technical and non-technical (or marginally technical) but essential work to improve initial server-picking, including, in order of increasing difficulty:
    • developing ultra-clear but not oversimplified guidance to help potential users understand what server choice means (for them as people, not philosophically) and how to pick one,
    • collecting and publishing more—and more useful—information about servers (see the sketch after this list),
    • building reputation systems (yes! I mean it!) to help guide server choices, with appropriately sophisticated mechanisms for handling maladaptive behaviors and bad actors.
  • Put non-dev users through any proposed or existing migration process, take notes, and fix the gaps that this process reveals. For any temporarily unfixable gaps, build in high-viz unmissable documentation and warnings. (Major and permanently unfixable gaps suggest that the underlying technology isn’t there yet.)
  • Clearly and officially document the whole migration process, including what you can keep and what you will lose. (I thought before migration that this was done already, but after going through the process, it seems clear that the docs are still in the fetal stage.) Use multiple formats! Make a video.
  • Most of all: Approach the concepts of “identity,” “conversation,” and “portability” as cultural patterns that bring with them expectations and assumptions from the whole of your users’ online and offline experience, and build/fix your technical approaches accordingly.
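
On the point about collecting and publishing server information: some of it is already machine-readable. The sketch below (Python) pulls a server’s self-published metadata from Mastodon’s public /api/v1/instance endpoint. The endpoint itself is real; treat the specific field names as assumptions, since they shift between versions.

    # Fetch a server's self-description via Mastodon's public
    # /api/v1/instance endpoint. Field names vary by version, so the
    # .get() calls guard against missing keys.
    import json
    import urllib.request

    def instance_info(domain: str) -> dict:
        url = f"https://{domain}/api/v1/instance"
        with urllib.request.urlopen(url, timeout=10) as resp:
            return json.load(resp)

    info = instance_info("mastodon.social")
    print(info.get("title"))
    print((info.get("short_description") or "")[:120])

A real reputation system would obviously need much more than this; the point is just that the raw material for better server-picking guidance is already within reach.
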
daydreaming

Here’s my reward for writing up the above: Imagine orienting new federation-curious people by offering them a tour of the positive benefits federated systems offer—so not “billionaires can’t take it away” or “ad companies can’t surveil you,” but…

Constellations of trusted, independent, stable, thoughtfully moderated communities that integrate with each other and offer distinctive moderation and curation to meet widely varying needs. Places for people who want to swim in the firehose, places for people who want a gentle sanctuary, and places for people who need to move between those modes.

Easy, reassuring ways to find your way to a home server by signalling what kind of experience you want from a social network and getting back vetted choices that meet your needs and are ready to welcome you in—or drop you into the shitposting lava, if that’s what you’re after.

Portability and migration processes that maintain your connections and conversations (without requiring you to become your own IT department or tying you to data-hosting services that sell your information) when you choose to move within a system.

Easy, subscribable, transparent moderation options that allow you to layer in your own preferences on top of your chosen server’s, so you can tweak your experience as needed.
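
To make that last wish concrete, here’s a toy Python sketch of what “layering” could mean mechanically. None of these names or policy shapes belong to any real project; they’re invented for illustration.

    # A toy model of layered moderation: the server's policy is the
    # floor, a subscribed blocklist adds restrictions on top, and
    # personal overrides can relax the subscription (but not the
    # server's own defederation, since your server never receives
    # that content in the first place).

    SERVER_POLICY = {"suspend": {"trollfarm.example"}, "silence": {"noisy.example"}}
    SUBSCRIBED_LIST = {"silence": {"spam.example", "memes.example"}}
    PERSONAL = {"allow": {"memes.example"}, "silence": {"tedious.example"}}

    def effective_action(domain: str) -> str:
        if domain in SERVER_POLICY["suspend"]:
            return "suspend"  # server-level; nothing to override
        if domain in SERVER_POLICY["silence"] or domain in PERSONAL["silence"]:
            return "silence"
        if domain in SUBSCRIBED_LIST["silence"] and domain not in PERSONAL["allow"]:
            return "silence"
        return "allow"

    for d in ("trollfarm.example", "memes.example", "tedious.example", "calm.example"):
        print(d, "->", effective_action(d))
    # trollfarm.example -> suspend
    # memes.example -> allow     (my override relaxes the subscribed list)
    # tedious.example -> silence (my own addition)
    # calm.example -> allow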

If you’re watching conversations around ActivityPub and AT Protocol, you’ll know that there are significant tensions about the trade-offs required to build or refit systems to achieve these kinds of ideals. I think it’s going to take (more) years of effort to get federated systems to any of those states, and the techno-cultural work required to get there will be difficult, dizzyingly complex, and controversial, and I doubt any system will get it all right.

But my hope—and the reason I’m writing this stuff—is that as more systems and platforms come online outside the biggest central services, we’ll be able to learn from each other in moving toward better adaptation to human needs.

https://erinkissane.com/notes-from-a-mastodon-migration
The Affordance Loop

[Image: A black-and-white postcard photograph of more than a dozen men standing on tall stilts wearing berets, many in rustic sheepskin coats.] The stiltwalking French shepherds in the marshes of the Landes are incredible, and I love this postcard—which you can buy from a nice person on eBay—in particular.

Affordances, in the simplest terms, are what an object offers or provides to a specific individual at a particular moment in time. This sense originates with James and Eleanor Gibson, a pair of celebrated researchers in psychology. James Gibson lays out a wonderfully informal definition in one of his later works:

The affordances of the environment are what it offers the animal, what it provides or furnishes, either for good or ill. The verb to afford is found in the dictionary, but the noun affordance is not. I have made it up. I mean by it something that refers to both the environment and the animal in a way that no existing term does. It implies the complementarity of the animal and the environment. The antecedents of the term and the history of the concept will be treated later; for the present, let us consider examples of an affordance.

If a terrestrial surface is nearly horizontal (instead of slanted), nearly flat (instead of convex or concave), and sufficiently extended (relative to the size of the animal) and if its substance is rigid (relative to the weight of the animal), then the surface affords support. It is a surface of support, and we call it a substratum, ground, or floor. It is stand-on-able, permitting an upright posture for quadrupeds and bipeds. It is therefore walk-on-able and run-over-able. It is not sink-into-able like a surface of water or a swamp, that is, not for heavy terrestrial animals. Support for water bugs is different. —The Ecological Approach to Visual Perception, p. 127

(Support for water bugs is different!)

In UX/interface design, conversations about affordances tend to revolve around the cues designers can offer to reveal what a system actually offers the people who use it. Those conversations, paired with mass use of the online and mobile interfaces coming out of Apple, Microsoft, and Google in particular, resulted in the modern interface design vocabulary we can take for granted until we’re doing phone tech support with someone who doesn’t distinguish a webpage from a home screen from a preferences panel.

On the social internet, people who have used the biggest platforms and networks enter new ones expecting to find ~standard affordances and expecting that familiar interface cues will map to familiar affordances.

When newer systems and tools confound those expectations, people get, at best, confused. At worst, they try to walk across a solid-looking but sink-into-able surface and get stuck in a bog, while stilt-walking locals familiar with the marshy ground cluster around to chide them for acting like—well, water bugs, maybe.


Both online and off, I am comically bad at spotting secondary or subtle affordance cues. If there is a door with a horizontal bar and a sign that reads “PULL,” I am going to run full-tilt into that glass. Vertical bar and a little “push” label? I’m gonna stand in the hallway pulling like Babe the Blue Ox and wondering aloud why the door is locked—real ninety-ninth percentile performance on not getting it. This, plus my design-adjacent background, inclines me to listen to other people’s complaints about confusion even when I personally find a system easy to use.

The class of design problems I (literally) run into manifests when an object’s actual affordances don’t match the main visual/perceptual cues it offers, as Donald Norman lays out in his foundational and genuinely great Design of Everyday Things. Norman’s humanist approach to these problems was a major contributor to the emergence of user-centered design (now “human-centered” design, which is nicer). It also resulted in the doors I run into getting tagged “Norman doors” in his honor, which seems like poor thanks for a lifetime of good work.

I think this is also connected to Alexander’s theories on the way spaces that worsen conflicting desires instead of resolving them harm not only themselves, but also the systems around them.

The people who make casinos, malware, chum-box ads, and other scams use Norman door patterns intentionally, to deceive, trap, and exploit. And although people in the US-based culture I come from condemn those uses, we tend to consider accidental deceptive affordance cues merely thoughtless, much as we prefer to punish those who kill people intentionally much more harshly than those who kill by accident or negligence, even though the person is just as dead. In practice, this emphasis on intent produces cultural systems in which no one is responsible and lots of people die.

Legally, we balance our hyper-valuation of intent with the duty of care, but in daily life, duty of care gets ideologically clotheslined by personal responsibility and liability-reduction theater, like the tiny-print waivers we sign to do anything mildly risky, or the Montana school district that chose to rotate students around the classroom every 15 minutes during covid restrictions to avoid triggering quarantine rules. This orientation saturates the tech industry, which responds to more active constructions of duty of care with things like foot-dragging cookie warnings.

It’s not surprising that few commentators apply the idea of the duty of care—even informally—to social platforms and communities, but once you start assessing online conflicts and disasters through that lens, it gets hard to un-see the breaches.


In an interview with Justin Hendrix on the Tech Policy Press podcast The Sunday Show, Dr. Johnathan Flowers digs into affordances on the social internet:

…by affordances I mean the resources, the tools, the very structure of the website—things like hashtags, retweets, quote tweets, and so on. The ways that a given community forms by means of the affordances of a digital platform structures some of the nature of that community.

As André Brock notes in his book, Distributed Blackness, Black Twitter as a phenomena emerged by means of the affordances of Twitter, through how things like quote tweets, retweets and hashtags allowed Black users to engage in Black digital practices, which are the performances of Black offline culture by means of the affordances of the digital platform of Twitter. So things like call and response, playing the dozens—which are culturally-mediated practices within the Black community, to engage in information sharing, in community building—were all enabled by the features of Twitter such that Black folks could engage in digital practices that made present their identities as members of a community.

(Disclosure, I’ve very lightly repunctuated the transcript excerpts for readability, based on my repeated listenings. Flowers speaks in dense paragraphs and makes it sound easy, and I want to represent his work as well as I can here.)

This is one of several takes on affordances in the interview that I think are highly relevant, and I encourage interested readers to listen to or read it in full. In this post, I’m mostly going to talk about specific results of specific affordance—and affordance-cue—mismatches, but I want to start by nodding toward two subtler points Flowers makes. The first pulls in Sara Ahmed’s work on the way whiteness functions in the world to talk about the underlying cultural patterns that shape online spaces:

…if you have a space that is predominantly populated by white persons regardless of their other identities, if you are in a space primarily populated by white persons, the norms, the habits, the very structure of that space will take on a likeness to whiteness by virtue of how the majority of people participate in that space. As I said, Mastodon is a very white space. It is not unlike other tech spaces where whiteness is predominant. Insofar as this is the case, the norms, the habits, the affordances of the platform will inherit whiteness.

“Inheriting whiteness” here has direct bearing on the kind of norms-based conflicts that occur and recur when white Mastodon users try to discipline Black Mastodon users for discussing racist oppression—or even mentioning race. Here’s Flowers:

Insofar as the majority of the users on Mastodon are white, then they take up the kinds of ways that whiteness organizes space—including an entitlement to freedom from say, understanding one’s complicitness in racism, or freedom from engaging with experiences of racism as made present by users of color.

The last point I want to pull out of Flowers’ interview is that the underlying cultural patterns of Mastodon influence not only the affordances that get built, but also the norms governing how people are permitted to use those affordances. This shows up in what is socially required, as in “you must use content warnings for mentions of your experience of racism,” and in what is socially forbidden, as in “linking to another post is just the same as a quote-tweet and we don’t do that kind of thing here.”

So: Patterns, affordances, and norms.


Some of what’s on my mind is about Mastodon/fedi, which has lost tempo on a lot of the parts of the Twitter migration that I’ve cared most about—Black Twitter, literary Twitter, radical journalist Twitter—while its someday-maybe-rival Bluesky has imported at least some of those communities despite obvious problems.

Mastodon’s userbase remains divided on whether the network’s niche status is a good thing. Plenty of people on Mastodon are committed to making both Mastodon and the broader fediverse (the apps and services interconnecting via the ActivityPub protocol) easier for a mass audience to use and understand—but the opposing point of view is also very present, and often very heated. The view that e.g. Twitter users have proven themselves to be Nazi sympathizers who should be excluded from the fediverse is a minority view, but it shows up like clockwork. So do claims that anyone who wants greater ease of use on Mastodon is really just mad about not having a billionaire overlord to complain to—or that finding Mastodon confusing is a smokescreen for clout-chasing and influencer culture and loving capitalism.

A little further behind those complaints, you find the people making extremely forthright claims that Black people who discuss race are the real racists and don’t belong.

One of my very first exchanges in my 2022 re-attempt to get situated on Mastodon was with a scrupulously polite young man who hoped that Mastodon could repeat the successes of Gamergate in “gatekeeping” the gaming world to keep it healthy. In hindsight, this was heavyhanded foreshadowing that patterns of intentional, active, and probably coordinated exclusionary tactics were already entrenched in the fediverse—and not only on the known trollfarms and abuse hubs that even moderately well-run instances block.

There are subtler and better arguments about the way security through obscurity (non-derisive) and interface friction can provide provisional refuge for people targeted for abuse. I think people making those arguments usually have extremely valid concerns, especially but not only when they talk about fears of Meta’s upcoming federation. For today, I’ll just say that remediation work on confusing Mastodon features and flows should always either preserve the existing affordances that offer shelter and refuge to existing users or replace them with better ones that provide the same benefits. (The best way to do this would probably be to engage in large-scale user research and participatory design, but I say that about everything.)


As I initially wrote back in April, there are plenty of reasons to avoid Bluesky, the most obvious of which is that Jack Dorsey was involved in the initial funding and remains on the board. Nevertheless, I came in a little bit optimistic about the infrastructure Bluesky’s AT Protocol might be able to offer intentional communities in a federated future. Since then, things have…not been going great.

Last week on Bluesky, the company wrapped up a string of own-goals on the trust and safety side by 1.) failing to run a username denylist that included technical terms but not even the most common list of racial and other slurs and straight-up Nazi signifiers, 2.) going silent except for GitHub comments and PRs after a Black user brought obviously awful usernames to light, and 3.) eventually publishing a super-anodyne statement that their policy has always disallowed racism and harassment and reflects their values. Katherine Alejandra Cross has a good overview of the situation on Wired:

When reached for comment, Bluesky’s press office mostly repeated the language of its posted apology on the platform itself. However, unlike in their thread, Bluesky admitted a “mistake” occurred. The company added that an “incident report to increase transparency and accountability” is in the making and will be published soon.

If this is true, it could represent a small step forward in restoring trust with a user base that, for now, is disproportionately made up of marginalized communities badly burned by the failures of moderation on large platforms. But skepticism is warranted. What Bluesky corporate has presented up to this point is communication that could have been generated by ChatGPT—the vague, anodyne language in its vanishingly rare public statements on the matter verge on the embarrassing.

My only quibble with this characterization is that nothing the company or its CEO has posted about this most recent incident qualifies as an apology. In the days since I initially drafted this post, a Bluesky protocol engineer posted an actual apology that I thought was a starting point. Otherwise, it’s been mostly silence in the face of broad pressure for the company’s leadership to acknowledge that the way they’ve handled this and previous moderation controversies has left a lot of Black (and non-Black) users feeling—I think rightfully—deeply pessimistic about the future of the platform and the protocol.

Maybe Bluesky reforms itself and hires a benevolent assassin to publicly lead trust and safety on their beta instance and handle those same dimensions in a culturally literate way in the protocol itself. Maybe not! But a lot of people want to use the beta, and if the team makes it to federation, far more people will be affected by the cultural patterns the protocol reinforces, enables, and restricts, so I’m rooting for them to get it together.

Ultimately, I think it’s going to be near-impossible to build out the right patterns for a prosocial big-world protocol without being scarily prescient about ethical hazards, deeply versed in common abuse patterns, demonstrably determined to prevent every bit of preventable harm, and absurdly great at egoless comms and course correction. I’m not seeing that yet.

But here’s the thing: despite the Bluesky-the-company repeatedly tripping over its own feet, the beta actually is getting moderated, and on a reasonably snappy cadence—I’ve seen reported racist and transphobic posts and accounts get smacked down much more quickly than they were on, say, 2021 Twitter. And Bluesky-the-beta-platform is still where I’m finding a lot of the most on-point and sophisticated discussions of network norms/trust and safety/moderation work, along with all kinds of conversations between Black and brown journalists, public figures, writers, and artists.

An Afro-Boricua writer posting as Bryanna, Angry Noodle summarized the dynamic on Bluesky, quoted here with permission:

I post because I like attention. The positive kind, not the negative.

I think the moment we all admit we just want to be liked, life will be a lot better for all of us.

And it’s sad because social media platforms know this and capitalize off of it. They know very well that we’re desperate for an easy way to connect, and that for a lot of us Twitter fulfilled that role.

And they know that being semi-good means that they can still be mostly bad and won’t lose us.

This brings me back around to the Tech Policy Press podcast episode, to this exchange between Hendrix and Dr. Flowers:

Hendrix: I hear you almost mourning something that you do see as legitimate and worth preserving inside of this flawed environment, which is that, of course, these networks that have been built up sometimes in—well, I don’t want to use the word spite—but sometimes in opposition to the dynamics of that platform. Is that a fair assessment of your argument?

Flowers: Yeah, that’s a fair assessment of my argument, I would say that “in spite of the platform” is actually a pretty good way to put it. When we start talking about Mastodon, one of the things that I’m going to try to make clear is how this “in spite of” nature functions as a result of the ways that platforms inherit whiteness or inherit structures of oppression. But “in spite of” is actually a fairly good way to put it, because it is in spite of the privately-owned nature of the platform, that it has become a digital commons.

I think those interlinked dynamics—that people want connection so badly that they’ll accept “mostly bad” spaces, and that the history of the social internet includes oppositional and defiant uses of network affordances “in spite of” their patterning in modes of capitalism and white supremacist assumptions and norms—belong at the center of our conversations about what better online spaces could be. I also think they’re signal flares that fediverse and other new-school network advocates should attend to, even when they come from or point toward networks those advocates find repellent.

What does it mean if the “mostly bad” on Bluesky or until-very-recently-Twitter works for proportionately more Black people and other folks of color than the purportedly healthier systems at work on Mastodon?

If I had to sum up the whole entangled ball of predictable failures and unlikely successes across Mastodon and Bluesky at this exact moment, I’d say that neither place is covering itself in glory, but in terms of the provision of refuge, more familiar and cherished affordances with clear cues are attracting mass audiences better than high-friction tech structures designed to be beneficially anti-viral at the cost of being fun and easy to use.

Again, that’s all despite the ongoing moderation and communication problems, and even though pretty much everyone I follow on Bluesky is openly skeptical that it will work out.

I will also say that in spite of the problems I’m about to get into in exhaustive detail, Mastodon and the fediverse offer a big, motivated, often super-generous ecosystem of toolmakers, devs, and instance operators who really, really want to nail big-world + semi-sheltered networking that is inclusive and welcoming. I share that goal—it’s the only reason I’m spending so much of my time and energy wrestling with this stuff.


Back to affordances, then. I’ve said everything I’m about to write on Mastodon already, and lots of other people have too, but finding or re-finding things there is a nightmare, so I’m going to write it all down here as a reference. If you’re not into the details, do skip down.

Some of the expectation-affordance and cue-affordance mismatches on Mastodon are super-intentional, like the omission of built-in quote-posts and attendant notification systems, and the super-local and usually only semi-functional search. These omissions were meant to reduce virality and increase the healthiness of federated conversations, and Mastodon-the-organization has made a public commitment to working on careful ways of eventually integrating versions of those features.

Then there’s the other stuff, which starts before you get onto Mastodon at all. Signing up means picking an instance, and the instance you’re on will, through its moderation actions and policies and its federation and defederation choices, define much of your experience on the fediverse, but instances currently distinguish themselves primarily by location, size, or nominal shared interests, rather than by a detailed and transparent discussion of those actual major experience differentiators.

The missing stair actually matters here: If you’re a Black gay dude, say, and you stumble into choosing an instance that federates with known trollfarms and harassment hubs, and you make a post that reveals anything about your identity, you’re very likely to get flooded with some of the worst abuse anyone ever gets on the internet. None of this will be clear to you before you sign up.

Assuming you escape that fate, then once you’re on, you’ll also have to individually find your people who’ve made it onto Mastodon—but probably not by searching by name or partial username, because search is flaky. If you google them, maybe you can find them, but clicking through the search result will bring you to their instance, which probably isn’t your instance, so you can’t follow them from there, because that’s not where you’re logged in. So you have to get their whole usernames—maybe from their Twitter bios if that’s allowed this month?—and search for them on your own instance, then follow.

Maybe you also want to follow people back, so look at your followers list and pop open some profiles. If you’re using the main web interface (or many apps), those profiles will open on, you guessed it, those people’s home instances. Try to follow them there and you’ll get hit with a login request, but you can’t log in there because that isn’t where your account lives. Unless you’re on the same instance, in which case it does.

You can follow straight from your timeline, though, so if you see a cool post, maybe you open the person’s profile in a different tab, decide if you want to follow back, and then re-find their post on your timeline so you can actually follow.

“I’m not sure why you find this difficult,” say the people running around on marsh-stilts. “Maybe you just love billionaires and chasing clout.”

Now let’s say you made it onto Mastodon and you find some of your people or you just make new friends or whatever (this is literally a strategy I’ve seen advocated for across Mastodon). Cool. This brings us to the next thing, which is that replies on Mastodon have strong affordance cues familiar to us from other networks, but the actual affordances are different:

  • If you make a post and people reply, you will see all the replies directed to you with any level of privacy (public, unlisted, followers-only, mentioned-people only—the quasi-DM of masto) and from any instance unless you’ve blocked or muted the replier. (Or if they’re sending you a quasi-DM and you don’t follow them and you have DMs turned off for people you don’t follow, in which case their reply will go into a black hole and neither of you will know it.)
  • If you are looking at replies to someone else’s post, you will only see the replies that come from instances that your instance already has a federation relationship with. Someone replying from a small instance that no one on your instance follows or is followed by? You’re not going to see their reply unless you actively open the post on its home instance and read the thread there.

This maybe sounds wonky and unimportant, but in practice, it means that tons of people who are replying are only seeing a fraction of the previous replies—which, in turn, means that the original poster often gets tons of near-identical responses/the same joke over and over/other annoying things, and that nearly everyone in the conversation is participating with a slightly different set of replies as their context. Add a couple language translation glitches and you have a recipe for deep social weirdness.
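
A toy model makes the mechanics easier to see. This is nobody’s actual implementation, just a Python sketch of the visibility rule described above, with invented instance names.

    # A sketch of the rule: your instance shows a reply only if the
    # reply is local or comes from an instance yours already federates
    # with (i.e., has a follow relationship somewhere).
    from dataclasses import dataclass, field

    @dataclass
    class Instance:
        name: str
        federates_with: set = field(default_factory=set)

    @dataclass
    class Reply:
        author: str
        home_instance: str

    def visible_replies(viewer: Instance, thread: list) -> list:
        return [r for r in thread
                if r.home_instance == viewer.name
                or r.home_instance in viewer.federates_with]

    alpha = Instance("alpha.social", {"beta.town"})
    beta = Instance("beta.town", {"alpha.social", "tiny.server"})
    thread = [Reply("mx", "tiny.server"),
              Reply("kit", "beta.town"),
              Reply("ola", "alpha.social")]

    # Two people reading the same thread see different subsets:
    print([r.author for r in visible_replies(alpha, thread)])  # ['kit', 'ola']
    print([r.author for r in visible_replies(beta, thread)])   # ['mx', 'kit', 'ola']

Everyone in the example is reading the “same” thread; no two of them are seeing the same replies.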

So if you’re that gay Black dude who accidentally landed on a server that federates with internet demons, your replies are eventually going to be filled with the worst things anyone can imagine sending to you, and none of the people on well-moderated instances will see the abuse you’re getting because their instances already yeeted those assholes into space. (Not even getting into the way followers-only and quasi-DM posts also reduce the visibility of abusive, threatening, creepy, and criminally tedious replies.) So when you then talk about the abuse you’re getting, most people will have no idea what you mean, because they don’t see it. Some of them will think you’re making it up.

Honestly this is so complicated I feel like I am making it up, but alas.


All these janky affordance/cue problems on Mastodon are a pretty crystalline example of what I’ve tried to get to in more abstract ways in earlier posts this year. People with specific goals and orientations made the system and laid down the cultural patterns that govern acceptable uses. Other people later joined in, and some of them made it through the gantlet of sign-up, got lucky on instance choice, were able to find friends, and brought in different hopes and purposes that are orthogonal to those of the initial makers. But unless/until those people and their communities hit critical mass, their attempts to hip-check the tooling and the norms will encounter so much resistance—some of it explicitly vile and organized, some not—that a lot of them drift away.

This is the affordance loop: Communities shape tools that shape communities, surrounded by everything happening in the world around us.

And another loop, online and off: Bad outcomes happen in spite of good intentions. When the fact of initial good intentions results in hyper-defensive responses to critique, the outcomes get worse and worse and worse. This is the second reason I try to force myself not to think too much about intention and motivation: When people get critiques laced with statements about their intent, it’s easy for them to respond with “Nope, that’s not what’s in my heart” and throw out the whole thing. In my experience, critiques that relentlessly focus on explicit statements and specific outcomes are a little harder to dodge. (The first reason is that I am really bad at understanding other people’s inner lives.)


We used to call intentionally deceptive affordance cues in software “dark patterns.” I think now they’re just a seamless part of the pervasive scam culture we accept online: Enter your email for 10% off! Lol now we have your email but you have to give us your phone number to get the code.

But let’s go back to the unintentional deceptions, like the Norman doors. Let’s say you build something that looks like it’s private, either because it’s described or visualized as “locked” (like Mastodon’s followers-only posts or messages) or behind a login wall (like the closed Bluesky beta), but really it’s not private, or not very private, or not private for long and you totally said that in the docs.

People will still treat the thing like it’s private and post private information, including their bare butts, all over it. When this material is later made explicitly public, some of those people will be upset. Others will then berate those same humans for stupidly trusting the affordance cues; those people are wrong.

If you make something that looks and feels private, you’re making a commitment in the honorable and ancient language of affordances to behave accordingly. No amount of getting cute about not posting anything on the internet you wouldn’t want in a newspaper changes that. Slapping up the equivalent of yellow plastic FLOORS SLIPPERY WHEN WET signs might get you through until you can fix it, but anything less means you’re building a trap, even if you didn’t mean to.

Now let’s say you build something boat-shaped, slap some high-viz paint on it, and drop it into a sea next to a sinking passenger cruiser. You might even get the chance to haul up some of the fleeing, floundering passengers, which seems nice!

Congratulations (genuine): You’ve made a lifeboat. This is true whether you call your boat LIFEBOAT or closed beta or jkjklol don’t hurt me. It was true from the moment you pulled someone out of the water, and it’s inextricably tangled with the duty of care.


A 1909 Los Angeles Herald Sunday Magazine article describes the stiltwalking peasants of the Landes (the ones in the header image for this post) as:

…a gay folk, who always seem to make the best of their hard lot, and manage to remain cheerful although they are generally miserably poor and frequently fever-stricken. In the more remote parts of the country the postman goes his rounds on stilts, and can thus do long distances in a short space of time. Mounted on his wooden legs, he is independent of brooks, marshes and hedges, and goes straight on, checked by nothing.

The 1909 tone is really something—if you open the page, beware the hair-raising bit about British roadbuilding in Africa—but there’s still something about simple, independent technologies that permit freedom and flourishing under difficult circumstances that probably resonates with a lot of people who want to build much better networks, preferably out of free and malleable pieces of software.

It seems relevant that the Landes region wasn’t marshy before humans cleared its forests around 600 AD—and it isn’t marshy now, because of a drainage and monocultural tree-planting program that kicked off in the early 1900s, in the process disrupting a thousand-year-old way of life adapted to the marshes’ challenges.

I was thinking about this earlier today, when I read Melissa Johnson’s funny, gross, sometimes wrenching account of a trek through the Guatemalan jungle for the wedding of her friends, an Arab-American woman and a Mexican-American woman who met in the Army and then got out to live safely together. Why travel to a country where same-sex marriage is illegal and violence against LGBT people is a real threat—and then haul themselves to the top of a pyramid in the ruins of El Mirador—for their tiny ceremony?

Johnson writes that:

…neutral ground doesn’t exist for Angela and Suley. When they announced their engagement back in the States, members of their families cried—and not in the happy way. Despite getting their marriage license in California, the couple didn’t feel safe having a public wedding during the first year of the Trump administration. Choosing peak rainy season has assured them of precious privacy. We have not seen, nor will we see, another tourist the entire week. This is what a history of trauma yields. When you’ve been forbidden to be yourself for so long, a lost city feels like home.

The networks I want will feel like a refuge—including for people who need different affordances than the ones that keep me afloat.

Coming soon: Home and sanctuary. (How should a lifeboat be?) Why I’m talking about refugees at all.


https://erinkissane.com/the-affordance-loop
Qualities of Life

In an unfortunately fascinating 1963 collection of essays on computer-simulated personality, Silvan Tomkins, the founder of affect theory, wrote about:

…the tendency of jobs to be adapted to tools, rather than adapting tools to jobs. If one has a hammer one tends to look for nails, and if one has a computer with a storage capacity, but no feelings, one is more likely to concern oneself with remembering and with problem solving than with loving and hating.

Tomkins’ observation, which comes in the middle of an extended meditation on computer personality that is a wild ride for lay readers interested in AI, has concentrated my thinking about the tools that made our current networks: An ever-accelerating explosion of technical possibilities. Silicon Valley relationship networks. Venture capital’s “it’s a fountain of money or a failure” mandate. Open source’s programming-first culture. A lot of defensive libertarianism, a little charisma. The emotional range of a wellness beverage.

We made what we made because of what we carried in with us.


Consider the difference between two places: The first is a courtyard ringed in verandas or porches that provide shelter from rain and bright sunlight—a half-outdoor experience; it also offers views onto a street or open square, and is criss-crossed by pathways between multiple entrances and exits. The second is a fully enclosed courtyard with a single way in and no views into other open spaces; it has an abrupt transition from inside to out, with no arcade or partial cover at the edges.

[Image: A bird's-eye-view illustration from The Timeless Way of Building showing a courtyard labeled 'Living courtyard,' with a wide transitional zone between indoors and out, a large opening that offers a view out, criss-crossing paths, and some unlabeled folderol, possibly including a semi-shaded seating area; and a second courtyard, labeled 'Dead courtyard,' which is an empty box inside a box with a single door leading out into it.]

In The Timeless Way of Building (last discussed here), Christopher Alexander writes about the first courtyard as alive and the second as dead, for several reasons, all of which I find illuminating.

Alexander, like Marie Kondo, builds his theoretical apparatus on the principle that given the right prompts, humans can reliably and accurately feel the difference between things that are alive and things that aren’t. If Alexander’s work were emerging into the world now, instead of in the 1970s, I suspect that it would be both conceived and received in terms of traditional/anti-colonial/anti-imperialist forms of knowledge.

First, Alexander says, the closed-off courtyard is dead because it produces—and prevents the resolution of—conflicting desires in the people meant to use it. People want to go out, but the stark contrast between out and in is too abrupt to be inviting, and the total enclosure and lack of views produces a claustrophobic feeling and quickly sends those who do venture out back indoors. Without functional pathways that bring people into a custom of crossing through the courtyard, the building’s inhabitants also spend less time there, and the courtyard is outside patterns of daily habit. In these ways, the courtyard fails to strengthen the life and wholeness of the people for whom it was made.

Second, the courtyard is dead because it fails to be self-sustaining. Because it’s uninviting, unpleasant to stay in, and removed from the normal walking patterns of the building, the courtyard becomes neglected. (I would add that commercially maintained but dead-feeling spaces, which are very common in many institutions of modern life, reflect intense upkeep and still project a sense of standoffishness or gloomy abandonment.)

Finally, the dead courtyard is dead because it pushes conflictedness out into the surrounding spaces. Alexander makes a precise argument about this, so I want to quote it at length in all its 1970s-gender-norms glory:

We try to go out, but are frustrated, because the courtyard itself pushes us away. We still need, somehow, to go out; the forces remain within us, but can find no resolution here. We have no way of resolving the situation for ourselves. The unresolved conflict remains underground; it contributes the stress which is building up. First, it reduces our capacity to resolve other conflicts for ourselves, and makes it even more likely that unresolved forces will spill over in another situation. Second, if the force does spill over, it may create even greater tension, in another situation, where there is no proper outlet for it.

Suppose, for example, the people who want to be outside go out instead and sit on the road, where trucks are going by. It is OK. But then perhaps a child gets hurt. Or, even if a child does not actually get hurt, the mother fears for it, and shouts, and conveys a continuous sense of unease to the child, so that his play is spoiled. … In one fashion or another, the effects always ripple out.

You may say—well, people can adapt. But in the process of adapting, they destroy some other part of themselves. We are very adaptive, it is true. But we can also adapt to such an extent that we do ourselves harm. The process of adaptation has its costs. It may be, for example, that the child adapts, by turning to books. The desire to play in the street conforms now to the dangers, and the mother’s cries. But now the person has lost some of the exuberant desire to run about. He has adapted, but he has made his own life less rich, less whole, by being forced to do so.

The “bad” patterns are unable to contain the forces which occur in them.

As a result, these forces spill over into other nearby systems […] the courtyard which fails makes children want to play outside and causes stress and danger in the street.

But these forces make other nearby patterns fail as well. The pattern of the street may not be conceived as a place for children to play. So, suddenly, a pattern of the street, which might be in balance without this force, itself becomes unstable and inadequate. […]

In the end, the whole system must collapse.

I think I blew past all of the parent-child stuff the first few times I read Timeless Way, but now it snags me every time. The parent here is not presented as a neurotic mother or its update, a helicopter parent: The danger of the street is not imaginary. The mother responds to the shape of the built system in which she and her child exist—and then her shouts of anxiety become part of the system through which the child’s personality is formed.


When the subject of social media problems arises, commenters tend to fall into one of two very low-level orientations: attributing problems (or successes) to systems, or attributing them to personalities.

I’m always (as in literally every day) thinking about something journalist Tina Vasquez posted during our first covid summer: “What is the machine that did this to this person?”

It’s not always obvious that this is happening, because sometimes the surface discussion is more about whether the root problem is, say, interface design or capitalism, both of which are systems. But it’s rare to get through an online conversation without someone proposing that systems talk is largely a distraction, because some people are just assholes. And indeed, many people are assholes. Most of us, even, given the right contexts and deficits, though it’s clearly overrepresented among the leaders of social media companies. But lean hard enough on personality as explanation for mass phenomena and we get arguments that should be all too familiar, in which poor people are individually lazy, Black men individually scary, mass gun-murderers individually mentally ill, and homeless people individually recalcitrant.

I think it’s obvious that it’s always both, systems and people: People and systems can synchronize and strengthen each other (for good or evil); good personalities can patch bad systems, or sabotage them in service of good; good systems can reduce the harm destructive personalities inflict. But unless you work in therapy or social services, I think systems-level work offers the only practical place to put your lever if you want to move the world.

When I write about our networks as haunted machines, I am writing toward the systems sense of the mess we’re in. When I write about Christopher Alexander, it’s because his work has been one of my touchstones for a systems sense of something better—and better in a genuinely different way.


To be alive in the sense found in The Timeless Way of Building, a system would have to:

  1. avoid piling unresolvable stresses onto the people inside it,
  2. maintain its own aliveness through self-sustaining evolution and repair, and
  3. avoid worsening the life of the systems around it, ranging from peer-level technical systems to things like “civil society.”

This framework is so heartening to me because it torches the constructions of a system’s inhabitants’ well-being and the spillover of generated stresses into the wider world as externalities we need not consider.

The Alexandrian orientation puts these factors at the center of the work, in terms sufficiently specific to make them hard to evade—unlike, say, the “triple bottom line” meant to indicate that corporations should consider worrying about “People” and “Planet” in addition to “Profits.” In practice, this has always been too woozy and vague to do much besides stretching the humanish skin of corporate social responsibility over the usual forms of extraction.

In the same way, tech slogans like “We put users first” could mean nearly anything, so they usually mean nothing. The first criterion for Alexandrian aliveness, though, translates into something like, “Create only features, interfaces, and systems that resolve conflicting human desires.” I think that’s something we could use.

Moving to the second requirement in the aliveness criteria—being self-sustaining—clarifies problems of organizational character and pulls me back into adrienne maree brown’s sense of fractal trouble. In my own experience, most organizations that genuinely do a good job serving humans and avoid making the systems around them worse exist in a state of permanent precarity. I think this happens both because those orgs burn out the people who hold the place together, and especially because under modern capitalism, it’s nearly impossible to achieve financial stability when funding essential social support work is constructed as charity.

The third requirement of aliveness—to build systems that avoid worsening the world by dumping outward the stresses they create—cuts to the moral vacuum at the tech industry’s core, which is the neutrality that sets the burning world at a professional distance behind UV-filtering glass.


Fine, but what does it mean in our actual work? Let’s begin with a person. Since I’m right here, we can start with me. A handful of the conflicting desires that arise on the networks I’ve been using for a couple of decades:

  • I want to have interesting conversations online without participating in modes of interaction I would never tolerate offline, ranging from brigading to individual abuse to soul-erodingly tedious explanation of my own words back to me.
  • I want to be visible enough to make interesting friends and attract work that I’m good at without getting stalked, doxxed, or successfully targeted for mass harassment. Relatedly, I want people who are structurally least likely to be heard offline to be granted space and access without being made into sacrificial bait for hate campaigns.
  • I want to be able to have semi-secluded conversations with variously durable and ephemeral sets of people without giving up my overall ability to participate in a wider, more public conversation. And I want to be able to be a good host, who can invite people into conversations without exposing them to the common brutalities.

A ~product design~ that resolved these kinds of conflicting desires would be vastly more useful than the notions of “healthy networks” that zoom around every time a tech millionaire’s kids turn thirteen. There’s nothing alive about networks that enforce civility over care, and there’s much more life in honest squabbling than in the weird rictus of LinkedIn. But an emotionally and culturally literate form of social design would work toward spaces that let us be our petty, banged-up selves while structurally lessening the damage our bad days do to the people around us, instead of fanning each minor flare of ill temper into a housefire.

Stopping here feels depressing, so I won’t. There’s so much more life to be had if we start with human cultural patterns and try to strengthen what is alive and good in them.

  • When I’m invited into semi-sheltered spaces or moments, I want it to feel like I’m wandering through a particularly nice party—outdoors, day’s heat just subsiding, one cold drink, maybe some lightning bugs?—catching edges of conversations—and then I want to be able to trace unthreaded clusters of people and meaning back through time so I can perceive deeper relationships and shared interests.
  • I want to be able to browse through my online acquaintances’ varied interests like I was nosing through their bookshelves while they make coffee without sifting through stilted hashtags or stalking them across sites. I want to skip political slogans in favor of shared pathways into what we actually do in support of the ideas we care about.
  • I want systems of trust, recommendation, and vouching for people that work at least as well as the ones I use when I need a local dentist or a good lunch without the bandwagon problems that arise when trust gets conflated with celebrity.
  • I want to be able to summon a cloud of expert discussion on any topic, with each analysis clearly situated within the commenter’s background and history—and I want to do it without becoming a human filter for the corrosive exhaust of misinformation campaigns.
  • On the purely selfish side, I want social tools to support my cognitive capacity during wide-ranging reading, not to weaken it. I want an easy way to find half-remembered fragments that doesn’t involve me carefully labeling everything I might someday wish to remember. I want a stereotypical little demon to coalesce, sift through what I told people I would email them or promised I’d check out, and send me a list—and then I want it to break down harmlessly into water and salt, retaining exactly nothing, selling out exactly no one.

Tomkins, in the essay I quote at the top of this post, is paraphrasing a central idea from George Kingsley Zipf’s Human Behavior and the Principle of Least Effort, a psycho-linguistic treatise that explosively decompresses a single insight about the inter-relation of tools that seek jobs and jobs that seek tools into an argument that includes passages like this one:

As to the incentives, first, of the greedy and venturesome outsider who wants to supplant some y lord of the system, we can see that as the y status of a lord increases, his A_y income increases proportionally, and therewith his attractiveness to the bold outsider. In short, the y lord’s attractiveness is proportional to A_y. On the other hand, as the A_y income increases, the number of y lords who have that income decreases according to c/A_y², with the result that the N opportunities of supplanting a y lord decrease more rapidly than the A_y income increases.

…which is the kind of thing you don’t see as much now outside the twenty-five-cent stack at an estate sale, and also extremely funny to me. I’m afraid it’s also reflective of some of the communication problems I have when I try to talk about social forms with people trained to think algorithmically. It’s not the encoding of life’s weird pageant that I’m after; I just want tools better suited to human hands.
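For what it’s worth, the passage does parse: read A_y as a y lord’s income and N as the number of lords at that income level, and Zipf’s argument compresses to a gloss like this (my notation and reading, not his):

    \text{attractiveness} \propto A_y, \qquad
    N(A_y) \propto \frac{c}{A_y^{2}}, \qquad
    \text{so} \quad N \cdot A_y \propto \frac{c}{A_y}

The prize grows linearly with status while the number of prizes shrinks quadratically, so the expected opportunity falls the higher the ambitious outsider aims.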

As I type this post, Reddit-the-company is steadfastly clinging to their decision to demolish third-party tools critical to Reddit-the-community’s ability to self-moderate, and therefore to produce anything of value to anyone but the edgelords and grifters who form the sticky ring around the bathtub when the water drains away.

Given that we seem to be stuck, as a techno-culture, on something as basic as “don’t actively harm the people whose gift of free labor is your company’s only value,” maybe it’s unhelpful to be thinking out loud about how much better our social worlds could be! I don’t know. But unless we’re going to excise the influence of our networks from our societies entirely, and especially given the whole Gibsonian jackpot situation (spoilers for The Peripheral there), it feels impossible not to try.

https://erinkissane.com/qualities-of-life
Extensions
All This Unmobilized Love

Straight out of undergrad, I applied to a bookselling job and didn’t get it, so I started working in tech. By the end of my first year there, I was convinced that the internet was having and would continue to have a norm-smashing, discombobulating effect on the shape of the world and that, left to the paths it was on, it was going to concentrate social inequalities and extractive corporate behavior. I started volunteering for an early web magazine that focused on web standards and accessibility, because it felt like the best-while-nearest place to put my time to help shape the internet in service of a better world. That turned into a couple of decades of internet stuff, but I steered around the big platforms because I didn’t think their intrinsic incentives would ever let them make healthy, non-extractive things, even if individual employees wanted to.

I still think getting our networks right—or at least making them better and putting them in service to the life and health of the world—is critical to the work required to get more of us safely to the other side of the next ten, twenty, thirty years.


A step back: About 25 years into the social internet era, we’ve seen weirdly little experimentation with social forms at scale. Most of our emerging networks have been driven by the workability of technical forms: chat, forums, feeds, galleries, streaming video, two or three variations on comments, and increasingly powerful (and decreasingly transparent) recommendation engines.

Even most of the emergent gestures in our interfaces are tweaks on tech-first features—@ symbols push Twitter to implement threading, hyperlinks eventually get automated into retweets, quote-tweets go on TikTok and become duets. “Swipe left to discard a person” is one of a handful of new gestures, and it’s ten years old.

If this were only boring, we could ignore it. But by treating a handful of technical capabilities and conventions as the low-level building blocks of social tools, we’re leaving almost everything good about the human experience on the table. Where are the networks that deeply in their bones understand hospitality vs. performance, safe-to vs. safe-from, double-edged visibility, thresholds vs. hearths, gifts vs. barter, bystanders vs. safety-builders, even something as foundational as power differentials? I don’t think we have them, except piecemeal and by chance, or through the grace of socially gifted moderators and community leads who patch bad product design with their own EQ.

I guess it’s not difficult to understand why these big gaps exist: Commercial software is almost entirely funded, hyped, and judged by a system devoted to jackpots that dismisses most of the natural and built world as an externality, while open source software is mostly built and maintained by people who choose to (and can afford to) spend their non-waged time writing code to serve their own kinda nerdy needs.

But it means there’s a ton of room for fruitful and humane exploration. And I don’t think it’s going to come from the usual hyper-financialized tech places—or from the way things have generally been done in open source, either.


The big promise of federated social tools is neither Mastodon (or Calckey or any of the other things I’ve seen yet) nor the single-server Bluesky beta—it’s new things built in new ways that use protocols like AT and ActivityPub to interact with the big world. A couple weeks back on Bluesky, I reposted a thread that nailed my own reasons for being interested in protocols and platforms:

I’m not at all a tech solutionist - I dislike most technology. I think AI is generally anti-human & I think crypto is generally anti-social. The idea of a massive AI labeler that you offload everything to is completely antithetical to my beliefs

My interest & goals w atproto stem from the fact that our current online systems are preventing us from coordinating in human ways. I want to unbundle & recompose these systems so that people can build online spaces for people

Spaces that are a joy to participate in, that feel safe & full of meaning. Spaces that inspire you, that can challenge you in the right ways

Tech doesn’t solve people problems. But it does shape the tools that help us coordinate solving people problems. As a society, we have shitty tools

Our current social networks are broken, but we can remake them with space for humanity

It’s very good! What I didn’t realize at the time is that it’s from Daniel Holmgren, a protocol engineer on the Bluesky team. I think this is the exact right orientation to building the next generation of social tools—not as “Twitter without the billionaire” or “Mastodon but easy to use” but as infrastructure on which existing and nascent communities can build places that are safe and connected, joyful and challenging, wide-ranging and deeply human.

I’m online enough to pause here both for the folks who are assembling their responses about why I should never mention Bluesky (I wrote this for you; it’s okay if you hate it; my hopes for Bluesky are that it provides productive competition for the fediverse and eventually becomes a really interesting distributed option) and for the people about to say that AT or ActivityPub or indeed anything federated cannot ever be truly good or safe, etc. Thing is, the centralized systems never belong to good people, and even when we wrestle the amoral jerks into doing a right thing, it’s fragile and temporary. For entirely pragmatic reasons, I don’t believe central corporate authorities can keep us safe when they’ve demonstrated for 25 years that they can’t, won’t, and are incentivized to do otherwise. There is no corporate-walled internet we can trust to rid us of griefers and racists and surveillance systems.

“Banning Nazi servers is insufficient, there should be no Nazi servers,” is a perspective I deeply sympathize with, but unless you bring the entirety of the internet under either corporate or governmental control—and neither of those scenarios has a great record on de-Nazification—the best we can do is exclude bad actors from our spaces.

On the big centralized platforms, that work of exclusion is performed by largely unaccountable entities who by design subordinate safety to their own priorities (growth, engagement), with much of the worst and most traumatizing work offloaded to “offshore” moderators who experience the internet’s most awful depravity at industrial scale.

And again, what gets excluded shifts with the winds of rich men’s opinions and the same terrifying politics that the big platforms enabled for a profit. To me, this is a big dead end. But there are other ways, and the most promising to me are the ones that can be managed by and for specific communities while also remaining in contact with the big internet.


Even when it’s draining and difficult, even when it means working under threat of online attacks that turn into offline consequences, lots of us demonstrably want to put our time into things that ease our own fear and dread, repair what’s broken, and build sound knowledge and useful resources. Seeing this in action during the pandemic’s first year changed my understanding of the shape of the world; given ways to do this work for ourselves and each other, we will. But right now, a lot of the entities providing us with those ways to help are for-profit platforms. Some platforms like Reddit and StackExchange (and Facebook Groups) are almost entirely moderated by dedicated volunteer labor, and every big network relies heavily on volunteer peer moderation in the form of (frequently onerous) flagging and reporting. Despite late capitalism, despite all the things we’re going through, the internet already runs on dedicated volunteer labor.

What most of us rarely have time for is the organizational work that runs alongside and on top of technical protocols to enable participatory design processes, build institutional knowledge, and establish the kinds of multi-faceted support and user-accountable governance that produce sustainability. Which is, I think, why so much of our time and love ends up benefiting platforms that extract profits rather than ploughing them back into the soil.

We’re already on the verge of a new generation of protocols and platforms, and it’s my big hope that their builders will focus on clarity and ease of use as these technologies mature. But we also need new generations of user-accountable institutions to realize the potential of new tech tools—which loops back to what I think Holmgren was writing toward on Bluesky.

I think it’s at the institutional and constitutional levels that healthier and more life-enhancing big-world tools and places for community and sociability will emerge—and are already emerging. Over the coming months, I’ll be speaking with and writing about people working on some of the tools and communities that I think help point ways forward—and with people who’ve built fruitful, immediately useful theories and practices about what the networks have dragged us through already and how we escape the traps we’ve been living in.

In the meantime, some things I’m watching extra closely:

(“Unmobilized love” is a callback to one of Mike Davis’s final interviews.)

https://erinkissane.com/all-this-unmobilized-love
Extensions
Tomorrow & Tomorrow & Tomorrow

We realize then that it is just the patterns of events in space which are repeating in the building or the town: and nothing else.

Nothing of any importance happens in a building or a town except what is defined within the patterns which repeat themselves.

—Christopher Alexander, The Timeless Way of Building

At the core of Christopher Alexander’s work is the belief that the shape and character of our spaces cannot help but influence the events that repeat inside them. The characteristics of the built environment don’t explain everything—the enveloping cultural context is a massive force—but they explain a lot.

Over the past 20 years, this is very much what I’ve come to believe about the shape of the things we’ve built online. Software design isn’t the only thing that shapes our behavior, but the idea—which I’ve seen a lot these past few weeks—that interface design/network structure/feature sets don’t actually matter at all is as wrong as the idea that a bedroom with huge unshaded windows will allow for restorative sleep, or an open-plan office will permit a state of flow.

So in my work life, I’m trying to sharpen my sense of how and why the patterns we build on the internet shape the behaviors we enact inside them. But I keep getting sidetracked into the opposite formulation—how repeated behavior shapes our spaces, how rituals turn to hauntings, how buried things keep erupting into the present.


In the early 1980s, British engineer Vic Tandy was working for a company that made medical equipment used in life-support and anesthesia. In a coincidence that changed the course of Tandy’s career, the converted garage that served as the company’s laboratory was—according to some of Tandy’s colleagues and a night cleaner—haunted.

Having dismissed his colleagues’ experiences as nonsense, Tandy himself noticed a few emotional and physical symptoms—depression, cold shivers, growing discomfort—but ignored them until he was working alone in the lab one night:

As he sat at the desk writing he began to feel increasingly uncomfortable. He was sweating but cold and the feeling of depression was noticeable. The cats were moving around and the groans and creaks from what was now a deserted factory were “spooky”, but there was also something else. It was as though something was in the room with V.T. […] As he was writing he became aware that he was being watched, and a figure slowly emerged to his left. It was indistinct and on the periphery of his vision but it moved as V.T. would expect a person to. The apparition was grey and made no sound. The hair was standing up on V.T.’s neck and there was a distinct chill in the room. As V.T. recalls, “It would not be unreasonable to suggest I was terrified”. V.T. was unable to see any detail and finally built up the courage to turn and face the thing. As he turned the apparition faded and disappeared. There was absolutely no evidence to support what he had seen so he decided he must be cracking up and went home.

The next day, Tandy, a competitive fencer, brought a foil blade into the lab to use the bench vise to cut threads into the blade for attachment to a tang. While clamped in the vise, the blade began “frantically vibrating up and down,” which spooked the hell out of him. By moving the vise around the lab, Tandy located a low frequency (~19 Hz) standing wave that was eventually traced to a fan in the air-extraction system. When the fan’s mounting was modified, the standing wave disappeared—as did the lab’s oppressive feeling and sense of ghostly presence.
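The physics checks out as a back-of-envelope matter, too. A sketch of the arithmetic, assuming the standard ~343 m/s speed of sound in room-temperature air (my numbers and labels, not figures from Tandy’s paper):

    # Rough check of the infrasound ghost. Assumes the speed of sound in
    # room-temperature air is ~343 m/s; illustrative only, not taken from
    # Tandy's paper.
    speed_of_sound = 343.0   # m/s
    frequency = 19.0         # Hz, the frequency Tandy located

    wavelength = speed_of_sound / frequency  # ~18 m
    half_wave = wavelength / 2               # ~9 m

    print(f"wavelength: {wavelength:.1f} m")
    print(f"a room about {half_wave:.1f} m long can carry a half-wave resonance")

A garage-sized lab on the order of nine meters long is about half of an 18-meter wave, which is roughly the condition for a standing wave to park itself in the room, oscillating hardest near the middle.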

Tandy and a fellow professor at Coventry University wrote up his experiences and subsequent speculations on the mechanisms of low-frequency hauntings in a short paper for the Journal of the Society for Psychical Research that remains one of the prizes in my collection of obscure PDFs. Tandy went on to conduct large-scale experiments on the physical properties of hauntedness.

I think about the vibrating foil blade every time anyone mentions engagement.


The quasi-McLuhan quotation about shaping the tools that thereafter shape us traces back to Winston Churchill’s case for rebuilding the bombed-out Commons Chamber of Parliament along its original lines. When he wrote that “we shape our buildings and afterwards our buildings shape us,” he was explicitly writing about the shape of the Chamber as a way of maintaining exactly two sharply opposing political parties rather than a left-right gradient, and about concentrating the energy of the deliberating body by making the room a little bit too small, thus forcing its members to squeeze in and hustle for a seat.

It was a characteristically unsubtle speech:

The House has shown itself able to face the possibility of national destruction with classical composure. It can change Governments, and has changed them by heat of passion. It can sustain Governments in long, adverse, disappointing struggles through many dark, grey months and even years until the sun comes out again. I do not know how else this country can be governed other than by the House of Commons playing its part in all its broad freedom in British public life. We have learned—with these so recently confirmed facts around us and before us—not to alter improvidently the physical structures which have enabled so remarkable an organism to carry on its work of banning dictatorships within this island and pursuing and beating into ruin all dictators who have molested us from outside.

I’d known about the Churchill speech for a long time, but WWII isn’t my period and until this year, I’d assumed that the quotation dealt with postwar rebuilding. It’s actually from 1943, the year of the Warsaw Ghetto Uprising—well after the Blitz, but with the war’s outcomes still unknown. And as much as Churchill goes on at length about the health and vigor of the House over the long term, it’s ultimately a wartime argument: change cannot be risked.


The structures of our network commons have concentrated our responses to the forces already pressing against our livelihoods and children and futures. Within their engagement-optimized interfaces, we’ve built ourselves into a standing wave: Abusive posts become network-wide events that require a response not only from moderating authorities, but from every user.

In this machine, silence transmutes to approval of the worst thing happening; via entirely real human needs for signals of safety and support, continuous attention and engagement become mandatory. Simply bad posts are opportunities for demonstrations of prowess. People we agree with become footholds for demonstrating all the subtle ways in which they don’t quite understand. Sometimes—rarely—these moves result in policy changes, but fight and flight and status display all taste the same to the machine.

Maybe for you, it didn’t start on Twitter. Maybe it was forums or the blogosphere or Reddit. Maybe it was Facebook with terrible people from high school or TikTok with people who hate you for liking a thing, or not liking it enough. But we built the machines around our weird amygdalas and then we went inside them and now the machine is no longer confined to a stack of software + policy + vibes; we carry it in ourselves. We haunt each new place we enter. We can feel this happening in our bodies, which is why touch grass is so accidentally real.

We shape our structures and afterward our structures shape us, but the we of the first clause and the us of the second are not the same.

The secret heart of every panopticon is not the all-seeing-eye, but the confessional. Like a god, the machine already knows what we’ve done. We confess to reclaim our own voices, or sometimes in search of grace—though in the machine, grace is only available to some people, until we make it available to none. The gears of commercial networks are surveillance systems built on structures that elicit a continuous stream of confessions made public. Confessions in public become testimony; testimony summons congregations. We raise our voices in defiance or affirmation, knowing there will be consequences we don’t understand. The databanks grow.


I started reading Susan Cooper’s The Dark Is Rising series to my kid between Christmas and New Year’s this year. The first two books in the sequence, published in 1965 and 1973, feel like products of the 1960s—there’s a clarity of motion and a starkness of contrast and a residual sweetness in the descriptions of rural landscapes that I recognize from my own childhood spent reading books that were already old then.

The third book, Greenwitch, is another thing altogether. Published in 1974, it’s dreamlike and nightmarish and packed with images I associate more with folk horror than children’s fantasy adventure, down to the Greenwitch, a wicker woman built of living branches and inhabited by a childlike consciousness that is born anew each year to be sacrificed and reunited with the wild magic from which it came. The true line of the book’s plot hinges on the heroine’s instinctive and ultimately world-changing moment of recognition of the entity as not a human, but a person. Plenty of adventures swirl around this moment, but everything that matters is inward, invisible, and relational. (It’s a kind of book that doesn’t get published anymore.)

During a series of action scenes that would, in a more straightforwardly adventurous book, form the plot’s climax, the protagonist of Greenwitch is asleep, unaware of the terrifying events taking place just outside her house. And all those action scenes our girl misses while she’s sleeping are also dreams: They seem to be an entire village’s collective nightmares of terrible incidents of mass violence, some just barely within living memory and some that stretch much further back. The ancient dead arise in a collective dream and take up arms beside the living dreamers, who die and suffer. A ghost ship rises from the sea and sails over the moors, and eventually out of time. The town burns. But the next morning brings apparent normalcy; the stage resets.

Greenwitch offers no lasting resolution for the village’s ghostly violence. At the book’s end, even the Greenwitch entity herself remains trapped in her sacrificial cycle. And unlike the sleeping child heroine, the reader saw everything that happened in the dark, and is left with no assurance that the ghostly standing waves of past violence are not merely waiting to be summoned again.


In the machine, we are always forgetting, chasing the same discourses and panics in circles. Instead of making restitution, we wait for the cycle to erase the screen and carry on as before. Stay long enough and everything rhymes with something that gave you scars, but that everyone else has forgotten. Resolution eludes us online even more than off. But then, the paradox: Nothing stays gone, either. Fast search resuscitates archives without even a bump in load time. Screenshots jump networks and decades; we have the receipts. Somewhere between the continual etch-a-sketch and structurally eidetic memory, the provisional and crucial ties of solidarity recede, always just out of reach.

I logged into Twitter to see if my stupid, painstakingly deleted tweets had been un-deleted: they had been, by the thousands. It’s a little on the nose.

We won’t technologize our way out of the ghost machine. I don’t think we’ll mod our way out, either. Actual trust and real safety do require protection from griefers and villains—and abuses of authority—but that’s table stakes: that’s the floor.

A building or a town becomes alive when every pattern in it is alive: when it allows each person in it, and each plant and animal, and every stream, and bridge, and wall and roof, and every human group and every road, to become alive in its own terms.

And as that happens, the whole town reaches the state that individual people sometimes reach at their best and happiest moments, when they are most free.

—Christopher Alexander, The Timeless Way of Building

Here in my body, I want to be more human in service of a less painfully haunted world. I want ways of being together that let us pay our respects and build different kinds of power. I want to practice being free.

https://erinkissane.com/tomorrow-and-tomorrow-and-tomorrow
Extensions
Books I’m Reading

As I’ve mentioned elsewhere, I’m doing a bunch of reading-like-a-grad-student* this spring to try to get my head around the current state of online places and ways to be together and work together. A few people asked for a reading list, so I’ve pulled the books together—my stack of papers and articles is pushing 350 items and I don’t have time to winnow it down yet, but I’ll get there. (Also I’m sure I’ve forgotten books, this is literally just what I could gather up in my hands/on my devices.)

This is a snapshot of what I’m looking at, not a set of recommendations. Some of these books are much better than others. (About half of these are rereads.)

books aimed at practitioners (mostly not recently published)
  • Designing Social Interfaces: Principles, Patterns, and Practices for Improving the User Experience, Crumlish and Malone
  • Community Building on the Web: Secret Strategies for Successful Online Communities, Amy Jo Kim
  • Building Successful Online Communities: Evidence-Based Social Design, Kraut and Resnick
  • Designing for Community: The Art of Connecting Real People in Virtual Places, Derek Powazek
  • Online Communities: Designing Usability, Supporting Sociability, Jenny Preece
  • Digital Habitats: Stewarding Technology for Communities, Wenger, White, and Smith
framework reading
  • Community and Privacy: Toward a New Architecture of Humanism, Chermayeff and Alexander
  • The Timeless Way of Building, Christopher Alexander
  • Emergent Strategy, adrienne maree brown
  • A Pattern Language, Alexander, Ishikawa, and Silverstein
  • Governing The Commons: The Evolution of Institutions for Collective Action, Elinor Ostrom
  • Understanding Knowledge as a Commons, eds. Hess and Ostrom
  • The Real World of Technology, Ursula Franklin
  • The Ursula Franklin Reader: Pacifism as a Map
  • How We Get Free: Black Feminism and the Combahee River Collective, Keeanga-Yamahtta Taylor
  • Thinking In Systems, Donella Meadows
  • Fields, Factories, and Workshops, Peter Kropotkin
the rest
  • Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media, Tarleton Gillespie
  • Here Comes Everybody: The Power of Organizing Without Organizations, Clay Shirky
  • Lurking: How a Person Became a User, Joanne McNeil
  • The Virtual Community: Homesteading on the Electronic Frontier, Howard Rheingold
  • The Power of Many: How the Living Web Is Transforming Politics, Business, and Everyday Life, Christian Crumlish
  • The Well: A Story of Love, Death & Real Life in the Seminal Online Community, Katie Hafner
  • The Rise of Virtual Communities: In Conversation with Virtual World Pioneers, Amber Atherton
  • Black Software: The Internet & Racial Justice, from the AfroNet to Black Lives Matter, Charlton McIlwain
  • The People’s Platform: Taking Back Power and Culture in the Digital Age, Astra Taylor
  • Design Justice: Community-Led Practices to Build the Worlds We Need, Sasha Costanza-Chock
  • In the Wake: On Blackness and Being, Christina Sharpe
  • New Dark Age: Technology and the End of the Future, James Bridle
  • The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, Shoshana Zuboff
  • The Fight for Privacy: Protecting Dignity, Identity, and Love in the Digital Age, Danielle Citron
  • Understanding Context: Environment, Language, and Information Architecture, Andrew Hinton
  • Building Web Reputation Systems, Farmer and Glass
  • Hacking Diversity: The Politics of Inclusion in Open Technology Cultures, Christina Dunbar-Hester
  • Race After Technology: Abolitionist Tools for the New Jim Code, Ruha Benjamin
  • Radical Technologies: The Design of Everyday Life, Adam Greenfield
  • Working in Public: The Making and Maintenance of Open Source Software, Nadia Eghbal
  • Design for Safety, Eva PenzeyMoog
  • Liberating Voices: A Pattern Language for Communication Revolution, Douglas Schuler
  • Algorithms of Oppression: How Search Engines Reinforce Racism, Safiya Noble
  • The Cold Start Problem: How to Start and Scale Network Effects, Andrew Chen
  • The Production of Houses, Notes on the Synthesis of Form, all four volumes of The Nature of Order, Alexander and co.
  • The Next American Revolution: Sustainable Activism for the Twenty-First Century, Grace Lee Boggs
  • The Religion of Technology, David Noble
  • Routledge Handbook of the Study of the Commons, eds. Hudson, Rosenbloom, and Cole
  • My Tiny Life: Crime and Passion in a Virtual World, Julian Dibbell

* In honor of the hours I am spending virtuously not playing Tears of the Kingdom, grad-student mode for me rn is pretty much shaking trees, grabbing apples, snatching birds’ eggs, bagging weird emus, and hoarding sticks. In June, we cook. (And kill monsters.)

https://erinkissane.com/books-im-reading
Extensions
Interhooking, interlocking, nonmaterial

This is the first in a series of reading notes I’m posting as I work through a stack of texts on community and sociability. I posted an intro to this series yesterday. You don’t need to read the intro to read this, but if you’re unfamiliar with either Christopher Alexander or adrienne maree brown, you might want to glance over it because I’m going to dive straight in. (The other most useful piece of context is probably that I’m a medium-old internet nerd with an academic background in English lit and a bunch of strong interests in other humanities zones.)

If you’re even a little interested in Alexander, those two books are where I’d begin, unless you’re super excited about math, in which case I’d start with Notes on the Synthesis of Form. I think Timeless Way is particularly beautiful.

In The Timeless Way of Building (1979), Christopher Alexander laid out much of the theory behind the work he anatomized in A Pattern Language, which was published two years earlier.

In Timeless Way, Alexander writes that:

…each law or pattern is itself a pattern of relationships among still other laws, which are themselves just patterns of relationships again.

For though each pattern is itself apparently composed of smaller things which look like parts, of course, when we look closely at them, we see that these apparent “parts” are patterns too.

Consider, for example, the pattern we call a door. This pattern is a relationship among the frame, the hinges, and the door itself: and these parts in turn are made of smaller parts: the frame is made of uprights, a crosspiece, and cover mouldings over joints; the door is made of uprights, crosspieces and panels; the hinge is made of leaves and a pin. Yet any one of these things we call its “parts” are themselves in fact also patterns, each one of which may take an almost infinite variety of shapes, and color and exact size—without once losing the essential field of relationships which make it what it is.

The patterns are not just patterns of relationships, but patterns of relationships among other smaller patterns, which themselves have still other patterns hooking them together—and we see finally, that the world is entirely made of all these interhooking, interlocking nonmaterial patterns. (p. 90-91)

This is very 1970s! But also, I think, extremely right and useful: It gets us looking up and down the scale, watching for the way things interlock, or fail to interlock. This, in turn, leads us into the process-centric approach to architecture that Alexander promoted in his later career, when he judged that the pattern-based system he’d been teaching wasn’t actually producing great buildings out in the world.

The Nature of Order, Alexander’s four-volume grand unified theory, comes in at a bit more than 2,000 pages, and although I love it, I’m not going to try to summarize it. Instead, I want to glancingly look at what Alexander calls, near the end of his career, “The Fundamental Differentiating Process.” I will horrifically oversimplify this process as:

  • Make a thing (or fix a thing that’s weak) by applying one of a series of 15 transformations designed to differentiate and strengthen both the part and the broader whole,
  • run up and down and across the system looking for trouble,
  • adjust as needed,
  • repeat.

It’s systems design, obviously—and algorithmic, as Dorian Taylor notes—but Alexander’s mode is specifically tied to his sense of the revelation and strengthening of a latent order that unfolds, as in the natural world. Hold onto that thought for a second.
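Because the loop structure is doing real work here, a toy version may help. This is strictly my illustration, not Alexander’s formalism: model the “system” as a list of part strengths, repeatedly strengthen the weakest part, and let a little of that strength spill into its neighbors (the up-and-down-the-scale adjustment):

    # A toy gloss of the differentiating loop sketched above. All names,
    # numbers, and the spillover rule are my illustration, not Alexander's.
    def differentiate(strengths, threshold=0.8, boost=0.3, spillover=0.05):
        strengths = list(strengths)
        while min(strengths) < threshold:                  # look for trouble
            i = strengths.index(min(strengths))            # find the weakest part
            strengths[i] = min(1.0, strengths[i] + boost)  # apply a transformation
            for j in (i - 1, i + 1):                       # adjust the neighbors
                if 0 <= j < len(strengths):
                    strengths[j] = min(1.0, strengths[j] + spillover)
        return strengths                                   # repeat until whole

    print(differentiate([0.2, 0.9, 0.5, 0.7]))

The point of the toy isn’t the numbers; it’s that each fix is local, every fix perturbs the surrounding parts, and the process only stops when the whole passes inspection.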

Both in his early focus on patterns and especially in his later emphasis on the 15 transformations, Alexander was working with ideas of bringing latent wholeness and aliveness and goodness into full being by working at every scale at once. I’ve loved this from the moment I first read it.

I especially love this orientation for community/sociability work because it boosts up a sense of each individual piece of the work as both a sub- and a super-pattern, vibrating with potential to help or harm. This is simultaneously fascinating and terrifying, which is probably the right way to feel about community work.

unfolding, emergence, life

To carry on with ruthless oversimplifications, I think the central orientation in adrienne maree brown’s Emergent Strategy is one of opportunistic, flexible, relationship-centered, low-ego interaction that both accepts and influences the unstoppable force of change. It takes Octavia Butler’s Earthseed principles, biomimicry, and a trunkful of other interesting things as a springboard into ways of being in the world that, in brown’s view, allow us to use simple ways of interacting with each other to build toward beautiful complexity.

Here’s brown:

How we are at the small scale is how we are at the large scale. The patterns of the universe repeat at scale. There is a structural echo that suggests two things: one, that there are shapes and patterns fundamental to our universe, and two, that what we practice at a small scale can reverberate to the largest scale. (Emergent Strategy, p 52)

See also “That which is above is from that which is below and that which is below is from that which is above,” which lazy translation from Arabic turned into, “as above, so below.”

brown calls this guiding principle fractal. She writes, “In the framework of emergence, the whole is a mirror of the parts. Existence is fractal—the health of the cell is the health of the species and the planet.” (p 13)

And here’s how she talks about the way bad patterns of work and behavior refract and replicate across even organizations trying to do good work:

So many of our organizations working for social change are structured in ways that reflect the status quo. We have singular charismatic leaders, top down structures, money-driven programs, destructive methods of engaging conflict, unsustainable work cultures, and little to no impact on the issues at hand. This makes sense; it’s the water we’re swimming in. But it creates patterns. Some of the patterns I’ve seen that start small and then become movement wide are:

  • Burn out. Overwork, underpay, unrealistic expectations.
  • Organizational and movement splitting.
  • Personal drama disrupting movements.
  • Mission drift, specifically in the direction of money.
  • Stagnation—an inability to make decisions.

[…] And this may be the most important element to understand—that what we practice at the small scale sets the patterns for the whole system. (p 52-53, emphasis brown’s.)

This rings a full peal for me—on the negative-pattern side, because we’ve all seen organizations suffer from bad behavioral patterns that ripple outward. But more directly, on the positive side, because I have recent and kinda life-changing experience with the way focusing on the little squishy soft stuff changes everything.

treating the small scale as the big thing

Both at the COVID Tracking Project and before that, with the SRCCON conference series I worked on at OpenNews, we tried really hard to get the small scale right. In both cases, we did it because it seemed like the right thing to do—and for CTP, where we were trying to cobble together a national pandemic dataset using only our own hands and brains and those of several hundred remote volunteers, the only thing to do. We didn’t do it anywhere near perfectly in either organization, but we really sweated the human details—not in a luxury amenities or benefits package way, but in recognition of the shared human experiences of being embodied and tired and stressed out.

At SRCCON (a series of live conferences for technology, design, and data folks in newsrooms) that meant things like spacious schedules, genuinely good food, free or low-cost on-site childcare, live human captioning, venues with natural light, scholarships, a code of conduct backed by a safety team and plan, and a lot of other things.

At CTP, getting the small scale right meant welcoming, relationship-based training and orientation, steady encouragement to take time away from the work, and a high-fun/high-camaraderie internal community on Slack to help us all grind through months of wrenching, exacting work.

In particular, we tried to build non-punitive, structural safeguards around the human experience of making mistakes—because it let us assemble the best dataset we could, but also for the well-being of the people doing the work. And our decision to give our people explicit forgiveness and support when they screwed up turned into an essential part of the project’s macro culture; I think our insistence on individual grace, transparency, and quick, careful correction is a big part of what let us run a two-week crisis project for a full year. (There were extremely not-fun things, too, like asking even really helpful people whose working orientations went against the project’s emergent culture to step back or all the way out.)

At both CTP and SRCCON, our work on small-scale interactions seemed to help people inside the big systems work together with not just professionalism or civility but real charity and camaraderie to an extent that I found both consistently surprising and legitimately inspiring. And I’m wildly biased—I think the SRCCON folks were largely an exceptionally great subset of journalism and that our long-term volunteers at CTP were some of the best people anywhere—but I also think that handling a lot of small-scale things with unfeigned care and respect let most of them be at their best even in difficult circumstances.

where I got with these readings

Social platforms and communities and sub-community interactions online are a messy combination of pseudo-place and repeating event, and this can make them difficult to think about clearly. I’m finding it useful to spend time wrangling with the way interpersonal patterns and physical/architectural patterns self-replicate and buzz and either reinforce or dampen each other. Alexander and brown’s adjacent senses of pattern and replicating order have been simultaneously clarifying and complicating for me, in ways that feel fruitful for longer-term work.

next time: space, rituals, patterns

Next post up unless I get distracted again: patterns and events as standing waves, taking as a jumping-off point Alexander’s understanding of repetition:

Of course, the pattern of space does not “cause” the pattern of events.

Neither does the pattern of events “cause” the pattern in the space. The total pattern, space and events together, is an element of people’s culture. It is invented by culture, transmitted by culture, and merely anchored in space.

But there is a fundamental inner connection between each pattern of events, and the pattern of space in which it happens. (Timeless Way, p. 92, emphasis Alexander’s)

And probably a lot about ghosts.

Oh and btw, if it looks like I’m poorly rehearsing a fundamental text from your particular field, it’s probably because I haven’t read it and you should send it to me so I can be smarter.

https://erinkissane.com/interhooking-interlocking-nonmaterial
Extensions
Patterns, Prophets & Priests

I’ve been working on communication and community online for a couple of decades, but the past few years have shaken up my understanding of what we’re really doing here. So I’m trying to think in a focused way about how we work and hang out together online and because I’m me, I’ve collected a working library of about 300 articles and papers and 30ish books to get through, most of them fairly technical. Here at the start, though, I’m circling around a set of books that I’m thinking of as framework-level—not all theory, but abstracted out a couple steps from the down-in-it reading about experiments in community and sociability.

One of the clusters of framework texts I’m working with is the series of books by Christopher Alexander and his rotating crew of collaborators in and around the Center for Environmental Structure. These books have been touchstones for me for the past nearly twenty years, and giving myself a little time to loop back through them with my time-altered brain has felt both restorative and important. Right now I’m wrapping up a proper reread of Alexander’s The Timeless Way of Building and finding a bunch of connections to another framework text I’m working through, Octavia Butler scholar, facilitator, and doula adrienne maree brown’s Emergent Strategy.

Update: The first regular post in my reading notes series is now up, as is a partial reading list.

I’m going to post a short(??) series here on those connections, but there’s some stuff I need to get out of my brain first, so this is that.

prophets and priests

I like reading Alexander and brown together in part because they’re so divergent in tone and form. I wrote around this contrast for days, but I think Henry Farrell’s essay this morning on tech-company founders as prophets in the Weberian sense of prophet vs. priest handed me exactly the distinction I’d been flailing around for. (Do read the Crooked Timber post, it’s great—and this, too, for a 15-second orientation.)

Reading Emergent Strategy is a dramatically different experience from working through the Alexander books: brown’s book is brief and delivered on the wing and frequently incantatory, intermingling leadership notes, theory, and poetry; its central dynamic is one of skimming over open water, dipping occasionally down and coming up with an illustrative anecdote. And, full confession, the book has been tricky for me to work with because it operates at a level of abstraction that sometimes reminds me of a very particular kind of learned, citation-heavy, often religious, ultimately charismatic leadership style. The approach itself isn’t objectively good or bad, but I mistrust it in the exact way that ever since I came off a rock face free-climbing at 17, my body locks up at a very specific distance from the ground.

And of course I do, I realized belatedly, reading Henry’s post. The ideas at the core of brown’s Octavia Butler scholarship are themselves a new scripture that Butler’s extremely compelling prophet character, Lauren Oya Olamina, codifies at the founding of her Earthseed religion. It’s prophets all the way down! Not in precisely the Weber sense, because brown is pretty hardcore about proposing rulesets and she’s definitely concerned with everyday life, but the vibe is present—she even talks about her own charismatic qualities in a really self-aware way. I might situate her as a prophet who is committed enough to do priestly things, which: respect.

But I’ve been wrangling a little with why Alexander doesn’t get to me in the same way. I think it’s mostly that in orientation, Alexander was a fundamentally priestly person who had to do some prophet things…which resulted in a huge number of pages of highly detailed explanations, case studies, and process discussions. The man trained as a contractor so he could spend much of his life on building sites, moving stakes and flags and cinderblocks around with his clients and colleagues—and a chunk of the rest of it doing things like coming up with alternative construction management contracts that would allow bureaucratic processes to flow around building projects based on the principles he outlined. And it turns out that reluctant prophets who just want to do priest things are close to my heart.

ANYWAY, now that I’ve done more of the reading around brown’s work—and been dragged backward through the events of the past few years—my sense of Emergent Strategy now is less that it’s a reference work I can’t quite click with and more that it’s a floating world of perceptive field notes and marginalia hovering across and between other things I’m reading and rereading by Octavia Butler (the focus of much of brown’s scholarship), Grace Lee Boggs, and many others. I think this reading is in tune with what brown writes about her own intentions:

I am offering […] a cluster of thoughts in development, observations of existing patterns, and questions of how we apply the brilliance of the world around us to our efforts to coexist in and with this world as humans, particularly for those of us seeking to transform the crises of our time, to turn our legacy towards harmony.

Her work coheres hypertextually and relationally, around and through community, and is at least as embedded in collective forward movement as the work Alexander and his colleagues at the Center for Environmental Structure got up to. And it’s been a joy to realize that many of the ideas at the heart of both Alexander and his colleagues’ work and that of brown and her shimmering cloud of references are not just compatible, but very closely linked.

the big questions

Alexander grappled—as a teacher, in his books, in his buildings—with an elusive but unifying theory of how to make buildings and other built spaces good, whole, and alive in ways that nourish the people inside them, sustain themselves, and improve the environments around them. I am, naturally, permanently fascinated with this work.

If you aren’t familiar with Alexander, I think Dorian Taylor’s conversational intro on the Informed Life podcast—which, rejoice, comes with a transcript—is a solid starting point.

Alexander’s most widely read book is, by a wide margin, A Pattern Language—and it’s a great book that I always enjoy returning to. But I find the work he and his colleagues did around it to be the most valuable for what I’m doing now—especially the theory and reasoning and process laid out in The Timeless Way of Building and his later, intensely philosophical work, The Nature of Order. But throughout all his books, I would paraphrase the central question as: How can we make [buildings, systems, communities, towns] which make the people within them more whole and healthy and alive, which sustain themselves without draining or brutalizing anyone or anything else, and which support and improve the world around them?

Which is, or should be, one of the all-time big questions for anyone who builds anything ever.

Alexander, to his credit, never tried to dodge the weight of his beliefs. When pressed in an awkward but interesting short documentary to explain his goal for a particular building project, he glanced at the speaker with an expression of mild frustration and offered only, “To make God appear in a field.”

adrienne maree brown is also investigating big mysteries, but from a different angle; she takes it as table stakes that her readers are dedicated to social justice and communal survival. But she’s also done that kind of justice work long enough to recognize that it’s difficult and often heartbreakingly ineffective—and frequently needlessly destructive to the people inside it. (My friends who’ve worked in human rights organizations have testified at length about this phenomenon.) I think the central question for brown is: how can we behave with one another in ways that make those goals achievable—and our achievements enduring—at every scale, beginning with the way we treat one another as individual souls working toward a common good?

Reading Alexander and brown together, it seems to me that right in the center of the Venn diagram of “bungled deadened spaces that make us and the world worse,” and, “perverse behavioral patterns and incentives that warp even well-intentioned interactions” is…

the internet

Much of the online work we’ve done in the past couple of decades has failed to achieve any of brown’s or Alexander’s goals.

Instead, we’ve built or accepted…

  • the sprawling social platforms that greased the slide into increasingly dehumanizing politics and summoned the worst imaginable Biblically accurate angels of our nature
  • the rat-king of tech companies, news orgs, and entertainment conglomerates competing against each other to extract our attention and data for onward sale
  • the sacrifices of some human minds to the labor of protecting wealthier people from seeing atrocities in their feeds and some human bodies to the crushing and sometimes deadly labor of making frictionless online orders bloom into take-out and socks and perfect glass phones in our hands

…with, at its center, capital’s unquenchable rapacity and the high-value tech-world scammers feeding it increasingly hollow and absurd confidence games—Theranos one-offs, crypto, the legless metaverse—until nearly every tech corp is joyfully announcing that human interactions and bodies of knowledge can be replaced 1:1 with the hollowest and most absurd of confidence games, large language models pretending to be tools that can be trusted with anything at all.

We did some great things, too, almost all of them related directly to the starry scatter of individual and small-group (and briefly, movement-sized) human connections that are heartening, strengthening, and good for the whole system. Also Wikipedia and whatever your equivalent of appliance-repair YouTube is, without which I personally would be back to breaking things with screwdrivers and swearing.

For a good chunk of the past 10 or 12 years, it’s felt to me like we were stuck in an age that shoved genuine attention to human-ness and care to the margins. (Margins as in sheltered places, as in idealistic founders who get pushed out after acquisitions, as in think tanks that absorb huge grants without exerting visible influence over the internet as it’s lived, as in the teams who get fired first when the Lay Off Your Smart People Challenge spreads like norovirus through the tech sector, again.)

But now? I’m not sure we’re as stuck as we were even a year ago, though not because of any of the certainties the fully financialized humans trying to make Web 3 happen are selling.

I think things feel wiggly and interesting right now because we really just do not know how things are going to shake out. Which means that maybe people who make things online but don’t have billions of dollars or a seat at the VC table can have more influence over the next generations of online sociability and communal life. But experience suggests that the window of opportunity will slam closed on our fingers as soon as the Duplo blocks of technocapital sort themselves into a sturdier new configuration, so we gotta try to get everyone out while we can.

Which, I suppose, is why I’m here right now. And tomorrow I’ll get on with the reading notes.

alongside this
https://erinkissane.com/patterns-prophets-and-priests
Extensions
Blue Skies Over Mastodon

[Image: a label from White Wave’s vegetarian Soysage product, featuring a very late-1970s hand-drawn smiling pig with wiggly ears and a garland of flowers. The listed ingredients include okara (soy fiber), nutritional yeast, wheat germ, whole wheat flour, soymilk, safflower oil, and shoyu.]

In the early 80s, my mom worked a couple shifts a month at a little small-town food co-op that smelled like nutritional mummy. She brought home things like carob chips and first-generation Soysage, which remains one of the grossest things I have ever eaten. This was also a real boom time for unsalted Legume Surprise and macrobiotically balanced grain mush that tasted like macrame owl.

This food sold reasonably well to a fringe class of Americans, including many who were rightfully worried about pesticides, animal cruelty, and the health effects of a meat-and-potatoes diet, and also a bunch who were just a real specific kind of nerd. And there was a strong current in the community of scorn for people who were lured into eating junk food when they could be eschewing seasonings until they could properly enjoy the glories of gelled millet or whatever.

I’ve been thinking about this a lot over the past few months on Mastodon and especially this week, as I hung out observing the pupal stage of Bluesky.

If you haven’t used either product, you might be forgiven for thinking that they sound really similar in structure except that one has a connection to Dorsey and the other does not. TechDirt’s Mike Masnick has a great breakdown of the two networks as successors to Twitter.

To get the background out of the way: Mastodon is a decentralized social network developed in the open and built on the ActivityPub protocol. It was founded by German software developer Eugen Rochko and is presently a German non-profit company. Bluesky is a social networking app built on the new Authenticated Transfer Protocol for decentralized networks, which is being developed in the open by the Bluesky team. Bluesky launched out of Twitter as a project promoted by Twitter founder Jack Dorsey and is presently an independent US-based public benefit company headed by Jay Graber (Dorsey retains a seat on the board).

we are not the same

Things got interesting in the Bluesky closed beta this week when a ton of people got let in while the app was still in an unstable state—no block function, semi-working mute function, problems with enormous threads. Posters ran around threatening noted centrist Matt Yglesias with hammer emojis, etc.

Lots of people joined the Bluesky beta and posted about why it worked better for them than Mastodon did. A big chunk of Mastodon responded with a social immune response intended to warn people away from Bluesky for a very long list of reasons, including its association with Dorsey, its incompleteness and everything that clearly meant about the intentions of the developers, and the threat that it would split the decentralized network vote. Many, many posts that amounted to, “Bluesky obviously won’t ban Nazis, let me repeat an enlightening story about a Nazi bar I’ve heard 400 times.”

Incidentally, when a straightforwardly “I’m a Nazi” Nazi showed up in the beta, people used the report function, and the Bluesky team labeled the account and banned it from the Bluesky app and restricted promotion of the account of the person who invited him. This changed exactly none of the tenor of the Nazi conversation on Mastodon, but it happened.

I have a suspicion that a lot of the defensive maneuvering on Mastodon is happening because Mastodon fans know that the network absolutely cannot compete on user friendliness and basic social functionality, so they’re leaning hard into the things it does get right—and then, in some cases, trying to shame people into not even thinking about trying a competing network.

But about that ease of use problem. Let’s rewind for a second.

bouncing off Mastodon

During the big waves of Twitter-to-Mastodon migrations, tons of people joined little local servers with no defederation policy and were instantly overwhelmed with gore and identity-based hate. A lot of those people, understandably, did not stick around, and plenty of them went back to their other social spaces and warned others that Mastodon wasn’t safe. For people who lucked out and landed on a well-moderated instance, finding fun people to follow was hard and actually following each of them often involved three separate steps, depending on which link you happened to click.

It’s a lot of hassle for a gamble on a network that might not end up being what you need.

Over on Bluesky, by contrast, once you’re in the beta, it’s super easy to sign up, find people, follow them, and participate in conversations. I’m seeing a lot of the people I’ve missed the most since I stopped using Twitter in like 2018, which is a delight, but I’m also not really posting because it’s a chaos machine and it’s still way too early for me to know if I really want to contribute there.

The thing is, networks can recover from even big initial fuckups. Mastodon developers could have made a project of interviewing people who wanted to leave Twitter and turning their needs into a roadmap. Writers and designers could have made a great brief visual + textual guide to a few fun, tightly moderated instances to join, with pros and cons and a comparison of moderation and defederation policies, and slapped that on the front page of Join Mastodon. Or the team could have taken any of dozens of other suggestions for streamlining. None of that happened.

Nor did the Mastodon core developers take swift action to—well, do much of anything. You can recover from bad product design choices by changing things, but you do have to change things.

Editing on May 1 2023 to add: Eugen Rochko published a new blog post today that discusses immediate changes to the mobile sign-up flow, which should help with both the initial barrier and, maybe more importantly, the initial safety problem of people ending up on bad instances because they didn’t know any better. (Inevitably, lots of Mastodon users think this change is a terrible idea, but I’m not getting into that.)

In what I think is a positive sign, Rochko also wrote:

We’re always listening to the community and we’re excited to bring you some of the most requested features, such as quote posts, improved content and profile search, and groups. We’re also continuously working on improving content and profile discovery, onboarding, and of course our extensive set of moderation tools, as well as removing friction from decentralized features. Keep a lookout for these updates soon.

I’m adding this new context here because I think it kind of leapfrogs some of what I wrote in this post, and I’m leaving the rest of the post intact as a discussion of how things had been going until now. But I’m generally optimistic about these statements, and I hope they mean we’ll see more changes for the good. (end edit)

unfriendly design feeds insular culture

I—a nerd—actually really like Mastodon most of the time, but I would like it so much more, and feel like it was doing a lot more good in the world, if it were more welcoming and easier to use. When I raise these points on Mastodon, I get a steady stream of replies telling me that everything I’m whining about is actually great, that valuing a “pleasant UI” over the abstraction of federation is shallow and disqualifying, and that people who find Mastodon difficult don’t belong anyway, so I should “go join Spoutible” or whatever.

And of course this stuff shows up in much worse ways for at least some Black and brown people on Mastodon.

I hate it that I can’t in good conscience encourage Black friends to get on Mastodon, because I know they’re going to be continuously chided by white people if they mention race or criticize anything at all about Mastodon itself. I hate that “a difficult sign-up process keeps out lazy people with bad culture” is a thing in so many Mastodon conversations. (Fun fact: if you hold this idea up to your ear, you can hear it say “sheeple.”)

I have absolutely zero fortune-telling to offer re: Bluesky. The AT protocol approach is enough of a tweak on existing models that I think it’s pretty much impossible to tell how it’s all going to play out when the technical abstractions meet actual users at scale—most of all, because it remains to be seen whether or how much the team will accept feedback on things that aren’t working (and for whom). In what seems to me like a moderately good sign, late on Saturday, Bluesky CEO Jay Graber posted:

At the very beginning of bluesky I said the tech would be straightforward to build, but moderation, and designing decentralized moderation, would be hard. It is. I talked with a bunch of people about it at our meetup today, but need to get the chance to sit down and write—so, logging off, see you tomorrow, and I hope we can get more of your proposed approaches implemented soon.

Maybe they’ll figure it out, maybe they won’t, but I would love to see even half the kicked-anthill energy being spent hating a closed beta app directed toward making Mastodon better for more people.

the strongest path forward for Mastodon advocates

I haven’t mentioned the simplest and IMO best critique of Bluesky and most other big platforms, which is that they emerged out of venture-capital galaxy brain, which has the moral sense of an AI chatbot. After the past decade or so on Twitter, “I won’t touch anything Jack Dorsey has touched” is a reasonable reaction. “I will only put my social labor into platforms that can never benefit billionaires” is fair.

But the missing step, to me, is when people with principled objections to other platforms are unwilling or unable to make the alternatives of their choosing more welcoming to more people. And there are absolutely people trying to do the work, but they’re dependent on the choke-point of what Mastodon-the-company decides is valuable. (Almost like something…centralized?)

I know “product design” is super-capitalist phrasing, but I want to invoke the idea of something made explicitly to be appealing, not just nutritious.

One of the big things I’ve come to believe in my couple of decades working on internet stuff is that great product design is always holistic: Always working in relation to a whole system of interconnected parts, never concerned only with atomic decisions. And this perspective just straight-up cannot emerge from a piecemeal, GitHub-issues approach to fixing problems. This is the main reason it’s vanishingly rare to see good product design in open source.

Great product design is also grounded in user research and a commitment to ongoing evaluation and iteration. For something like a decentralized social network, it also requires letting people from many distinct communities help steer the ship—and building ways to work toward consensus in some areas and accept both conflict and compromise in others. And great design at mass scale requires the core team to value mass adoption and push back—hard and loudly—against the idea that inconvenience is good because it filters out undesirables.

This doesn’t mean that I think Mastodon should necessarily implement full-text search or the whole set of interlocking patterns that constitute Twitter-style quote posts. But particularly given the third-party pressure on both search and quote posts, I think it’s way past time to do full-scale user research and design work on ways to integrate some kinds of search and quotation in some places and in ways that preserve privacy, safety, and autonomy. And to handle the whole nested doll of problems related to sign-up, discovery, and following, for starters.

while I’m opinionating

I feel enormous empathy for tiny teams doing high-pressure work. I think Rochko and his team have pulled off great work over the past six years, and I think the tendency to assume the worst motivations for every action maintainers take is a great example of the way that treating open-source projects like merchants and behaving like enraged customers is gross and destructive. But I also think the best way out of the overloaded-maintainer nightmare is to:

  • communicate transparently—and mostly not in unfindable replies to random people,
  • make alliances with people who have capacities you lack, like user research and distributed deliberation, and
  • devolve power whenever you can.

I recognize that that last piece is incredibly difficult to do when you feel like your singular human judgment is at the core of something huge, because judgment doesn’t necessarily scale. But my big hope for Mastodon is that the core maintainers find a way to do it in the very near future—or that other organizations step up to fund and shepherd forks of the project.

tl;dr

If we want more people to enjoy what we believe are the benefits of something like Mastodon, it’s on us to make it delicious and convenient and multi-textured and fun instead of trying to shame people into eating their soysage and unsalted soup.

I hope all of that is actually possible for Mastodon, because a lot of great people very much want it to become a more welcoming place. But the longer Mastodon stays in Linux-on-the-desktop mode, the more likely those people are to take their energy somewhere where it’s valued.

https://erinkissane.com/blue-skies-over-mastodon
Matches, Pebbles, Hair, Salt

Grief is weird in the nineteenth-century sense, uncanny and otherworldly; it eats words and turns up every card blank. Two years ago, I left the most meaningful and by far the most wrenching job I’ve ever had. I wrote about it before we wound down, and afterward said almost nothing.

Now I find myself having things to think again, and maybe write. First going back a little, then moving on.

The COVID Tracking Project started with a question about how many covid tests the US was doing in early 2020. This question was in service of a much larger one: What is happening to us?

If you want to know more, we wrote about why the COVID Tracking Project happened, how our volunteers worked, and how it affected the world.

To answer the first question and then the second, we built an organization to compile, contextualize, and disseminate a daily national pandemic dataset for the United States, running on almost entirely volunteer labor. It was a frankly unhinged thing to do, but then it worked. And then we had to keep doing it because so many organizations and people depended on it, so we did it not for the two or three weeks we’d planned on, but for a full year.

Nothing we did compared to front-line medical work, but it was also just untenable: barely doable in the technical sense, subject to immense pressures from every side, painfully exacting when no one’s brain was working well, and built on scattershot datasets that became fractally more complex over time.

With our faces pressed against the glass of all that data, we started to get a feel for the underlying patterns through which case numbers turned to hospitalizations, and then, some weeks later, to death counts. We weren’t any of us special Cassandras, just people confronting the stark mathematics of disease progression, fatality rates, and time. I think most of us shared the sense that if we could work harder, communicate more clearly, warn louder, we might be able to prevent some of the suffering we could see coming every time cases ticked up. Also that if we ever seriously fucked up in a way that shook confidence in our work, we’d be responsible for not preventing that suffering. People were dying by the tens of thousands, and some of them, inevitably, were people we knew and loved.

And all the while, the federal government—the people who should have been doing the work to begin with—flailed and bit itself and hid its own best reports from the public. So our hundreds of volunteers and eventually paid core staff worked ridiculous hours, changed their lives, quit jobs, moved cross-country, and put degrees on hold. (I was the project’s managing editor and co-lead of the full org with Alexis Madrigal, and the only way I can describe what happened to me is that I was consumed.)

But while all that was happening, something else was running alongside: Our bare-bones work offered a severe kind of clarity and purpose, and the culture of gratitude, pragmatic support, and unabashed love we built inside the project kept us alive. It was grotesquely imperfect, but it worked well enough that a couple hundred core people kept showing up long past the point when any rational actor would have stopped.

after the waves wash out, what remains?

I came into the tracking project believing that there were better ways to work together than most of us experience, that most people want a chance to do good work, and that treating every human with unfeigned care and fully demonstrated respect isn’t a nice-to-have, but the thing itself. I came out of the project with those truths written in fire on my bones.

For the past two years, I’ve put my energy into my family, my damaged health, and my other work, and spent some time feeling my way through a whole tire-fire of unresolved anger and grief over what did happen to us in that first terrible year. The internet dropped clean off my list of things to care about. But then the world changed again and I got back some of that Le Guinean sense that the locked-up technological present was maybe more malleable than I’d thought. I started tracing the threads of work and sociability and community back through my two decades of work in tech and journalism. Shapes began to emerge, though less in the mode of a designer at a drawing board and more like a Miyazaki outtake of something getting winched up, rusted and dripping, from the water.

So: New website. New writing. What is happening to us?

I was rereading the old OSS Simple Sabotage Field Manual this week and was struck by this bit (emphasis all mine):

Use materials which appear to be innocent. A knife or a nail file can be carried normally on your person; either is a multi-purpose instrument for creating damage. Matches, pebbles, hair, salt, nails, and dozens of other destructive agents can be carried or kept in your living quarters without exciting any suspicion whatever.

Hair! It’s one jar of teeth short of folk magic.

I’ve been thinking about how the things we carry with us online—hopes and habits and norms and assumptions and histories—sabotage our ways of working and being together. But I’ve come to suspect that with careful examination, some of them can also serve as instruments of liberation and maybe communion, whether that means breaking the old machines or building new and better things of our own. So that’s what I’ll be writing about here.

https://erinkissane.com/matches-pebbles-hair-salt