Robin Berjon

Don't impale yourself on the tipping point

Decent Imaginaries

[Header image: desert vista with a gradient effect going from black-and-white and grainy on the left to highly saturated colours on the right, including a bright blue sky.]

What now? The digital world seems to be in the middle of a long-awaited apocalypse, in a teetering dance on a razor-sharp tipping point, and no one knows what comes next. We've had a Reddit user revolt preceded by an Etsy strike. Google Search and those various Facebook things, online empires of metastatic suburban strip malls as far as the eye can see, hurtle fast toward ever deeper senescence in the ocean of garbage content that their capricious technocracies, perhaps unwittingly, demanded of the world. And, in a tailspin that comes as a shock to exactly no one, Twitter has turned into everything you'd expect from a site dedicated to the cringiest Space Karen sycophancy for terminally mediocre billionaire boot-kissers. Or, at least, that's what's become of what's left of it.

Many of us, all across the online world, can feel the era burn and rot away. But this twilight of the gods tells us nothing of what renewal ought to look like when morning comes. And so the question: what now?

Imagining renewal is hard work. Our possibility space — the space of what we can be and do — is the intersection of the space of what we can physically be and do with the space of what we can imagine ourselves being and doing. The imaginary part is often discounted, or at best given some trite Hollywood believe-in-your-dreams treatment, but it shapes the world just the same. None of us are immune to having our minds tangled up in the limiting ideologies and bureaucracies of those with power: making the status quo seem natural and deleting affordances to other futures is the not-so-soft power of hegemony. Imagining renewal is how we break domination and make change when making change becomes possible.

In preparing for change, many of us talk about decentralisation (or "decent" for short) but that's an abstract word that fails to evoke just how thoroughly political a project it is. There is no technical solution that can produce decentralisation on its own; decentralisation cannot be technosolutionist and succeed at the same time. The "decent" project is about bringing democratic governance where it hasn't been before, and it should hopefully be obvious to all that this takes more than just technology. Some technologies have been necessary to the emergence of democratic forms of course, but never sufficient: you don't get an informed polity just by inventing writing.

In fact, deploying a novel kind of technology that makes deep changes to the assumptions people have learnt to live with and expect, and hoping that the matching social norms needed to make it work will simply materialise, is naive and potentially dangerous. People will lose the safety nets and coping mechanisms of what they know (however imperfect it may be) without any sense of how to replace them, making them vulnerable to harm. There is no such thing as "build it and they will organise": building and organising have to go hand in hand.

In the same way that ecologists of all stripes are not just finding technical solutions to save the world but also popularising beguiling ideas that few of us would have imagined without them — silvopasture, farmer-managed natural regeneration, carbon architecture, microgrids — we technologists need to offer clearer, concrete, bewitching futures for us all. Or perhaps not technologists, exactly, but some kind of smuggler travelling from the techno to the social and back.

This is where those of us who understand the technology well enough to know what is possible have a responsibility to work on our common technosocial imaginary, to feed the mind's bestiary so that all of us can grasp how we can change the shape of our possibility space.

I want to live like decent people

One way to understand computer systems in large social settings is as automated bureaucracies. Forget the hype and the jargon, that really is what they are. I know that no one gets excited about bureaucracy (except for social scientists, no kinkshaming) but what kind of bureaucracy runs any given important part of your life (and of society at large) matters a lot. What are its rules, how accountable is it, what are your pathways to changing it — that's the stuff of democracy.

One bureaucracy that we should get right is trust & safety. Setting aside debates over some of the Bluesky team's unforced errors, people have been asking pointed questions about how trust & safety could work, or whether it can work at all, in a decent world. Does each instance in a federated protocol have to do its own trust & safety? How much labour is involved, how much is devolved to users, will this not put all the load on those who need a safe environment the most? Can technology actually even help? Without solving every last detail, we can begin to imagine what that might or might not look like.

We can immediately rule out reproducing the current system in which a tiny number of companies govern permissible speech and what safety means for billions of people. We don't want social media that's like Twitter except without a demented owner. You cannot fix social media by having a non-profit run it or by guiding it with some ethical principles (which at any rate means absolutely nothing without accountability to people). There is simply no universal ranking of content acceptability — there are in fact likely as many different rankings as there are people — and even if there were there wouldn't be a universal line to draw below which everyone agrees that content should be blocked. This is the Google Search Fallacy: that it's actually possible for a single, unified system to organise other people's information at any significant scale in a way that is equally respectful of all and that doesn't have undesirable sanitising side-effects on the world.

Cory Doctorow put it well: "The problem with, say, Meta, is only partially that Mark Zuckerberg is personally monumentally unsuited to serving as the unelected, unaccountable permanent social media czar for three billion people. The real problem is that no one should have that job. That job shouldn’t exist. We don’t need to find a better Mark Zuckerberg. We need to abolish Mark Zuckerberg."

Centralised moderation isn't the only option that doesn't work. We also need to eliminate any option that doesn't have a credible means of doing the work — and it's a lot of work, meaning that either someone needs to get paid or you need a lot of volunteers. And punting it to the users can never work.

Importantly, you can't support trust & safety at the instance level of a federated system, with each instance doing the work on its own. It's simply too hard and too labour-intensive. Consider this simplified accounting: moderation can be tractable if you have a group of several dozen people whom you mostly know. But if every instance can only have on the order of a hundred users in order to stay tractable, then we need millions of instances to serve everyone online, and as an admin you need to decide, for each of them, whether to federate with it or not. You can move the numbers around, making instance moderation harder and federation decisions easier, but so long as instances act independently, the complexity of the work never vanishes.
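To make the orders of magnitude concrete, here's the back-of-the-envelope version of that accounting; the numbers are illustrative assumptions, not measurements:

```ts
// Back-of-the-envelope sketch of the instance-level moderation argument.
// All the constants here are illustrative assumptions.
const onlinePeople = 5e9;     // roughly everyone online
const usersPerInstance = 100; // size at which volunteer moderation stays tractable

const instances = onlinePeople / usersPerInstance; // 50 million instances

// Each admin must decide, for every other instance, whether to federate with it.
const decisionsPerAdmin = instances - 1;              // ~50 million judgement calls
const totalDecisions = instances * decisionsPerAdmin; // ~2.5e15 across the network

console.log({ instances, decisionsPerAdmin, totalDecisions });
// Shrinking one number inflates another: bigger instances mean fewer
// federation decisions but harder in-instance moderation. The work never vanishes.
```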

What's more, you can't really defederate from large instances. Email is federated, but 85% of it is owned by Google. If, as an email server admin, you feel that Gmail lets people send too much spam, or have some such disagreement, could you tell your users "ah, well, we'll just defederate from Gmail"? No. The primary value of federation (if you can keep it) is to create conditions that encourage the maintenance of an open protocol and serve as a forcing function for infrastructure neutrality. But the greater the load you put on instance admins, the higher the risk that you'll end up with most of the protocol captured by large entities and the rest a throng of dysfunctional fiefdoms.

For a democratic form of trust & safety to function, we need to distribute and to reuse work, and to create room for cooperation. Divide and conquer, allow for specialised excellence, support a variety of governance and funding models, and deploy at different scales from large groups of instances all the way down to a single person.

Bluesky have helpfully described composable moderation and also support subscribing to mute lists (which SkyWatch is putting to good use). We can also imagine shared blocklists, both for content that shouldn't just be labelled and for people who shouldn't just be muted. (Blocking can be tricky in decent systems, but that's another conversation.) However, they focus primarily on what the protocol can do and spend little time explaining what this looks like socially: who actually does the work, and how that can support social media we might not regret being on (at least until someone brings up Alf again).

The complement to composable moderation at the protocol layer is polycentric governance: a system with multiple decision centres, each of which has limited and autonomous responsibilities and operates under its own rules. That might sound abstract, but in practice it simply means that different types of issues can be handled by completely different bodies with different principles and governance, and that you can choose which ones you want. For example (staying light on details):

- Spam is largely a problem of scale and automation, well suited to a small number of specialised services that many instances and users can subscribe to.
- Illegal content requires specialised skills and relationships with law enforcement, and is best handled by dedicated organisations.
- Community norms are best enforced by community moderators who actually know their people, at the instance or group level.
- Personal preferences can be handled through labels, mute lists, and blocklists that anyone can publish, share, and subscribe to.

And then, because the protocol supports composability, you bring your selection together at the instance and personal levels. I didn't go into much detail, but each type comes with different choices, governance, appeals, etc. Everyone can get their own moderation blend.
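To make this less abstract, here is a minimal sketch of what blending your own moderation sources could look like; every name and interface in it is an illustrative assumption, not any real protocol's API:

```ts
// A sketch of composable moderation: a user subscribes to several
// independent moderation services and blends their verdicts locally.

type Verdict = "allow" | "label" | "mute" | "block";
const severity: Record<Verdict, number> = { allow: 0, label: 1, mute: 2, block: 3 };

interface Post { author: string; text: string }

interface ModerationService {
  name: string; // e.g. a specialised anti-spam labeller, a community blocklist…
  review(post: Post): Verdict;
}

// One simple blending policy: take the strictest verdict among the
// services this particular user has chosen to subscribe to.
function moderate(post: Post, subscriptions: ModerationService[]): Verdict {
  return subscriptions
    .map((s) => s.review(post))
    .reduce<Verdict>((worst, v) => (severity[v] > severity[worst] ? v : worst), "allow");
}

// A hypothetical blend: a large-scale spam labeller plus a community-run list.
const spamLabels: ModerationService = {
  name: "spam-labels.example",
  review: (p) => (/FREE CRYPTO/.test(p.text) ? "block" : "allow"),
};
const communityList: ModerationService = {
  name: "community-blocklist.example",
  review: (p) => (p.author === "known-harasser.example" ? "mute" : "allow"),
};

console.log(moderate({ author: "alice.example", text: "hello!" }, [spamLabels, communityList]));
// "allow" — and the same post can be judged differently under another blend.
```

The point of the design is that the blending happens at the edge: the services do the heavy labour once, and everyone reuses it on their own terms.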

The important part is that the workload gets distributed and shared, and that there are enough moderation sources to choose from that you don't get a sanitised world out of it. It's also important to acknowledge that this does require some amount of labour from users, but it's essentially the labour of tending to one's neighbourhood. I also want to be clear that I'm skipping over a number of challenges and overhead — I'm only outlining the general shape of it to make the case that it's possible. That doesn't mean that it's not hard.

We can make this happen, and we don't need to wait for Bluesky to magic it into existence. The whole point of building an open, democratic system is that we all own it — which also means that we can decide to pick up things to work on. If enough of us want something, we can assemble interested parties, have people bring requirements, sketch out details, and implement. (I realise that Bluesky isn't open yet, but we should plan as if it were.)

Decently real

Returning to the responsibilities of technologists, the first step towards making decent futures a reality is to acknowledge that, while decentralisation is a political project and seeks to benefit everyone, very few people will come and stay on your system just because they believe that your software practices or protocol architecture are ideologically correct. And neither should they. People do care about having better digital lives and they will act on it if they can — but users are smart and they've been fooled before: they need evidence that it's not just an improvement in your head. For decent to prevail, it has to be better, it has to be better on people's own terms, and it has to show it's better, not tell. I've been doing open source and open standards for going on thirty years and I have zero fucks left for the zealots there who live to snort pedantry off their own asses; I wouldn't expect anyone else to have to put up with them either. Authoritarian software from a bunch of hapless, smugly preening Silicon Valley technocrats is bad, but no one wants it replaced with a masochistic priesthood of preachy beards either.

Focusing on the needs of real people isn't something that you can push off to UI design and ignore at the protocol level. Decentralisation isn't a property of your software or protocol, it's a measure of how much the people who use and are affected by a system can have agency over it — and that agency has to be reflected at the lower layers too.

In order to support human agency, a protocol needs to achieve two things: it needs to prevent the accumulation of power imbalances between parties (maintaining equality) and it needs to make it easy for users to cooperate in building the rules they want for how the protocol's operation affects them. To put it differently, the success of decentralisation and, more to the point, of a democratic digital world rides not only on liberation but also on organising. As Amartya Sen put it: "Individual freedom is quintessentially a social product, and there is a two-way relation between (1) social arrangements to expand individual freedoms and (2) the use of individual freedoms not only to improve the respective lives but also to make the social arrangements more appropriate and effective."1

That's a little abstract, so let's tease it apart some.

The first requirement, maintaining equality and thereby liberating people, is supported, to the extent possible, by self-certifying protocols. A non-technical way of understanding their value is that they provide freedom from authority. For instance, people who interact with you can know that you are who you claim to be without relying on the authority of, say, Google's authentication system. Or people reading your posts can know that they came from you and weren't modified, without trusting, say, Twitter to convey them faithfully. (Right now, both could impersonate you or make you say whatever they want.) Freedom from authority is important because authority becomes a chokepoint for control as well as for rent extraction. That control can be subtle, as with nudge authoritarianism, the common practice of using high-powered statistical analysis to decrease user agency — but it's there. For a democratic digital world, we can't get rid of authority fast enough.
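To make "self-certifying" concrete, here is a minimal sketch using Node's built-in crypto. Real protocols (ATProto, Nostr, IPFS) differ in the details, but the principle is the same: authorship and integrity are checked against keys, not against a platform's say-so.

```ts
// A minimal sketch of what self-certification buys you.
import { generateKeyPairSync, sign, verify } from "node:crypto";

// Your identity is your key pair; no platform vouches for you.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const post = Buffer.from("I said this, and no server can alter it.");
const signature = sign(null, post, privateKey); // Ed25519 takes no digest argument

// Any reader can check authorship and integrity directly against your
// public key, without trusting whichever server relayed the post.
console.log(verify(null, post, publicKey, signature)); // true

// A tampering relay is caught immediately.
const tampered = Buffer.from("I never said this.");
console.log(verify(null, tampered, publicKey, signature)); // false
```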

Then people need to be able to get that freedom to work for them. "What is already ours, we need not ask for; through the cracks, we seize it."2 They need the freedom to organise, in a variety of polities, so that they can cooperate towards their goals and solve problems that they could not begin to touch alone. Some of that work can be delegated to market options but ultimately those too need to be kept in check by organised people. And by "freedom to" I mean actual, substantive freedom: it needs to be easy (enough) and intrinsic to the system, not an impractical hypothetical.

A self-certifying protocol without a cooperation layer is (exactly) like git without GitHub. This offers a lesson: failing to build the cooperation layer leads right back to capture no matter how good the tool. That's why git is simultaneously an extremely successful self-certifying system and a failed attempt at decentralisation.

Cooperation is "the other invisible hand". There are good reasons to believe that it is better to build social software — a category much broader than just social media, one that covers all knowledge & goods discovery as well as social interaction software: search, feeds, chat, browsers… — on democratic principles. See Stewardship of Ourselves and The Internet Transition for much longer views on this. Ultimately, it's how you get Nazis to go fuck themselves, as any civilised polity should.

There's great current work around cooperation. Computer-supported cooperative work has been a field of study since the 1980s and, more generally, cooperative computing (as Dietrich Ayala dubs it) is a blooming field. Between a massively online population, new primitives like CRDTs (conflict-free replicated data types) and content-addressed databases that can be safely written to in parallel by anyone, the emergence of post-Ponzi, low-emissions blockchains with diversified approaches to consensus, and all the work on automating the bureaucracy of governance with DAOs, we can legitimately hope to see cooperation emerge as this decade's defining aspect, online as well as offline. Self-certifying protocols are also making great strides: IPFS, Peergos, ATProto, Nostr, and more.
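As a taste of why CRDTs matter for cooperation, here is a minimal sketch of one of the simplest: a grow-only counter that any number of people can update in parallel and that always converges, no coordinator required. (Production systems would reach for libraries like Automerge or Yjs rather than hand-rolling this.)

```ts
// A grow-only counter CRDT: each replica only increments its own slot,
// and merging takes the per-replica maximum, so merges commute.
type GCounter = Record<string, number>; // replicaId -> local count

const increment = (c: GCounter, replica: string): GCounter => ({
  ...c,
  [replica]: (c[replica] ?? 0) + 1,
});

// Merge order never matters: max() is commutative, associative, idempotent.
const merge = (a: GCounter, b: GCounter): GCounter => {
  const out: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) out[id] = Math.max(out[id] ?? 0, n);
  return out;
};

const value = (c: GCounter) => Object.values(c).reduce((s, n) => s + n, 0);

// Two replicas write concurrently, then sync in either order.
const alice = increment({}, "alice");
const bob = increment(increment({}, "bob"), "bob");
console.log(value(merge(alice, bob))); // 3
console.log(value(merge(bob, alice))); // 3 (same result, no coordination needed)
```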

That last paragraph had a lot of jargon, and even if you know what most of it means you might still be struggling to figure out how to put it to work. The entire project of decent systems today is figuring out how to assemble these parts into a technosocial system that provides freedom from authoritarian software and that makes cooperation easy, even pleasant. Technology cannot fix the fact that freedom is an endless meeting, but it can make the meetings shorter and less painful with new collaborative tools and by automating the worst of the bureaucracy. It's hard work because most people — not just the youngest, not just the non-technical — have an experience of computer systems that is almost entirely authoritarian: algorithms that rank for you, defaults that trick you, absurd password requirements, corporate IT systems…

But that's the job. That's why working on imaginaries matters. Technologists cannot (and should not) cross this chasm alone — the whole point is for everyone to have a say — but the brunt of the work in explanation, in usability, in support, in listening, and ultimately in imagination has to come from us. After all, we're not here to fuck spiders.


Acknowledgements

Many thanks to Matt Salganik for telling me about Freedom is an endless meeting and many other cool things.