In the past decade there hasn’t been a year without a politician calling for real names on the internet. Some even want to force people to use real photos as profile pictures. All in the name of stopping online hate, though enforcing real names has long been shown to actually make the problem worse.
This article presents another solution, one that has actually been proven to keep communication friendly, even in the most anonymous environment of the fully decentralized Freenet project.
And that solution works without enabling censorship.
The Web of Trust (WoT) was conceived when Frost, one of the older forums on Freenet, broke down due to intentional disruption: some people realized that full anonymity also allowed for automated spamming without repercussions. For several months they drowned every board in spam, so people had to spend so much time ignoring spam that constructive communication mostly died.
Those spammers turned censorship resistance on its head and censored with spam, much like people who claim that free speech gives them the right to shout down everyone who disagrees with them. Since the one central goal of Freenet is censorship resistance, something had to be done. Frost's problem was that everyone could always write to everyone. Instead of entering an arms race of computing power or bandwidth, Freenet developers chose to encode decentralized reputation into communication, focused on stopping spam.
How it works
To make your messages visible to others, you have to be endorsed by someone they trust. When someone answers some of your messages without marking you as a spammer, that counts as endorsement. To get initial visibility, you solve CAPTCHAs, which makes you visible to a small number of people. This is similar to having moderators with a moderation queue, but users choose their own moderators.
That still provides full anonymity, but with accountability: you pay for misbehavior by losing visibility. This is the inverse of Chinese censorship: in China you get punished if your message reaches too many people. In Freenet you become invisible to the ones you annoy — and to those who trust them but not you (their own decision always wins).
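The rule that your own decision always wins over propagated trust can be sketched in a few lines. This is a hedged illustration, not the actual Freenet WoT algorithm: the function name, the flat trust dictionary, and the averaging of peer opinions are all simplifying assumptions.

```python
# Illustrative sketch (NOT the real Freenet WoT computation): decide
# whether Bob's messages are visible to Alice. Alice's own rating of
# Bob always takes precedence; only when she has no direct opinion do
# we fall back to the opinions of the people she trusts.

def visible(alice, bob, trust, threshold=0):
    """trust: dict mapping (truster, trustee) -> score in [-100, 100]."""
    # Own decision wins: a direct rating overrides everything else.
    if (alice, bob) in trust:
        return trust[(alice, bob)] >= threshold

    # Otherwise, consult the people Alice trusts positively.
    peers = [p for (a, p), score in trust.items() if a == alice and score > 0]
    ratings = [trust[(p, bob)] for p in peers if (p, bob) in trust]
    if not ratings:
        return False  # no trust path: stay invisible until endorsed
    return sum(ratings) / len(ratings) >= threshold
```

With this sketch, if Alice trusts Carol and Carol endorses Bob, Bob is visible to Alice; but if Alice herself marked Bob as a spammer, he stays invisible to her no matter what Carol says.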
But wait, does that actually work? Turns out that it does, because it punishes spammers by taking away visibility, the one currency spammers care about.
It is the one defence against spam which inherently scales better than spamming. And it keeps communication friendly.
I have now claimed three times (counting the title) that the WoT keeps communication friendly. Let's back that up.
For the last decade, Freenet has been providing three discussion systems side by side. One is Frost, without a Web of Trust. One is FMS, with user-selected moderators as its Web of Trust. And the third is Sone, with propagating trust as its Web of Trust. On Frost you see what happens without these systems: insults fly high and the air is filled with hate and clogged by spam. It is very likely that FMS and Sone are targeted by the same users. With no centralized way of banning someone, they face a harder challenge than most systems on the clearnet (though with much less financial incentive).
Yet discussions there are friendly, constructive and often controversial. Anarchists, agorists, technocrats, democrats, LGBT activists and religious zealots discuss without going at each other's throats.
And since this works in Freenet, it can work everywhere.
Use in other systems
How can this be applied to systems outside Freenet — for example federated microblogging like GNU social?
The inputs the Web of Trust requires, as described in the scalability calculation, can be translated to information already available in the federation:
- As WoT identity, use the URL of a user on an instance. It is roughly controlled by that user.
- As peer trust from Alice to Bob: if Alice follows Bob, use a trust of 100 - (100 / number of messages from Alice to Bob).
- As negative trust use a per-user blacklist (blocked users).
- For initial visibility, just use visibility on the home instance.
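The trust mapping above is simple enough to sketch directly. This is a hypothetical helper, not part of any real GNU social or WoT API; the function name and parameters are assumptions made for illustration.

```python
# Hypothetical sketch of the peer-trust mapping described above:
# follow status, block status, and message count from the federation
# are turned into a WoT-style trust value.

def peer_trust(follows: bool, blocked: bool, messages: int):
    """Trust Alice assigns to Bob.

    Returns None when Alice has no opinion, -100 when Bob is on her
    blacklist, and otherwise 100 - 100/messages, which starts at 0
    for a single message and approaches 100 as interaction grows.
    """
    if blocked:
        return -100  # per-user blacklist maps to strong negative trust
    if not follows or messages <= 0:
        return None  # no follow relationship: no trust assignment
    return round(100 - 100 / messages)
```

Note how the formula rewards sustained interaction: one message yields trust 0, ten messages yield 90, and the value asymptotically approaches (but never reaches) 100.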
These together reduce global moderation to moderation on a smaller instance and calculations based on existing social interaction.
(Finally typed down while listening to a Techdirt podcast about content moderation)