Is content moderation a dead end? (ben-evans.com)
154 points by ksec on April 13, 2021 | 296 comments


I think we're looking at the issue in a completely wrong way.

There's no objective definition of right or wrong in content moderation. Right and wrong is subjective, especially across cultures, and moderation should be subjective too.

I believe end users should have the choice to adopt blocklists, Adblock style. Those lists could contain single posts, accounts, or even specific words. A lot of content (like flashing images or spoilers) does not merit deleting, but there are users with good reasons not to see it. They should be given such an option.

There should be a few universal, built-in blocklists for obvious spam, phishing, child porn etc, but all the rest should be moderated subjectively.
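To make the idea concrete, here's a minimal sketch of how subscribable, Adblock-style blocklists might compose on the client side (all names, fields, and list contents are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Blocklist:
    """A subscribable list of things a user never wants to see."""
    name: str
    accounts: set = field(default_factory=set)  # blocked account names
    words: set = field(default_factory=set)     # blocked words/phrases
    post_ids: set = field(default_factory=set)  # individual blocked posts

def is_hidden(post, subscriptions):
    """A post is hidden if any subscribed list matches it."""
    text = post["text"].lower()
    for bl in subscriptions:
        if post["id"] in bl.post_ids:
            return True
        if post["author"] in bl.accounts:
            return True
        if any(w in text for w in bl.words):
            return True
    return False

# One mandatory platform-wide list (spam etc.), plus whatever the user opts into.
platform_spam = Blocklist("platform-spam", accounts={"spambot42"})
no_spoilers = Blocklist("no-spoilers", words={"spoiler:"})

feed = [
    {"id": 1, "author": "alice", "text": "Great episode!"},
    {"id": 2, "author": "bob", "text": "Spoiler: the ship sinks"},
    {"id": 3, "author": "spambot42", "text": "Buy now!!!"},
]
visible = [p for p in feed if not is_hidden(p, [platform_spam, no_spoilers])]
# Only post 1 survives: post 2 is opted out of, post 3 is universally blocked.
```

The key property is that everything past the universal lists is opt-in: a user who wants spoilers simply doesn't subscribe to `no-spoilers`.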

A Clubhouse-style invite system (with unlimited invites) would also be a good idea. It would make it much harder for spammers, scammers, and troll farms to make new social media accounts.


I think there's a fine line between these and curated lists (which most would use) that create echo chambers and confirmation bias (which many platforms have).

I like the methods this site uses over others, quite frankly, and it seems to take quite a bit of human moderation to get there. But I think scale also has something to do with it. Reddit was much like this site in its earlier days.


Echo chambers and confirmation bias are also known as community. I know everyone wants to “do something” but other people occupying communities whose identities and norms and beliefs you don’t like is a problem of humanity, not the internet.


See, that's a take I don't agree with. There are plenty of communities where you can disagree without flaming, where discussion and dissent can be had and heard without having to alienate or "pwn" anyone.

You don't have to just ignore or exclude minority viewpoints to be a community. Most of the more welcoming communities typically embrace those as a matter of fact, and try to educate or evolve based on challenging their viewpoints.


In a Christian community you can't deny Christ. In a Dungeons and Dragons community you can't call everyone a nerd and tell them to play Call of Duty. In the Black community you can't be a white woman with hair extensions and claim to be the victim of discrimination. Communities exist specifically because they exclude people from them. Sometimes that's good and sometimes it's bad, but attacking the very concept of a group of people organizing for a specific purpose is simply dishonest.


I think there's an important distinction to be made here.

Communities aren't only exclusive, they can be anywhere on a gradient between exclusive and inclusive.

An inclusive community may have a central focus, but would allow outsiders to participate.

Each of your examples has both inclusive and exclusive communities.


'Communities can be anywhere on a gradient between (alphabetical) exclusive and inclusive (termed), but focused (themed), and would allow outsiders to participate (eigen).'

Um, I like what you wrote. It reminds me of //wiki/Genitive:

'Indicating an attributive relationship of one noun to the other noun', 'serving purpose indicating relationships', 'may feature arguments', 'a head noun, in a construction'

...a shift, yes -but that may be way too offtopic (-;


>In a Christian community you can't deny Christ.

There are plenty that do. Maybe some of the fundies don't, but the entire point of Christianity is to welcome non-believers and convert them. And plenty of Protestant, and even most Roman Catholic, congregations I have seen are very much that way.

Especially when a school is involved.

> In a Dungeons and Dragons community you cant call everyone a nerd and tell them to play Call of Duty.

Well of course not. That would be like...the opposite of my point.

>In the Black community you can't be a white woman with hair extensions and claim to be the victim of discrimination.

Of course not. But having grown up relatively poor and in a minority of my neighborhood at that time, if you were to ask anyone from my neighborhood if I was a part of THAT community, I bet they would say yes. And they absolutely were willing to fight with me/defend me when I was singled out for my, get this, race. I know plenty of members of the Black community that are fine to sit down and listen to my opinions on certain topics; again, it's all about respect and approach. I know plenty that would hear me out and dismiss me, and plenty that would shut it down (and I wouldn't even bother voicing it in the first place).


Ok. This has nothing to do with what I said.


Everyone is also very concerned about everyone else's echo chambers and confirmation bias but never their own.


Exactly. The internet has been able to connect people, so each community becomes larger and louder. But the real issue is that the voice of the majority is actually missing.

It's like buying a doughnut from a shop: some will say loudly that it's wonderful, and if you have complaints you'll probably make sure to say them loud and clear, but the majority of people, happy with it, will just walk away silently.

This way, the voices you hear on the internet are mostly from the two extremes of the spectrum, with nothing in the middle. It would be nice if we could adjust for that somehow.


That's because there's no such thing as a majority on the internet. It's the purest form of the marketplace of ideas. And like any such marketplace, fads and ideas, behemoths or not, come and go. That's why it's important not to dive in with expectations that you're getting a true or definitive reading on an issue. Outside of the most brilliant scholarly papers, one rarely does. Even in the most viewpoint-neutral and factually correct conversations, pieces and snippets are all you get.


It’s my personal opinion as someone who has watched the internet “grow up”, that at some point we are better off to mandate a percentage of “echo chamber busting” across all platforms.

Even if it’s 3%, the algorithms can promote not just the polar opposite opinion (which is currently done to create “sticky” polarity), but the things that let people say “huh, I never thought of it that way”, or “I never knew that there were people in that situation” because those lead to empathy, understanding, and recognizing the humanity of people in other “groups”.


I generally agree with you. I think a better solution would be reputation-based. An account created 4 seconds ago that posts dozens of videos that people generally find distasteful should wallow in obscurity. Still there? Yes. Findable? Yes. Promoted? No.

I recommend this solution because I think this is generally how human societies have traditionally worked. Humans have always been free to say whatever they want (meaning, it's physically possible to say any words), but there has often been a penalty for promoting ludicrous or hateful ideas: You weren't respected and were often ignored. If you gathered followers around you who threatened to become violent, you were removed. I don't see why the same system couldn't work for content moderation.
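A toy sketch of what such a reputation gate might look like; the formula, field names, and threshold here are invented purely for illustration:

```python
import math

def reputation(account_age_days, upvotes, downvotes):
    """Toy reputation: log-scaled account age times smoothed feedback ratio.
    A seconds-old account scores zero; widely-disliked content scores low."""
    age_factor = math.log1p(account_age_days)             # 0 for a brand-new account
    feedback = (upvotes + 1) / (upvotes + downvotes + 2)  # Laplace-smoothed ratio
    return age_factor * feedback

def promote(posts, threshold=1.0):
    """Everything stays visible and findable; only reputable content is amplified."""
    return [p for p in posts
            if reputation(p["age_days"], p["up"], p["down"]) >= threshold]

# A brand-new account is never promoted, no matter the content:
new_account_post = {"age_days": 0, "up": 5, "down": 0}
established_post = {"age_days": 365, "up": 80, "down": 20}
```

Note that `promote` never deletes anything: a post that fails the threshold still exists and can be found, it just isn't pushed into anyone's feed.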

Of course, the fundamental problem is that controversial things trigger emotional responses which trigger engagement, and that's what social media's business model relies on.


In its earlier days reddit was a bunch of bots talking to themselves. HN works because the official moderation is more of a nudge than a fist, the community enforces its own standards, and it's a relatively small niche.


I think it has more to do with culture — the way moderation occurs enforces a certain style of good and bad behavior, and once the community is trained correctly, it becomes self-enforcing. People have a natural tendency to conform, and those who refuse quickly leave to find another group more relevant to them.

The problem with most communities is that there is no culture being enforced — whatever grows, grows haphazardly and randomly.

It’s also why good communities tend to start small and grow slowly — and why eternal September is so deadly. If you have too many people enter the system too quickly, that culture can’t be enforced and trained before the next group of people come in. Rather than having the “elders” teach their ways to the “young”, it becomes the young teaching the young.

As an additional influence, those elders tend to leave and establish new communities, making it such that the problem can never be corrected.


The community kind of enforced its own standards (eg almost no jokes, especially no stupid Reddit-style joke threads).

But I think moderation was more important. However much pg wrote about being curious or "intellectual honesty" or whatever, it was the change in moderation (to dang) that brought the real changes. But that sort of thing doesn't fit all communities.

For one thing this one is basically public (there’s no shared-private things like subgroups or DMs) and tends to have reasonably highly educated members. For another, the volume is low enough that dang can deal with it all with only some automated tools, giving a reasonably consistent approach to moderation. Also I think he is paid much more like a software engineer in Silicon Valley than a typical social media moderator.

There are certainly some other, more general things though, like the anti-flame mechanisms, the lack of more inflammatory media like pictures or videos, the lack of displayed comment scores, the downvote limits, and the generally public moderation trail (though moderators may privately email users too).


I agree, I meant that dang and sctb have a style that is more about nudging people into better behavior than about demanding compliance.


I'm learning to just downvote and not respond to troll bait, slowly. And I got my upvote points past the point where I can set the color of the title bar so I don't care about those any more which helps.


Doing that is worth a lot! Many will see something like that and give it thought. If they do, that is halfway there, maybe more.


We used to have editors at newspapers who did this. They had an opinion, tried to be objective, but called out excesses.

Sometimes this went wrong; see yellow journalism. One thing that is different now is that you no longer have a say in who edits your news feed. You can't very easily switch news feeds.

I feel like the variation in opinions in modern-day editors is much smaller than the variation in opinions in society. Or maybe it isn't but the editing, being done more implicitly, is not convincing the wider populace that this is the way. That is, content moderators have much less authority (in the sense of respect) than old newspaper editors.


Newspapers, which you point out used to moderate for us, don't want to play the moderation game in the Internet era either.

Many publications have ripped out the Disqus and Facebook commenting plug-ins that used to accompany their articles.

Internet moderation is too expensive. They'd be happy having Reddit and Twitter (users) do the moderation for them.


They used to moderate, and they moderated very heavily, mostly by deciding what to publish on, and how to edit what they themselves published. Heck, that isn't even moderation; it's picking a context and a set of values against which to determine what is true.

User generated content moderation probably should include the step of "picking a context and a set of values" but it is a lot more than that. I can very easily see why someone who is great as an editor is horrible at user generated content moderation.

My point was that we used to have an easy way to pick (and switch) the "context and values" against which the media we consume are evaluated. But these days, 90% of people use the same set (as chosen by FAANG), and really, this "context and values" story is not part of media choice, nor is it a large part of media identity. In fact, I believe Google and Twitter try really hard not to identify as choosing what is true.


They also do not like being questioned. That factor is as significant as the cost is.


This is a stereotype of journalists I have not seen confirmed. Most journalists I know are happy to admit when they are wrong, if it means they are closer to the truth.


I know a few myself. To the point of having real conversations.

Those who are working for smaller shops, or who are freelancing are in a very different place than people working within or dependent on the big media machine.

What beat you cover matters too. Some topics come with more permissive norms all around. Contrast social issues with economics and foreign policy. How much agency one has to both report in an investigative sense and be questioned varies widely.

There is wrong, as in not adhering to the orthodoxy and or established narrative, and there is wrong in a factual sense.

There are also taboos.

How many stories have you seen written from the labor point of view, or that question the growing Washington Economic Consensus, or even recognize it, for example?

How often do you see major league corrections? Where are they placed, why?

If you are reading overseas, indie, freelance work, you have seen some, and often the work is clear, solid, easy to understand.

Side note: Investigative work is particularly troublesome right now, and where it is unorthodox, is being actively suppressed to an increasing degree.

And I want to be clear here!

This is not saying people in journalism are bad somehow, nefarious. No, they are working in a highly consolidated media environment, or they are out in the cold working hard to do the job, despite considerable pressure.

Secondly, the entities they work for really do not want those questions at all. This may well be the more significant force in play.

Doing real journalism these days is hard work that does not pay well. While that has generally been true, consolidation, which just got permission to contract even further, has made all this more difficult.

Access journalism is a real thing too. Ask the wrong question, despite it being one very large numbers of people want asked, and boom. Gone. No access. Gonna need a new gig and potentially have it covering a new beat.

Often, we see steganography more than journalism because of these things. People pointing that out in comments gets conflated with people being idiots, trolls and asses and it all gets shut down.

Finally, old media is more like broadcast than it is Reddit, or even Substack.

In broadcast, nobody actually cares to ask what people want, nor is there any real two-way dialog. It is broadcast, focus-grouped, researched to the nines, with very little actually being done ad hoc, or even outside advertisers' interests, which are often the media entity's interests, which makes them the journalist's interest, or conflict of interest, depending on where they are at personally.

There are facts and there are opinions as to what those facts mean.

Journalists who enjoy sufficient agency see it like you say and have no problem with critics.

Many do not have that agency, and the entities who employ them much prefer broadcast rules and norms where their authority comes from size and position not so much performance. And those people will do what they can, but will also admit it often isn't much.


"Access journalism". I feel the same about most reporting about non-public tech companies. "If you play nice, we give you a good 'story' (really PR) to print. If not, get lost."

In Japan, "access journalism" is also a serious problem with political reporting. But to be fair, the same is true in the United States, just more subtle and less obvious. Example: could Fox News really get a long-form interview with President Biden? (I am not trolling.)


Clubhouse is not a great example of a platform that handles abuse properly.

Putting the moderation burden on people is also not a solution, it's duct tape.


>Putting the moderation burden on people

I don't think GP is saying we should put the moderation burden on people. When you accept that there is no objective definition of right or wrong, you begin to see that perhaps there are ways for people to self-organize on the internet according to their values rather than being shoved into the same box like we often do today. Many people are looking for ways to efficiently and effectively organize on the internet in a more sustainable manner.


And yet, HN operates on a (not "the") definition of right and wrong. Stepping outside the boundaries gets you a visit from the moderation fairy, and might end with you being ejected.

That means the burden is not on "people" in the sense of individuals. You can expect a certain content and tone coming to HN because the moderators ensure that. Yes, they're people too, but not in the sense PP and GP used it.

That was clearly an "each individual user should..." statement - and that's likely unsustainable for large user groups.


Your HN example makes me think: if I'm talking to my spouse or close friends, we obviously don't have a moderation policy, we know each other well and share values well enough that any debates we may have are (almost always) focused on substance and not on conduct.

In political discourse, and in debates on big platforms like Twitter, it's the opposite- most of the discussion is about people's or groups' conduct and substance takes a back seat. Because a heterogeneous group with different values is involved.

So for social media and online forums, the question is: how big and diverse can the audience get while still supporting civil, substance-focused discussion? HN does a pretty good job, and also has some obvious biases, sacred cows, and weak spots. Online newspaper article comments probably have some of the lowest-quality discourse for a given participant size. What forum is best, I'm not sure, but it's instructive to look at it this way, because it reflects politics generally, if we want to address real issues while maximizing participation.


I have a sneaking suspicion that you can safely support tremendous community size if you don't have enragement-driven ordering of the feed. (I.e. strict chronological ordering only)

We tend to pick our online friends according to mostly shared values, the same as real-life friends. Enragement ML then proceeds to amplify the differences instead of letting you experience that you're mostly similar, radicalizing both partners in the long term.

A chronological order minimizes that experience simply by the probabilities - if most of your values overlap for everybody in the feed, the chances of you seeing a large chunk you do not want to see is small. You're occasionally reminded that you don't always agree, but it's in a context of lots of other posts that don't violate your values.

(Part two of the problem are searchable feeds, where groups of people just wait to keyword-snipe)
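The difference between the two orderings is easy to sketch; the `rage_score` field below is a hypothetical stand-in for whatever engagement signal a platform's ranking model optimizes:

```python
from datetime import datetime

posts = [
    {"text": "calm update", "ts": datetime(2021, 4, 13, 9, 0), "rage_score": 0.1},
    {"text": "hot take",    "ts": datetime(2021, 4, 13, 8, 0), "rage_score": 0.9},
    {"text": "cat photo",   "ts": datetime(2021, 4, 13, 10, 0), "rage_score": 0.0},
]

def engagement_feed(posts):
    """Engagement ranking: the most enraging item floats to the top,
    regardless of when it was posted."""
    return sorted(posts, key=lambda p: p["rage_score"], reverse=True)

def chronological_feed(posts):
    """Strict reverse-chronological ordering: no amplification at all.
    An old outrage post sinks at exactly the same rate as everything else."""
    return sorted(posts, key=lambda p: p["ts"], reverse=True)
```

With the same three posts, the engagement feed leads with "hot take" while the chronological feed leads with whatever was posted most recently — which is the whole argument: the amplification lives in the sort key, not in the content.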


"Right or wrong" maybe not, but for well managed communities on the internet, there are objective definitions for appropriate and inappropriate, based on shared values and context.

If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.

And if the community's strength comes from shared values, and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?


> If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.

The assumptions behind having individuals decide what's appropriate or not are:

1. The majority of the community would make the same choices

2. Those posting what most feel is inappropriate content would not get any responses because no one would see it

3. Because of the lack of response, those posting inappropriate material would move on elsewhere and no longer post.


And if I join this community, I need to go through the effort of setting up software/apps to enforce my content choices.

And if I'm viewing a conversation/thread/topic with something blocked in it, where I have blocked it but other members of the community have not, what does that experience look like to me?

And if a spammer posts a dozen links to their website, but I haven't blocked that content yet in my content filtering setup, that'll be fun.

This just sounds like Parler. You can post anything you want there, nothing is blocked, and you're free to set up browser-based filters to hide the content you don't wish to see. In effect, you end up with a community biased towards those who are comfortable with vitriol, and as such it's filled with terrible content.


> And if I join this community, I need to go through the effort of setting up software/apps to enforce my content choices.

Typically, that involved using an existing application to connect to the service and read posts. As you spent more time reading through current and past threads, you would get an idea of what you wanted to filter and what you wanted to see. In other words, you would set up your filters over time as opposed to doing everything at the start.

> And if I'm viewing a conversation/thread/topic with something blocked in it, where I have blocked it but other members of the community have not, what does that experience look like to me?

If no one responds to the post, you simply wouldn't see it. Otherwise, you would see responses to a post/comment that you couldn't see. The software could also allow you to hide any responses to that post if you had no interest in the subthread(s).

> And if a spammer posts a dozen links to their website, but I haven't blocked that content yet in my content filtering setup, that'll be fun.

I guess that depends on whether your client is configured to display linked content inline rather than just the URLs. I don't use clients that display content inline, and I just add a filter for the spam post.

> This just sounds something like Parler. You can post anything you want there, nothing is blocked, and you're free to set up browser-based filters to hide the content you don't wish to see. In effect, you end up with a community biased towards those who are comfortable with vitriol, and as such it's filled with terrible content.

It's how usenet worked and there were plenty of vibrant communities spread across many groups that lasted decades, if not longer.
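A killfile-style filter in the spirit of that usenet workflow might look like the sketch below; the headers, patterns, and rendering behavior are illustrative, not any particular newsreader's implementation:

```python
import re

class Killfile:
    """Usenet-style killfile: per-user filters accumulated over time."""
    def __init__(self):
        self.rules = []  # each rule: (header name, compiled regex)

    def add(self, header, pattern):
        self.rules.append((header, re.compile(pattern, re.IGNORECASE)))

    def kills(self, article):
        return any(rx.search(article.get(h, "")) for h, rx in self.rules)

def render_thread(articles, kf):
    """Killed articles vanish; replies to them still render, marked as such."""
    killed = {a["id"] for a in articles if kf.kills(a)}
    out = []
    for a in articles:
        if a["id"] in killed:
            continue
        note = " (reply to a hidden post)" if a.get("parent") in killed else ""
        out.append(a["subject"] + note)
    return out

# Rules get added one at a time, as the user encounters content to filter.
kf = Killfile()
kf.add("from", r"spammer@")           # added after the first spam sighting
kf.add("subject", r"MAKE MONEY FAST")

thread = [
    {"id": 1, "from": "alice@x", "subject": "Re: moderation", "parent": None},
    {"id": 2, "from": "spammer@y", "subject": "MAKE MONEY FAST", "parent": None},
    {"id": 3, "from": "bob@z", "subject": "Re: Re: moderation", "parent": 2},
]
```

This also answers the earlier question about blocked posts with unblocked replies: the killed article disappears, and its replies simply show up annotated as responses to something hidden.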


>>"Right or wrong" maybe not, but for well managed communities on the internet, there are objective definitions for appropriate and inappropriate, based on shared values and context.

Objective in what sense? Consensus != Truth.

>>If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.

You can't take care of anyone at the community level because a community as an indivisible unit does not exist. The word "community" is merely a label, an abstraction for the ideas and events that characterize interactions between two or more different people.

>>And if the community's strength comes from shared values, and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?

Either this is a loaded question or you've really put the cart before the horse, but I'll bite. A community's value comes from the achievements of its individual constituents. Shared values are not a required prerequisite. They naturally arise from personal observations and discovery, real or perceived. That's it. Every culture, superstition, law, and religion was a consequence of personal examination by one person. What made these so-called shared values "shared" was war and trade.


>there are objective definitions for appropriate and inappropriate, based on shared values and context.

Not necessarily. Taking out the more heated factors for a moment, think of a concept as simple as spoilers. Spoiling others is widely considered impolite, but a community formed around discussing a work wouldn't want to have to blur every image or paragraph in order to have a conversation about what may be years-old plot points. There's no objectively perfect solution, even if there may be an almost objective factor to address.

In instances like this, leaving options is valuable. Some may want to keep all screenshots unblurred, while others may want to take minimal risks.

>and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?

I don't think anyone is proposing anarchy. At the end of the day, there may be some rulesets made by consensus, but they are beholden to the sub-community moderator and their personal whims, who is beholden to the community owner and their personal whims, who is beholden to some loose set of laws based on their country. So this structure is impossible outside of some sort of decentralized p2p server setup (which sounds like a mess to communicate in).


This is a great point.

When I say based on shared values and contexts, I mean at the community level.

So for a community where people discuss Harry Potter books, many members are not going to want to be careful about everything they say for fear of spoiling someone else's experience. But some people might want to experience reading the books for the first time with other people in the same boat as them.

If this community was well managed, they might solve this by:

- Writing into the community guidelines that spoiler tags only need to be used when discussing something that's NEW.

- Users have the ability to have spoiler content hidden/shown by default. The key here is that the definition of what a spoiler is, is still defined by the community, not the individual.

- Creating a separate area within their community for new readers to chat about the books, where discussion about events in future books is not allowed.

By joining the community, and agreeing to the community guidelines, you are agreeing to abide by an objective definition for what's appropriate within that community.

If you don't agree with those guidelines, you probably don't want to join that community. But without any shared guidelines, you do trend towards anarchy. Certain topic/niches will approach it faster than others.


You and the toplevel commenter may be talking about two different kinds of systems.

You are describing "well managed communities". HN is arguably one of those. Many topic-specific forums, IRC channels, mailing lists, and communities on platformy things that seek to reinvent those are as well. They tend to be centered around a topic or purpose and have rules, guidelines, and social norms that facilitate that purpose.

I think the toplevel comment is talking about global many-to-many networks where people connect based on both pre-existing social relationships and shared interests (often with strangers). Those require a different model, and centralized moderation based on a single set of rules is probably not the best one.


I see the former just as a subset of the latter.

ie Twitter is huge, but there are sub-communities that exist within it (some well managed, others not so much), and Twitter is building community features to that end.

Twitter can certainly do a better job of moderating its sub-communities, taking into account shared values and context, but I still don't see how the solution is to have users deal with content moderation.


> based on shared values and context.

That's exactly the point GP was trying to make. That people should be able to organize in groups of shared values and context. Rather than there being a rather large rough mono-culture of moderation policies.


They can organize by shared values and contexts. What moderation monoculture is preventing that?


One big part of the issue is people deliberately going out of their way to harass those they want to leave. And it's not a new tactic; it has been going on for years.


The rooms aren't created by Clubhouse, so it makes sense for the creator of the room to moderate it according to the goals of their particular room. It's not a burden because it's not an open forum like HN or Reddit, where anyone can talk. The moderators have to specifically choose who gets to speak and can simply drop them back to the audience if there's a problem.


Yeah but if the goal of the room is to foment hatred and target harassment at an individual who isn't in the room, that's still abuse.


Does this apply to a phone call? To a zoom call with five people in it? Should trying to stop someone from doing something, even if abhorrent, in private, really be a priority? I can only see this ending badly. If there is a link to real world crimes, sure, intercepting a discussion is one method available to detect / deter / deny / disrupt, just like a wiretap. But (and maybe I misunderstand your comment) beginning with the idea that we should try and prevent conversations we find abhorrent from happening is, well, abhorrent.


Moderating private discussions (e.g. most Zoom calls) are very different from moderating public discussions.


Where's the distinction? It's like saying a phone call is functionally different from a conference call. Both Zoom and Clubhouse use private infrastructure, thus all discussions are private even if many people are involved in them.


No, and that's not what I said.

Clubhouse is not just private phone calls. It's a social network.


What is a social network? Fundamentally, it's a graph structure of human relationships: humans are the actors/nodes, and the edges are the interactions.

Two people chatting fits that description. You can have a private chat on Clubhouse between two people.


> Clubhouse is not just private phone calls.

Yes, you can have a private chat on Clubhouse. You can also have a private chat on Facebook or Twitter. They are all social networks.

I don't think trying to find edge cases for what constitutes a social network is helpful to this conversation.


Versatility is not an edge case. By your standard, conference calls can't be considered phone calls because few people ever used the feature.


I feel like I shouldn’t need to explain why a conference call and a social network are different.


[obvious disclaimer of I am NOT advocating for child porn]

Why would spam, phishing, child porn be the 'universal' ones?

If you are making an argument that it should all be opt in...then it should all be opt in. Otherwise, this is the same drawing of a moral line that we all tend to do where we call ours obvious and others subjective. Maybe some people want the spam? Shouldn't the spammers have the ability to share it in case people want it?

My point isn't to argue for those things; it's to say that if we just accept that content moderation is subjective, we can't then label some things as subjective and some things as not. The framework should just be the laws of the state/country/equivalent structure. Those provide mechanisms (theoretically, but more soundly) for feedback on the boundaries of acceptable and unacceptable content that corporations do not, and frankly should not.

C.f., Nazi imagery in most of Europe.


I think real child porn should be 100% on that list.

However, even what child porn "is" sometimes depends on the culture or even the political climate. Like with the Japanese "lolis" (erotic drawings of fictional minor-looking girls), illegal in some countries but fine in others (so far).


Where nobody is willing to step in on the editorial side, the decision is being made, effectively, by the submitter pool with the largest content creation capacity, and by its interaction with algorithmic and promoted content dissemination. That is, the decision is already being made; the only question is whether or not the results have strong truth valence.

The ship has, in a word, sailed.

If you create a vast, global, instant, high-fidelity, interactive content dissemination system with a strongly aspirational and appealing audience, then parasitic opportunists who seek only to serve their own rather than others' interests will flock to it. And have, in spades.

And I've seen this game play out repeatedly: in print, on radio, television, Usenet, email, and since approximately 1997, on the Web and mobile Internet.

There are parties already making truth determinations, and they're doing a demonstrably horrible job at it from a common-weal perspective. And that is the problem.

"Just don't use Facebook" doesn't work for two principal reasons:

1. Facebook is increasingly central to, or required for, numerous real-world interactions.

2. Even if, as I do, you don't use the service, you live in the world it creates. Facebook has massive negative externalities. Like, oh, say, civil war and genocide in Myanmar, to mention only one aspect.

(There are others closer to home for most readers here. I'm hoping HN won't lose its collective mind if I don't mention these.)

Another element that factors in is that Big Lie propaganda is reliant on disseminating the Big Lie. Scale is directly the problem, and offering unrestricted access to, choose the amplification metaphor of your choice, the printing press, microphone, camera, TV/Radio station, etc., has risks. Especially for those who would see the tools themselves burned down along with all else.

In which case, drastically curtailing the spread of any content from identified actors and their associated networks responsible for spreading obvious and notable disinformation ... is highly defensible.

I'm well aware of numerous arguments, predicated on or observing exceptions to free speech, which typically follow such statements. I find both the free-speech absolutist and the private property / private actor restriction privilege arguments tired and uninspired. The reality is complex, and I don't have either simple solutions or any which are conformant simultaneously with "free speech" or "property rights" positions.

Both, to my mind, exist in a nexus and network of overlapping interests and rights. I've suggested "information autonomy" as an alternative to "free speech", though the expansion of that notion quickly points out internal conflicts. Common weal might offer one path out.


This is a weird take.

You get lots of messages from Nigerian scammers, but the solution was not to prevent people from writing freeform emails. The solution was to build powerful spam detection algorithms, make it easy for people to classify emails to help strengthen the training set, and the problem is basically solved.
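The feedback loop described here — users flagging messages to grow a labeled training set — is, at its simplest, a naive Bayes classifier. A minimal sketch (toy data, class names, and the smoothing choice are all mine, not any real provider's implementation):

```python
from collections import defaultdict
import math

class SpamFilter:
    """Tiny naive Bayes filter trained by user flags."""

    def __init__(self):
        self.word_counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
        self.total_words = {"spam": 0, "ham": 0}

    def flag(self, text, label):
        # Each user flag ("this is spam" / "not spam") grows the training set.
        words = text.lower().split()
        for w in words:
            self.word_counts[label][w] += 1
        self.total_words[label] += len(words)

    def spam_score(self, text):
        # Log-likelihood ratio with add-one smoothing; > 0 leans spam.
        score = 0.0
        for w in text.lower().split():
            p_spam = (self.word_counts["spam"][w] + 1) / (self.total_words["spam"] + 2)
            p_ham = (self.word_counts["ham"][w] + 1) / (self.total_words["ham"] + 2)
            score += math.log(p_spam / p_ham)
        return score

f = SpamFilter()
f.flag("urgent transfer your bank details for inheritance", "spam")
f.flag("lunch tomorrow at noon", "ham")
print(f.spam_score("urgent inheritance transfer") > 0)   # True
```

Real providers layer far more signals on top (sender reputation, URL analysis, infrastructure blocklists), but the user-feedback loop is the core of it.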

There's no easy answer to content moderation. There's no one size fits all solution, nor is there is some weird hack that's going to fix it. It's a part of your product. If you treat it as such, you're better off.

If you treat it as a separate problem that just needs money thrown at it or duct tape wrapped around it, you're never going to stop throwing money and tape at it.

Everyone wants an easy way out. You need everyone on your team in the room brainstorming solutions.


The problem isn't solved, just surpassed, given that Nigerian spammers continue to make money.


Fair, though I'm curious if the people falling for them are using email providers with quality spam filters. I'd guess it's a much older crowd that's more likely using an archaic email provider with no incentive to improve spam filters.


> I'm curious if the people falling for them are using email providers with quality spam filters.

I'm curious why the people falling for them need a spam filter to recognize them as scams. I still see an occasional one slip through my email provider's spam filter, and I've never had any problem figuring out that they were scams.


Because not everyone has competent internet skills.


I don't think it's internet skills per se, given that scams exist since before the internet.


Part of internet skills is increasing your prior that someone is lying to you.


People default to trusting other people


The threat models from 419 (Nigerian / advance-fee fraud) spam are:

- Individuals are subject to a fraud risk.

- Trust in the medium as a whole is decreased.

- Risk of false-positive spam flagging increases, decreasing legitimate message transmission reliability.

- Fraud will continue so long as costs of operation are lower than rewards.

Economic spam, in a word, follows an economic model, and is subject to economic attacks. The notion of some form of "email postage" is one ultimate form (thus far both impractical and widely rejected), but other "raise the cost of doing business" measures have had ... significant, if not perfect ... results in limiting the activity.
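The best-known concrete form of "email postage" was hashcash-style proof of work: the sender burns CPU time minting a stamp, while the receiver verifies it with a single hash. A toy sketch (real hashcash uses a dated header format and counts zero bits exactly; the parameters here are illustrative):

```python
import hashlib
from itertools import count

def mint_stamp(recipient, bits=16):
    # Sender-side: brute-force a nonce until the hash starts with
    # enough zero hex digits (~2^bits attempts on average).
    target = "0" * (bits // 4)
    for nonce in count():
        stamp = f"{recipient}:{nonce}"
        if hashlib.sha256(stamp.encode()).hexdigest().startswith(target):
            return stamp

def verify_stamp(stamp, recipient, bits=16):
    # Receiver-side: a single hash, essentially free to check.
    return (stamp.startswith(recipient + ":")
            and hashlib.sha256(stamp.encode()).hexdigest()
                .startswith("0" * (bits // 4)))

stamp = mint_stamp("alice@example.com")
print(verify_stamp(stamp, "alice@example.com"))   # True
```

The asymmetry is the point: negligible cost per legitimate message, ruinous at spam volume.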

The calculus for political propaganda is vastly different, with the rewards of even a marginal success being tremendous, and the resources of antagonists being prodigious. It is very much like war, that most costly of economic activities.

It is war.

It is information warfare, rather than physical warfare. The weapons are messages, messengers, channels, informants, and audiences, rather than missiles and knives and fire and water and chariots (or tanks or helicopters or bombers or ...).

But Sun Tzu wrote of both types 2,500 years ago.

Unfortunately, the fundamental architecture and premises of a major contingent of participants fundamentally enable the attacks presently being conducted. (This is of course something of a tautology: if a terrain is fundamentally favourable to a specific mode of attack, that attack will tend to be employed, successfully, on that terrain.)


There are multiple ways to address problems with objectionable content. One is not to have it, which is what the author proposes.

Spam detection works well to eliminate email you don't want to see but nobody seems to have figured out how to apply it to social media content. Part of the problem (I guess) is that most of us agree what spam is whereas objectionable content is very much in the eye of the beholder.


I like the idea of an objectionable content filter that's personalized like a spam filter. People don't always agree on what spam is, and they can change their minds at any time.


It's hard to separate content moderation from the problem of Evil. Low entropy evil is easy to automate out, high entropy and sophisticated evil can convince you it doesn't exist.

This is also the basic problem of growing communities, where you want to attract new people while still providing value to your core group, while managing both attrition and predators. What content moderation problems have proven is that even with absolute omniscient control of an electronic platform, this is still Hard. It also yields some information about what Evil is, which is that it seems to emerge as a consequence of incentives more than anything else.

In the hundreds of forums I've used over decades, the best ones were moderated by starting with a high'ish bar to entry. You have to be able to signal at least this level of "goodness," and it's on you to meet it, not the moderators to explain themselves. There is a "be excellent to each other" rule which gives very reasonable blanket principle powers to moderators, and it's pretty easy to check. It also helped to take a broken windows approach to penalizing laziness and other stupidity so that everyone sees examples of the rules.

Platform moderation is only hard relative to a standard of purity as well, and the value of the community is based not on its alignment, but on its mix. If you are trying to solve the optimization problem of "No Evil," you aren't indexed on the growth problem of "More Enjoyable." However, I don't worry too much about it because the communities in the former category won't grow and survive long enough to register.


> In the hundreds of forums I've used over decades, the best ones were moderated by starting with a high'ish bar to entry.

I've had the same experience. And at the other end of the spectrum, the reason Facebook, Twitter, etc. have such problems with moderation is that there is no bar to entry--anyone can sign up and post. With what results, we see.


Agreed, with a thought:

I actually think that Facebook's business model was premised on the Steve Carell movie "Dinner for Schmucks": a tool originally designed to signal that you went to an Ivy League school got that cohort on board, and then invited everyone else in as a kind of cruel joke, where the newcomers didn't know they were only the entertainment. Now the party's long over and the only people left are loud drunks yelling at each other and couples acting out some very public domestics.

Imo, there is no solution to those platforms' moderation problems, only management into something even more banal and worthy of disruption.


>There is a "be excellent to each other" rule which gives very reasonable blanket principle powers to moderators, and it's pretty easy to check.

easy to check, but most sites tend to have almost zero accountability for moderators "staying excellent" and most moderators are often unpaid, normal human beings with their own personal beliefs. They won't be let go unless they frustrate the site owners.

And to begin with, whether people's perceptions of "laziness" and especially "stupidity" should be aligned with "Evil" is a question for discussion in and of itself. Some people may just want to throw around some jokes and memes and not be in some pristine forum full of essay responses. It's not my personal preference, but I also don't think it's worthy of a blanket internet ban, nor do I necessarily want to say forums for those people are of "lesser quality" (even if you can argue that they attract more truly "Evil" people).

I guess it's a tightrope, as moderation always is. I've also seen some forums lean too heavily on quality and end up feeling gatekeep-y as a result (and not even for sensitive topics where it may be needed, but on some "casual" forums, as advertised). Turn a community into work and people will look elsewhere.


This is extremely short sighted. The complete opposite is more likely to be true. Privacy is almost dead, and soon it will be almost impossible to hide your real identity on the internet and thus avoid consequences for your actions. That will allow companies to blacklist you across the internet. So if you are an a-hole on, let's say, Facebook - harassing people, doxing them, child grooming, scams/fake advertisement, etc. - and Facebook bans you because you are bad for advertising, they are going to be able to put you on a blacklist and ban you on all other sites even if you use a different IP, browser, account, etc. There are endless ways to tell you are the same person, and it's getting worse: for starters, your phone, SSO, browser/extension fingerprinting, etc.

It's going to be a lot like your credit score, your criminal record, etc.

At the end of the day companies want advertising money, and if you scare the ads away, the same networks that control those ads are going to end up keeping track of you to keep you away.

Once that anonymity is completely gone, the internet will be just like real life. If you are a problematic assh*le, you'll get a record saying just that, which employers, landlords, schools, etc. are going to check, just like they do now with credit scores, criminal records and school records. And if you don't behave, you are going to be marginalized and banned from polite society, like happens in real life outside the internet.


> That will allow companies to black list you across the internet

Thus giving full control of people to private entities that are increasingly not held accountable by anything or anyone.

Why would we want a private entity, not elected by the people, to decide our morality? Enforcing morality is dangerous and arguably immoral since it uses force to align people's thinking, which treats everyone like children that are learning instead of adults with freewill.


Like the credit bureaus? Seems like they've been doing this for decades already.


Yeah pretty much I think. We're definitely there as far as financial institutions, I just hope we're not on our way there with the communication and social institutions (increasingly dominated by the Internet).


This is very different. Credit bureaus are amoral. They're just gathering data and doing math.

GP is talking about a world where you can't shop on Amazon because you committed wrongthink on Twitter.


That's a naive view of credit bureaus. There are value judgments in there throughout the stack. You can't get a job because you didn't pay off a medical debt. You can't get a mortgage because you don't have the right history of past debt (for example, if you're too young).


The judgement is made by the employer or banker, not the credit bureau.


No, as long as you are paying I don't see that problem. It's only when you are getting "free" services, where you are a product and not a customer. People forget that corporations are there to make money and nothing else; all the rest is little more than theater. The reason Twitter, Facebook, etc. ban you is that advertisers don't want to put money in the places you are, because you are problematic. What use are you to YouTube as a content creator if they have to demonetize your channel because ad buyers don't like you? Take Alex Jones, for example: he is nothing but a cost to them, and on top of that he brings them bad publicity all over the place. This is the bottom line. These are private corporations managing a private business trying to make money; the rest is just a lot of hot air and clickbait created by other businesses that push outrage and controversy because it's what gets the most clicks.


Oh Amazon will take your money, you just won’t be able to write product reviews


What's the alternative to a credit bureau? Determining the loan amount you are qualified for by a magic 8 ball or a person looking at how nicely you're dressed (and your gender, religion and ethnicity) and guessing? Or the second option, but they also take into account your paystubs, which makes it better but has similar issues.


Credit bureaus are also highly regulated


Yes. I don't like how people make credit bureaus into some kind of magical demon because they collect credit history. In many (most?) countries, there are legislated monopolies (or oligopoly) for credit histories. This is intentional. And they are strictly regulated to provide essential information to banks and other institutions that provide credit.


How do credit bureaus "enforce morality"? I think the point is that the effect of a bad credit rating is much more limited than a universal blackout on the internet. In fact strictly more limited, to the extent that many creditors rely on social media in making their decisions.


I strongly disagree. Bad credit (sometimes even wrongly attributed) can block you from jobs, mobile plans, bank accounts, credit/debit cards, renting, etc. I'd rather be blocked from Facebook than be told I can't rent an apartment or be disqualified from a job.


Each time I am offered a job, I go through extensive background checks, including my credit history. You wrote: "Bad credit (sometimes even wrongly attributed) can block you from jobs..." Is this really true? I have seen this phrasing a few times on the Internet, but I have never once read a blog post about someone specifically being denied a job due to poor credit history.

My point: They can "check" all they like. In many places, various personal background issues are protected by labour laws.

Also, your post makes it sound like being denied a rental apartment contract is unreasonable for someone with poor credit history. Would you rent to that person? Again: some jurisdictions do not allow credit checks, or do not allow /discrimination/ due to credit checks, when renting.


I know someone who was denied a job as a luggage handler at an airline due to a failed background check (though not a credit check). But I think the credit checks may matter more for people hired in positions of power, where being in debt is something you can be blackmailed for. If there's no use of the credit check, then why do companies pay money and impose extra process?


Thank you for sharing your anecdata. If I might speculate... perhaps the person's background check showed a previous trust issue. Example: recently (last 3-5 years) convicted of theft, or dismissed from a job for theft. If the person was convicted of something unrelated to trust -- had a fight at a bar, or drink driving -- it could be overlooked. (Or more charitably, if the trust issue [theft] was related to a drug addiction many years ago, and the person had received treatment, it could be overlooked.)

I agree: Many items in the background check are ridiculous. Many years ago, I worked at a company where many of the Big Shots were regular, recreational cocaine users (confirmed by juniors who went bar hopping with them once a week). All new employees were required to pass drug tests. It was absurd. In that era, even marijuana was enough to have a job offer withdrawn.


This is a symptom of society thinking that social media is a necessity and somehow equivalent to the real world. It's not.


This seems like it was a defensible statement in 2001 or maybe even 2011, but in 2021 I think it's hard to defend.

Is it necessary?

Well - for many people literally the only way they communicate with their friends is in group chats on social media platforms. Losing that leads to social isolation.

Is it equivalent to the real world?

Of course it isn't equivalent - as in "the same as" the real world. For many people it is much much more important since most of their life is lived via online mechanisms.


That is called society. Have you ever worked in a private company? Been to a private business, like a restaurant or a shopping mall? What happens if you start insulting people and being rude and impolite like people are on social media? You get fired or you are asked to leave. This is the reason why, when you go to a new job, they ask for references and call your old employers to inquire about you. The internet is going to be the same. You are going to have a reputation to protect, or you are going to pay a price.


> Went to a private business, like a restaurant or a shopping mall? What happens if you start insulting people and been rude and un-polite like people are in social media?

People say "it's not worth potentially getting shot or otherwise attacked to try to get that asshole to put on a mask"


> Why would we want a private entity, not elected by the people, to decide our morality?

I got the impression from the separation of church and state that we don't generally want elected officials deciding what is or is not moral -

merely what is legal.


Right, ideally it would work that way, however shifting that over to non-elected and private entities with zero accountability is even worse. At least if government is babysitting it's partially people's fault for voting that way, but I agree even elected officials should know not to force morality on the people.


The separation of church and state is precisely because we DO want state laws to enforce morals.

We also want that enforcement to apply no matter what religion you have.

At least, I do.


> The separation of church and state is precisely because we DO want state laws to enforce morals.

I look forward to your support for making the tithe mandatory.


How much does your government charge you for income tax?


I assumed that the GP poster wasn't endorsing this...but perhaps they were?


I meant it as an observation, I'm not sure if OP was condoning it or not. Either way, that would be the reality of it, with such entities wielding so much control.


Today an algorithm can mistakenly throw my legitimate e-mails into Spam folder. Tomorrow, it will be able to throw me into Spam folder.


I wonder if, as a result, we will have as many thoughtful and interesting conversations with strangers on the internet as we do in real life.


This is pretty much the tech capitalist way of arriving at China's Social Credit system.

And a nightmare.

Can you imagine the social cooling if this was in place?


That is a nightmarish vision for a lot of non-“assh*le” people:

https://geekfeminism.wikia.org/wiki/Who_is_harmed_by_a_%22Re...


>>> One could also think of big European cities before modern policing - 18th century London or Paris were awash with sin and prone to mob violence, because they were cities.

Is the solution in the article? Do we simply need to recognise that as all society is online we now need online police? Online Community support officers (UK police adjuncts (think teaching assistants).

I suspect there is an overlap with the "defund the police" movement and the notion that we need to take away a lot of policing functions that are not actually violence / crime related - eg mental health. Social work is .. a lot of work.

Edit: It's worth noting that there are ~24 million or more police officers whose job description simply did not exist before Sir Robert Peel. That's a bigger number than I imagined!

wow: https://www.worldatlas.com/articles/list-of-countries-by-num...


>Do we simply need to recognise that as all society is online we now need online police? Online Community support officers (UK police adjuncts (think teaching assistants).

International issues would get in the way. As much as it sucks, most police won't act on a death threat made from some 12 year old in Brazil towards a normal US citizen. It's a lot of resources for no benefit on their end.

It wouldn't be enforceable unless websites were required to be used only by citizens within that country (and punished worldwide for not complying), with specific embassies for international conversation. Some cases might even divide it within states/provinces.

I see that as the worst case of a medium that was created with the goal to bring the world together.


International issues get in the way of policing now. We have some extradition and similar agreements for the worst crimes.

I am not convinced "online police" are the right solution, but if we want to avoid walled gardens everywhere, if we want public spaces, we need public peacekeepers.

In most cases, a quiet word from a police officer is enough to move the situation on - usually enough to prevent it escalating anywhere.

The rest is likely to be slowly dealt with as part of a society wide system of mental health.

A quick example: a long time ago I got a note from @dang saying my 2am comment was "racewar" (turned out he meant the comment above me), but it contains most of the issues we discuss here - a dumb comment, a quiet word from an official peacekeeper, a process of escalation/complaint, and a just resolution. It certainly left me thinking harder about my 2am comment habit.


>if we want public spaces, we need public peacekeepers.

that seems to be what moderators/admins do currently. I know many focus on the worst, most toxic communities, but for many a simple warning will defuse some small quibble. It seems this article and other individuals argue that this isn't sufficient, however.


Yes.

And that brings up the issue of how, exactly, are we supposed to allow a literal “thought police”.


Firstly, it's not thought police. It's published-statement police (pretty sure there are lots of laws like "disturbing the peace").

This will play out over time - call it 30 years - as we try to find out how to do lots of new things

- Handle mental health better. Partly we need medical breakthroughs, but social acceptance will be a huge improvement, as will standardised approaches, early-years interventions and detection. It will take huge amounts of parent training - and the recognition that this has genuine costs (how many startups did not start because the parent decided to put their energy into supporting a child? And woe betide anyone suggesting that is another hurdle to be overcome with go-getting attitude).

So we need huge investment in dealing with chronic mental health, not just medical, but social support, education etc.

- Then sorting out acute care. Look at any UK prison: it's basically young men with some mix of drug/mental health/abuse issues. Want to reduce the prison population? Start 20 years ago. Don't defund the police - simply introduce interventions so that in 20 years they won't need to do the social work job they do now.

There are probably a dozen brilliant papers on this clearly showing what we should do, already modelled in a few enlightened communities globally. But it's going to take a decade of mistakes before those percolate up.

Let me know if you spot them early.

After that it's social norms. We are trying to find a set of behaviours that are acceptable in the new online spaces. Public urination is frowned upon IRL - trolling is the online equivalent, I guess. One can imagine things like loss of anonymity being the first part. Then slowly people develop tools that use humans' inbuilt social mechanisms - so, for example, some asshat is intolerable, and a record of their conversation is sent to the mother-in-law, 4 grandparents, and all their wife's bridesmaids. (This may not work, but you get the idea.) We know this sort of thing works because every so often we all find that great viral thread where someone gets a comeuppance.

All of this does demand content moderation, as Ben says - and yes, I do think all of this is too much. My take is social media will die back to manageable levels. It is as if we are looking at a mass crowd situation - a swirling football crowd - and asking how this crowd can ever become a manageable city. Well, crowds don't become cities - they disperse.

- There are too many forms of social media. Each of us has one main form and keeps up with the rest, just as we would have one friend in a crowd but interact with others. This will just die down. I mean, how much value do we get from social media versus the constant worry that makes us check all the time? As humans develop social media inoculations, this will die back.

Secondly, ad dollars will help social media die back - really, it's amazing no one has noticed it's a scam and a waste of money.

At some point regulations and advertising will drive to a point where it is simply easier to dump the algorithm, stop driving for engagement and just supply the limited feed of friends posts, interspersed with billboard ads. Influencers will still influence, ads will still pop up, but as we are choosing who to follow we won't care. Just follow fewer people.

Agents - the other big silent one. I can run a filter over my emails, but the Facebook client won't let me. This is the other big change likely to occur: software agents acting in my best interest, filtering shit for me.

Anyway bed time


The problem I have with Benedict Evans' commentary is that he is very quick to suggest that a proposed solution is wrong (especially if it impacts the FAANG execs who hang on his every word), but never offers up an alternative OR admits that a current solution might be better than no solution at all. Yes content moderation is not a scalable solution but no content moderation is far worse.

I get it, the platforms are his bread-and-butter, but at some point he needs to stop pooh-poohing the solutions others are putting up and, you know, actually suggest a solution himself.

In this article he presents the possibility that moderation does not work at scale, and that we should stop pushing for it. Then what? Well, Benedict doesn't get into that. He suggests that change might be necessary, and that it might be a complete change to a different model of interaction - but he stops short of suggesting that it might be up to the platforms (or the government) to enact that change. If he were to turn around and say "Twitter, you need to fix your shit. Stop allowing unlimited retweets of some poor shmuck's bad day tweet," then he would actually have a position worth considering.

Edit: Changed the last line because I got distracted when writing the original and forgot to finish it.


There is the possibility, with lots of problems, that no solution will work; but as you point out, that does not mean one should not continue to try.

The analogy might be that there is no solution to the problem of mortality, but there are quite a few ways you can work to keep from dying of a heart attack in your 40s.


I wonder how well somethingawful-style signup fees solve this problem: pay $10 to sign up and if the platform doesn’t like your behaviour they can ban you or set you back to the pre-payment stage.

This then somewhat disincentivises bad behaviour and, in particular, repeated or automated bad behaviour. Surely this would harm growth, and it doesn't feel like you'd have success charging a smaller fee in poorer countries. It could also be a recipe for being very unpopular with your users whenever they get punished. And I would worry that it would just incentivise spammers to steal credentials.


If news sites are having trouble getting people to pay to read news, I don't feel much more optimistic about people paying a forum for the privilege of commenting (I say this as someone with an SA account). Maybe in the '90s it could have been normalized, but nowadays there are dozens of "free" alternatives, so the idea is considered absurd. The last thing a site trying to compete with FB/Twitter needs is more barriers, and it won't fix the problem on the largest sites even if they end up moderately successful.


Worth noting that the traditionally harshly moderated SomethingAwful didn't develop issues with fascists using the site as a grooming ground, while unmoderated sites like 4chan/Reddit/Twitter did.


It also means that people with large amounts of disposable income can get away with not following the rules.


Yeah this is definitely true but hopefully their repeated payments would help to offset that cost. And maybe some kind of ip or email based banning or account flagging could make moderation easier or account recreation more inconvenient.


SA does eventually permaban people


There's a guy called Seraph84 who is already permabanned; multiple times a month he will buy new accounts, make one or two posts on each, get immediately permabanned again, and repeat the process.

So it's not a foolproof system, but at least it can help keep the lights on for people who are terminally online (unless they do CC chargebacks).


We're working on a social tool (maybe not a network?) that we want to be funded by users instead of ads and we're thinking that as you say a fee may help discourage automated spam.

Also we're finding that some of the features that enhance user privacy could help reduce bad behavior, like defaulting to smaller, closed groups of people.

But it's a fascinating challenge and there are so many facets to it - glad people are starting to think about different ways to approach the problems with social!

(Sneaky pitch: if this sounds interesting to you maybe let's work together! https://frenlo.com/)


Years ago, I suggested an algorithmic approach to moderation to an open source project I contribute to. They ultimately went another way (classic moderation), but the idea was pretty neat.

You basically create a feudal vouching system, where highly engaged community members vouch for others, who again vouch for others. If people in this "Dharma tree" accumulate problematic-behaviour points, the structure at first bubbles better-behaved members to the top. If the bad behaviour continues, by single members or sub-groups, the higher echelon of members will cut that branch loose, or lose their own social standing.

Reapplying members would have to apply at the lowest ranks (distributing the work), and those who vouched for them would risk dharma loss if the applicant proved unworthy in the trial phase.

We never solved the bad-tree problem, though. What if there exists a whole tree of bad apples vouching for one another? You cannot rely on the other feudal "houses" indicating this correctly, due to in-game rivalry.
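For illustration, a minimal sketch of the vouching-tree mechanics (the thresholds, penalties, and review pass here are my own invention, not the original proposal):

```python
from dataclasses import dataclass, field

CUT_THRESHOLD = 10   # hypothetical: accumulated points at which a branch is cut
SPONSOR_PENALTY = 2  # hypothetical: standing cost for having vouched badly

@dataclass
class Member:
    name: str
    sponsor: "Member | None" = None
    vouchees: list = field(default_factory=list)
    points: int = 0  # problematic-behaviour points

    def vouch_for(self, name):
        m = Member(name, sponsor=self)
        self.vouchees.append(m)
        return m

    def subtree_points(self):
        # A sponsor answers for their whole branch, not just themselves.
        return self.points + sum(v.subtree_points() for v in self.vouchees)

    def review(self):
        """Let each branch clean itself first; then cut loose any branch
        still over the threshold, at a standing cost to the sponsor."""
        removed = []
        for v in list(self.vouchees):
            removed += v.review()
            if v.subtree_points() >= CUT_THRESHOLD:
                self.vouchees.remove(v)
                self.points += SPONSOR_PENALTY
                removed.append(v.name)
        return removed

root = Member("founder")
alice = root.vouch_for("alice")
bob = alice.vouch_for("bob")
bob.points = 12
print(root.review())  # ['bob'] -- alice cuts bob loose and pays a standing cost
```

Note that this toy version still has the bad-tree problem: a whole subtree of colluders only gets cut if its accumulated points come from outside the ring.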


I was part of solving a content moderation problem for a tech company forum once.

The most troublesome users were often the most prolific posters. The people who had a lot of free time on their hands to post every single day were often the most disgruntled, seizing every issue as a chance to stir up more controversy.

It was tough enough to reel in difficult users when they had no power. Giving extra power to people who posted the most would have only made the problem worse, not better.

The most valuable content came from users who didn't post that often, but posted valuable and well-received content occasionally. I'm not sure they would have much interest in moderating content, though, because they would rather produce content than deal with troublesome content from other people.

Content moderation is a pain. The trolls always have infinitely more free time than you do.


I used to moderate a few message boards, and I fully agree.

I think empowering the "power users" like this inevitably leads to Stack Overflow-style communities, where arrogant responses are the norm, the taste of a few regulates the many, and the culture of the community ossifies because new contributors are not welcomed.


The factor missing from an SO-style "meritocracy" is that it needs to be hard to rise up, but very easy to fall down through misconduct: in other words, a built-in accountability mechanism to ensure moderators don't abuse their power and become arrogant over it. Not necessarily ban-worthy offenses (lest they simply be banned), but ones that make for a less welcoming community.

I'm lost on how to do this within an internet community, however. You don't want burner accounts created to file reports over a mere disagreement of opinion and damage someone's standing. Or worse, power users tearing each other down in retaliation. I don't think "election periods" would work well either, because few people really care to be a moderator to begin with. I can only see it being done by having professional moderators and a manager who reviews any potential user complaints. Which I'm sure has other problems.


The fact that there's very little overlap between those with the inclination to contribute and those with something of value to contribute is fundamental to virtually all communication channels / fora. Unless the channel is exceedingly aggressively managed, the former will outweigh the latter.

An early form of this appeared as an observation on Hal Varian's page at UC Berkeley (he's an economist, was with the School of Information Management and Systems, and now works for Google).

On HN it's interesting to look at what kind of riff-raff appear high on the leaderboard, vs. revered names in the field who may have posted only a few times.

Neither truth nor quality are popularity (or volume) contests.


Why do you have to give people "unlimited" posts? Why not 5 per day?

More posts if people rate your posts highly, up to some useful maximum.

Fewer posts if they rate your posts poorly, down to a set minimum that everyone starts at.

Then all the fancy IP-detection and fingerprinting tech could filter duplicate accounts and be a boon for once.
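As a sketch, the quota rule could be as simple as this (all thresholds here are hypothetical):

```python
BASE_QUOTA, MIN_QUOTA, MAX_QUOTA = 5, 1, 20  # hypothetical limits

def daily_quota(net_rating):
    """Posts allowed per day: start at 5, earn more with good ratings,
    lose some with bad ones, clamped to [MIN_QUOTA, MAX_QUOTA]."""
    return max(MIN_QUOTA, min(MAX_QUOTA, BASE_QUOTA + net_rating // 10))

print(daily_quota(0))    # 5  -- new user
print(daily_quota(57))   # 10 -- well-rated user
print(daily_quota(-80))  # 1  -- poorly rated user, floored
```

The clamp matters: without the ceiling, prolific well-rated users turn into the SO-style power-user problem discussed above.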


How did you go about solving the problem in the end?


> What if there exists a whole tree of bad apples vouching for another?

That's when you add top moderation, so the algorithm becomes a way to scale the moderators, not a full moderation solution.

You can't create an algorithm that solves moderation, unless you create a fully featured AI with a value system.


You can also test this, similarly to how Stack Overflow does it: send people a post that you know is bad (or good) and check that they flag it. If they don't, let them know that they are doing it wrong, lock them out of moderation, or silently ignore their voting and use it as a signal of voting rings.


Yes, let computers do the repeatable work and humans do the original thinking.

I still haven't seen a moderation system better than Slashdot's, which community-sourced its moderation/meta-moderation semi-randomly. Though it still had issues with gaming and spam, it seems like a good base to build from. And yet we ended up with Twitter, Facebook, Reddit, Yelp, etc., which optimize for (ad) views, not quality.


Slashdot had good bones, but what it was missing was a pool of known-good moderators. Without them, you end up with a death spiral as bad actors upvote and meta-moderate each other and drive good mods away.


Yes, perhaps an old-fashioned meatspace reputation network could be employed to bootstrap it.


Yes, I'm convinced at some point we're going to figure out an algorithm to solve content moderation with some version of crowdsourcing like this based on reputation, though I'd prefer a system based on building up trustworthiness through one's actions (consistently flagging similarly to already-trustworthy people).

But the challenge is still the same one you describe -- what do you do with competing "groups" or subcommunities that flag radically different things? What do you do when supporters of each side of a civil war in a country consider the other side's social media posts to be misinformation and flag them? Or even just in a polarized political climate?

I still think (hope) there would have to be some kind of behavioral signal that could be used to handle this -- such as identifying users who are "broadly" trustworthy across a range of topics/contexts and rely primarily on their judgments, while identifying "rings" or communities that are internally consistent but not broadly representative, and so discount that "false trustworthiness".

But that means a quite sophisticated algorithm able to identify these rings/clusters and the probability that a given piece of content belongs to one, and I'm not aware of any algorithm anyone's come up with for that yet. (There are sites like HN which successfully detect small voting rings, but that's a far simpler task.)
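For what it's worth, the crudest version of that clustering isn't hard to sketch. A toy in Python (the threshold and greedy grouping are my simplifications; a real system would want proper graph or community detection rather than this one pairwise pass):

```python
from itertools import combinations

def jaccard(a, b):
    # Overlap between two sets of flagged post ids.
    return len(a & b) / len(a | b) if a | b else 0.0

def find_rings(flags, threshold=0.7):
    """Greedily cluster users whose sets of flagged posts overlap heavily.
    flags: dict mapping user -> set of post ids they flagged."""
    ring_of = {}
    for u, v in combinations(flags, 2):
        if jaccard(flags[u], flags[v]) >= threshold:
            ring = ring_of.setdefault(u, {u})
            ring.add(v)
            ring_of[v] = ring
    rings = {id(r): r for r in ring_of.values()}.values()
    return [sorted(r) for r in rings if len(r) > 1]

flags = {
    "a": {1, 2, 3, 4}, "b": {1, 2, 3, 4}, "c": {1, 2, 3},  # suspiciously similar
    "d": {7, 8}, "e": {9},                                  # independent flaggers
}
print(find_rings(flags))  # [['a', 'b', 'c']]
```

Once a ring is identified, its members' flags can be discounted or counted as a single vote, which is the "false trustworthiness" discount described above.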


In the Freenet Project we’re solving content moderation with propagating trust — and visibility slowly increasing with social interaction, which is free for honest users (who just want to communicate) but has a cost for spammers and disrupters.

The main tool is to drop the idea of a global definition of trustworthiness: You have seed IDs to find people who supply CAPTCHAs, but otherwise all trust is computed locally.

If you want to try this, there are two steps:

- First the current state in Freenet: https://github.com/xor-freenet/plugin-WebOfTrust/blob/master...

- Then the optimizations needed so this scales to arbitrary size: https://www.draketo.de/english/freenet/deterministic-load-de...

Here’s some data if you want to test algorithms: https://figshare.com/articles/dataset/The_Freenet_social_tru...

And some starting code of a more generic prototype for faster testing: https://hg.sr.ht/~arnebab/wispwot/

(if you prefer a slightly larger writeup that might expand as time moves: https://www.draketo.de/software/decentralized-moderation.htm... )


Thanks for that, I'll take a look!


I wonder if you could try to address this by limiting who can flag a given post.

Even just doing it very naively and choosing, say, a fifth of your users for each post and only giving them the option to flag it might help significantly. It would probably make it more difficult to motivate the members of these problematic groups to actually coordinate if the average expected result was the inability to participate.

And you could do it in more sophisticated ways too, and form flag-capable subsets of users for each post based on estimates about their similarity, as well as any other metrics you come up with - such as selecting more "trustworthy" users more often. This would help gather a range of dissimilar opinions. If lots of dissimilar users are flagging some content, that seems like it should be a strong signal.
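The naive per-post eligibility idea can be made deterministic and hard to self-select into with a hash (the scheme and the one-fifth fraction here are my assumptions, just to show the shape of it):

```python
import hashlib

def can_flag(user_id, post_id, fraction=0.2):
    """Deterministically allow roughly a fifth of users to flag a given
    post; the eligible set differs per post and can't be chosen by users."""
    h = hashlib.sha256(f"{user_id}:{post_id}".encode()).digest()
    # Map the first 8 bytes of the hash to [0, 1) and compare to the fraction.
    return int.from_bytes(h[:8], "big") / 2**64 < fraction

eligible = [u for u in (f"user{i}" for i in range(1000)) if can_flag(u, "post42")]
print(len(eligible))  # close to 200 of the 1000 users
```

Because eligibility is a pure function of (user, post), a coordinated group can't predict in advance which of its members will be able to flag any particular target.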


Presumably, instead of global moderation, you could have pluggable meta-moderation where you pick the moderators. Then you get fun stuff like: Moderator A, whom you follow, banned Bob, therefore you can't see his posts or his comments on your posts; but I don't follow A, I follow B, who believes Bob is a paragon of righteousness, and so I see Bob's words everywhere. We would all, in effect, have an even more fragmented view of the world than we have today, with conversations that are widely divergent even within the same social media group or thread.


Sounds like a network analysis problem to tackle.


That sounds like a pretty fantastic way to build an echo chamber.


I think we need to critically evaluate what we call echo chambers. The continent, country, state, city, street you live in all exhibit patterns of echo chambers. In a sense, our planet itself is an echo chamber. Every human network is an echo chamber that boosts signals to varying degrees. A lot of times, this is a good thing! Like when people come together to help each other. The real problem is when the network itself is designed to boost certain signals (e.g. outrage, controversy) over others to a point where our society breaks down. Many of today's centralized networks profit greatly from misinformation, anger, and other negative signals. IMO that is the problem we need to tackle.


It's funny that you comment that on HN


Which has a single front page showing the same headlines to everyone, where people who disagree can all see each other's posts, and we can disagree with each other so long as we can avoid being jerks to one another.

At worst you lose imaginary internet points if you say something that the group doesn't agree with.


HN doesn't much create internal filter bubbles.

It is something of a collective filter bubble, and there is a pervasive criticism from those who largely don't participate here of typically HN behaviours. I agree with a fair bit of that. There are certainly topics HN doesn't seem to be able to reasonably discuss. (This seems to be a frustration of the mods as well. They're certainly aware of the issue.)

My own (in my view at least) largely contrarian voice seems to have been reasonably well tolerated here, though.


Okay


It would become an echo chamber.


This is easy to solve if you don't have a "public timeline", e.g. if you only see posts that have been vouched by people you follow. Like using Twitter but without topics, hashtags, and search: the only content you see has been either authored by someone you directly follow, or retweeted by someone you directly follow.

If you keep seeing content you like (through retweets), you can follow that person directly to get more. If you see content you dislike, you can unfollow the person who brought it into your timeline (by retweeting it).

Of course this would work a bit better if there was a way for accounts to categorize posts they author or retweet. You might follow me for tech-related content but not care much about my French politics content, which I would be happy to categorize as I post/retweet but have no way to do on current Twitter.
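The follow-plus-categories filtering described here is simple to model (the field names and data shapes below are hypothetical, not any real Twitter API):

```python
def timeline(posts, following, muted_categories):
    """Only posts authored or retweeted by someone you follow, minus the
    categories you've muted for that particular account."""
    return [p for p in posts
            if p["via"] in following
            and p.get("category") not in muted_categories.get(p["via"], set())]

posts = [
    {"id": 1, "via": "alice", "category": "tech"},
    {"id": 2, "via": "alice", "category": "politics"},
    {"id": 3, "via": "stranger", "category": "tech"},
]
following = {"alice"}
muted = {"alice": {"politics"}}
print([p["id"] for p in timeline(posts, following, muted)])  # [1]
```

There is no global timeline at all in this model; every piece of moderation is a local, reversible follow/unfollow/mute decision.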


I think lousy annoying manual moderation in smaller communities is hard to beat. Human beings have flaws but we have hundreds of thousands of years of adaptation to making small social circles work that might not work AS well in groups of hundreds or low thousands but they can be made to work acceptably.

When you say "highly engaged community members" I hear people who have no life, who derive self-importance from imaginary internet points, and who gain social position not by doing things but by running their mouths. While it claims to encourage community, it discourages it by potentially punishing association. It would make people afraid of being associated with ideas others consider bad, which sounds great if you imagine communities run by people who are largely good and intelligent, when in fact people are largely bad, selfish, and stupid.

It would be ruthlessly gamed by individuals whose status would be based on their efforts to stir up drama. That sounds fantastic when it's directed at people like Epstein or Harvey Weinstein, less so when you realize it would be effective at silencing people regardless of guilt, because people would need to, as you say, cut the branch loose.

I have literally never heard a worse system of meta moderation proposed.


Every system that relies on crowdsourcing and/or reputation to solve such problems is doomed to fail. Remember when manual curation and recommendation of products/places/content was supposed to be fully replaced by online ratings & reviews?


> was supposed to be fully replaced by online ratings & reviews?

I mean, it has though.

When I want to buy something new, I find Amazon reviews to be far more helpful than anything else that has ever existed. Obviously you can't only look at ratings or only read the first review, but it's pretty easy to find the signal amid the noise.

Similarly, TripAdvisor has given me far better recommendations of sights to see while traveling when compared to Lonely Planet. Yelp is eons better for finding great restaurants than Zagat ever was. And so on.

I don't understand how you think these systems are "doomed to fail" when they already exist, are used by hundreds of millions of people, and are better than what they replaced?


Rather than use high engagement as a basis for vouching, arbitrarily selected communities of perhaps 50-150 active participants (posting or moderating) might be better. Think Mastodon, but in order to be federated, good posts must be "voted off the island", in the good sense.

I've been applying a somewhat similar notion to collecting and managing reading material and suggestions, what I call BOTI, or Best of the Interval. It's a round-robin style system (think "43 Folders" from GTD), where I compile a list of references over a period of time (monthly to annually seems to be most appropriate for me, though hour / day / week / month / year / decade / ... could be applied). At the end of an interval, some limited number of items is carried forward.

This is one way of addressing the "firehose of content" nature of information, recognising that in any given time period, you only have so much personal bandwidth to dedicate.

With the federated model, content federating is itself subject to assessment, and is effectively re-vetted. Note that different communities might favour different content: cooking, kittens, Kabbalah, canoodling, cypherpunk, core meltdowns, classic cars, concerthalls. Vetted / re-vetted streams themselves might be of interest.

Both community- and time-based elements of this could get interesting.
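The BOTI round-robin described above can be sketched in a few lines (the ratings and carry count are placeholders of mine, not part of the original scheme):

```python
def boti(intervals, carry=3):
    """Best of the Interval: at each interval's end, only the top-rated
    items survive into the next interval's pool."""
    pool = []
    for items in intervals:  # each interval: list of (title, rating) pairs
        pool = sorted(pool + items, key=lambda x: -x[1])[:carry]
    return pool

jan = [("essay A", 5), ("link B", 2), ("paper C", 4)]
feb = [("thread D", 3), ("book E", 5)]
print(boti([jan, feb]))  # the best items across both months, capped at 3
```

The key property is the fixed carry limit: new material has to displace old material, which is exactly the bounded-bandwidth constraint described above.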


Just as there is no stopping “crime” there is no stopping bad content.

Besides - these are evolutionary games being played between grazers (content consumers) and predators (an alarmingly large group).

As long as there is signal in a forum, there will come to be a method to subvert it.

Honestly the question I would ask people is how do you measure bad behavior on a forum.

Any technical idea, such as your tree, is doomed to eventual obsolescence. The question is how long it would take, and how effective it would be, and how you would measure it.


Also, if there is a severe penalty for vouching for bad people, but not much gain for vouching for someone, will you end up with no one wanting to vouch for anyone else?


Bad behaviour of this nature tends strongly to operate at scale.

Bad actors form massive communities, and quickly. You can look at the rise of various networks, such as T_D on Reddit, or the rapid growth (and often fast decline) of other such forums elsewhere. I find it very difficult to believe that such growth is organic.

(Commercial and brand accounts / pages / platforms often follow similar trends, and for similar reasons. Advertising and propaganda are one and the same.)


Generally the benefit of vouching for someone is that they join the community, and you personally want them to join the community.


Another reason to vouch for someone is that you trust their judgement and want them to have more power in the system to hide content that you personally don't want to see.

It's true that this will lead to echo chambers, but by looking at vouching relationships rather than the contents of posts, it should be easier to detect the echo chambers and give people the opportunity to expand their horizons.


In a system that tends to reward closing your horizons with a sense of safety and belonging, the trend won't be towards expanding horizons. Don't build systems that don't work the way you want them to in practice just because in theory people could use them better.


Our existing systems already generate echo chambers (due to the emergent properties of algorithms and/or the choices people make when they consume media), and that's in the best case where your horizons aren't just set by the political leanings of the employees of the companies that run the systems that provide most of the content.

As long as people could choose different communities that had moderation policies which suited their members, it would allow those who wanted to hear other perspectives to do so, and might make people aware of what sort of echo chamber they are in.


I like this, a lot. Well, I don't like that it sounds like it would prevent outsiders from participating (those who have no one to vouch for them), and it does sound like it would encourage a monoculture of thought. But I like the idea of socializing the costs of bad behavior. Indeed, those socialized costs would extend to the real world. I'm intrigued and perturbed at the same time.


This is a web of trust, except that you have a designated root.


Guilt by association it is, then. And no way to undo or pay off a negative score. This is a terrible solution.



The lazy solution to rivalry getting out of control is bicameralism. Make tree-based governance where most of the action is, but design another chamber that can veto it without the same rivalries involved.


This is much like what the private torrent trackers do, though without the points system. So maybe there is some precedent for a system like this.


Sounds like a way to create highly entrenched filter bubbles.


Lobste.rs uses a similar tree model. It is invitation only and if you invite bad people repeatedly, you will get smooshed.


I see it as kind of a funnel. First you decide how much participation you want to allow in the first place, and that's a big lever. Smaller niche communities are easier to moderate because a lot of policies are customs, meaning you don't need to make them explicit.

Another lever is related - decide how much you want to limit the kind of content. A like button is easier to moderate than upvote/downvote, which is easier than a poll response, which is easier than restricted markup, which is easier than allowing unsanitized html/js/sql. (I think there's a lot of unexplored territory between "poll response" and "restricted markup", in terms of allowing people to participate with the generation of content.)

Then there is distributing the moderation abilities themselves. Can users become moderators or only admins? Is there a reputation system? I miss the kuro5hin reputation system and would like to see more experiments along those lines.

And then finally you get to the hard stuff, the arguments about post-modernism and what truth is, creating codes of conduct, dealing with spirit-vs-letter and bad faith arguments, etc. Basically the "smart jerk" problem. I hate that stuff. I want to believe something simple like, as soon as you have a smart jerk causing problems, it means you've given them too much opportunity and should scale back, but I think it's not that simple.


Kuro5hin's moderation and reputation system had some novel-for-the-time (and still comparatively rare) features, though it ultimately suffered much the same failure as most other crowdsourced moderation systems: it doesn't hold up under sustained attack or concerted effort.

For those unfamiliar, Kuro5hin, inspired by Slashdot, was an online discussion site where individuals would submit items, along with a (hopefully) brief description, or an original essay. There was no editorial queue (as Slashdot had and has). Both posts and comments could be rated on a five-point scale by individuals, and the ultimate score landed in the bounded range 1-5 (later extended to 0 or -1 with flags). Moderation converged on a mean over time, at least in theory. Users' "mojo" (karma) would follow similarly.

This avoids both the runaway endless votes problem of sites such as Reddit (or Hacker News), and the behaviour of Slashdot in which moderation would rise or fall but never converge and was artificially capped at 5 points. On Kuro5hin, the 100th moderation score was weighted as 1/100. On Slashdot, the 100th moderation score had a weight of 1.
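To make the contrast concrete, here's a toy comparison; the Slashdot side is a deliberate simplification of its real mod-point mechanics, kept only to show convergent vs. cumulative scoring:

```python
def k5_score(ratings):
    """Kuro5hin-style: the score converges on the mean; the nth vote
    only moves it by 1/n of the difference."""
    score = 0.0
    for n, r in enumerate(ratings, start=1):
        score += (r - score) / n
    return score

def slashdot_score(ratings, cap=5):
    """Slashdot-style (simplified): every vote moves the score a full
    point, bounded at -1 and the cap, so it never converges."""
    score = 1
    for r in ratings:
        score = max(-1, min(cap, score + (1 if r >= 3 else -1)))
    return score

votes = [5, 5, 1, 5, 4]
print(round(k5_score(votes), 6))  # 4.0 -- the mean of the votes
print(slashdot_score(votes))      # 4
```

In the K5 scheme a late flood of sockpuppet votes still shifts the score, but each additional vote counts for less; in the additive scheme each one counts fully until the cap.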

If this sounds something like a star-based review system such as Rotten Tomatoes, or Amazon ... it's because it is. And ultimately the same problems apply:

- Sybil attacks in which sockpuppet or shill accounts are created to sway moderation / review scores are quite possible.

- If the site fails to attract (or motivate) conscientious moderators, the moderation quality still falls.

- Information on divergent (strongly bimodal) rating is lost. 10 moderations split between 5 '1's and 5 '5's look the same as 10 '3' ratings. The information contained is not the same. (Summarising complex data is, it turns out, complex.)

- There's still no distinction between popular and truthful content. Or between "this is true" and "I disagree with this" (or don't like it, or the submitter looks funny, or whatever).

The K5 system was interesting, but ultimately not a magic bullet. The site itself failed, though for complex reasons.


Ah, interesting. I could have sworn K5 had a vouching system, where the strength of a rating was partially dependent on the rating of the person doing the rating.


Not AFAIA, though that's possible.

The original concept didn't have that.

Another approach was tried by Advogato, which had a "trust root" (ultimately Raph Levien). It was based around individuals rather than content and ... that also has issues. Nothing pathologically bad in the case of Advogato, though the site largely languished.

For Levien's concept, see http://www.levien.com/thesis/compact.pdf


The problem with a "smart jerk" is that (from my experience) many mods are awful at separating tone from opinion, because the discussion always seems to inevitably shift from the starting spirit of "don't be an asshole" to "don't have asshole opinions". However, depending on how invested one is, they may see certain opinions themselves as being unwelcoming. And so the filter bubbles occur, be they for something as impactful as politics or as inane as your favorite TV show.


Agree with the author that moderating every interaction on a social network is a fool's errand. I'd go a step further and say that the future isn't simply restricting some features like links and replies, but rather more closed networks where entry is guarded (think PC software downloads -> app stores) and only a very limited set of specialized actions and interactions is allowed (think app sandboxing).


There are already platforms where you can set up closed groups, if you want one. Banning links or replies seems a lot more heavy-handed than content moderation.


Obviously not, since we’re discussing it on a site with content moderation right now that, in my opinion, works much better than sites without.


It's only under control, I think, because it's very specific in terms of content. The weakness many social networks have is that it's basically an open platform for any discussion, and that makes it harder to put boundaries around. It makes sense to heavily restrict subject matters on sites with specialized content but general social networks are still facing the issues mentioned in the article, IMHO.


I agree that being specific about content makes things easier, but HN is not so specific. Anything intellectually interesting is on topic (https://news.ycombinator.com/newsguidelines.html), and that makes for a lot of long, difficult explanations - e.g.:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...


Greater institutions have fallen; that's when we got the term "Eternal September". It just takes the right influx of users to overwhelm the human moderators.


Sure, maybe it's not that constrained, but consider something like memes, which is expected of many sites, but definitely not here.


I think when the content becomes too broad then tribalism becomes more apparent as people start to form separate groups within the community. This creates a lot of drama as the tribes are forced to be under one roof.

When the content is more specific, like PC master race, or people that drive VW bugs, then the community identifies itself as a single tribe, and they tend to treat each other well.


Another site with moderated comments is Ars Technica. It works out great for them.

On the other side there is unmoderated Phoronix, which has the worst comment section that I've ever seen.


Is it worse than YouTube comments? I have never looked at it, but that has to be something...


YouTube comments are actually pretty good these days, I find.


Most comments on YouTube are vacuous, it's not worth wading through them.


You are right, not worth scrolling down beyond the top handful.


Vacuous does not equate to toxic. I would gladly take an internet where the biggest problems were tired jokes and excessive emoji spam.


As far as forums go HN is very low volume.


HN is hardly as diverse a site to moderate as YouTube or Reddit.

Should all forums' rules conform to HN's?


Could you please stop creating accounts for every few comments you post? We ban accounts that do that. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

You needn't use your real name, of course, but for HN to be a community, users need some identity for other users to relate to. Otherwise we may as well have no usernames and no community, and that would be a different kind of forum. https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...


I personally don't think your comment puts the effectiveness of moderation in question, just the efficiency. So yes, I think places like YouTube and Reddit would greatly benefit from similar moderation (the rules don't have to be exactly the same), but the difference in scale and, as you note, the variability in rules for different parts of those sites make it so much harder to apply.


It's worth noting that IMO, the scale is not just volume (depth) but also cultural heterogeneity (breadth). HN is for all intents and purposes a single community with a stated set of values. It's not a constellation of communities some of which are polar opposites of one another. The question of moderating Reddit always boils down to /which subreddit/ -- I don't even know where to start with YouTube.


> HN is for all intents and purposes a single community with a stated set of values

In the sense that we all value curiosity, yes.

We are all also very different. When political posts go up you see it; value, perspective, economic, and educational differences are all highlighted simultaneously.

I think what keeps us from destroying each other is that the ties that bind are curiosity, and those bonds are strong enough for now. dang also sacrifices his sanity going around and nudging people back in the right direction.

Those bonds are nothing to be understated though. I've tolerated some edgy opinions on this website, and probably given some too, but I also come here to learn about different perspectives and genuinely do enjoy them even if some offend or hurt me. Other people see those differences and talk about making lists.


> Those bonds are nothing to be understated though... Other people see those differences and talk about making lists.

Yes, exactly! I think you nailed the core of it. The communal drive to pursue curiosity is, for now, stronger than the communal drive to make lists; in comparison, there are many large subreddits where you couldn't expect the same thing.


> It's not a constellation of communities some of which are polar opposites of one another.

It absolutely is, which is the main reason dang is constantly having to talk people down off of ledges. This site has old-school hackers and new school VC startup hackers, anarchists, communists, socialists, capitalists, libertarians, Libertarians, atheists and believers, alt-right white supremacists, BLM supporters, QAnons, scientists, "scientists", skeptics, shitposting memelords, furries, pro Stallmanite free software zealots, anti-Stallmanite free software zealots, Stallman agnostic free software zealots, FAANG millionaires and people living in their cars.

Name a political position, no matter how repellent someone will come along to advocate it, sometimes just for kicks. Name a programming language, you'll find someone who loves it, someone who hates it, someone who's never heard of it, and probably the person who invented it. That diversity is one of the good things about this place, and one of the worst, because it brings out the worst in people.

There's a very small domain in which discussion on Hacker News is merely contrarian and contentious, but stray outside of that tranquil oasis and you're suddenly in the Thunderdome being chased around a gladiatorial arena by an overclocked sea-lion and a troll with a sniper rifle. But the troll is civil, and that makes them better than the trolls on Reddit.

It's honestly a wonder this place hasn't burned down yet. It's made me want to burn it down a few times.


Au contraire, I'd argue HN is a very diverse site, but it's the moderation by both users and moderators that makes it seem orderly and uniform. Many stories get flagged as blogspam, low quality, clickbait, etc., and the same goes for comments. Those the community doesn't find merit in don't reach the wider audience. I think the notion that HN is uniform is a somewhat common misunderstanding, as evidenced by discussions about supposed trolls and the like: it usually isn't the case that someone is shilling or trolling; more often than not it's just someone with a different world view.


I think HN moderation works because it is relatively obscure and several orders of magnitude smaller than FB/Twitter/etc. I wonder how long it would take for HN to be completely overwhelmed if a large subreddit decided to come in and start trolling/spamming.


There are very few meanings of the word "diverse" that HN conforms to. Education, profession, gender, hobbies, location: all tend to come from one cluster.


HN is very heavy on academics, developers, and engineers.

Very different demographics than the world at large.


Downvoted for sophistry and complete lack of addition to the discourse. Or do you happen to do a better job moderating a site elsewhere?


My response to the title is: "No, but it requires more resources than most are willing to admit or give."

The crux of the issue is that there's no "cost" to being bad. If there were a "cost", bad actors would go away fairly quickly. Any "cost" you impose will be diametrically opposed to popularity - but a low-volume/unpopular site is unlikely to be abused to begin with.


Even with infinite resources available for manual human moderation, you eventually hit a wall where different sub-communities will simply have different standards for what is acceptable to them— what might be reasonable debate in one circle is gaslighting and triggering to people elsewhere. It's not really up to the platform to impose a global code of conduct, and attempting to do so (outside of banning the most obvious bad behaviours or things that are actually illegal) never seems to go well for platforms.

So yeah, I agree with TFA in the sense that these are problems to be solved largely at the system level. For example, compared to Twitter (where anyone can reply-to, quote, RT, and @user anyone), Twitch and Tiktok seem to do well at permitting individual creators to have their own space with their own exclusive authority over what is and isn't okay in the space. And they have (or at least enable to exist) lots of tools for exerting that authority— witness things like "bye trolls" scripts on Twitch that do have to be set up in advance, but then can be used at the drop of a hat in response to brigading to immediately close the stream to new followers, and disallow posts from non-followers, plus delete chat posts from anyone who joined the stream in the last X minutes.
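The "bye trolls" pattern described above - follower-only mode plus a minimum follower-age requirement - can be sketched in a few lines. This is a toy illustration, not Twitch's actual implementation; all names and thresholds here are made up:

```python
import time

class StreamChatGate:
    """Toy sketch of a 'panic button' chat gate: when brigaded, the
    streamer restricts chat to accounts that followed long enough ago."""

    def __init__(self):
        self.followers = {}        # user -> timestamp when they followed
        self.min_follow_age = 0    # seconds; 0 means no age requirement
        self.followers_only = False

    def follow(self, user):
        # Record the first time a user follows; re-follows don't reset it.
        self.followers.setdefault(user, time.time())

    def panic(self, min_follow_age=600):
        """Respond to brigading: only followers of >= min_follow_age may post."""
        self.followers_only = True
        self.min_follow_age = min_follow_age

    def may_post(self, user):
        if not self.followers_only:
            return True
        followed_at = self.followers.get(user)
        if followed_at is None:
            return False           # non-followers are locked out entirely
        return time.time() - followed_at >= self.min_follow_age
```

The key property is that the gate can be flipped at the drop of a hat, and it automatically excludes the wave of accounts that arrived with the brigade.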


>you eventually hit a wall where different sub-communities will simply have different standards for what is acceptable to them

With the utopian assumption of infinite moderation, I don't see a problem here. Different houses will have different rules and levels of moderation (a meme community will have different standards from a more sensitive topic). As long as the moderation informs newcomers of the rules, they behave under that sub-community's rules if they choose to participate (while respecting global community rules and, of course, the server country's laws).

This is all under the assumption that "sub-community" is distinctly defined. For something truly monolithic like Twitter, the only choice seems to be to leave control to the individual, and perhaps allow private groups to be formed.


The problem is the costs of content moderation are not linear. You are not dealing with a few thousand trolls. You're dealing with bot farms impersonating possibly over a million accounts. Huge groups of networks operated by just a few dozen people.

Automating that away is the only path to being on equal footing. If you introduce any human element, not only will it be a bottleneck, but the cost could be large enough to bankrupt even the largest companies.


"You are not dealing with a few thousand trolls."

In my own experience, it's the trolls that are rather confounding.

Go to any Twitter poster with even a slightly political bent. Look for the first shitty person. Look at their posting history. It'll nearly always be 100 posts/day of shittiness telling you just what you need to hear. Unless the evil Russians are extremely clever, it all appears to be grassroots poor behavior.

I guess you can view social media as a giant laboratory showing the behavior of people when they are not nose-to-nose with you in a bar. It's all super disappointing.

Maybe there's a place for highly curated social media.


How about making people put down a bond of even a small amount, say $10, tied to your actual ID. The registrar knows who you are, but sites only know you are a verified person, not who you are.

If you are found to be a fake person you forfeit the bond. Now it costs $100,000 to create 10k fake people, and you can lose it all tomorrow.


This seems nice, but it would be nicer if there was a cryptographic way to be sure that the site couldn't obtain your id from the registrar.

Is this doable? Or maybe these requirements together are contradictory?
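(One cryptographic construction that seems to fit is a blind signature: the registrar signs a token it never sees in unblinded form, so it cannot later link the token to the signing request. A toy sketch using textbook RSA - the numbers are deliberately tiny and this is in no way secure, just an illustration of the algebra:)

```python
# Toy RSA blind signature (textbook numbers, NOT secure).
# The registrar signs a token without seeing it, so a site can verify
# "this is a bonded person" while the registrar can't link token to request.

n, e, d = 3233, 17, 2753            # registrar's toy RSA key (p=61, q=53)

def blind(token, r):
    # User blinds the token with random r (coprime to n) before sending it.
    return (token * pow(r, e, n)) % n

def registrar_sign(blinded):
    # Registrar signs the blinded value; it learns nothing about `token`.
    return pow(blinded, d, n)

def unblind(blind_sig, r):
    # User strips the blinding factor, leaving a valid signature on `token`.
    return (blind_sig * pow(r, -1, n)) % n

def site_verifies(token, sig):
    # Any site can check the registrar's signature with the public key.
    return pow(sig, e, n) == token

token, r = 42, 19
sig = unblind(registrar_sign(blind(token, r)), r)
assert site_verifies(token, sig)
```

So at least in principle the requirements aren't contradictory; the hard parts are revocation (forfeiting the bond) and preventing one person from buying many tokens.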


If you are using an email to sign up, and verifying that the user actually controls the email as is normal, presumably they could look up the bond with your email.

Essentially, the registrar and major sites would coordinate to get bot accounts revoked; your registrar wouldn't want to cancel user accounts erroneously. In most cases the registrar could logically be your email provider, which would just be selling a verified email address, but a third party could offer the same thing.


> but the cost could be large enough to bankrupt even the largest companies.

In my opinion, if you can't hire enough people to moderate without going bankrupt, then bankrupt you go! Would this mean we can't have social media? Maybe. Probably for the best.

But if you can't moderate, then you go out of business. That's the way it should be. Then people would probably find a way to moderate and stay afloat.


>Would this mean we can't have social media? Maybe. Probably for the best.

the vast majority of websites operate perfectly fine without a comment section (and some already have removed them). This isn't as far off a reality as you jest.

In any case, it's ultimately a cultural choice, like any other social element. Maybe America somehow bans comments without ties to real-world ID and all the major American sites pivot into little more than corporate news aggregators. It wouldn't stop other countries and their respective forums, which users seeking discussion would run to. And I doubt there would be strong enforcement to make sure Americans aren't commenting on WeChat or Weibo or some other non-American service.


That would apply to every website with a comments section, sadly.

I think the solution to this is smaller, more isolated groups -- with limited edges between. Back to email threads and Mumble servers, imo. The downside is we'll all be living in filter bubbles, but I think any shared community with a common value (like a video game) is better than some ginormous platform like Facebook.


I am really disappointed that all the php bulletin board forums I used to visit as a teen and young adult have all died off or been sold to an advertising conglomerate and most of the users have fled to Facebook groups. Facebook is just not the same as a forum. The quality of posts is lower, the reposting is much higher, and the sense of community is actually lost.

Facebook has even killed craigslist. Or at least greatly reduced the usefulness.


>The downside is we'll all be living in filter bubbles

That already seems to be the case.


This article is absolutely fantastic. Excellently written.

I've always maintained Facebook made a mistake when they took responsibility for misinformation posted on their platform. Now, four years later, they're continuing to double down on this stance, and forming "ministries of truth".

Freedom of speech is a powerful concept, and does not like to be stifled by types that argue it should only apply to the government. When you fight against that principle, you win in the short term, and lose in the long term. We are now starting to see those ugly realities of the long term losses, four years later.

The author is touching on something prescient here, but I disagree with some of his observations. For example, that the solution to "virus scanners" playing whack-a-mole was to move to cloud computing. (The solution was clearly to improve software, with things like memory-safe programming languages. Moving to cloud computing reduces freedom rather than enhancing it, and centralizes all the valuables in a single location, a la the Tower of Babel.)

If you remove the "algorithmic feed" mechanic, much of the abuse vanishes instantly. I am shown what came latest. Not this weird algorithmic mash of content that has been gamified for my attention. RSS is the way to go.
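(The entire "ranking algorithm" of an RSS reader is a reverse-chronological merge of subscribed feeds - a few lines of code with nothing for an attention-gaming arms race to optimize against. A minimal sketch, with made-up feed data:)

```python
# Merge several feeds into one timeline, newest first.
# Each feed is a list of (timestamp, item) tuples.

def merged_timeline(feeds):
    items = [entry for feed in feeds for entry in feed]
    return sorted(items, key=lambda entry: entry[0], reverse=True)

feeds = [
    [(3, "blog: post C"), (1, "blog: post A")],
    [(2, "news: post B")],
]
assert merged_timeline(feeds) == [
    (3, "blog: post C"), (2, "news: post B"), (1, "blog: post A")]
```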


Any ex-Usenet moderators out there?

I don't remember this being such a huge deal, and there was always the alt groups.


Things have changed since the 1990s. Some mild homosexual slurs might have made their way through Usenet moderation, and trans issues were not even on the radar. Today, those favouring content moderation expect what they see as anti-LGBT attitudes to be filtered out.

Also, Usenet was so niche that state actors weren’t running troll armies for propaganda purposes, but this is something any modern social network has to deal with.


A lot of the language we use for dealing with moderation was developed for Usenet and related systems: trolling, spam, flames, etc. The problem was bad enough that we developed names for it.

There was certainly less of it; all of Usenet fit in a box of magtapes. But it could still have some pretty big tempests in that teapot.

It never even got close to solving spam, which exploded after Eternal September. It took AI-esque systems (and enormous heaps of data to feed them) to reduce spam to a manageable level. Trolling is a harder problem than spam.


Just to chip in a comment about the size of Usenet...

I did some analysis comparing Slashdot (with distributed moderation) traffic volumes to Usenet (centralized moderation) in the late 90s.

If I recall, the peak volume of posts to the largest (moderated?) Usenet newsgroup was about 3000 posts/day. I seem to recall it was alt.fan.rushlimbaugh or something. I don't know about the quality of moderation but I took that to be the "peak traffic a single/paired human moderator" could handle before we had databases and distributed user-moderated content.

Not sure what dang et al. are handling here, but it seems higher than that nowadays (though not more than 10x higher, even with all the newer technology they've developed?)


Usenet self-selected for (relatively) wealthy (usually) Americans who had the intelligence and know-how to get online at a time when it was costly and difficult.

And it fell to Eternal September.

Really, the only way to moderate a group is to keep the group small.


Current Usenet moderator here, of an exceedingly low traffic group.

In the last year I've only had to kill two posts, both from the same troll.

On the other hand, I use my personal killfile quite liberally.


> Microsoft made it much harder to do bad stuff, and wrote software to look for bad stuff.

Before MS there was no bad stuff on the Commodore 64. It just didn't exist. Loading things from tapes, disks or the internet doesn't matter: you switch it off and on again, then load the next thing. I see no reason why this can't scale. You would have problems if you allowed unchecked I/O and remote code execution, and you would have to deal with that, but even then a simple reset would clean it up. There is no need to give strangers the keys to your home and offer them a place to hide where you can't find them.

> Virus scanners and content moderation are essentially the same thing - they look for people abusing the system

The problem is that it is not your page. This forces you to live up to someone else's standards (if not foreign laws). It is like the PC architecture, where the computer is not yours.

Facebook is really what web standards should have offered. I would probably have been against it myself but in hindsight it is what people really wanted.

>...content moderation is a Sisyphean task, where we can certainly reduce the problem, but almost by definition cannot solve it.

I don't know, perhaps we can. Should we want to?

> I wonder how differently newsfeeds and sharing will work in 5 years

I'll still be using RSS and Atom.


Don't you have a potential malware problem as soon as sharing data between apps (i.e., having a hard disk with a persistent file system which multiple apps can access, even if serially/one at a time) becomes commonplace? Hard disks with persistent filesystems seem awfully useful.

as a nitpick, C64 wasn't really "before MS" - it booted to a BASIC which was derived from Microsoft BASIC!


I'm oversimplifying. Basically, I switch on the c64, insert a floppy, load a program and get exactly that. The floppies are physically separated. Without physical access the application can't write to its own floppy[0]

Say you physically contain each app by giving it its own processor and its own memory/storage. If it wants access to anything else, it needs specific permission managed by something baked onto a chip: a complex, detailed permission system. You don't just get to read keyboard input, and even if you do, you don't get to read it when the application is not focused. If an application is registered to work with a file type, it can have access to those files, but only in defined locations and only that file type, with reading and writing granted separately. If it wants access to anything outside its scope of permission, the user, not the developer, picks what permission is to be granted and for how long.

Add 2 mechanical switches that make it physically impossible to write to and/or read a drive.

It seems annoying and bureaucratic but it would allow us to download and run any code from the internet which would be truly exciting.

Right now I can't even plug in a USB device, and when I do there is no way of knowing what happens to it. It seems absurd.

[0] - https://www.researchgate.net/profile/Stephen-Cobb-4/publicat...


Sorry dude that wrote this article, but it is a bunch of words saying nothing. No conclusion, just a bunch of bad comparisons and analogies. This sentence distils the article entirely:

> an app can’t run in the background, watch what you do, and steal your bank details, because the move to a sandboxed model means applications can’t run in the background and watch what you do.

It can't because it can't. Got it.

To talk about the topic in the title though: content moderation is a dead end, not because of the need to do it, but because people will not tolerate it. Nobody has a monopoly on information sharing on the internet, and no amount of justification will get people to sit back and allow it. All moderation is good for is keeping your community on focus; trying to prevent any ideas, even bad ideas, from being shared is a bad idea in itself. It is a dead end not because it is an unsolvable problem - there's a deeper root here. It is a dead end because the people you're trying to moderate don't see a problem that needs a solution in the first place, and they're going to route around your attempt at controlling their voices and the flow of information. And they should.


Moderating a forum where anyone can post is playing whack-a-mole, especially if registering the new account is simple.

One possible approach is something like Stack Exchange does: new users acquire their rights gradually. New accounts can only do little damage (post an answer that appears at the bottom of the list, and is made even less visible when someone downvotes it), and if they produce bad content, they will never acquire more rights.

Another possible approach would be some vouching system: moderator invites their friends, the friends can invite their friends, everyone needs to have a sponsor. (You can retract your invitation of someone, and unless someone else becomes their sponsor, they lose access. You can proactively become a co-sponsor of existing users. Users inactive for one year automatically retract all their invitations.) When a user is banned, their sponsor also suffers some penalty, such as losing the right to invite people for one year.

There are probably other solutions. The idea is that accounts that were easy to create should be even easier to remove.
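The vouching scheme above can be sketched as a simple sponsor graph. This is only an illustration of the mechanics described - the names, the permanent penalty (rather than the one-year version), and the single root moderator are all simplifications:

```python
class VouchNetwork:
    """Sketch of a vouching system: every user needs at least one active
    sponsor; banning a user penalizes their sponsors; losing your last
    sponsor loses you access."""

    def __init__(self, root):
        self.sponsors = {root: set()}   # user -> set of their sponsors
        self.penalized = set()          # users barred from inviting

    def invite(self, sponsor, newcomer):
        if sponsor not in self.sponsors or sponsor in self.penalized:
            return False
        self.sponsors.setdefault(newcomer, set()).add(sponsor)
        return True

    def retract(self, sponsor, user):
        self.sponsors.get(user, set()).discard(sponsor)
        if user in self.sponsors and not self.sponsors[user]:
            del self.sponsors[user]     # no sponsor left: access lost

    def ban(self, user):
        # The banned user's sponsors lose their invite rights
        # (permanently here; the text above suggests one year).
        for sponsor in self.sponsors.pop(user, set()):
            self.penalized.add(sponsor)

    def has_access(self, user):
        return user in self.sponsors
```

The point of the structure is that every account is one retraction away from removal, and bad invitations have a cost for the inviter.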


> There are probably other solutions.

For moderated deliberation to achieve consensus in decision making, here's a write-up for a system that combines ideas from StackOverflow, Reddit, and Wikipedia:

https://bitbucket.org/djarvis/world-politics/raw/master/docs...


I don't think the issue is that it is too easy to change identities. Vouching or slow starts both lead to much more closed systems. You could say that is a solution for the posed problem. "A more closed system".

But to me, a more closed system is less valuable. Certainly it lacks the network effects that seem to be needed these days to make it financially.


> vouching system

https://en.wikipedia.org/wiki/Web_of_trust

I've not examined it closely, but Web of Trust follows that train of thought at least to an extent.


"Trust" in this context means "I believe this key indeed matches this identity". Nothing more is meant by that.


The feature I'm talking about is direct vs indirect trust in WOT. If an indirect trustee is deemed untrustworthy, it may say something about the chain of direct trust links leading to him. If someone I introduce into the trustee network turns out malicious, I might get somehow penalized, as well as others I introduced. I realise WOT is designed with authentication in mind, but maybe it could serve more general trust systems. Not sure.


People with moderating experience:

Why not just delete [most] offending comments, immediately, no questions asked (and ban repeat offenders)? For maybe 95% (as a wild guess), there's no question - it's clear that the comment is inflammatory or disinformation or whatever. It surprises me that I see so many of them permitted in so many forums, even here on HN. Why tolerate them? One click and move on.

Tell people about the policy, of course, and if the comment is partly offending and partly constructive, delete it. They can re-post the constructive part. It's not hard to behave - we all do it in social situations. If you want your comment to be retained, don't do stupid stuff.

------------

Also, it's telling IMHO that in this conversation among people relatively sophisticated in this issue, organized disinformation is barely discussed. It's well-known, well-documented, and common, yet we seem to close our eyes to it. It's a different kind of moderation challenge.


(Have run and moderated a forum for 15+ years.)

You are outnumbered. Spammers or bad actors run 24/7 whereas you need to sleep or holiday. You are criticised for moderating (“censorship!”) or not moderating, if you are ever inconsistent you’re “playing favourites”. Persistent bad actors skirt the boundaries and dedicate themselves wholly to getting past you. Trolls work in the grey areas. You and your parents get doxxed for handing out a minor imposition.

You are up against spam bots working constantly to bypass you. Up against bots skirting what contravenes rules, or spammers asking an “innocent question” to set up another account to provide a spam answer.

Any question remotely involving politics/gender/race devolves immediately into a shitshow (this is a newish phenomenon).


This is what is done by any experienced content moderation team, except for trying to rescue the good part.

That’s essentially editorial work, and the “good” part is subjective.

Mods also spend 90% of their time dealing with bad content, which means they end up being far more sophisticated at smelling false arguments or traps that would snare normal users.

Eventually, attackers start targeting the mod team, and the rule set - since the experience/content delta between mods and users keeps growing, attackers are able to exploit it for PR.


> "Why not just delete offending comments..."

How do you define offending? Some people are offended by a great many things. China is offended if you point out its human rights abuses. The US is offended if you point out its interference in other countries' affairs. Thailand literally makes it illegal (jail time) to say things about certain people that in the US would be not only legal but encouraged by our culture!

It's not so simple or easy.


IMHO this argument is potentially interesting philosophically, if someone has something new to say. It's an appeal to the logical extreme of post-modern relativism (and when we see logical extremes, I believe it's a good question to ask whether this is a real problem or just a philosophical one). It's also misleading, IMHO, because it conflates morality, offensiveness, and power. Regardless, these kinds of philosophical arguments can be continued indefinitely, but so can the one that says the Internet is a figment of my imagination. I'm talking about reality.

In reality, the human mind doesn't need and very rarely uses the extreme of hard and fast algorithms; we are not computers. I can judge good from bad, constructive from destructive, etc. and it is easy to identify most of the problematic comments. When it's a forum with rules, it's easy to identify (again, a wild guess) 95% of them.


No, it is easy. You define "offending" as whatever the moderator finds offending. Like a strike in baseball.

But you can always moderate the moderators.


Then people will get offended for getting censored out?


Some will, but is that a loss? If you aim at accommodating destructive behavior, you'll have it and attract more of it. If you aim at accommodating constructive behavior, you'll have it and attract more of it. I'd happily let some other sites have the entire market of destructive behavior.

But we are just speculating; can someone with actual experience and expertise say how it would work?


You are correct. Eventually moderation becomes opaque to not give information to bad actors (trolls/botters/harassers).

At that point it becomes about attack surface areas and patterns.


I think a clever mod will realize that they are not the Supreme Court. They are certainly not the best judge out there. Just clicking accounts and comments out of existence won't solve misinformation - it will just make the mod a tyrant. I know for a fact that if I were moderating, there would be few safe users.


> I think a clever mod will realize that they are not the Supreme Court. They are certainly not the best judge out there. Just clicking accounts and comments out of existence wont solve misinformation - it will just make the mod a tyrant.

We're not talking about prison and the law of the land; we're making decisions about the disposition of some comments on an Internet forum. Far more consequential decisions are made without any due process - for example, managers decide on whether people will keep their jobs; they are 'tyrants'.


I agree with the article, but it's a bit shallow, or too short maybe. It's not factoring in identity, which is a huge factor when it comes to moderation. Most accounts are basically people being anonymous with respect to their real identity. Then there are bots and AI, and the problem of detecting who is legit and who is a bad actor.

Therefore, having a relatively minuscule number of people be the judge and expecting them not to abuse power, or thinking some clever algorithm won't be exploited, is short-sighted and maybe technically naïve. I don't know what the solution is, but it might be having the Internet become independent from all nations and be its own nation with laws, etc. Not sure, but it does seem like a problem analogous to physical humans living together in a civilization; it's just still in the making, it seems.


On a little bit of a corollary --

“Who gets to decide what’s true?” is the wrong question. We should be asking “how do we determine what’s true?” https://blog.nillium.com/fighting-misinformation-online/


Dreadful as it sounds, maybe the truth truly doesn't matter. You don't have to live fully anchored in baseline reality; your worldview just has to be able to do more than sustain you.

Maybe it’s okay to be waist deep into QAnon schizophrenia so long as the rest of your life is also okay, and vice versa. Though those imaginations aren’t the kind of powdered grape juice to me.


Who is this "we"?


We are lying to ourselves, and we are doing it through colonization writ large, once again. People who use Twitter and Facebook seem completely unaware that thousands of content moderators in the Philippines are being subjected to images of utter depravity and cruelty by the minute because we refuse to take responsibility ourselves. History will not look kindly on what we did to these heroic, underpaid people. I do not blame Mark Zuckerberg, whom I despise. He is doing this with our full consent.

In my view, the only proper way to handle content moderation is that every user of these "free" social media platforms over the age of 18 should be required to moderate some proportion of content every month, to understand what's actually going on.


> "we refuse to take responsibility ourselves"

I'm to be responsible for what someone else's views are?

Nonsense. This is the cry of the censorship apologist. This moderation draft you speak of... what guidelines will the draftees follow? Will they moderate out of the goodness of their hearts? Or will they follow some standard? (I'm sure many people would be extremely eager to author! You mean I get to decide what is permissible to discuss online? What a wonderful avenue to advance my political causes by force!)

The author is right. The solution is not to double down on moderation.


> In my view the only proper way to handle content moderation is that every user of these “free“ social media platforms over the age of 18 should should be required to moderate some proportion every month to understand what’s actually going on.

Does that mean no vetting at all of the moderators? Anybody can become a moderator? But then you have QAnon in large numbers moderating content on like the CNN Facebook page or something? I really, really, really don't think that is a "proper" or even tenable moderation solution.

There are too many people who would abuse the moderation power. Moderation should at least be a paid position, paid well in fact, and vetted before allowed to moderate. Otherwise it will be worse than before.


Those are great points. I believe that there should be essentially no censorship at all, subject to First Amendment restrictions. Posts could be hidden to people based on age, political, or other preferences, but would always be accessible to adult users willing to sign a waiver.

I believe very much that bad material no matter how disgusting is best handled through public exposure, not censorship.


The First Amendment protects you from being forced to host content you don't want to host. Hosting providers must be allowed to remove content they don't want there, or their rights are directly violated.

Do you think that a site should be forced to host other people's vile content?


How do you deal with misinformation? Anti vax misinformation could literally devastate an entire society if given enough oxygen. Are your free speech ideals more important than the health of an entire economy and hundreds of thousands of lives at risk?


This reminds me of a line from an NPR story. It said that "false" information spread twice as fast and twice as far as "true" information on twitter.

Why it spread was never interrogated. It immediately made me wonder: maybe people just thought the "false" information was a whole lot funnier, and hence more shareable.

The problem with "misinformation" is that, as a category, it barely exists. There's parody, non-orthodoxy, true things people disagree with for political reasons, unproven things, urban myths, rumors, etc. These are all classes of information people blame all the ills of society on.


By the time these papers are distilled into an NPR report, they're simplified to the point that nuance is lost. There are many papers on this, which will answer your question.

The transmission cascades and cascade sizes for false info have been studied. Misinformation, for example, is repeated, like a nuclear feedback loop - or more like gossip being passed between people until it becomes a "fact".

Science related info propagates through the network only a few times, as if people who read it understand it and move on.


> The problem with misinformation is it vaguely doesn't exist.

You don't think that people lie on the internet?


[flagged]


I argued that content moderation probably isn’t the answer and we need to something else. I really don’t know how anyone could possibly read what I wrote and believe I was saying we shouldn’t do anything. Frankly, I struggle to see that as anything other than deliberate bad faith.


I think your argument was in bad faith as well, so not much difference there. The "something else" is algorithms, which are cheaper than people.


I tried googling whether these are good jobs and it seems mixed. Even with good pay it's gotta have some impact on your psyche to spend the day flagging, among more mundane content, the occasional dick pics and beheadings?


The system works as it is. Each website does its best to moderate content without harming the business. From a business perspective, it'd be ideal to never moderate content, because more viral content means more money; but advertisers, governments, and users have a problem with some content, so websites have to take a more nuanced approach to balancing the desires of these groups. At the end of the day there is no perfect solution, but that's ok: the web is a federated network of websites, and each node can set its own priorities with respect to the interests it determines make the most sense for it, leaving users and advertisers the freedom to use as few or as many websites as suit their own prerogatives.


A system that limits the number of posts individuals can make on a per-article or per-diem basis would go a long way toward silencing overly strident voices which, out of sheer verbosity and pertinacity, make it appear that their (often extreme) views are more prevalent and widely accepted than they in fact are.

An alternative is to generate an in-forum currency that can be spent on comments, either on a per-post or per-word basis. This currency could be earned based on reputation--but as we see here on HN, and many other places, upvotes do not always go to the most thoughtful and engaging comments--or some other metric (statistical distinguishability of posts from the corpus of posts? Not sure).
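(The per-diem-limit half of this could be as simple as a daily post budget with a reputation bonus and a hard cap. A minimal sketch - the numbers and the reputation mechanism are placeholders:)

```python
class PostBudget:
    """Sketch of a per-diem post limit: each account gets a daily
    allowance; reputation can raise it, but a hard cap keeps any
    single voice from flooding a discussion."""

    def __init__(self, base_per_day=5, hard_cap=20):
        self.base = base_per_day
        self.cap = hard_cap
        self.spent = {}        # (user, day) -> posts used so far
        self.bonus = {}        # user -> extra allowance earned via reputation

    def allowance(self, user):
        return min(self.base + self.bonus.get(user, 0), self.cap)

    def try_post(self, user, day):
        used = self.spent.get((user, day), 0)
        if used >= self.allowance(user):
            return False       # budget exhausted for today
        self.spent[(user, day)] = used + 1
        return True
```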


Sybil problem.

Per-account limits don't apply to those with unlimited accounts.

Some form of payment (or hashcash) is an option, but it hugely favours those who can pay (or hash). This correlates strongly with current sources of disinformation.
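(For reference, the hashcash idea is a proof-of-work stamp: find a nonce whose hash has enough leading zero bits. Minting is CPU-costly, verification is one hash. A toy sketch, with an arbitrary challenge string and difficulty:)

```python
import hashlib

def check(challenge, nonce, bits=16):
    # Valid stamp: the top `bits` bits of SHA-256(challenge:nonce) are zero.
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - bits) == 0

def mint(challenge, bits=16):
    # Brute-force a nonce; expected work is about 2**bits hashes.
    nonce = 0
    while not check(challenge, nonce, bits):
        nonce += 1
    return nonce

# A site could demand one stamp per post: a human pays a fraction of a
# second of CPU, a million-account bot farm pays for every single post.
stamp = mint("post:12345", bits=12)
assert check("post:12345", stamp, bits=12)
```

As the comment notes, though, this "cost" is exactly the kind that well-funded troll farms can absorb most easily.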


> we have to decide what we want, just as we did for cars or telephones - we require seat belts and safety standards, and speed limits, but don’t demand that cars be unable to exceed the speed limit.

off topic but, yikes, what a bad analogy.

Considering that motor vehicle crashes are the leading cause of death for children and young people, and of unnatural death in general, I dunno if one should compare anything positively to our society's decision to embrace the car.

Accepting the insane deaths, dangers and terrible second order effects deriving from car oriented infrastructure and then just shrugging because of convenience has been one of the most misguided policy mistakes of the 20th century.


You've definitely run with the most negative take on the automobile here. Being able to travel with ease enriches lives in so many ways.

It's been an absolute disaster for the environment, and safety is not great, as you say. But I don't think it's as cut and dry as you make it.


Agreed; if that logic were applied with the same intensity everywhere, the world would be painfully bland.

Obviously we've made some mistakes and possibly have gone too far in one direction on that axis, but particularly with regard to personal safety, people need to be willing to draw a line as to what is "good enough". Cars at this point are extremely safe by any reasonable metric, even if the absolute totals feel large.


I feel like I'm taking crazy pills here.

Cars are not safe. They're the number one killer.

In contrast, how many people are killed by the automobile alternatives of cycling, public transit? Seems that those are safe to me.


Something will always be the number one killer. Doesn't mean it isn't safe.


I can't help but be negative when the numbers don't lie. Cars are a (very convenient) child killer.

"safety not great" is an understatement


The analogy is not only bad, it's also wrong https://en.wikipedia.org/wiki/Speed_limiter


It's hard to agree with you without an effective counterexample. How many lives have been saved, improved, or created by the staggering economic benefits of a car based economy?


My guess is the economy would have grown but just in a different direction.

Singapore has similar GDP per capita to the US but about a fifth of the number of cars. Most of Europe is at half the number in the US.


And how much of that growth was on the back of the US consumer economy?


The bulk of Singapore's trade is in Asia, America isn't as economically important as it used to be.


Emphasis on used to be, as in it used to be and was critically important in bootstrapping that economy.


It's not even really that hypothetical. Oslo has gotten down to zero cyclist and pedestrian fatalities in 2019.

https://www.bicycling.com/news/a30433288/oslo-vision-zero-go...


> ...the answers to our problems with social media are not more moderators, just as the answer to PC security was not virus scanners, but to change the model - to remove whole layers of mechanics that enable abuse. So, for example, Instagram doesn’t have links, and Clubhouse doesn’t have replies, quotes or screenshots.

This is effectively already what many newspapers do in disabling commentary on certain articles.

I often notice this on the CBC that whenever there's any article about indigenous people posted, comments will be disabled to prevent incredible amounts of racism.


> we require seat belts and safety standards, and speed limits, but don’t demand that cars be unable to exceed the speed limit.

That isn't how cars generally work. Without advanced electronic controls, it's not possible to build a car that can't exceed the speed limit. You can try to limit the acceleration but not the speed. It's basic physics. It's worrying that the author used it as an analogy (metaphor?) but doesn't seem to wonder why cars really don't have speed limits.


It's actually pretty trivial to build a mechanical governor and they have existed for hundreds of years [1]

Everything comes down to political will. The German manufacturers have an informal agreement to limit all their cars to 155mph, so as to avoid the government adding speed limits to the autobahn.

[1] https://en.wikipedia.org/wiki/Centrifugal_governor


Not just content moderation but also app moderation. And moderation has gone hand-in-hand with vertical integration, which is bad for innovation. Soon Facebook will be writing people's posts for them (because you can't trust people with keyboards) and Apple will be delivering a computer with the software soldered into a SoC. Both solutions will be bad for innovation: they'll be making a very fast horse, but both will miss the next big thing.


I disagree with the sentiment and conclusions drawn in this post. Moderation is not dead, here is my public response:

https://www.remarkbox.com/remarkbox-is-now-pay-what-you-can....


No?

I’ll argue that the main product of reddit is not the community, it's the content moderation. Maybe eventually the content moderation toolkit.

At this point reddit mods and users must have collectively built the largest collection of regexes to identify hateful/harmful speech for a huge number of subcultures.
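As a toy illustration of that kind of regex-driven filtering (the patterns here are invented spam examples; real community blocklists are vastly larger and tuned per subreddit, and reddit's actual AutoModerator uses its own rule format):

```python
import re

# Hypothetical blocklist patterns -- real moderation lists are far larger
# and maintained per community.
BLOCK_PATTERNS = [
    re.compile(r"\bbuy\s+followers\b", re.IGNORECASE),
    re.compile(r"\bfree\s+crypto\b", re.IGNORECASE),
]

def is_blocked(post: str) -> bool:
    """Return True if the post matches any blocklist pattern."""
    return any(p.search(post) for p in BLOCK_PATTERNS)

print(is_blocked("Click here to BUY FOLLOWERS now"))  # True
print(is_blocked("Nice photo of your dog"))           # False
```

The hard part, of course, is not the matching loop but curating the patterns, which is exactly the collective labor the comment is describing.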


I'd love to see an anonymous peer-review approach: to post anything, you need to review X other posts, and until a post has N accepts it stays invisible. I think it could work, but I am sure HN can tell me how I am wrong :)
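A minimal sketch of such a review-gated scheme (the thresholds X and N are arbitrary here, and as the replies note, nothing in it actually stops sockpuppet accounts):

```python
from dataclasses import dataclass

REVIEWS_REQUIRED = 3   # X: reviews you owe before you may post
ACCEPTS_TO_SHOW = 2    # N: accepts before a post becomes visible

@dataclass
class Post:
    author: str
    text: str
    accepts: int = 0
    rejects: int = 0

    @property
    def visible(self) -> bool:
        return self.accepts >= ACCEPTS_TO_SHOW

@dataclass
class User:
    name: str
    reviews_done: int = 0

    def can_post(self) -> bool:
        return self.reviews_done >= REVIEWS_REQUIRED

def review(user: User, post: Post, accept: bool) -> None:
    # Weak Sybil mitigation: you at least can't review yourself.
    if user.name == post.author:
        raise ValueError("cannot review your own post")
    if accept:
        post.accepts += 1
    else:
        post.rejects += 1
    user.reviews_done += 1
```

Usage: a new user starts with `can_post() == False`, earns posting rights by reviewing others, and their own posts stay hidden until enough strangers accept them.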


In other words, before posting you need to randomly click on X other posts, and to make your posts visible, you need to have N accounts. I believe I could write such a script in an afternoon.


I even started, but I have no social network to go with it so I stopped. It should also work for reviews.


There's still the Sybil problem.

And good submitters != good reviewers.

That said, a variant in which less vetted content has to be reviewed by more vetted reviewers, before gaining wider disclosure, seems to me to have merits.


Good point. Currently good submitters are rewarded by karma/likes etc; maybe rewarding good reviewers would be useful too.


I think content moderation can be effective in smaller communities where social norms can be formed and effectively enforced. Perhaps the problem is that Facebook and Twitter are too large to be allowed to exist?


Uh, I gave up reading from here: "On an iPhone, an app can’t run in the background, watch what you do, and steal your bank details, because the move to a sandboxed model means applications can’t run in the background and watch what you do."

Maybe that applies to iPhones but Android certainly allows it.


The quote literally says it's regarding iPhones.

Why would you stop reading after a true statement about iPhones...?


> Anyone can find their tribe, ...but the internet is also a way for Nazis or jihadis to find each other.

I'm delighted that the net is a way for nazis and other jihadis to find each other. It's a global honeypot. Driving them underground doesn't make them go away, just harder to find. We saw this with the January 6th crowd.

We also saw this with the publicity-seeking attorneys general who got rid of craigslist hookups and back page: sex trafficking continues but is harder to find and prosecute.


Driving them underground makes them weaker and less influential. I don't need them to be known. I want them weak.


There are at least a few kinds of bad:

Spam, google bombing, and related activities. These are noise, generally.

Misinformation is slippery. Often it gets conflated with differences of opinion, and that is happening a lot right now as moderation is politicized and weaponized. More of it than we think is debatable, and should be debated rather than legislated or canonized into an orthodoxy, flirting with fascism.

Clearly criminal speech, kiddie pr0n, inciting violence, etc. These are not noise and can be linked either to real harm in the very production of the speech (kiddie pr0n) or to the very likely prospect of harm. Material harm is an important distinction, which segues to:

Offensive material.

Being offended is as harmful as we each think it is. Hear me out, please:

To a person of deep religious conviction, some speech can offend them just as deeply. They may struggle to differentiate it from criminal speech, and in some parts of the world this is resolved by making the speech criminal anyway. Blasphemy.

That same speech might be laughable to someone who is not religious, or who holds faith of a different order or sect.

Notably, we have yet to get around to the intent of the speaker.

Say the intent was nefarious! That intent would hit the mark sometimes, and other times it would not.

Say the intent was benign. Same outcome!

With me so far?

Before I continue, perhaps it makes sense to match tools up with speech.

For the noise, rule-based, AI-type systems can help. People can appeal, and the burden here is modest. It could be well distributed, with reasonable outcomes more often than not. Potentially a lot more.

Misinformation is a very hard problem, and one we need to work more on. People are required; AI and rule-based schemes are blunt instruments at best. It's a total mess right now.

For the criminal speech, people are needed, and the law is invoked, or should be. The burden here is high, and may not be so well distributed, despite the cost paid by those people involved.

Offensive material overlaps with misinformation in that rule-based and AI systems are only marginally effective, and people are required.

Now, back to why I wrote this:

Barring criminal speech, how the recipient responds is just as important as the moderation system is!

I said we are as offended as we think we are above, and here is what I mean by that:

Say a clown calls you an ass, or says your god is a false god, or the like. Could be pretty offensive stuff, right?

But when we assign a weight to the words, just how much weight do the words of a clown carry? Not much!

And people have options. One response to the above may be to laugh as what is arguably laughable.

Another may be to ask questions to clarify intent.

Yet another option is to express righteous indignation.

Trolling and misinformation share something in common: they tend to work best when many people respond with either righteous indignation (trolling) or passionate affirmation and concern (misinformation).

Notably, how people respond has a major league impact on both the potency and effectiveness of the speech. How we respond also has a similar impact on how much of a problem the speech can be too.

There are feedback loops here that can amplify speech better left without resonance.

A quick look at trolling can yield insight too:

The cost of trolling is low and the rewards can be super high! A good troll can cast an entire community into grave angst and do so for almost nothing, for example.

However, that same troll may come to regret they ever even thought of trying it in a different community, say one where most of its members are inoculated against trolling. How?

They understand their options. Righteous indignation is the least desirable response because it is easily amplified and is a very high reward for the troll.

Laughing them off the stage can work well.

But there is more!

I did this with a community as it was very effective:

Assign a cost to speakers whose cost exceeds the value their contributions deliver! Also, do not silence them. Daylight on the whole process can be enlightening for all involved, as well as leave the door open for all possible options.

People showed up to troll, stayed for the high value conversation and friends they ended up with.

Others left and were reluctant to try again.

The basic mechanism was to require posts conform to one or more rules to be visible. That's it.

Example costs:

No 4 letter words allowed.

Contribution must contain, "I like [something harmless]"

Contribution may not contain the letter "e".

And they have to get it right the first time; edits are re-evaluated on every edit, and any failure renders the contribution hidden.

These rules did not limit expression. They did impose a cost, sometimes high (no letter "e"), sometimes subtle (no four-letter words)...
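The visibility gate described here is simple enough to sketch (rule names and thresholds are invented to match the examples given; the actual community software isn't described):

```python
# Toy version of the "conform to a rule or stay hidden" mechanism.
# A post is only shown if it satisfies every rule active in the thread.
RULES = {
    "no_four_letter_words": lambda text: all(len(w) != 4 for w in text.split()),
    "must_say_i_like": lambda text: "i like" in text.lower(),
    "no_letter_e": lambda text: "e" not in text.lower(),
}

def is_visible(text: str, active_rules: list[str]) -> bool:
    """Evaluate a contribution against the thread's active rules."""
    return all(RULES[r](text) for r in active_rules)

print(is_visible("I like calm music", ["must_say_i_like"]))        # True
print(is_visible("Your take is bad", ["no_four_letter_words"]))    # False
print(is_visible("A short post by a human", ["no_letter_e"]))      # True
```

The point of such a gate is exactly what the comment says: it doesn't censor anyone, it just prices effort into every contribution, which trolls tend to be unwilling to pay.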

But what they did do was start a conversation about cost, intent, and


I was unable to return and edit this inside the edit window.

Sorry, it is incomplete.


Some intertwined problems, which we mostly try to nervously look away from in this forum/industry/epoch:

- (effectively all) humans are first of all impulsive, driven by emotion and tribal identity, and most of all, subject to cognitive errors and for that reason and others, predictably effective manipulation

- the tools/practices for such manipulation, which many on HN are actively advancing because yaaay $$$, are now so powerful as to in aggregate overwhelm all current defenses (including good intentions, individual discernment, and various technical solutions ITT)

- the financial/power incentives to abuse such tools and short circuit, bypass, block, subvert, etc., attempts to defend against these tools are overwhelming

Look at the news cycle, for the 99th time the intersection of Facebook's utter sociopathology and the utter toothlessness of its sham oversight board and nominal internal protocols and standards is headline news.

I got 99 problems, and stochastic mind control is one of em

The only thing remotely effective is total deplatforming, and yeah, that's not the sharpest or most popular tool.

Left hook: climate change. Right hook: surveillance capitalism.

DOWN WE ALLLLL GOOOOOOOO


Open systems ungood. Duty of thinkpol to enforce goodspeech. Prevent crimethink. Users read only prolefeed.[1]

[1] https://genius.com/George-orwell-nineteen-eighty-four-append...


I'd (sincerely) appreciate if you'd expand on this in more considered language. After reading the article I was hoping to see more comments here reacting to and engaging with the central analogy this essay is built around (between the evolution of desktop and mobile client software platforms and the evolution of social media) but was disappointed to see that nearly all the comments ignored it and just made some tenuously related point about content moderation in general.

Your comment on the other hand seems to react (negatively, which is fine) to the central analogy, but its knee-jerk hostile phrasing is unlikely to be appreciated (or even understood) by anyone who doesn't already know what you're talking about and share your sentiments. So, if you were to phrase your sentiments in a more friendly (to the public, not necessarily to the writer of this essay) manner it could make this comments section more interesting and potentially fruitful.


Sigh. A "certain level of bad behaviour on the internet and on social might just be inevitable" has been an excuse since the beginning. I worked on the first web-based chat, bianca.com, and I heard it back then. More recently, I worked on anti-abuse at Twitter a few years back and hear the same talking point. Now I work on the problem at a not-for-profit, and it's still a talking point. Ignoring that the social media landscape has shifted dramatically over the decades, as have the technologies and our understanding of the problem.

It was always a terrible point, but it's especially ridiculous to see techno-utopians turn techno-fatalists in an eyeblink. The same people will go right from "innovation will save the world" to "I guess progress has now stopped utterly". And what they never grapple with is who is bearing the brunt of them giving up. I promise you it's not venture capitalist and rich guy Ben Evans who will be experiencing the bulk of bad behavior. It's easy enough for him to sacrifice the safety of others, I suppose, but to me it seems sad and hollow.


I did think I’d made it extremely explicit that I don’t think any of that at all, but perhaps not (although the way you throw in a rather childish ad hominem sneer suggests you’re not thinking very clearly). What I actually wrote is that though (of course!) there will always be some bad behaviour, we want to minimise it (I compare it to malware, for heaven’s sake), but moderation might not be the best way to minimise it, and we might need different models.

As it happens, I would suggest that the idea that somehow we CAN just stop all of bad human behaviour online would be the most extreme techno-utopianism possible.


> ad hominem [...] you're not thinking very clearly

Huh. Not totally sure you understand what "ad hominem" means.

But moving on from that, I'm not objecting to the notion that we might need different models. The way we do anything today is unlikely to be the best way for all of time. Given that I've spent years trying to improve things, perhaps you can take it as read that I think we can improve things.

What I'm objecting to is your fatalism that bad shit is probably going to happen to somebody (a note you include in your closing paragraph) combined with your failure to examine exactly who's going to bear the brunt of it. Something you conspicuously didn't do in your reply here, instead suggesting it's some sort of shocking rudeness to point out that as a rich person, it's unlikely to be you.


Ok, so you have a lot of experience of this subject - would you mind suggesting your preferred approach(es) to moderation? What can work?


It is not that surprising having gone through this ebb and flow myself.

All utopias and utopian dreams rely too much on human nature being entirely good.

The communist utopia relied on the goodness of the people in government. The capitalist utopia relied on the goodness of the entrepreneurs. Online communities, and communities in general, rely on the goodness of their members.

The reality is human nature contains both good and bad. And as a utopist, being faced with pure destructiveness like you are in content moderation is demoralizing.



