"Right or wrong" maybe not, but for well managed communities on the internet, there are objective definitions for appropriate and inappropriate, based on shared values and context.
If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.
And if the community's strength comes from shared values, and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?
> If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.
The assumptions behind having individuals decide what's appropriate or not are:
1. The majority of the community would make the same choices
2. Those posting what most feel is inappropriate content would not get any responses because no one would see it
3. Because of the lack of response, those posting inappropriate material would move on elsewhere and no longer post.
And if I join this community, I need to go through the effort of setting up software/apps to enforce my content choices.
And if I'm viewing a conversation/thread/topic with something blocked in it, where I have blocked it but other members of the community have not, what does that experience look like to me?
And if a spammer posts a dozen links to their website, but I haven't blocked that content yet in my content filtering setup, that'll be fun.
This just sounds like Parler. You can post anything you want there, nothing is blocked, and you're free to set up browser-based filters to hide the content you don't wish to see. In effect, you end up with a community biased towards those who are comfortable with vitriol, and as such it's filled with terrible content.
> And if I join this community, I need to go through the effort of setting up software/apps to enforce my content choices.
Typically, that involved using an existing application to connect to the service and read posts. As you spent more time reading through current and past threads, you would get a sense of what you wanted to filter and what you wanted to see. In other words, you would set up your filters over time rather than doing everything at the start.
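To make that concrete, here's a minimal sketch of that kind of incremental, killfile-style filtering. The `KillFile` class and rule format are my own invention for illustration, not any particular newsreader's syntax:

```python
import re

class KillFile:
    """A killfile-style filter list, grown incrementally as you read."""

    def __init__(self):
        self.rules = []  # (field, compiled_pattern) pairs

    def add(self, field, pattern):
        """Add a rule, e.g. after tiring of a particular poster or subject."""
        self.rules.append((field, re.compile(pattern, re.IGNORECASE)))

    def hides(self, post):
        """Return True if any rule matches the given post (a dict of headers)."""
        return any(rx.search(post.get(field, "")) for field, rx in self.rules)

# Rules accumulate one at a time as you read, not all up front:
kf = KillFile()
kf.add("author", r"^troll@example\.invalid$")
kf.add("subject", r"\bMAKE MONEY FAST\b")

posts = [
    {"author": "alice@example.invalid", "subject": "Re: moderation models"},
    {"author": "troll@example.invalid", "subject": "you're all wrong"},
]
visible = [p for p in posts if not kf.hides(p)]  # only alice's post survives
```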
> And if I'm viewing a conversation/thread/topic with something blocked in it, where I have blocked it but other members of the community have not, what does that experience look like to me?
If no one responds to the post, you simply wouldn't see it. Otherwise, you would see responses to a post/comment that you yourself couldn't see. The software could also allow you to hide any responses to that post if you had no interest in the subthread(s).
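As a rough illustration of what a client could do here (assumed behavior, not any specific newsreader): a blocked post collapses to a stub while its replies stay visible, or the whole subthread disappears, depending on a user setting:

```python
def render_thread(post, is_blocked, show_orphaned_replies=True, depth=0):
    """Print a thread, replacing blocked posts with a stub.

    `post` is a dict: {"author": str, "body": str, "replies": [post, ...]}.
    """
    indent = "  " * depth
    if is_blocked(post):
        if not show_orphaned_replies:
            return  # user chose to hide the whole subthread
        print(indent + "[post hidden by your filters]")
    else:
        print(indent + post["author"] + ": " + post["body"])
    for reply in post["replies"]:
        render_thread(reply, is_blocked, show_orphaned_replies, depth + 1)

thread = {"author": "spammer", "body": "buy now!", "replies": [
    {"author": "bob", "body": "don't click that", "replies": []},
]}
render_thread(thread, lambda p: p["author"] == "spammer")
# [post hidden by your filters]
#   bob: don't click that
```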
> And if a spammer posts a dozen links to their website, but I haven't blocked that content yet in my content filtering setup, that'll be fun.
I guess that depends on whether your client is configured to display linked content inline rather than just the URLs. I don't use clients that display content inline; I just add a filter for the spam post.
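For what it's worth, a simple client-side rule along those lines might just count URLs. This heuristic is my own illustration, not a standard client feature:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def looks_like_link_spam(body, max_links=5):
    """Flag posts that are little more than a pile of URLs."""
    return len(URL_RE.findall(body)) > max_links

spam = " ".join("https://example.invalid/%d" % i for i in range(12))
assert looks_like_link_spam(spam)          # a dozen links trips the rule
assert not looks_like_link_spam("context at https://example.invalid")
```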
> This just sounds like Parler. You can post anything you want there, nothing is blocked, and you're free to set up browser-based filters to hide the content you don't wish to see. In effect, you end up with a community biased towards those who are comfortable with vitriol, and as such it's filled with terrible content.
It's how Usenet worked, and there were plenty of vibrant communities spread across many groups that lasted decades, if not longer.
>>"Right or wrong" maybe not, but for well managed communities on the internet, there are objective definitions for appropriate and inappropriate, based on shared values and context.
Objective in what sense? Consensus != Truth.
>>If you leave it up to each individual to decide what is appropriate or inappropriate, and provide them with the tools to block content they consider inappropriate, that's a burden on them, because you're not taking care of it at the community level.
You can't take care of anyone at the community level because a community as an indivisible unit does not exist. The word "community" is merely a label, an abstraction for the ideas and events that characterize interactions between two or more different people.
>>And if the community's strength comes from shared values, and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?
Either this is a loaded question or you've really put the cart before the horse, but I'll bite. A community's value comes from the achievements of its individual constituents. Shared values are not a prerequisite. They naturally arise from personal observations and discoveries, real or perceived. That's it. Every culture, superstition, law, and religion was a consequence of personal examination by one person. What made these so-called shared values "shared" was war and trade.
>there are objective definitions for appropriate and inappropriate, based on shared values and context.
Not necessarily. Setting aside the more heated factors for a moment, think of a concept as simple as spoilers. Spoiling others is widely considered impolite, but a community formed around discussing a work wouldn't want to have to blur every image or paragraph in order to have a conversation about plot points that may be years old. There's no objectively perfect solution, even if there may be a near-objective factor to address.
In instances like this, leaving options is valuable. Some may want to keep all screenshots unblurred, while others may want to take minimal risks.
>and you leave that up to each individual to decide, what's shared, and what sort of "community" is actually offered?
I don't think anyone is proposing anarchy. At the end of the day, there may be some rulesets made by consensus, but they are beholden to the sub-community moderator and their personal whims, who is beholden to the community owner and their personal whims, who is beholden to some loose set of laws based on their country. So this structure is impossible outside of some sort of decentralized p2p server setup (which sounds like a mess to communicate in).
When I say "based on shared values and context", I mean at the community level.
So for a community where people discuss Harry Potter books, many members are not going to want to be careful about everything they say for fear of spoiling someone else's experience. But some people might want to experience reading the books for the first time with other people in the same boat as them.
If this community were well managed, it might solve this by:
- Writing into the community guidelines that spoiler tags only need to be used when discussing something that's NEW.
- Giving users the ability to have spoiler content hidden or shown by default. The key here is that what counts as a spoiler is still defined by the community, not the individual (see the sketch after this list).
- Creating a separate area within their community for new readers to chat about the books, where discussion about events in future books is not allowed.
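As a hypothetical sketch of that split (names and policy invented for illustration): the community defines which material is spoiler-scoped, and the individual only controls the default visibility:

```python
# Community-level policy: what still counts as a spoiler is set by moderators.
COMMUNITY_SPOILER_SCOPE = {"Book 7"}  # e.g. only the newest book needs tags

def render_post(post, user_shows_spoilers):
    """Apply the community's spoiler definition with the user's display default."""
    if post["topic"] in COMMUNITY_SPOILER_SCOPE and not user_shows_spoilers:
        return "[spoiler: %s] (click to reveal)" % post["topic"]
    return post["body"]

post = {"topic": "Book 7", "body": "That twist in the final chapter..."}
print(render_post(post, user_shows_spoilers=False))  # hidden by default
print(render_post(post, user_shows_spoilers=True))   # opted-in users see it
```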
By joining the community, and agreeing to the community guidelines, you are agreeing to abide by an objective definition for what's appropriate within that community.
If you don't agree with those guidelines, you probably don't want to join that community. But without any shared guidelines, you do trend towards anarchy. Certain topics/niches will approach it faster than others.
You and the toplevel commenter may be talking about two different kinds of systems.
You are describing "well-managed communities". HN is arguably one of those. Many topic-specific forums, IRC channels, mailing lists, and communities on platformy things that seek to reinvent those are as well. They tend to be centered around a topic or purpose and have rules, guidelines, and social norms that facilitate that purpose.
I think the toplevel comment is talking about global many-to-many networks where people connect based on both pre-existing social relationships and shared interests (often with strangers). Those require a different model, and centralized moderation based on a single set of rules is probably not the best one.
For example, Twitter is huge, but there are sub-communities that exist within it (some well managed, others not so much), and Twitter is building community features to that end.
Twitter can certainly do a better job of moderating its sub-communities, taking into account shared values and context, but I still don't see how the solution is to have users deal with content moderation.
That's exactly the point GP was trying to make: that people should be able to organize into groups of shared values and context, rather than there being one large, rough monoculture of moderation policies.