Right, crowdsourced moderation works for badly done spam, poorly coded bots, etc. It does nothing for pseudo-science, "self-interested proselytizers", or PR teams. I think there's a mentality that if we just "crowdsource" something, somehow the work just goes away. It's a little like hand-waving "the cloud" or "serverless" -- just because it's not your problem doesn't mean it disappears. It's still work that needs to be done by somebody.
On top of that, moderating anything sufficiently popular isn't easy -- e.g., dealing with trolls or PR teams targeting a forum gets extremely complicated: sussing out who is who, figuring out where to draw the line -- and like many things it's inherently subjective, which means it's the last thing you want to hand-wave away; it's integral to whatever is being built.
> I think there's a mentality that if we just "crowdsource" something, somehow the work just goes away. It's a little like hand-waving "the cloud" or "serverless" -- just because it's not your problem doesn't mean it disappears.
Yeah, and this has been proven time and time again over at least the past 20 years. Crowdsourced moderation is also usually founded on the fallacy that people who are popular contributors will also be effective moderators, which isn't true.
Though certain crowdsourced features, like flagging, can definitely be moderation force-multipliers.
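To make that concrete, here's a minimal sketch of the kind of flagging pipeline that acts as a force-multiplier -- the crowd does the triage and humans only review what crosses a threshold. All names and the threshold value are hypothetical, not any particular site's implementation:

```python
from collections import defaultdict

FLAG_THRESHOLD = 3  # hypothetical: tune per community size

flag_counts = defaultdict(set)  # item_id -> set of distinct users who flagged it
review_queue = []               # what human moderators actually look at

def flag(item_id, user_id):
    """Record a flag; escalate once enough distinct users agree."""
    flag_counts[item_id].add(user_id)  # set dedupes repeat flags from one user
    if len(flag_counts[item_id]) >= FLAG_THRESHOLD and item_id not in review_queue:
        review_queue.append(item_id)  # crowd surfaced it, humans decide

# Three distinct users flag the same post; it lands in the review queue once.
for uid in ("alice", "bob", "carol"):
    flag("post-42", uid)
```

The point being: flagging doesn't replace moderators, it just concentrates their attention -- the subjective judgment call still happens downstream.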