>I use hate-speech as an example because the "groomer" as a slur issue is at a peak right now and is fresh in my head.
Mm. I think the problem is that censorship advocates like you try to add words like this into the lexicon of "hate speech" and try to have it censored. It's not, and it shouldn't be, and you should be ashamed of even thinking it.
It is not other people's responsibility to stay up to date on what may or may not be triggering you on any particular day.
> It's not, and it shouldn't be, and you should be ashamed of even thinking it.
I didn't see the OP advocating for censorship, but for the enablement of moderation of unacceptable content within a community. Do you think LGBTQ spaces on reddit - for instance - should be prevented from banning users that compare them all to child molesters? Is that "censorship" that the OP should be ashamed of? Must all spaces be entirely laissez-faire about all user-generated content that falls within the confines of the law? I do not think so.
You may quibble that deriding a segment of our population as "groomers" is "not hate speech", but at the very least it is received as hateful by many. We're not talking about legal statutes in this conversation, we're talking about community moderation.
Frankly, I think you should be ashamed of your own comment. It was incredibly aggressive, presumptuous, and built on a straw-man argument.
Your argument for "spaces" and "communities" goes out the window when moderators of subreddits are forced to make sure their space adheres to what Reddit wants. That's not your space, it's Reddit's. No one is stopping subreddits from moderating how they want; the problem is when they are forced to moderate how Reddit wants. Furthermore, your "communities" and "spaces" are nothing more than an excuse to create echo chambers, which are not productive for free speech either, but that's just Reddit for you.
Forgive me, but I'm copy-pasting my comment from elsewhere.
----
I can't quite get a clear picture of what you actually believe. Come down to my level and talk specifics vs. generalities:
1) Do you think that individuals or communities creating a content-specific space for user interaction should not have any right or ability to moderate the content?
2) Do you think individuals or communities creating a space dedicated to specific interest, hobby, or lifestyle should have no ability to moderate that content?
3) Abstracting a level higher, should all spaces for user-generated conversation and content necessarily welcome all contributions? In other words, should everything go, everywhere, in all communities where user interaction is invited? Or are you just arguing the point semantically?
I truly, sincerely understand the point you are making but I also have no idea what you're advocating for.
>I didn't see the OP advocating for censorship, but for the enablement of moderation for unacceptable content within a community.
This is the same thing. Facilitating and encouraging the use of moderation policies which prohibit the speech of users is censorship. I've often found that Americans confuse "censorship" with "violating the Constitution." Most of us are not given constitutional protections because we don't live in America. Further, the Constitution only covers some kinds of free speech, not all. Censorship is much broader than the Constitution, and covers private companies censoring citizens.
> This is the same thing. Facilitating and encouraging the use of moderation policies which prohibit speech of users is censorship.
You are using "censorship" as a big scary word when talking about generalities. I can't quite get a clear picture of what you actually believe. Come down to my level and talk specifics vs. generalities:
1) Do you think that individuals or communities creating a content-specific space for user interaction should not have any right or ability to moderate the content?
2) Do you think individuals or communities creating a space dedicated to specific interest, hobby, or lifestyle should have no ability to moderate that content?
3) Abstracting a level higher, should all spaces for user-generated conversation and content necessarily welcome all contributions? In other words, should everything go, everywhere, in all communities where user interaction is invited? Or are you just arguing the point semantically?
I truly, sincerely understand the point you are making but I also have no idea what you're advocating for.
> It is not other people's responsibility to stay up to date on what may or may not be triggering you on any particular day.
The internet moves fast now. If there's a growing trend of using a slang word to get around hate speech laws without actually breaking any, and everyone reacting to it understands it... then it's hate speech, right?
"groomer" is only used in the context of hatred towards a particular group without technically violating any hate-laws. It's quickly becoming a way to say "fagg*t" without saying it...
We wouldn't have any words left to use if it was up to you I guess.
And what "hate-laws" are you referring to, in regards to speech, that people are "getting around"? You do know Twitter TOS is not the actual law right?
> We wouldn't have any words left to use if it was up to you I guess.
Good point, that's a bit of an extreme though and context/intent does matter.
> And what "hate-laws" are you referring to, in regards to speech, that people are "getting around"? You do know Twitter TOS is not the actual law right?
True, I meant the Twitter TOS I guess. But Musk wants to model the TOS as anything within the confines of the law right?
It is 100% hate speech to call all LGBTQ people groomers, to imply one's sexuality is because they were groomed.
It's hateful and a disingenuous political argument to cast that term on anyone who objects to DeSantis' banning of discussions of LGBTQ topics. Which is why we are discussing this.
Not knowing that is triggering is gross.
If you have certain distasteful beliefs, at least have the respect not to say it in public.
This isn't perception. It is the reality, and factual, that a large number of people/politicians on the right are saying this exact slur as their counter-response to critiques of the FL and TX laws and the book bannings across the country.
You should see the comments we get on FB. I run ads for political campaigns. It's insanely effective how much this has seeped into that electorate. And very very sad.
Well, it depends. I've never heard this term "groomer" before. I don't even understand the term, but it sounds like there's a community or subculture out there that takes it very seriously. So, if I were in public spaces where people from that community were known to communicate with one another, I would absolutely want to know if something I said was offensive or hateful. I would expect to have the issue brought to my attention if I inadvertently said something problematic. I'm not trying to hurt anyone, I just want to talk with people. I don't feel like my free speech is being impinged upon either.
The term originally referred to an adult gradually influencing a child to make them vulnerable to sexual abuse. Now it's commonly used to refer to teaching children progressive values, which I think is hateful.
It's also generally used in extremely low signal comments. For example, a person might talk on Twitter about how they think teachers putting up posters saying they support gay students is helpful. They'd then get tens of replies with only the exact phrase "Ok groomer".
>The term originally referred to an adult gradually influencing a child to make them vulnerable to sexual abuse. Now it's commonly used to refer to teaching children progressive values, which I think is hateful.
The problem is, you seem to be intentionally misconstruing the definition of the word "hateful" in order to categorize words you don't like as "hate speech".
There is no other way that the thing you described meets any reasonable definition of "hate".
Hate speech is usually full of euphemisms, indirect language, and inference. It also often depends a lot on context.
In the specific case of the use of "grooming," I can see how it is not obvious that this is hate speech, but once you connect the dots, based on the context, it becomes quite obvious.
For example, teaching books that feature any LGBTQ characters has been called "grooming." What is the usual definition of grooming? Preparing a child for sexual abuse, which is morally reprehensible -- a behavior that is, frankly, deserving of hate.
However, when the only reason "grooming" is brought up is specifically because a book features a gay character, we are expected to "hate" the "morally reprehensible." Why is it morally reprehensible (hateful), in this context? Certainly not because children are being groomed for sexual abuse. The only reason is that the gay character is in the book.
Now, some people might think that presenting characters that are gay is morally reprehensible. They might think the same thing about featuring Black characters, Asians, Jews, or some other group. It's all hate speech. Even if it's not using the most blatantly obvious inflammatory language.
Saying that teaching a child progressive values is similar to teaching a child to be vulnerable to sexual abuse is accusing progressives of being like sexual abusers of children.
That's hateful. (I deliberately didn't say hate speech)
We aren’t talking about lectures extolling high marginal tax rates.
Some parents object to exposing children to inappropriate sexual themes for the purpose of conditioning them to accept the same as normal, i.e. grooming.
That is not generally occurring, and even it were, it would not qualify as "grooming." Grooming is when an adult tries to gain the trust of a child so that they can manipulate and eventually sexually assault them. If a teacher showed furry porn to kids in class to teach them that it's normal to be a furry, that would be extremely inappropriate, but it would still not be grooming unless the teacher also planned to have sex with them.
I wasn't talking about that. When that happens I find it alarming. I've yet to see the snide throwaway line "Ok Groomer" used in a case of actual child sexual abuse.
The only reason it's up for discussion is because it's being used as hate attack against queer communities and those standing up against censoring of teachers, students, and banning of books on lgbtq topics and gender.
We are talking about it because Republicans are using it as a slur in political arguments. There is strong pushback against DeSantis' Don't Say Gay Bill, there has been a mass banning of LGBTQ books in schools, and plenty of other attacks/laws on the community (mostly in just a couple states whose Govs are running for Pres...)
The response of the proponents, which sadly seems politically effective, is to label anyone who objects to these laws groomers.
Implying/whistling that queer people exist because they were groomed, e.g., a sexual predator turned them gay.
It is aimed at queer people.
*And ironically at those of us who are standing up for, you guessed it... free speech.*
I did not want to focus on the example as it was but one example of common user behavior, but let's do it anyway. I should preface this by saying that I have been a moderator of online spaces for the last two decades, starting on BBSes and IRC, and more recently on social networks. Social networks contain some of the worst behaviors I have ever experienced.
It is not words that need to be banned; banning words would be useless. Users could simply call people "gr00mers". Another word people use to talk about trans people in a hateful way is "troons", which is defined on Urban Dictionary as:
Troon, Noun. A transphobic slur referring to a trans woman.
The word Troon used to mean "an ugly woman who looks like a man." However, over time the word has been co-opted and turned into a particularly nasty slur against trans women.
This word was quickly identified in groups, and people would get removed from groups for using it in an abusive way by setting that word to trigger a moderation warning. The people using it quickly adapted, proving that banning words is useless. The word changed to "trains", which was added to the watch list. The little dance kept going on; right now the way it is written is by using a train emoji. So you would see "Look at this filthy [train emoji]".
Banning words is useless. Words are not hardcoded constants that cannot be replaced; they are descriptive and evolve in meaning. Banning "groomer" would simply lead to people using another word. Most slurs are just that: words that are borrowed and then infused with hate. Just take a look at https://en.wikipedia.org/wiki/List_of_ethnic_slurs, and you will find that a lot of the terms are harmless when they are not weaponized or used in a hateful context. Even one of the oldest words used to indicate an intolerant position toward transgender people was simply a way to write "[a car's] transmission", or "bundle of sticks" toward homosexual people. This list is not exhaustive at all, and only community websites such as Urban Dictionary are able to track slurs as the list grows constantly.
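The cat-and-mouse above is easy to demonstrate. Here is a minimal sketch of a naive exact-word blocklist of the kind described; the word list, messages, and function name are hypothetical, and real moderation tooling is far more involved than this:

```python
# Hypothetical illustration: why exact-word blocklists are trivially evaded.
BLOCKLIST = {"groomer", "troon", "trains"}

def naive_filter(message: str) -> bool:
    """Return True if the message trips the blocklist."""
    words = message.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

# The exact word is caught...
print(naive_filter("ok groomer"))             # True
# ...but the most trivial substitutions slip through:
print(naive_filter("ok gr00mer"))             # False
print(naive_filter("look at this filthy 🚂"))  # False
```

Each escalation (adding "gr00mer", then "trains", then the emoji) just restarts the loop, which is why the argument above favors moderating behavior rather than vocabulary.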
What I am talking about is seeing through the thin veil of false politeness and banning harmful actions.
Adding to the previous example: people often come ("raid") into private Facebook support groups. To enter those groups, you need to answer several questions, for example: "Are you transgender? / Do you understand that this is a support group and that confrontation is frowned upon? / Do you agree with the rules of the group?". Every single day, people lie on those questions to get in. Upon entering, they quickly drop all false pretense, find the most vulnerable users, and spam them with hate via their inbox (where moderators cannot keep watch). On threads about depression, they leave comments encouraging the person to end their life, through words that are absolutely legal. Imagine the worst 4chan has to offer, but directly in your inbox. Many users are not web-savvy, and this type of content can be absolutely crushing. Worst of all, trolls gather in groups whose only purpose is to list the profiles of the vulnerable users they find. Some of those groups that I am aware of have up to 25k members. This is worrisome to me. Since they only contain links to profiles and are worded in legal ways, Facebook does not shut them down. They result in users receiving hate and thinly veiled threats from up to a hundred users a day, since those messages are worded in a way that prevents reports from sticking.
Trans issues are a hot and controversial topic, so let's take a step back and use another community as an example.
Imagine you ran a community of people with anorexia gathering to talk about treatments and solutions and to review doctors. One facet of anorexia is body dysmorphia, where the person sees themselves as very overweight even when dangerously underweight. A hateful person or troll can find someone at death's door and convince them that they are fat. Is it ethical? Not at all. But if worded by someone skilled at trolling, it can be done in a legal way. They can send that person images of fat people and tell them that they look the same. Another way they go about it is by undermining the authority of doctors and hinting toward false and often dangerous treatments. This kind of trolling will go on for weeks, even months, with people advocating strange concepts under a false account with a false profile photo. Some of them even stay undercover for so long that they manage to gain high standing in those communities, only to then use the insider information they have gathered for nefarious purposes.
Often, the trolls do not even hold a grudge or hate towards the targeted communities. To them, it is a game. Many are very young teens acting under the impression that they are great hackers for managing to infiltrate support networks and destroying them.
We can all agree that those behaviors are not only hateful, they are dangerous and target the most vulnerable of individuals. Worst of all for the corporations running those websites, they result in huge exoduses of users. One I can think of is a 40k-strong Facebook group that moved to Discord due to the constant abuse its members received.
Can Twitter decide to accept the behavior described above? Absolutely. It is a private entity. However, this kind of "everything is allowed at all costs" mindset comes with a price. Often, that price is the migration of users to other platforms. What happened to Tumblr is a great example.