I find it fascinating to read comments from a lot of people who support open models without guardrails, and then to read this thread with seemingly the opposite sentiment in the overwhelming majority. Is it just two different sets of users with differing opinions on whether models should be open or closed?
Context matters. In this case we're talking about Grok on X. It's not a philosophical debate about whether open or closed models are good. It's a debate (even though it shouldn't be) about Grok producing CSAM on X. If this were about what users do with their own models on their local machines, things would be different, since that's not openly accessible or part of one of the biggest sites on the net. I think most people would argue that public-facing LLMs have some responsibility to the public. As would any IP owner.
I think the question of whether X should do more to prevent this kind of abuse (I think they should) is separate from Grok or LLMs, though. I get that since xAI and X are owned by the same person there are some complications here, but most of the arguments I'm reading have to do with the LLM specifically, not just lax moderation policies.
I think there's a difference between access without guardrails and what folks do with that access; you can support the former and still decry the latter, or, in this case, decry a site that allows, or doesn't even care, that its integrated tool is used to creep on folks.
I can argue for access to, say, Photoshop-like tools, and still say folks shouldn't post revenge / fake porn ...
They ban users responsible for misusing the tool, and refer them to law enforcement when appropriate. The whole point of this article is to say that's not good enough ("X blames users for [their misuse of the tool]"), implying that merely making the tool available for people to use constitutes support of pedophilia. (Textbook case of appealing to the Four Horsemen of the Infocalypse.) The prevailing sentiment in this thread seems to be agreement with that take.
Making the tool easy to use and letting it immediately post to Twitter is very different from simply providing a model online that people can download and run themselves.
If you are providing a tool for people, then YES, you are responsible to some degree.
Think of it this way: I sell racecars. I'm not responsible if someone buys my racecar and then drinks and drives and dies. Now suppose I run an entertainment venue where you can ride along in racecars. One of my employees is drunk, and someone dies. Now I am responsible.
In, like, an "ask a bunch of people and see what they think" way. Consensus. I'm not talking about legality, because I'm not a lawyer and I also don't care.
But I think most people would say "uh, yeah, the business needs to do something or implement some policy".
Another example: selling guns versus running a shooting range. If you're running a shooting range, then yeah, I think there's an expectation that you make it safe. You put up walls, you have security, etc. You try your best to mitigate the bad shit.
Misuse in this case doesn't include harassing adult women with AI-generated porn of them. "Oh, we banned the people doing this with children" doesn't cut it, in my mind.
As of May, posting AI-generated porn of non-consenting adults is a federal crime[1], so I'd be very surprised if they didn't ban users for that as well. The article conflates a bunch of different issues, which makes it difficult to understand exactly what is and is not being talked about in each individual paragraph.
I am glad that open models exist. I also prefer that the most widely accessible AI systems that have engineered prompts and direct integration with social media platforms have guardrails. I do not think that this is odd.
I think it is good that you can install any APK on an Android device. I also think it is good that the primary installation mechanism that most people use has systems to try to prevent malware from being installed.
This sort of approach means that people who really need unbounded access and are willing to go through some extra friction can access these things. It makes it impossible for a megacorp to have complete control over a computing ecosystem. But it also reduces abuse since most people prefer to use the low-friction approach.
When people want open models without guardrails, they're mostly talking about LLMs, not so much image / video models. Outside of preventing CSAM, what kind of guardrails would an image or video model have? Don't output instructions on the image for how to make meth? Lol
How do you even train a model to do that? For closed / proprietary models, that works, but for open / offline models, if I want to make a LoRA for meth instructions in an image... I don't know that you can stop me from doing so.
The thread is about a model-as-a-service. What you do at home on your own computer is qualitatively different, in terms of harassment and injury potential, from something automatically shared to Twitter.
Any mention of Musk on HN seems to cause all rational thought to go out the window, but yeah I wonder in this case how much of this wild deviation from the usual sentiment is attributable to:
1. Hypocrisy (people expressing a different opinion on this subject than they usually would because they hate Musk)
vs.
2. Selection bias (article title attracts a higher percentage of people who were already on the more regulation, less freedom side of the debate)
vs.
3. Self-censorship (people on the "more freedom, less regulation" side of the debate being silent or not voting on comments because in this case defending their principles would benefit someone they hate)
There might be other factors I haven't considered as well.
Been thinking about this more, and regarding #1 I wonder if perhaps part of what we're seeing is that a significant number of people just weren't thinking in terms of principles to begin with. (Probably most people, in fact; it's not really something that comes naturally with System 1 thinking.)
We see stories on HN about companies forcing guardrails on the models they release to the public, see a bunch of people in the comments talking about how terrible that is, and think "cool, looks like the majority has a principled stance in favor of open models without guardrails". But really only a small percentage of commenters were thinking in those terms. What most actually support is just the idea of themselves and people they like getting access to open models without guardrails. When a different story comes along about a company not imposing those guardrails, and people they don't like doing bad things with that freedom, they have a completely different opinion.
You could call it hypocrisy, except it's not really hypocrisy to go against principles you never had to begin with.
It feels a little snobbish to talk about it this way, so I'll inject a bit of humility by adding that I'm probably guilty of this too sometimes. Like I said, it's a natural product of System 1 thinking. But it's probably healthy to give people a little grief over this, because having consistent principles is important to a well-functioning society.
Gee, I wonder why people would take offense at an AI model being used to generate unprecedented amounts of CSAM from real children, or objectify millions of women without their consent. Must be that classic Musk Derangement Syndrome.
The real question is how can the pro-Musk guys still find a way to side with him on that. My leading theory is that they're actually pro-pedophilia.
I think that regardless of the source, sharing such pictures on public social media probably crosses the line? And everything generated by this model is de facto posted publicly on social media (some commenters are even saying it's difficult to erase unwanted / unintended images?)
I'd also argue commercialization affects this - X is marketing this as a product and making money off subscriptions, whereas I generally think of an open model as something you run locally for free. There's a big difference between "Porn Producer" and "Photoshop"