
> That’s true, but it doesn’t increase the chance of a phony defamation lawsuit from going through in the first place.

I’m not sure what you mean by “phony” or “going through”, but it definitely increases the chance of any defamation lawsuit against the S.230 protected party surviving to any stage of the process beyond an initial demurrer or motion to dismiss.

> It just changes the outcome of a successful defamation lawsuit.

It also changes the length and expense of many defamation lawsuits that would be unsuccessful in any case, by making it easier for the defendant to get them dismissed sooner because they are invalid as a matter of law before even getting to the facts of the alleged defamation.



My point is, there is no S.230 protected party to be concerned about if there was no crime to begin with. AI producing slanderous results is just not something covered by defamation law, unless some very bizarre circumstances are met. Can't have step 2 without step 1, and the theoretical possibility of step 2 doesn't increase the chance of step 1 happening.

Legal costs are a good point, though. Defamation lawsuits, even ones that are phony, still present problems by clogging up the court system and incurring costs.


> My point is, there is no S.230 protected party to be concerned about, if there was no crime to begin with.

Defamation is a tort, only rarely a crime (and when it is a crime, S230 doesn’t apply, because S230 specifically does not impact criminal law).

And, yes, in an idealized analysis S230 only makes a difference in the final outcome if the court would have ultimately found liability without it – but that's, frankly, not a meaningful analysis in the real world. It assumes that all cases either go to trial or are resolved exactly as they would have been had they gone to trial, which is of course not even remotely the case. The overwhelming majority of tort cases that are even filed, and an even larger share of all potential tort cases, are resolved by settlements that account for the costs, time, and uncertainty of an actual trial result. So any consideration favorable to one side realistically affects not only the course but also the ultimate outcome of vastly more cases than the simplistic analysis would suggest.

> AI producing slanderous results is just not something covered by defamation law to begin with

There’s a lot of bad analysis around AI which starts with the false premise that an instance of AI software constitutes an entity which is both legally cognizable (so that it somehow serves as a responsibility break between a person, natural or corporate, and an action that would otherwise be subject to legal liability) and legally null (so it neither has liability itself nor creates vicarious liability the way, say, a human agent would by way of respondeat superior), when in fact an AI is no different than any other tool like a hammer or, perhaps more relevantly here, a printing press. If you claimed you weren’t liable for libel because it wasn’t you doing it, it was your printing press, everyone would just laugh at you, but for some reason everyone seems to think that “an AI did it” somehow means no human is on the hook.


Good response, you covered some things I hadn't thought of.

> but for some reason everyone seems to think that “an AI did it” somehow means no human is on the hook

Yeah, "an AI did it, not a human" wouldn't be the reason it would be thrown out. A human did ultimately cause it to happen by creating/using a tool, and people often get trapped in the fallacy that an AI's calculations are like an earthquake (or other natural event), or, on the other end of the spectrum, like an independent human with thoughts and feelings. Both of these views would be pretty legally ridiculous to try to argue (though a transhumanist legal framework could change this, hopefully only after it's demonstrated that such a framework is actually needed).

What makes it likely to be dismissed by the courts is the unlikely series of events needed to meet the standard. There would need to be a false statement purported as fact, made with at least negligence, which is difficult because OpenAI openly says it has made an AI product that sometimes spews nonsense and can't be trusted. There would need to be damages, which is unlikely because most journalists are unlikely to publish AI hallucinations as some sort of whistle-blowing attack on someone's reputation, so the output is unlikely to lead a large number of people to believe the claims in the first place. And in some jurisdictions actual malice may be required. These standards are difficult to meet even in cases that seem pretty clear-cut. Maybe a future scenario will meet them if everything goes wrong.



