I think the question is less about what people do with ChatGPT's output, and more about the output itself.
If ChatGPT makes the same libelous claim in its output to 100,000 people, that doesn't seem particularly different from a libelous claim printed in a newspaper with circulation of 100,000.
Microsoft/OpenAI can put up all the legalese disclaimers they want, but if they market ChatGPT/Bing as a tool that provides useful answers, the disclaimers don't protect them from libel. By analogy, the NYT can't print a disclaimer that none of its reporting should be considered truthful in order to protect itself from libel suits -- it just won't work in court. (And yes, there are tabloids that print nonsense stories about aliens and Elvis and Brad Pitt, which are for "entertainment purposes only", but the difference is that the average consumer knows it's a bunch of nonsense, just like The Onion. Parody/fiction is protected.)
So I actually think this is going to be the biggest question/risk by far in terms of commercializing ChatGPT etc. -- much more important than the copyright status of training material.
That's because courts don't decide libel on disclaimers; they decide it on harm and on how a reasonable person interprets things. If they market Bing/ChatGPT as a useful research tool, with advertisements showing it giving correct answers and so on, leading people to believe its lies are true, then there's a real risk libel suits will shut the whole thing down.
On the other hand, to make sure they can operate, they may have to market the thing basically as a toy -- a magic 8-ball, a Mad Libs generator. A disclaimer isn't enough; they'd need to avoid any kind of advertising or product positioning that ever depicts it as giving useful or correct information at all. No more homework help, no more trip planning, no more search tool. Which basically sinks the product. But that's also arguably the best outcome -- that Microsoft/OpenAI shouldn't be permitted to market it as anything but a constantly-lying toy.