
> I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.

If you replaced this guy's name with mine, I'd be upset. In my non-software networks the hallucination part isn't common knowledge; to them it's just a cool Google replacement.



> In my non-software networks the hallucination part isn't common knowledge

I think that's one of the main issues with these new LLMs: most users will take what the bot tells them as gospel. OpenAI really should be more upfront about that. If regulations and policies start getting put forth without an understanding of LLM hallucination, we could very well end up in a situation where regulators want something that is not technically feasible.


> OpenAI really should be more upfront about that.

I mean, they are quite upfront. When you load the page, it displays the following disclaimers in quite a large font:

"Limitations

May occasionally generate incorrect information

May occasionally produce harmful instructions or biased content

Limited knowledge of world and events after 2021"

2 out of the 3 disclaimers are about the fact that the software lies.

And then at the bottom of the page, right below the input box, they say: "Free Research Preview. ChatGPT may produce inaccurate information about people, places, or facts"

Sure, they could make the warnings even larger, reword them to "This software will lie to you", and add little animated exclamation marks around the message. But it's not like they hide the fact.


People don't read text: https://www.nngroup.com/articles/how-users-read-on-the-web/

A better way, as the sibling comment says, is to force people to type a sentence so they consciously acknowledge it. It's similar to college exams that ask you to write out something like "I have not cheated on this assignment or test."


One thing they could try is forcing users to type "I understand the information presented by ChatGPT should not be taken as fact" before they can use it.

I've seen that sort of thing used to get people to read the rules on Discord servers, and this is higher stakes IMO.
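
For what it's worth, that gate is only a few lines of front-end code. A minimal sketch in TypeScript, assuming a browser UI — the constant, function name, and storage key here are all invented for illustration, not anything OpenAI actually ships:

    // Hypothetical type-to-confirm gate, in the style of GitHub's
    // "type the repo name to delete" flow. All names are made up.
    const REQUIRED_ACK =
      "I understand the information presented by ChatGPT should not be taken as fact";

    function tryUnlockChat(typed: string): boolean {
      // Normalize whitespace and case so minor spacing slips don't block people
      const normalize = (s: string) => s.trim().replace(/\s+/g, " ").toLowerCase();
      if (normalize(typed) !== normalize(REQUIRED_ACK)) {
        return false; // keep the chat input disabled
      }
      // Remember the acknowledgment so returning users aren't re-prompted
      window.localStorage.setItem("ackTimestamp", new Date().toISOString());
      return true; // enable the chat input
    }

The harder design question is whether to re-prompt periodically; a one-time acknowledgment gets forgotten about as quickly as the disclaimer text does.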


I agree that they provide that disclaimer on the homepage. I was talking more broadly: society (namely the news media and government) should be aware of the limitations of LLMs in general. Take this article from the NYT[1]; how you react to it, alarmed or "meh", depends on how well you understand the limitations of LLMs. All I'm saying is that society in general should understand that LLMs can generate fake information, and that this is just one of their core limitations, not a nefarious feature.

[1]: https://www.nytimes.com/2023/02/08/technology/ai-chatbots-di...


If I search my name, it doesn't come up with anything defamatory. (Not that I tried leading questions.) But it does come up with plenty of hallucinations, including where I've worked, lived, gone to school, etc. And that's with a bunch of bios online and, AFAIK, a unique online name.


Anyone using it is shown a page saying this bot makes things up.



