
I have treatable cancer.

ChatGPT has been helpful for fairly in-depth research, as well as answering questions I don’t want to bother my oncologist with.

Actually, I prefer it to be less empathetic and more robotic with its answers. When it tries to be empathetic it goes beyond what I’ve asked it and sprinkles on typical safety filter bs.

In contrast, my oncologist could be a bit more empathetic. This person directs your care, and if they do so in a way that seems like a black box, it can be rather scary. They know enough about me, and have enough experience, to be a “helpful human” in a difficult situation.



Are you confident that its responses are accurate? I’ve had it just make up facts or tack made-up facts onto actual facts. It will also fabricate references to research papers.


I’m interested in your statements. Do you have an example of an instance where it made up facts?

I have yet to experience it being incorrect about any domain knowledge I have.


I do. Multiple times I've asked it to summarise plotlines for TV shows, and when it doesn't know (for example, if the show isn't super popular and there isn't a lot of discussion about it on the internet), it'll just make stuff up outright and give me something that sounds absolutely reasonable based on the name of the show but is actually entirely hallucinated.

Similarly, today I saw this [0] comment and in an attempt to help the user went to ChatGPT.

Prompt:

> I remember in the 2000s, there was some site using a decimal IP address (as a single number, not dotted quad) that had hacking/crypto puzzles. Something with a "Alice in Wonderland" theme. Does that ring a bell for you?

Response:

> I'm sorry, but as an AI language model, I don't have personal experiences or memories. However, based on your description, it's possible that you might be referring to the website known as "Project Wonderland." It was a website that featured a series of cryptographic puzzles based on the story of Alice in Wonderland, and it used a decimal IP address (203.208.46.147) as its URL. The website was active in the early 2000s and gained a cult following among enthusiasts of cryptography and puzzle-solving. However, the website is no longer active today.

I got really excited to have found an answer, until, through Google and the Wayback Machine, I realised that ChatGPT had just made this all up and no such website existed at that URL.

I tried starting a new chat with ChatGPT to ask it about this "Project Wonderland" website and it had no idea what I was talking about.

[0]: https://news.ycombinator.com/item?id=35748714

(I am using ChatGPT 3.5 (March 23, 2023))


The important bit of context - which ChatGPT completely missed - is that the address was a single number, e.g. http://3520653040

The server might even have refused the connection if the HTTP "Host:" header wasn't in that format, but as a human, rather than a language model, I'm not sure about that and might be misremembering :)
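
(For anyone who hasn't seen the single-number form: an IPv4 address is just a 32-bit integer, and the dotted quad is only one way of printing it. Here's a quick Python sketch of the conversion; the number is the example from above, and the address it decodes to is purely illustrative, not a claim about the actual puzzle site:)

  import ipaddress

  # The example number from above; decoding it yields the dotted-quad form.
  n = 3520653040
  print(ipaddress.IPv4Address(n))                       # 209.216.230.240

  # And back again: the dotted quad re-encodes to the same integer.
  print(int(ipaddress.IPv4Address("209.216.230.240")))  # 3520653040

  # By hand, each octet is just one byte of the 32-bit value:
  a, b, c, d = (n >> 24) & 255, (n >> 16) & 255, (n >> 8) & 255, n & 255
  print(f"{a}.{b}.{c}.{d}")                             # 209.216.230.240

As far as I know, most browsers will still parse http://3520653040 this way, which is how those sites worked.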


GPT-4 responds with Cicada 3301, which, as best as I can tell, is a valid solve for your query.

* 3301 is one of three numbers that had to be added to get the .com URL

* The Wikipedia page cites someone close to the group with "follow the white rabbit"

* Years don't quite match up, but given that you only asked if it rang a bell, that is fair enough


I have a lot of examples. I just tried it out: I asked "Who killed JR?" and it correctly replied about JR in Dallas.

But then I said, "No, I'm talking about JR in Babylon 5."

The response was:

> I apologize for the confusion. "JR" in Babylon 5 refers to the character John J. Sheridan, who was the captain of the titular space station during the later seasons of the show.

Sheridan was never referred to as JR in Babylon 5. There was no JR in Babylon 5. But because I asked, ChatGPT happily made things up.


I can. I've always struggled with the difference between polyptoton and antanaclasis. (Lucky for me, it doesn't come up very often!) I like what ChatGPT can do, and I've had some good luck with it, so I asked.

What I got back was confidently incorrect, and gave me examples that were neither of the two. (Interestingly, most of the examples were chiasmus instead.) I tried correcting it to get better responses, but it kept apologizing and then giving me the incorrect examples back.


...more confident about my understanding of ChatGPT, how to use it appropriately, and my ability to filter its output or do expanded research as needed.



