Hacker Newsnew | past | comments | ask | show | jobs | submitlogin

In order to consistently output the same fake prompt, that fake prompt would need to be part of GPT’s prompt…. In which case it wouldn’t be fake.

You can envision some version of post LLM find/replace, but then the context wouldn’t match if you asked it a direct non-exact question.

And most importantly, you can just test each of the instructions and see how it reacts.



Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: