
The fact that system prompts work at all is surprising and a bit sad.

It gives us a feeling of control over the LLM, but it feels like we are just fooling ourselves.

If we really wanted the behaviors we put into prompts, there ought to be a way to train them into the model directly.
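For anyone unfamiliar with the term: a "system prompt" is just request-time text that steers the model, not anything baked in by training. A minimal sketch, assuming the OpenAI Python SDK and an OPENAI_API_KEY in the environment; the model name and the instructions are placeholders, not any vendor's actual prompt:

    # The model's behavior is shaped by this per-request text,
    # not by a separate round of training.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a terse assistant. Decline to give medical advice."},
            {"role": "user", "content": "Summarize this thread in one sentence."},
        ],
    )
    print(response.choices[0].message.content)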



Why train the model to know how to use specific tools that can change at any time and that apply only to ChatGPT (the website)? The model itself is used in many other, vastly different contexts. See the sketch below.
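This is the reason tool use is described per request rather than trained in: each caller declares its own tools. A minimal sketch, again assuming the OpenAI Python SDK; the tool name and schema here are made up for illustration, not anything ChatGPT actually exposes:

    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": "What's on my calendar today?"}],
        tools=[{
            "type": "function",
            "function": {
                # Hypothetical tool, defined only by this particular app.
                "name": "list_calendar_events",
                "description": "List the user's calendar events for a given day.",
                "parameters": {
                    "type": "object",
                    "properties": {"date": {"type": "string", "description": "ISO date"}},
                    "required": ["date"],
                },
            },
        }],
    )
    # The model only emits a tool call; the hosting app
    # (ChatGPT, or any other client) is what executes it.
    print(response.choices[0].message.tool_calls)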




