
Hmm, I haven't noticed any difference yet. Are you saying it got worse in the last few weeks?

For kids' story writing I've been getting better results with 3.5 at times.

Whereas 4 is way better at coding.



No, we have no access to the original model, unfortunately.

The fact that RLHF broke the calibration comes from the GPT-4 paper, possibly the only interesting technical detail that they include.
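For reference, "calibration" here means the probability the model assigns to an answer should match how often that answer is actually correct. A rough sketch of how one could check that with expected calibration error (the function and binning below are just my own illustration, not anything from the paper):

  import numpy as np

  def expected_calibration_error(confidences, correct, n_bins=10):
      # confidences: the model's probability for its chosen answer, in [0, 1]
      # correct: 1 if that answer was right, 0 otherwise
      confidences = np.asarray(confidences, dtype=float)
      correct = np.asarray(correct, dtype=float)
      bins = np.linspace(0.0, 1.0, n_bins + 1)
      ece = 0.0
      for lo, hi in zip(bins[:-1], bins[1:]):
          mask = (confidences > lo) & (confidences <= hi)
          if mask.any():
              # gap between average confidence and actual accuracy in this bin,
              # weighted by the fraction of samples that fall in the bin
              gap = abs(confidences[mask].mean() - correct[mask].mean())
              ece += mask.mean() * gap
      return ece

The paper's claim is essentially that this gap grows after RLHF: the model's stated confidence stops tracking its actual accuracy.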


What's with the obsession with children's stories and GPT? Is it just that children have low standards?


As a parent, my guess would be that people see it as a way to introduce welcome variety and whimsy into the daily routine of reading a bedtime story, while also feeling like they're using a hobby interest to help with a real practical issue.

I have a small library of children's books and we've read them all several times, the good ones many times.

That said, I wouldn't personally turn to these language models. From what I've seen they tend to generate rather bland and boring stories. I would rather make up my own or reread "Kackel i grönsakslandet" for the hundredth time.



