
Is there a big enough dataset of 'good' code to train from though?



I (and lots of people) used to think the models would run out of training data and it would halt progress.

They did run out of human-authored training data (depending on who you ask) in 2024/2025. And they still improve.


> They did run out of human-authored training data (depending on who you ask) in 2024/2025. And they still improve.

It seemed to me that the improvements attributable to training (i.e. the model itself) in 2025 were marginal. The biggest gains came from structuring how the conversation with the LLM goes.


> And they still improve.

But what asymptote are they approaching? Average code? Good code? Great code?


I'd argue that "good", or at least "good enough", is the point where it becomes preferable to spend your time prompting rather than reading and writing code. The goal, more or less, is that the final output meets the feature specifications.

A lot of developers, myself included, are having a difficult time accepting that the code doesn't matter nearly as much anymore. The feedback cycles that made hotfixes, bug fixes, customer support, etc. so expensive have shrunk by orders of magnitude. A codebase that can be maintained by humans is perhaps no longer a goal worth pursuing.

To really see and feel this, I think it's worth spending at least a weekend or two seeing what you can build without writing or reviewing any of the code. Use a frontier model, Opus 4.6 or Codex 5.3; it probably doesn't matter which one you choose.

If you give it an honest try, you'll see that a lot of the limitations are self-imposed. Said another way: the root problem is usually some flavor of the user under-specifying the prompt, keeping inconsistent design docs, or not implementing guard rails to prevent the AI from reintroducing bugs you previously squashed.
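
To make that last guard rail concrete, here's a minimal sketch, assuming pytest is installed and that each previously squashed bug has been pinned as a test under a hypothetical tests/regressions/ directory (the script name and layout are mine, not any particular tool's):

    # check_regressions.py -- hypothetical guard-rail script.
    # Run it after every AI-generated change (e.g. as a pre-commit hook
    # or CI step); a failure means the agent reintroduced an old bug.

    import subprocess
    import sys

    def run_regression_suite() -> int:
        """Run the pinned regression tests and return pytest's exit code."""
        result = subprocess.run(
            [sys.executable, "-m", "pytest", "tests/regressions", "-q"],
            capture_output=True,
            text=True,
        )
        print(result.stdout, end="")
        return result.returncode

    if __name__ == "__main__":
        # A non-zero exit blocks the change from landing until the
        # reintroduced bug is fixed again.
        sys.exit(run_regression_suite())

The script itself is beside the point; the point is that every bug you squash becomes a machine-checkable fact the model can't silently undo.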

It's a very new way of working and it feels foreign. But there are a lot of very smart, very successful people doing this. People who have written millions of lines of code over their lifetime, and who enjoyed doing it, are now fully delegating the task.


They ran out of passively collected data. RLHF allows them to gather deeper, more targeted data.

There is a lot of RLHF effort around this.
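
To make "targeted data" concrete, here's a minimal sketch of the pairwise (Bradley-Terry style) preference loss that reward models in typical RLHF pipelines are trained with; the scores are placeholder floats, not a real model's outputs:

    # A labeler picks the better of two responses, and the reward model
    # is trained so the chosen one scores higher.

    import math

    def preference_loss(score_chosen: float, score_rejected: float) -> float:
        """-log P(chosen beats rejected), with P = sigmoid(score difference)."""
        margin = score_chosen - score_rejected
        # log1p(exp(-margin)) equals -log(sigmoid(margin))
        return math.log1p(math.exp(-margin))

    # Model agrees with the label: small loss. Disagrees: large loss.
    print(preference_loss(2.0, 0.5))   # ~0.20
    print(preference_loss(0.5, 2.0))   # ~1.70

Each labeled comparison is a data point the lab chose to collect, which is what makes this deeper and more targeted than passive scraping.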

AHEM

Let me repeat myself.

I think it goes without saying that they will be writing "good code" in short order.



