Hacker News

Somehow regresses on SWE bench?



I don't know how these benchmarks work (do you do a hundred runs? A thousand runs?), but 0.1% seems like noise.
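The "noise" intuition checks out with a back-of-the-envelope calculation. A minimal sketch, assuming the score comes from a single pass/fail run over roughly 500 tasks (the approximate size of SWE-bench Verified); the function name is my own, not from any benchmark harness:

```python
import math

def pass_rate_stderr(p: float, n: int) -> float:
    """Binomial standard error of a pass rate p measured over n independent tasks."""
    return math.sqrt(p * (1 - p) / n)

# Assuming ~500 tasks and an ~80% pass rate, sampling noise alone
# is on the order of 1.8 percentage points:
se_pct = pass_rate_stderr(0.80, 500) * 100
print(round(se_pct, 2))  # ~1.79 percentage points

# With 500 tasks, a single task flipping moves the score by 0.2%,
# so a reported 0.1% change is below the benchmark's resolution:
print(100 / 500)  # 0.2 percentage points per task
```

Under those assumptions, a 0.1% swing is more than an order of magnitude smaller than the run-to-run noise, and smaller than one task's worth of score.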

That benchmark is pretty saturated, tbh. A "regression" of such small magnitude could mean many different things or nothing at all.

i'd interpret that as rounding error; the score is effectively unchanged

swe-bench seems really hard once you are above 80%


it's not a great benchmark anymore, starting with it being primarily python / django. the industry should move to something more representative

OpenAI has; they don't even mention a SWE-bench score for gpt-5.3-codex.

On the other hand, SWE-bench Verified is their own curated benchmark, which makes dropping it telling.



