
Great comment. I'll add that despite being a bit less powerful, the Composer 1 model in Cursor is also extremely fast - to the point where things that would take Claude 10+ minutes of tool calls now take 30 seconds. That's the difference between deciding to write it yourself, or throwing a few sentences into Cursor and having it done right away. A year ago I'd never ask AI to do tasks without being very specific about which files and methodologies I wanted it to use, but codebase search has improved a ton and it can gather this info on its own, often better than I can (if I haven't worked on a particular feature or domain in a few months and need to re-familiarize myself with how it's structured). The bar for what AI can do today is a LOT higher than the average AI skeptic here thinks. As someone who has been using this since the GPT-4 era, I'd say that about once a week I find a prompt that I figured LLMs would choke on and screw up - but they actually nail it. Whatever free model is running in GitHub Copilot is not going to do as well, which is probably where a lot of the frustration comes from if that's all someone has experienced.


Yeah, the thing about having principles is that if the principle depends on a qualitative assessment, then the principle has to be flexible as the quality you're assessing changes. If AI were still at 2023 levels and improving very gradually every few years, like versions of Windows, then I'd understand the general sentiment on here. But the rate of improvement in AI models is alarmingly fast, and assumptions about what AI "is good for" have a 6-month max expiration date.




