Exactly! My ideal vision for the future is that agents will be doing all grunt work/implementation, and we'll just be guiding them.
Can't wait til I'm coding on the beach (by managing a team of agents that notify me when they need me), but it might take a few more model releases before we get there lol
If you think you could do that on the beach, couldn't you do traditional software dev on the beach?
I actually think there's a chance it will shift away from that, because the emphasis will move to fast feedback loops — meaning you spend more of your time interacting with stakeholders, gathering feedback, etc. Manual coding is more the sort of task you can do for hours on end without interruption ("at the beach").
How nice: you've just hung up with a demanding stakeholder who knows you can deliver a lot "instantly," you switch to your phone, and your "agents" are stuck on some weird problem they can't debug.
What happens is the status quo changes, like it did with DevOps. If you find yourself with the time to lead agents on a beach retreat, you might instead get pulled into more product design and management meetings. AI/Dev, like DevOps: wearing more hats as a result. Maybe I'm wrong though.
I still think human taste is important even if agents become really good at implementing everything and everyone's just an idea guy. Counter-argument: if agents do become really good at implementation, I'm not sure even human taste would matter, since agents could brute-force every possibility and launch it into the market.
Maybe I'll just call it a day and chill with the fam
Seems like your vision is to let AI take over your livelihood. That’s an unusually chipper way to hand over the keys unless you have a lifetime of wealth stashed away.
There is enormous money and effort in making AI that can do that, so if it's possible it is eventually going to happen. The only question is whether you're part of the group making the replacement or the group being replaced.
If their livelihood is solving difficult problems, and writing code is just the implementation detail they've gotta deal with, then this isn't gonna do much to threaten their livelihood. Like, I am not aware of any serious SWE (who actually designs complex systems and implements them) being genuinely worried about their livelihood after trying out AI agents. If anything, it makes them feel more excited about their work.
But if someone’s just purely codemonkeying trivial stuff for their livelihood, then yeah, they should feel threatened. I have a feeling that this isn’t what the grandparent comment user does for a living tho.
I neither know nor care what the C-suite at my company thinks, as long as they provide me the resources necessary to get my job done effectively.
And, so far, it seems like they are fairly understanding, as they are happy about the output of my work. After all, they aren't paying me per-line-of-code delivered, they are paying me to solve problems. If they think that an LLM can replace me fully, they are more than welcome to try it and see how it works out for them.
The entirety of my report chain is just former engineers (with some of them being pivotal to things like GMaps SDK for iOS and such), so I am not really worried about them testing this theory out in practice. And if they do and decide that an LLM can replace me, well, there are always other jobs out there I can take. From my personal experience at this company, I will be just fine.