
This is what I don't understand: everyone who thinks we're still relevant with the same job and salary expectations.

Everything just changed. Fundamentally.

If you don't adapt to these tools, you will be slower than your peers. Few businesses will tolerate that.

This is competitive cycling. Claude is a modern bike with steroids. You can stay on a penny farthing, but that's not advised.

You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.

What remains to be seen is how many of us the market needs and how much the market will pay us.

I'm hoping demand and comp remain constant, but we'll see.

The one thing I will say is that we need ownership in these systems ASAP, or we'll become serfs to computing.





I don't think that's the real dichotomy here. You can either produce 2-5x more good, maintainable code, or 10-50x more dogshit code that works 80-90% of the time and will be a maintenance nightmare.

Management has decided that the latter is preferable for short-term gains.


> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.

It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.


It's not dogshit if you're steering.

That's what so many of you are not getting.

Look at the pretty pictures AI generates. That's where we are with code now. Except with code you have the equivalent of ComfyUI instead of ChatGPT: you can work with precision.

I'm a 500k TC senior SWE. I write six-nines, active-active, billion-dollar-a-day systems. I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.


> Look at the pretty pictures AI generates. That's where we are with code now.

Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof.

Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.


Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.

I agree entirely, except I don't know that I've seen pretty pictures from AI.

"Glossy" might be a good word (no i don't mean literally shiny, even if they are sometimes that).


I've developed a new hobby lately, which I call "spot the bullshit."

When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.

I've found something every time I looked, since starting this routine.


Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.

Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.
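To give a flavor of what that extra scrutiny has to catch (a hypothetical Python analogue, not the actual scripts in question): the generated code typically runs fine on the happy path and reads plausibly, while quietly doing something like this.

  import json

  # Hypothetical illustration: looks tidy and "works", but the blanket except
  # hides FileNotFoundError, JSONDecodeError, permission errors, and so on.
  # The caller silently gets an empty config and the failure surfaces far
  # away from its actual cause.
  def load_config(path="config.json"):
      try:
          with open(path) as f:
              return json.load(f)
      except Exception:
          return {}

Spotting that takes exactly the line-by-line review that eats the claimed productivity boost.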


The problem is not that it can’t produce good code if you’re steering. The problem is that:

There are multiple people on each team, and you cannot know how closely each teammate monitored their AI.

Somebody who does not care will vastly outperform your output, by orders of magnitude. With the current unicorn-chasing trends, that approach tends to be rewarded more.

This produces an incentive to not actually care about the quality. Which will cause issues down the road.

I quite like using AI. I do monitor what it's doing when I'm building something that should work for a long time. I also do totally blind vibe-coded scripts when they will never see production.

But for large programs that will require maintenance for years, these things can be dangerous.


> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.

I agree, but this is an oversimplification: we don't always get the speed boost, especially when we don't stay pragmatic about the process.

I have a small set of steps that I follow to really boost my productivity and get the speed advantage.

(Note: I am talking about AI coding, not vibe coding.)

- You give all the specs, and there is "some" chance the LLM will generate exactly the code required.

- In most cases you will need more than two design iterations and many small ones, like instructing the LLM to handle errors properly and recover from them gracefully (a sketch of what I mean is at the end of this comment).

- This does increase speed 2x-3x, but we still need to review everything.

- Also, this doesn't take into account the edge cases our design missed.

I don't know about big tech, but this is what I have to do to solve a problem:

1. Figure out a potential solution

2. Make a hacky POC script to verify the proposed solution actually solves the problem

3. Design a decently robust system as a first iteration (that can have bugs)

4. Implement using AI

5. Verify each generated line

6. Find out edge cases and failure modes missed during design, then repeat from step 3 to tweak the design or from step 4 to fix bugs.

Whenever I jump directly from 1 -> 3 (vague design) -> 5, the speed advantage evaporates.
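To make the error-handling iteration from the list above concrete (a hypothetical sketch, not code from any real project): the first pass usually just fires the call and hopes for the best, and it takes an explicit follow-up instruction before something like this comes back.

  import time
  import urllib.error
  import urllib.request

  # Hypothetical sketch of the "handle errors / recover gracefully" follow-up:
  # retry transient network failures with exponential backoff instead of
  # dying on the first hiccup.
  def fetch(url, retries=3, backoff=1.0):
      for attempt in range(retries):
          try:
              with urllib.request.urlopen(url, timeout=10) as resp:
                  return resp.read()
          except urllib.error.URLError:
              if attempt == retries - 1:
                  raise  # out of retries, let the caller see the failure
              time.sleep(backoff * (2 ** attempt))

Each of these follow-ups is small on its own, but they add up, which is roughly why the speedup lands at 2x-3x for me rather than 10x.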


> You can write 10x the code - good code.

This is just blatantly false.



