riffraff's comments | Hacker News

I have come to the conclusion that many people are going to experience this AI period pretty much like the five stages of grief: denial that it can work, anger at the new robber barons, bargaining that yeah it kinda works but not really well enough, a catastrophic worldview and depression, and finally acceptance of the new normality.

I'm still at the bargaining phase, personally.


What's the 'new normality' in the fifth stage? Do you think you'll start to believe it actually works 100%? Or that you won't change your assessment that it works only sometimes, but maybe pulling the lever on the slot machine repeatedly is better/more efficient than doing it yourself?

Businesses will start accepting bad uptime as the norm, following the lead of GitHub: https://mrshu.github.io/github-statuses/

No, this is still "bargaining/negotiating" phase thinking. Depression hits after this, when you see that for your use cases the code quality and the security audits are actually very good.

Is this a new "delusional" phase?

I'm not sure, but I think it boils down to accepting that some things we were attached to are no longer important or normal (not just software building).

But specifically to your examples, the latter: I think the "brute force the program" approach will be more common than doing things manually in many cases (not all! I'm still a believer in people!).

Edit: Well, I wrote a bad blog post on this some time ago, so I might as well share it: I think accepting means engaging with the change rather than ignoring it.

https://riffraff.info/2026/03/my-2c-on-the-ai-genai-llm-bubb...


People will accept it as a way to build good software.

Many are still in denial that you can do work that is as good as before, quicker, using coding agents. A lot of people think there has to be some catch, but there really doesn’t have to be. If you continue to put effort in, reviewing results, caring about testing and architecture, working to understand your codebase, then you can do better work. You can think through more edge cases, run more experiments, and iterate faster to a better end result.


When you resolve bottlenecks, new bottlenecks become apparent. Right now, it's looking like assessment and evaluation are massive bottlenecks.

I'm kind of excited about that, though. What I've come to realize is that automated testing, linting, and good review tools are more important than ever, so we'll probably see some good developments in these areas. This helps both humans and AIs, so it's a win-win. I hope.

> it's looking like assessment and evaluation are massive bottlenecks.

So I think LLMs have moved the effort that used to be spent on the fun part (coding) into the boring part (assessment and evaluation), which is also now a lot bigger.


You could build (code, if you really want) tools to ease the review. Of course we already have many tools for this, but with LLMs you can use their stochastic behavior to discover unexpected problems (something a deterministic solution never can). The author also touches on this when discussing the security review (something I rarely did in the past but do now, and it has really improved the security posture of my systems).
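To make that concrete, here's a rough sketch of what a multi-pass stochastic review could look like, assuming the OpenAI Python client; the model name, prompt, and number of passes are placeholders, not a recommendation:

    # Sketch: several independent high-temperature review passes over the
    # same diff, so each pass can surface different problems.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    REVIEW_PROMPT = (
        "You are a security-focused code reviewer. List any bugs, "
        "vulnerabilities, or suspicious patterns in the following diff. "
        "If you find nothing, answer exactly: OK"
    )

    def stochastic_review(diff: str, passes: int = 5) -> list[str]:
        """Collect findings from several independent review passes."""
        findings = []
        for _ in range(passes):
            response = client.chat.completions.create(
                model="gpt-4o",   # placeholder model name
                temperature=1.0,  # deliberately high: the variance is the point
                messages=[
                    {"role": "system", "content": REVIEW_PROMPT},
                    {"role": "user", "content": diff},
                ],
            )
            answer = (response.choices[0].message.content or "").strip()
            if answer != "OK":
                findings.append(answer)
        return findings

Each pass is an independent sample, so the union of findings tends to cover more ground than any single deterministic run would.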

You can also set up way more elaborate verification systems. Don't just do a static analysis of the code, but actually deploy it and let the LLM hammer at it with all kinds of creative paths. Then let it debug why it's broken. It's relentless at debugging: I've found issues in external tools that I normally would've let go (maybe created an issue for), but that I can now debug and even propose a fix for, without much effort on my side.
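As a rough illustration of that kind of loop (all assumptions: a local test deployment, the OpenAI client, a placeholder model name, an arbitrary iteration budget):

    # Sketch: let the model propose HTTP requests against a sandboxed test
    # instance, execute them, and feed the responses back so it can chase
    # whatever breaks. 5xx responses are flagged as candidate bugs.
    import json

    import requests
    from openai import OpenAI

    client = OpenAI()
    BASE_URL = "http://localhost:8000"  # assumed local test deployment

    history = [{
        "role": "system",
        "content": (
            "You probe an HTTP API for bugs. Reply only with JSON: "
            '{"method": ..., "path": ..., "body": ...}. '
            "Prefer creative, unexpected inputs."
        ),
    }]

    for _ in range(50):  # arbitrary budget
        reply = client.chat.completions.create(
            model="gpt-4o",  # placeholder model name
            messages=history,
            response_format={"type": "json_object"},
        ).choices[0].message.content
        req = json.loads(reply)
        resp = requests.request(req["method"], BASE_URL + req["path"],
                                json=req.get("body"))
        if resp.status_code >= 500:
            print("server error on", req, resp.text[:200])  # candidate bug
        history.append({"role": "assistant", "content": reply})
        history.append({"role": "user",
                        "content": f"{resp.status_code}: {resp.text[:500]}"})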

So yeah, I agree that the boring part has become the more important part right now (speccing well and letting it build what you want is pretty much solved), but let's then automate that. Because if anything, that's what I love about this job: I get to automate work, so that my users (often myself) can be lazy and focus on stuff that's more valuable/enjoyable/satisfying.


Fuzz testing has existed long before LLMs...

When writing banal code, you can just ask it to write unit tests for certain conditions and it'll do a pretty good job. The cutting-edge tools will automatically run and iterate on the unit tests when they don't pass. You can even ask the agent to set up TDD.

Cars removed the fun part (raising and riding horses) and automatic transmissions removed the fun part (manual shifting), but for most people it's just a way to get from point A to B.

It doesn't have to work 100% of the time to be ubiquitous! This is just the strangest point of view. People don't work 100% of the time either, and they wrote all the code we had until a couple of years ago. How did we deal with that? Many different kinds of checks and mitigations. And sometimes we get bugs in prod and we fix them.

The new normal will be: everything will get worse and far more unstable (both in terms of UI/UX and reliability), and many of us will lose our jobs. Also, the next generation of programmers will have a shallower understanding of the tools they use.

AI doesn't need to outrun the bear; it only needs to outrun you.

Once the tools outperform humans at the tasks to which they were applied (and they will), you don't need to be involved at all, except to give direction and final acceptance. The tools will write, and verify, the code at each step.


> Once the tools outperform humans at the tasks to which they were applied (and they will)

I don't get why some people are so convinced that this is inevitable. It's possible, yes, but it very well might be the case that models cannot be stopped from randomly doing stupid things, cannot be made more trustworthy, cannot be made more verifiable, and will have to be relegated to the role of brainstorming aids.


> I don't get why some people are so convinced that this is inevitable.

Someone once said that it is hard to make a man understand something if his profit depends on him not understanding it...


I don't make money coding, so it doesn't apply to me in this case.

I think they meant that the people insisting a total genAI takeover of coding is inevitable are likely people who stand to profit greatly from everyone giving up and using the unmind machines for everything.

The original post is an example of how. Every programmer is slowly discovering, for their own use cases, that the agent can actually do it. This happens to an individual when they give it a shot without reservation.

Large-scale AI datacenters require a very expensive physical supply chain that includes cheap land, water, and electricity, political leverage, human architects and builders, and massive capital investments. Yes, AI will outperform humans, but at some point it may become cheaper to hire a human programmer.

Wait till you hear about the resources required to sustain an equivalent number of humans.

I’m at the fucking loom smashing stage personally.

We don’t have to accept things.


I hear you, but let me point out that Ned Ludd didn't stop the industrial revolution.

I think that in the foreseeable future we'll have open models running on commonly available hardware, and that is not a change that can be stopped (arguably, it's the commons getting back their own value). What we can do is fight for proper taxation, for compensatory fees, for regulation that limits plagiarism, and for regulation of the most extreme externalities.

But it makes no sense, to me, to fight the technology outright.


How long can you afford to stay in this phase? Is there some framework you can suggest where this path works?

My existence is defined not by what I adopted but by what I sabotaged or refused to deal with. 30 years in, I haven't made a mistake, and I don't think I am making one here. The positive bets I made have been spot on as well. I think I have a handle on what works for society and humanity, at least.

When I say AI, I mean specifically LLMs. There isn't a single future position where all the risks are suitably managed, there is a return on investment, and there is no net loss to society. Faith, hope, lies, fraud, and inflated expectations don't cut it, and that is what the whole shebang is built on. On top of that, we are entering a time of serious geopolitical instability. Creating more dependencies on large amounts of capital and regional control is totally unacceptable and puts us all at risk.

My integrity is worth more than sucking this teat.


When you say sabotage, how exactly?

Or is it limited to refusing to use LLMs? That is a strategy, but then it's more like becoming a hobbyist programmer.


“The reasonable man adapts himself to the world: the unreasonable one persists in trying to adapt the world to himself. Therefore all progress depends on the unreasonable man.”

— George Bernard Shaw

The antidote to runaway hype is for someone to push back, not to just relent and accept your fate. Who cares about affording to. We need more people with ideals stronger than the desire to make a lot of money.


Sure, hype pushback is fine.

> yeah it kinda works but not really well enough

I mean, at some point it was true.

I remember that around 2023, when I first encountered colleagues trying to use ChatGPT for coding, I thought "by the time you are done with your back-and-forth to correct all the errors, I would have already written this code manually".

That was true then, but not anymore.


I think it's still true, but very domain specific. I am not confident it will stay true.

No, it's still very much true. Every now and then I use an LLM to write code and the vast majority of the time it turns out to take just as much time (if not more) than it would've taken to write the code myself.

Exactly. Verification is not cheap at all.

You are either using it wrong or you are writing extremely niche code that has bad LLM coverage.

I suspect I fall into the former camp, but I'm not sure where to start when it comes to learning how to use LLMs "the right way".

I'm not a proper software engineer, but I do a lot of scripting, and most of my attempts to let a model speed up a menial task (e.g. a small bash or Python script for some data parsing or chaining together other tools) end up with me doing extensive rewrites, because the model is completely inconsistent in naming conventions, pattern reuse, etc.


Or you are in denial about what he is saying.

This is true for things you already understand. It works for implementing yet another CRUD view because I've done it a million times before. I know exactly what the code should look like, but it takes a while to type it in. When my typing speed is the bottleneck, then of course LLMs win (and I use them for that all the time).

But for the interesting stuff, where you don't understand the problem yet, it doesn't make things quicker, because then the bottleneck is my understanding. Things take time. And sleep. They require hands-on experience. It doesn't matter how fast LLMs can churn out code. There's a limit to how fast I can understand things. Unless, of course, I'm happy shipping code I don't understand, which I'm not.


Less than 6 months ago, I would say about 50% of HN was at the denial phase, saying it's just a next-token predictor and that it doesn't actually understand code.

To all of you I can only say, you were utterly wrong and I hope you realize how unreliable your judgements all are. Remember, I'm saying this to roughly 50% of HN, an internet community that's supposedly more rational and intelligent than other places on the internet. For this community to be so wrong about something so obvious... that's saying something.


It doesn't actually understand anything... let alone code. And I think you are the one who is in denial.

If it doesn't understand anything, why the fuck are we letting it write all our code when it doesn't understand code at all? Does that make any sense to you? Does that align with common sense? You're still in denial.

You're gonna give some predictable answer about next-token prediction and probability, or some useless exposition on transformers, while completely avoiding the fact that we don't understand the black-box emergent properties that make a next-token predictor have properties indistinguishable from intelligence?


I'm letting it write (type out) most (80-98%) of my code, but I see it as an idiot savant. If the idea is simple, I get 100 lines of solid Ruby. Good, saves me time. If the idea is complicated (e.g. a 400-LOC class that distills a certain functionality currently scattered across different methods and objects) and I ask 4 agents to come up with different solutions, I get 4 slightly flawed approaches that don't match how I'd personally architect the feature. And "how I'd personally architect the feature" is literally my expertise. My job isn't typing Ruby, it's making good decisions.

My conclusion is that at this point, LLMs are not capable of making good decisions supported by deep reasoning. They're capable of mimicking that, yes, and it takes some skill to see through them.


Follow the trendline. It went from autocomplete to agentic coding. What do you think will happen to your “good decision making” in a couple years?

As of right now, the one-shot complex solutions AI comes up with are frequently extremely good. It's only gonna get better, and this all happened in the last 6 months. You could be outdated on frontier-model progress. That's how quickly things are changing.


This is not an appeal to authority, but this video probably contains the answers to your questions, if you are open-minded about it:

https://www.youtube.com/watch?v=qvNCVYkHKfg


What questions do I have? I didn't even ask a single question, and you hallucinated the assumption that I have questions.

I don't have any questions about LLMs. At least not any more than, say, an LLM researcher at Anthropic working on model interpretability.


Can't you count? Are you an LLM?

No. I'm not an LLM, but you have intellectual issues. Counting? What does that have to do with anything?

Go count the number of questions in your comment.

It's called rhetorical questions. Look it up.

Oh, I thought you were genuinely wondering...

Yes, I do find it a little funny how the developer community got it all wrong and the non-technical people who were thinking AI was going to change everything in 2023 were the right ones. Maybe they know more than developers think.

They don't know more. Humanity mostly doesn't know how LLMs work because most of the properties just emerged from the soup of billions of weights whose sheer complexity is so high that understanding any of it holistically is impossible.

The difference is the arrogance. Developers think they know more. Developers think they're smart. And there's also an existential crisis, because LLMs are poised to take over developer jobs first. So the developer calls every other layman an idiot and deludes himself into thinking his skills will always be superior to AI.


> To all of you I can only say, you were utterly wrong and I hope you realize how unreliable your judgements all are.

They weren't wrong, though. It objectively is just a next-token predictor and doesn't understand code. That is how the thing works.


Not true. You're a next-token predictor, and clearly the tokens you predict indicate that the way you predict the next token is much, much more than simple probabilistic detection. You're a black box, and so is the LLM, and the evidence is pointing at emergent properties we don't completely understand but that are completely in line with what we understand as reasoning.

Don't make me cite Geoffrey Hinton or other preeminent experts to show you how wrong you all are.

Use your brain. It is changing the industry from the ground up. It understands.


> Don't make me cite Geoffrey Hinton or other preeminent experts to show you how wrong you all are.

https://www.youtube.com/watch?v=qvNCVYkHKfg


Yann LeCun was vocal about his stance against LLMs very early on and claimed they were a dead end. Well he's been proven fucking wrong. Completely.

Geoffrey Hinton was his mentor, and Hinton is the main godfather of AI, while Yann is more of a malfunctioning student still holding onto the stochastic-parrot moniker. Here's Hinton saying what you need to know:

https://www.reddit.com/r/agi/comments/1qwoee7/godfather_of_a...


> Well he's been proven fucking wrong. Completely.

How was he "proven" wrong?

> Yann is more of a malfunctioning student...

lol what?


He's proven wrong by reality. Look at what LLMs are doing right now. It's utterly obvious that hallucinations are being reduced and that AI is extremely effective now...

Yann is malfunctioning because he can't reconcile his past statements with reality. He can't admit he's wrong. As time goes on, his past statements will look more and more absurd as progress on AI keeps moving forward.

At the same time, we have Terence Tao using AI to develop new math, and Hinton saying the opposite of Yann, with actual evidence and the entire industry behind him. Yann is a clown: https://www.reddit.com/r/singularity/comments/1piro45/people... and his opinions are not mainstream at all.


I'm quite fond of https://ooh.directory/

...and three months to review the false positives.

This is always overlooked. AI stories sound like "with the right attitude, you too can win $10M in the lottery, like this man just did".

Running an LLM on 1000 functions produces 10000 reports (these numbers are accurate because I just generated them); of course, only the lottery winners who pulled the actually correct report from the bag will write an article in the Evening Post.


> these numbers are accurate because I just generated them

Is it sarcasm, or did you really do this? Claude Opus 4.6?


For those who have chicken coops: why do you have separate doors for chickens and people?

My dad used to keep chickens, and they just went through the same door; we'd open it in the morning and close it in the evening.

Other people in my home town have similar arrangements and I feel I'm missing some important thing :)


Separate doors make it easier to access without releasing the chickens, if you need to (they're relatively habitual and if they never use the "people door" they won't really try to).

Easier to automate a small door, and control where it goes.

It looks cooler; people like "small doors for small things" - like the half-height garage doors at Walmart for shopping carts.

If small enough, it can reduce at least SOME predator incidents (but this is minor).


For me: the chicken door goes into a fenced enclosure to keep the chickens safe from predators.

I don't want to enter the enclosure, so I have my own door to go in and service the coop, fetch the eggs, etc.

The enclosure has a gate when I want to let the chickens out, as well.

Having an enclosure lets me leave the house for a couple days, at least, and not feel like I've imprisoned them.


The dedicated chicken door can be automated. It's nice to be able to go on vacation, or sleep in, or not suddenly wake up and wonder if you remembered to shut the door.

> I suspect fascism is currently winning

I think this war is actually pushing many away from fascism. Trump was the reference point for a lot of the European right, and this is showing people he was terrible and, by extension, embarrassing them all.

Heck, Orbán is currently running an electoral campaign as "the candidate of peace".


If Trump wasn't embarrassing for them before I doubt they're embarrassed now.

With the price of petrol skyrocketing, what I see in France are people complaining about taxes, not the war started by Trump.

And they still don't see the point of EVs.

Those short-sighted people are the ones cheering for fascism, so the current events have no impact on their vote.


Did you completely miss the disaster of DOGE in the first year of this administration?

I remember that! It was awesome!

Ditto. Worked perfectly and nice UI. Great work!

This is partially the case in Italy, though it changed over the years.

The assignment of funds is based on refunding prints/sales, so money goes to help newspapers that do print "something" of interest to the public.

The problem is that people don't want "independent" journalism, they want "my ideas" journalism.

Which... is still good, somehow? In the past Italy had plenty of newspapers that were the literal extension of political parties, plus a few independent ones, and it still does.

But these days, they are all dying anyway.


But parents do that all the time with babies.

It is disgusting (I hated doing it) but you get somewhat used to it relatively quickly.


We seem to develop a disconnect with our own children. I certainly did. But it doesn't extend even to other people's kids!

> But it doesn't extend even to other people's kids!

I think it's a question of exposure and tolerance, otherwise it'd be much harder for daycare workers, for instance.


IIRC from the book "Packing for Mars", the American male astronauts begged NASA to provide them with diapers at some point, which is what women astronauts got, because the earlier male-only system was a sort of sucking condom that was incredibly bad.

This really tells you how "bad masculinity" pervaded everything. I'm speaking of the designers here, not the astronauts. Why not a diaper for male astronauts too, from the beginning? Isn't it manly enough? Does it show weakness, like a toddler or an old dying man?

I think the designers just didn't think of it.

Women also started with a feminized version of the uncomfortable device and then switched to diapers, and then men followed.

It's possible there were no women on the design team but I don't think it's a case of bad masculinity.


I don't think that having or not having women on the design team is the key here. IMO it's more about how men perceive how men should be.
