
LLMs don't 'think', nor do they 'reason'.

I find the comments on this post interesting (not just this particular comment per se): the sheer inability to relate, or even ATTEMPT to relate, to another person's experience or feeling. The post itself articulated a viewpoint and an experience; your having a different one does not negate it. Nor does your perspective mean the other does not exist. I'm dumbfounded by many of the comments.

Here are some clipped comments that I pulled from the overall post:

> I don't get it.

> I'm using LLMs to code and I'm still thinking hard.

> I don't. I miss being outside, in the sun, living my life. And if there's one thing AI has done it's save my time.

> Then think hard? Have a level of self discipline and don’t consistently turn to AI to solve your problems.

> I am thinking harder than ever due to vibe coding.

> Skill issue

> Maybe this is just me, but I don't miss thinking so much.

The last comment pasted is pure gold, a great one to put up on a wall. Gave me a right chuckle, thanks!


When I read the article, I feel the same emotions I would feel if someone told me "I keep trying to ride a bike but I keep falling off". My experience with LLMs is that the "lack of thinking" is mostly a quick trough you fall into before you come out the other side understanding how to deal with LLMs better. And yes, there's nothing wrong with relating to someone's experience, but mostly I just want to tell that guy: just keep trying, it'll get better, and you'll be back to thinking hard if you keep at it.

But then OP says stuff like:

> I am not sure if there will ever be a time again when both needs can be met at once.

In my head that translates to "I don't think there will ever be a time again when I can actually ride my bike for more than 100 feet." At which point you probably start getting responses more like "I don't get it" because there's only so much empathy you can give someone before you start getting a little frustrated and being like "cmon it's not THAT bad, just keep trying, we've all been there".


If I can 'speak' for the OP:

> I keep trying to ride a bike but I keep falling off

I do not think this analogy is apt.

The core issue is that AI is taking away, or will take away, or threatens to take away, experiences and activities that humans would WANT to do.

The article is lamenting the disappearance of something meaningful to the OP. One can feel sad about this alone. It is not an equation to balance: X is gone but Y is now available. The lament stands on its own. As the OP indicates with his 'pragmatism', we now collectively have little choice about the use of AI. The flood waters do not ask; they take everyone in their path.


I think the disagreement is over what exactly will be taken away. Certainly, like any technology that came before, AI will automate something. A programmer who finds joy in the raw act of coding, thinking through how to solve a problem and crafting the resulting logic line by line, will indeed have something taken away by AI.

But there is a spectrum here. AI is a cruder, less fine-grained method of producing output. But it is a very powerful tool. Instead of "chiseling" the code line by line, it transforms relatively short prompts along with "context" into an imperfect, but much larger/fully formed product. The more you ask it to do in one go, usually the more imperfect it is. But the more precise your prompts, and the "better" your context, the more you can ask it to do while still hanging on to its "form" (always battling against the entropy of AI slop).

Incidentally, those "prompts" are the thinking. The point is to operate at the edge of LLM/machine competence. And as the LLMs become more capable, your vision can grow bigger.


I think if OP had said "I miss getting paid for (a particular type of) thinking hard" I would find it to be a lot more agreeable. But he's just saying he misses it in general. I think that's what I (and, from OP's summary, many other people) find confusing. Can't you still do it? AI is not physically preventing you from thinking hard.


It's honestly just an example of ego and coping. These people are trading temporary gains in pseudo-productivity for training their replacements and atrophying their skills.

The truth of the matter is that the people who will choose to partake already weren't thinking; otherwise they wouldn't have chosen. This is the same with every bubble or major revolutionary thing that ends up having negative effects in the long run. GLP-1 is another great example of one that is still on the upward arc.

Sheep will always be sheep, and, in my opinion, this is the reason for the changes in hiring and for the large-scale encouragement, over the past 10 years, of folks who obviously are not smart enough for this career to suddenly join in.


Fundamentally, these shortcomings cannot be addressed.

They can be, and are, improved (papered over) over time, for example by improving and tweaking the training data. Adding in new data sets is the usual fix. A prime example: 'count the number of R's in Strawberry' caused quite a debacle at a time when LLMs were meant to be intelligent. Because they aren't, they can trip up on simple problems like this. Continue to use an army of people to train them and these edge cases may shrink over time. Fundamentally, the LLM tech hasn't changed.
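To make the Strawberry point concrete, here is a minimal sketch, assuming the tiktoken tokenizer library (my choice for illustration; the original debacle involved various models): the model never sees the letters of "strawberry", only opaque token IDs, which is why character counting trips it up.

    # Minimal sketch, assuming `tiktoken` is installed.
    # The point: the model operates on token IDs, not characters.
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")
    tokens = enc.encode("strawberry")
    print(tokens)
    print([enc.decode_single_token_bytes(t) for t in tokens])
    # Prints something like [b'str', b'aw', b'berry'] (the exact split
    # depends on the encoding): the three R's are scattered across
    # opaque chunks the model never looks inside.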

I am not saying that LLMs aren't amazing; they absolutely are. But WHAT they are is an understood thing, so let's not confuse ourselves.


I don't understand why this point is NOT getting across to so many on HN.

LLMs do not think, understand, reason, reflect, or comprehend, and they never shall. I have commented elsewhere, but this bears repeating:

If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model. Then, once you have trained the model, you could use even more pen and paper to step through the correct prompts to arrive at the answer. All of this would be a completely mechanical process. This really does bear thinking about. It's amazing, the results that LLMs are able to achieve. But let's not kid ourselves and start throwing around terms like AGI or emergence just yet. That makes a mechanical process seem magical (as do computers in general).
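To ground the pen-and-paper claim, here is a toy sketch of my own devising (the corpus and names are invented, not from any tutorial): "training" a bigram character model is just counting, and "inference" is just lookup plus weighted sampling. Every step could be carried out by hand; a real LLM swaps the counting for gradient descent over matrices, but the steps stay equally mechanical.

    # Toy "train then generate" loop; everything here is illustrative.
    from collections import defaultdict
    import random

    corpus = "the cat sat on the mat. the cat ate."

    # Training: tally how often each character follows each other one.
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(corpus, corpus[1:]):
        counts[a][b] += 1

    # Inference: repeatedly sample the next character from the tallies.
    random.seed(0)  # fix the seed and the run is exactly reproducible
    ch, out = "t", ["t"]
    for _ in range(25):
        successors = counts[ch]
        ch = random.choices(list(successors),
                            weights=list(successors.values()))[0]
        out.append(ch)
    print("".join(out))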

I should add, it also makes sense why it would: just look at the volume of human knowledge in the training data. It's the training data, carrying quite literally the mass of mankind's knowledge, genius, logic, inferences, language, and intellect, that does the heavy lifting.


> If you had enough paper and ink and the patience to go through it, you could take all the training data and manually step through and train the same model.

But you could make the exact same argument for a human mind? (You could just simulate all those neural interactions with pen and paper.)

The only way to get out of it is to basically admit magic (or some other metaphysical construct with a different name).


> But you could make the exact same argument for a human mind?

It would be an argument, and you are free to make it. What the human mind is remains an open scientific and philosophical problem that many are working on.

The point is that LLMs are NOT the same, because we DO know what LLMs are. Please see the myriad 'write an LLM from scratch' tutorials.


We do know that they are different, and that there are some systematic shortcomings in LLMs for now (e.g. no mechanism for online learning).

But we have no idea how many "essential" differences there are (if any!).

Dismissing LLMs as avenues toward intelligence just because they are simpler and easier to understand than our minds is a bit like looking at a modern phone from a 19th century point of view and dismissing the notion that it could be "just a Turing machine": Sure, the phone is infinitely more complex, but at its core those things are the same regardless.


I'm not so sure "a human mind" is the kind of Newtonian clockwork thingamabob you "could just simulate" within the same degree of complexity as the thing you're simulating, at least not without some sacrifices.


Can you give examples of how that "LLMs do not think, understand, reason, reflect, or comprehend, and they never shall", or that "completely mechanical process", helps you understand better when LLMs work and when they don't?

Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well. The "they don't reason" people tend to, in my opinion/experience, underestimate them by a lot, often claiming that they will never be able to do <thing that LLMs have been able to do for a year>.

To be fair, the "they reason/are conscious" people tend to, in my opinion/experience, overestimate how much a LLM being able to "act" a certain way in a certain situation says about the LLM/LLMs as a whole ("act" is not a perfect word here, another way of looking at it is that they visit only the coast of a country and conclude that the whole country must be sailors and have a sailing culture).


We know what an LLM is; in fact, you can build one from scratch if you like, e.g. https://www.manning.com/books/build-a-large-language-model-f...

It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?
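For a taste of what those from-scratch tutorials walk through, a minimal, self-contained sketch (all numbers invented for illustration) of one head of scaled dot-product attention, the core operation they build up to. Plain Python, no libraries; every value below could be computed by hand.

    import math

    def softmax(xs):
        exps = [math.exp(x - max(xs)) for x in xs]
        total = sum(exps)
        return [e / total for e in exps]

    def attention(queries, keys, values):
        d = len(keys[0])
        out = []
        for q in queries:
            # Dot each query against every key, scale by sqrt(d).
            scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                      for k in keys]
            weights = softmax(scores)
            # Weighted average of the value vectors.
            out.append([sum(w * v[j] for w, v in zip(weights, values))
                        for j in range(len(values[0]))])
        return out

    # Three toy 2-dimensional "token embeddings":
    x = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    print(attention(x, x, x))  # deterministic: rerun it, same numbers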

> Many people are throwing around that they don't "think", that they aren't "conscious", that they don't "reason", but I don't see those people sharing interesting heuristics to use LLMs well.

My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.

A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.

What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer; there is NO clear definition. Some things are so hard to define (and people have tried for centuries) that they constitute a problem set in themselves, e.g. 'what is consciousness?'; please see the hard problem of consciousness.

https://en.wikipedia.org/wiki/Hard_problem_of_consciousness


> My digital thermometer doesn't think. Imbuing LLMs with thought will start leading to some absurd conclusions.

What kind of absurd conclusions? And what kind of non-absurd conclusions can you draw when you follow your, let's call it "mechanistic", view?

> It's an algorithm and a completely mechanical process which you can quite literally copy time and time again. Unless, of course, you think 'physical' computers have magical powers that a pen-and-paper Turing machine doesn't?

I don't, just like I don't think a human or animal brain has any magical power that imbues it with "intelligence" and "reasoning".

> A cursory read of basic philosophy would help elucidate why casually saying LLMs think, reason, etc. is not good enough.

I'm not saying they do or they don't; I'm saying that, from what I've seen, having a strong opinion about whether they think or not seems to lead people to weird places.

> What is thinking? What is intelligence? What is consciousness? These questions are difficult to answer; there is NO clear definition.

You seem pretty certain that, whatever those three things are, an LLM isn't doing them, a paper and pencil aren't doing them even when manipulated by a human, and the system of a human manipulating a paper and pencil isn't doing them.


A cursory read of basic philosophy would surely include the arguments against Searle's Chinese room, no? It's hardly settled.


I fully agree with your sentiments. People really need to study a little!


LLMs have surpassed being Turing machines? Turing machines now think?

LLMs are known quantities in that they are an algorithm! Humans are not. PLEASE, at the very least, grant that the jury is STILL out on what humans actually are in terms of their intelligence; that is, after all, what neuroscience is still figuring out.


> Am I supposed to want to code all the time? When can I pursue hobbies, a social life, etc.

I feel you. It's a societal question you're posing. Your employer (like most employers) deals in dollars. A business is evaluated by its ability to generate revenue. That is the purpose of a business and the fiduciary duty of the CEOs in charge.


I tend to agree with your assessment. The increase in demand cannot possibly offset the loss from AI.

Given projections of AI abilities over time, AI necessarily creates downward pressure on new job creation. AI is for reducing and/or eliminating jobs (by way of increasing efficiency).

AI isn't creating 'new' things; it's reducing the time needed to do what was already being done. Unlike in the automobile revolution, new job categories aren't being created by AI.


Lots of users seem to think LLMs think and reason, so this sounds wonderful. A mechanical process isn't thinking, and it certainly does NOT mirror human thinking; the processes are altogether different.


This type of naive response really is bothersome!

Humans are probabilistic systems?! You might want to inform the world's top neuroscientists and philosophers that they can down tools. They were STILL trying to figure this out, but you've already solved it! Well done.


I don't think it's a naive response. Perhaps it's obvious to you that human doctors can't produce an "exact correct solution", but quite a lot of people do expect this, and get frustrated when a doctor can't tell them exactly what's wrong with them or recommends a treatment that doesn't end up working.


There's nothing naive about it. Most doctors work off of statistics and probabilities stemming from population-based studies. Literally the entire field of medicine is probabilistic, and that's what angers people. Yes, there's a 95% chance you're not suffering from something horrible, but a lot of people would want to continue diagnostics to rule out the 5% chance that you actually have cancer and the doctor sent you home with antibiotics thinking it was just some infection, or whatever.
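For what it's worth, a back-of-the-envelope Bayes calculation (every number here is invented for illustration, mirroring the 5% figure above) shows how "it's almost certainly nothing" and "a real residual risk remains" can both be true at once:

    # Hypothetical numbers, chosen only to illustrate the reasoning.
    prevalence  = 0.05   # prior: 5% chance of the horrible disease
    sensitivity = 0.90   # a test catches 90% of true cases
    specificity = 0.95   # and clears 95% of healthy patients

    # Probability of a negative result, over both possibilities:
    p_neg = (1 - sensitivity) * prevalence + specificity * (1 - prevalence)
    # Chance the disease is still there despite the negative result:
    residual = (1 - sensitivity) * prevalence / p_neg
    print(f"{residual:.4f}")  # ~0.0055: much smaller than 5%, but not zero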

