Hacker News

Helen Keller famously said that before she had language (the first word of which was “water”) she had nothing, a void, and the minute she had language, “the whole world came rushing in.”

Perhaps we are not so very different?



All LLMs have seen more words than any human will ever experience.

Yet they cannot take action themselves.


That’s a safety constraint we have placed on some LLMs. If we designed them to run in an infinite loop, with the ability to learn and improve, access to mobility, a bunch of sensors, and crypto, what do you think would happen?


Yes, anyone can do it already. E.g. I’m sure people have built simple wheeled robots at home that an LLM controls by receiving camera, microphone, lidar, etc. input and then outputting commands for where to turn, what to play through the speakers, and so on. It could theoretically run indefinitely as long as there is electricity.


> Yet they cannot take action themselves.

Neither could Hawking, once the motor neurone disease got far enough.


I like the sentiment, but reality says otherwise - just watch a newborn baby make its demands widely known, well before language is a factor.


Ummm. Maybe you should look up Helen Keller.


Helen Keller did in fact make her demands; they just couldn’t be understood. In contrast, the LLM does nothing of its own volition.


If you put the LLM in a never ending loop, it would definitely be doing something.


A something defined by someone else, yes.

Additionally, thinking organisms don’t get stuck in never-ending loops because they can CHOOSE to exit the loop. LLMs don’t have that ability.


My analogy of being in a loop means being in a live state. We humans are in the loop continuously; we do have a way to exit the loop, but in that comparison it means taking our own life. We are in loops of getting input and producing output. You could also give an LLM a tool to shut itself down, or tools to build on its knowledge base, so it would always be producing new tokens based on new input.

E.g. it could have access to camera and microphone feeds, fed to it automatically at intervals as part of the loop, and it could call tools or functions to store specific bits of information in its RAG-based (or similar) knowledge base. It would not be stuck producing the same token over and over; the tokens would be new, because the context and environment are constantly evolving.
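A minimal sketch of that loop, with hypothetical stand-ins for the model call, the sensors, and the memory store (none of these are real APIs — each stub marks where a real camera feed, chat-model request, or vector store would go):

```python
import time

def read_sensors():
    """Stub: would capture a camera frame and an audio snippet."""
    return {"camera": "frame_bytes", "audio": "audio_bytes"}

def query_llm(context, observation):
    """Stub: would send the rolling context plus the new observation to a model."""
    return {"action": "note", "memory": f"saw {len(context)} prior steps"}

def store_memory(knowledge_base, item):
    """Stub: would write into a RAG index / vector store."""
    knowledge_base.append(item)

def agent_loop(steps, interval=0.0):
    """Run the sense -> model -> store cycle; unbounded in principle, bounded here."""
    context, knowledge_base = [], []
    for _ in range(steps):
        obs = read_sensors()              # fresh input each tick, so output keeps changing
        result = query_llm(context, obs)  # model sees everything accumulated so far
        store_memory(knowledge_base, result["memory"])
        context.append((obs, result))
        time.sleep(interval)              # e.g. interval=1.0 for one tick per second
    return knowledge_base
```

Because the context grows each tick, the stub already illustrates the point being made: each pass through the loop produces a different output.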


We put the LLM in a loop with no instructions with whatever tools you want. Now what?


We would observe what it does. We could write a script to try it out.


It just gets into an endless loop. Human brains are ridiculously good at avoiding those somehow, you almost never see a biological brain stop functioning without being physically damaged. The error handling is so very robust.


> It just gets into an endless loop. Human brains are ridiculously good at avoiding those somehow, you almost never see a biological brain stop functioning without being physically damaged. The error handling is so very robust.

We get constantly changing input. And yet, look at this thread, where the same points are being echoed without anyone changing their mind.


Have you tried it already? What is the endless loop it gets into?


Sure, so I just tried it with visual and audio input.

It does nothing, because there is no impetus for it to do anything by itself.


What do you mean by nothing? How did you put the visual and audio input, which model, how did you loop it etc?


Its preferred method: text.

4o

Maintained context and triggered at 1-second intervals.

It has no desires of its own. Nothing that motivates it. It’s not conscious.


It produced no tokens at all?



