Helen Keller famously said that before she had language (the first word of which was “water”) she had nothing, a void, and the minute she had language, “the whole world came rushing in.”
That’s a safety constraint we have placed upon some LLMs. If we designed them to have an infinite for loop, the ability to learn and improve, access to mobility and a bunch of sensors, and crypto, what do you think would happen?
Yes, anyone can do it already. E.g. I’m sure people have built simple wheeled robots at home that an LLM controls by receiving camera, microphone, lidar, etc. input and emitting output like commands for where to turn and what to play through the speakers, and that could theoretically run indefinitely as long as there is electricity.
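The whole thing is just a sense-act loop. A minimal sketch of what I mean, where the hardware hooks and call_llm are hypothetical stand-ins rather than any real robot or model API:

```python
import time

def read_camera():           # stub: would grab a frame from the camera
    return b""

def read_lidar():            # stub: would return range readings
    return []

def drive(direction):        # stub: would command the motors
    pass

def speak(text):             # stub: would send audio to the speakers
    pass

def call_llm(observation):
    # Stand-in for any chat-completion call that maps the latest
    # sensor snapshot to a structured action.
    return {"drive": "forward", "say": ""}

while True:                  # the "infinite for loop": runs while powered
    obs = {"image": read_camera(), "ranges": read_lidar()}
    action = call_llm(obs)
    drive(action.get("drive", "stop"))
    if action.get("say"):
        speak(action["say"])
    time.sleep(0.5)          # poll the sensors at a fixed interval
```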
My analogy of being in a loop means being in a live state. So we as humans are in the loop continuously; we do have a way to exit the loop, but in that comparison it means taking our own life. We are in loops of receiving input and producing output. You can also give an LLM a tool to shut itself down, or give it tools to build on its knowledge base, so it would always be outputting new tokens that are based on new input and produce different output.
E.g. it could have access to camera and microphone feeds, which are automatically given to it at intervals as part of the loop, and it could call tools or functions to store specific bits and pieces of information in its RAG-based (or whatever) knowledge base. It is not going to be stuck in a loop producing the same token over and over; the tokens would be new because the context and environment are constantly evolving.
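Concretely, something like this sketch, where capture_frame, call_llm, and the tool names are all hypothetical placeholders for a real sensor feed, vector store, and tool-calling model:

```python
import time

knowledge_base = []              # stand-in for a vector store / RAG index

def remember(fact):
    knowledge_base.append(fact)  # later turns retrieve from this

def capture_frame():             # stub: camera + microphone snapshot
    return {"image": b"", "audio": b""}

def call_llm(observation, memories):
    # Stand-in for a tool-calling model: given the latest observation
    # and stored memories, it returns which tool to invoke.
    return {"tool": "remember", "args": {"fact": "saw the same room"}}

running = True

def shutdown():
    # The model's one way to exit the loop on its own.
    global running
    running = False

TOOLS = {"remember": remember, "shutdown": shutdown}

while running:
    obs = capture_frame()                    # fresh input every iteration
    decision = call_llm(obs, knowledge_base)
    tool = TOOLS.get(decision["tool"])
    if tool:
        tool(**decision.get("args", {}))
    time.sleep(1.0)
```

Each iteration gets a fresh observation plus a growing memory, so the prompt is never the same twice.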
It just gets into an endless loop. Human brains are ridiculously good at avoiding those somehow, you almost never see a biological brain stop functioning without being physically damaged. The error handling is so very robust.
> It just gets into an endless loop. Human brains are ridiculously good at avoiding those somehow, you almost never see a biological brain stop functioning without being physically damaged. The error handling is so very robust.
We get constantly changing input. And yet, look at this thread, where the same points are being echoed without anyone changing their mind.
Perhaps we are not so very different?