I think it'll happen once we get off LLMs and find something that more closely maps to how humans think, which is still not understood afaik. So either never, or once the brain is figured out.
I'd agree that LLMs are a dead end to AGI, but I don't think that AI needs to mirror our own brains very closely to work. It'd be really helpful to know how our brains work if we wanted to replicate them, but it's possible that we could find a solution for AI that is entirely different from human brains while still having the ability to truly think/learn for itself.
> ... I don't think that AI needs to mirror our own brains very closely to work.
Mostly agree, with the caveat that I haven't thought this through in much depth. But the brain uses many different neurotransmitter chemicals (dopamine, serotonin, and so on) as part of its processing; it's not just binary on/off signals traveling through the "wires" made of neurons. Neural networks as an AI technique reproduce only a tiny fraction of how the brain works, and I suspect that's a big part of why, even though people have been playing around with neural networks since the 1960s, they haven't had much success in replicating how the human mind works. Those neurotransmitters are key to how we feel emotion, and even to how we learn and remember things. Since neural networks lack any system for replicating how the brain feels emotion, I strongly suspect they'll never be able to replicate more than a fraction of what the human brain can do.
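To make that concrete, here is a minimal sketch (in Python, with made-up weights) of everything a single unit in an artificial neural network actually computes. All the chemistry, timing, and neuromodulation of a biological neuron is collapsed into one weighted sum and a fixed squashing function:

```python
import math

def artificial_neuron(inputs, weights, bias):
    """Everything a typical artificial 'neuron' computes: a weighted sum
    pushed through a fixed nonlinearity. No neurotransmitters, no timing,
    no chemistry -- just arithmetic."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Made-up example: three inputs, arbitrary weights
print(artificial_neuron([0.5, -1.2, 0.3], [0.8, 0.1, -0.4], 0.2))
```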
For example, the "simple" act of reaching up to catch a ball doesn't involve doing the math in one's head. Rather, it relies on muscle memory, which is deeply connected with neurotransmitters such as acetylcholine. The eye sees the image of the ball changing direction and subtly changing size, the brain rapidly predicts where it's going to be when it reaches you, and the muscles fire to raise the hands into the ball's path. All this happens without any conscious thought beyond "I want to catch that ball": you're not calculating the parabolic arc, you're just moving your hands to where you already know the ball will be, because your brain has been training for this since you were a small child playing catch in the yard. Any attempt to replicate this without the neurotransmitters that were deeply involved in training your brain and muscles to work together is, I strongly suspect, doomed to failure, because it leaves out a vital part of the system, without which the system does not work.
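For contrast, here is a toy sketch of what the explicit version of that calculation would look like, assuming ideal drag-free projectile motion and invented initial conditions. The point is that your brain lands on essentially the same answer without ever representing any of this symbolically:

```python
G = 9.81  # gravitational acceleration, m/s^2

def landing_point(x0, y0, vx, vy):
    """Explicitly solve the parabolic arc: find the positive root of
    y0 + vy*t - 0.5*G*t**2 = 0, then return the horizontal position
    at that time."""
    t = (vy + (vy**2 + 2 * G * y0) ** 0.5) / G  # time of flight
    return x0 + vx * t

# Made-up throw: released 1.5 m up, 8 m/s forward, 6 m/s upward
print(f"lands about {landing_point(0.0, 1.5, 8.0, 6.0):.1f} m away")
```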
Of course, there are many other things AIs are being trained for, many of which (as you said, and I agree) do not require mimicking the way the human brain works. I just want to point out that the human brain is way more complex than most people realize (it's not merely a network of neurons, there's so much more going on than that) and we just don't have the ability to replicate it with current computer tech.
Nobody can know, but I think intelligence is fairly clearly possible without any signs of sentience that we would consider obvious and indisputable. The definition of 'intelligence' is bearing a lot of weight here, though, and some people seem to favour a definition that makes 'non-sentient intelligence' a contradiction.
As far as I know, and I'm no expert in the field, there is no known example of intelligence without sentience. Current AI is basically algorithms and statistics simulating intelligence.
Definitely a definition / semantics thing. If I ask an LLM to sketch the requirements for life support for 46 people, mixed ages, for a 28 month space journey… it does a pretty good job, “simulated” or not.
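To give a sense of the kind of synthesis involved, here is the first back-of-envelope step such a requirements sketch might start from: multiplying crew size and mission length by rough per-person consumption rates. The rates below are ballpark figures from memory, not mission-grade data, and a real design would also have to cover recycling, margins, and spares:

```python
# Rough consumables estimate (illustrative numbers only)
crew, days = 46, 28 * 30  # approximating 28 months as 840 days

per_person_per_day_kg = {
    "oxygen": 0.84,  # ballpark metabolic O2 need
    "water": 3.0,    # drinking + food prep, before any recycling
    "food": 1.8,     # packaged food mass
}

for item, rate in per_person_per_day_kg.items():
    total_tonnes = crew * days * rate / 1000
    print(f"{item}: ~{total_tonnes:.0f} tonnes")
```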
If I ask a human to do that and they produce a similar response, does it mean the human is merely simulating intelligence? Or that their reasoning and outputs were similar but the human was aware of their surroundings and worrying about going to the dentist at the same time, so genuinely intelligent?
There is no formal definition to snap to, but I’d argue “intelligence” is the ability to synthesize information to draw valid conclusions. So, to me, LLMs can be intelligent. Though they certainly aren’t sentient.
Can you spell out your definition of 'intelligence'? (I'm not looking to be ultra pedantic and pick holes in it -- just to understand where you're coming from in a bit more detail.) The way I think of it, there's not really a hard line between true intelligence and a sufficiently good simulation of intelligence.
I would say that "true" intelligence allows someone/something to build a tool that never existed before, while simulated intelligence only allows someone/something to reproduce tools that are already known. I would draw a distinction between someone able to use all their knowledge to find a solution to a problem using tools they know of, and someone able to discover a new tool while solving that same problem.
I'm not sure the latter exists without sentience.
I honestly don't think humans fit your definition of intelligence. Or at least not that much better than LLMs do.
Look at the history of human technology: it is all people making minor tweaks on what other people did. Innovation isn't the result of individual humans so much as it is the result of the collective of humanity over history.
If humans were truly innovative, shouldn't we have invented, for instance, at least one stable way of organizing society and economics by now? If anything surprises me about humans, it is how "stuck" we are in the mold of what other humans do.
Circulate all the knowledge we have over and over, throw in some chance, add some reasoning skills of the kind LLMs demonstrate every day in coding, run millions of instances (most of whom never innovate anything, but some do), and add a feedback mechanism -- that looks like the history of human innovation to me, and it doesn't seem to demonstrate anything LLMs clearly lack. Except, of course, being plugged into history and the world the way humans are.
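As a toy illustration of that loop (every parameter here is invented): start with a pool of existing "ideas", copy one with a small random tweak, and let a feedback function decide whether the tweak displaces something in the pool. No single step has to be a stroke of genius for the pool to improve over time:

```python
import random

def innovate(pool, score, generations=1000, mutation=0.1):
    """Toy model of collective innovation: minor tweaks on existing ideas,
    filtered by a feedback mechanism. 'Ideas' are just number pairs here."""
    for _ in range(generations):
        parent = random.choice(pool)                             # circulate knowledge
        child = [x + random.gauss(0, mutation) for x in parent]  # some chance
        worst = min(pool, key=score)                             # feedback mechanism
        if score(child) > score(worst):
            pool[pool.index(worst)] = child  # most tweaks vanish; a few stick
    return max(pool, key=score)

# Made-up objective: ideas closer to (1, 1) score higher
score = lambda idea: -((idea[0] - 1) ** 2 + (idea[1] - 1) ** 2)
pool = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
print(innovate(pool, score))
```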