Personally, it feels to me that maybe AGI is possible, but a computer capable of it would look nothing like the computers we have today. And, most critically, I'm aware of no far-along research into fundamentally new physical hardware that more closely resembles our neurons.
There are billions of neurons in a brain. There are billions of transistors in a CPU or graphics card. Somehow, somewhere along the line, we convinced ourselves that brain neurons and transistors are fungible.
At some layer of abstraction they probably are. But it seems to me that the sheer number of transistors necessary to emulate a neuron would have knock-on negative effects on the overall system. Imagine, for a second, that it takes a billion transistors to emulate one neuron. Given that we're quickly running up against physical limits on how small transistors can get and how many we can pack onto a chip, you'd need many chips, and really many computers, to emulate many neurons. Introduce many computers and you introduce network latency and communication problems, both problems the brain simply does not have. And while you could argue "okay, the simulation will be slower, but it would still work," maybe, just maybe, the latency of communication between neurons is itself a critical component of cognition. In fact, that seems likely to me.
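To make that concrete, here's a rough back-of-the-envelope sketch in Python. Every number in it is an assumption of mine, not an established fact: roughly 86 billion neurons in a brain, the hypothetical billion transistors per emulated neuron from above, a generously large chip of around 100 billion transistors, and ballpark figures for synaptic delay and datacenter round trips.

```python
# Back-of-the-envelope arithmetic for the scenario above.
# All constants are rough assumptions, not measurements.

NEURONS_IN_BRAIN = 86e9        # commonly cited rough estimate
TRANSISTORS_PER_NEURON = 1e9   # the hypothetical from the paragraph above
TRANSISTORS_PER_CHIP = 100e9   # roughly a large modern GPU/accelerator die

neurons_per_chip = TRANSISTORS_PER_CHIP / TRANSISTORS_PER_NEURON
chips_needed = NEURONS_IN_BRAIN / neurons_per_chip

print(f"Neurons emulated per chip:  {neurons_per_chip:.0f}")
print(f"Chips needed for one brain: {chips_needed:.2e}")   # ~8.6e8 chips

# Latency comparison: a chemical synapse signals its neighbor in
# something like a millisecond, while a hop between two machines on a
# datacenter network costs tens to hundreds of microseconds, and is
# shared, queued, and serialized in ways an axon never is.
SYNAPTIC_DELAY_S = 1e-3        # ~1 ms, a ballpark figure
NETWORK_ROUND_TRIP_S = 100e-6  # ~100 us, an optimistic datacenter RTT

print(f"Synaptic delay:     {SYNAPTIC_DELAY_S * 1e3:.2f} ms")
print(f"Network round trip: {NETWORK_ROUND_TRIP_S * 1e3:.2f} ms")
```

Even under these guesses, you land at hundreds of millions of chips, and the moment those chips live in separate machines, every neuron-to-neuron hop picks up communication costs a real brain never pays.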
Many people are trying to build a brain on TensorFlow with an NVIDIA graphics card originally designed to make Unreal Tournament run 20% faster. Google was among the first to realize that custom silicon would make training and running these intelligences faster. But what we're talking about here isn't "running faster"; it's running fundamentally differently. We buy Supermicro motherboards with PCI buses and plug in silicon that's just a little different from the silicon I use to play Doom Eternal; is it really any surprise that very little progress on AGI has been made?
I don't know what chips that truly, more accurately emulate a neuron would look like. I suspect no one knows. I suspect that, if anyone figures it out, it won't be Google, or Microsoft, or Apple, or China, or Russia: organizations with so many processes, procedures, and expectations of immediate results that selling an idea as wild as "we can't use any of the pre-existing computing theory out there; we need to start from scratch" would be impossible, losing out to "can't you just make the Tensorcore V2 20% faster?" If it is ever invented, it will be invented by one person, in their garage, with a unique insight and decades of work.
But I also suspect that it will never be invented. If we can't even solve Alzheimer's, or psychosis, or even depression, brain disorders that affect hundreds of millions of people every year, what level of hubris does it take to think we understand even 0.1% of what goes on in our heads? We live in a society that refuses to even address, let alone help alleviate, mental illness, and you think we're going to be able to build, let alone maintain and debug, a simulated brain?