"Hubert Dreyfus, who argued that computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. "
I have not read the rest of the article but in the introduction it's stated:
"The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world."
Wiktionary defines tacit as "Not derived from formal principles of reasoning" [1].
So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from the physical world, that cannot be expressed in our physical world, thus making any endeavour to create artificial intelligence impossible.
This is a dualist line of reasoning and, in my opinion, is nothing more than theology dressed up in philosophy.
I would much rather the author just flat out say they are a dualist or that they reject the Church-Turing thesis.
Tacit knowledge is knowledge that results from adapting to experience, like learning to tie shoelaces with practice, rather than something like finding the derivative of sin(x) by the usual mathematical method of reasoning step by formal step.
Every deep learning system has tacit knowledge: it knows a chair when it sees one but can't explain how it knows. It just adapted its connections during training until it got it right most of the time.
So computers are demonstrably capable of what, in humans, is defined as tacit knowledge, and they can be given sensors and actuators to learn from. A car can learn to parallel park with practice. It can't explain how it does it, but you can copy the trained system into a new car.
I don't see why you couldn't produce a combination of sensors and actuators that vastly exceeds what any human is capable of.
But AGI isn't that. It's a variety of information processing techniques (algorithms) deployed as a toolbox managed by meta-techniques (more algorithms) that know how to deploy the others in various combinations. We don't yet know much about the management algorithms, but I don't see any reason in principle why we couldn't eventually find some and invent others.
> A traditional computer program that can find the derivative of sin(x) also can not explain how it knows.
Oh but it could, and that's the point. Some computer differentiation techniques just follow the same rules you learned when you took calculus. They typically don't show you which rules they followed, but they easily could. Other differentiation techniques are more exotic but there's no reason they couldn't show you the chain of computations and/or deductions they went through to arrive at that derivative. Such programs can easily justify their results and even teach humans calculus if configured properly.
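To make that concrete, here's a toy sketch of a rule-following differentiator that records each rule it applies (entirely my own illustration, not any particular CAS; the expression encoding is made up):

    # A toy symbolic differentiator that records which calculus rule it
    # applies at each step, so it can "show its work" on request.
    # Expressions are nested tuples, e.g. ("sin", "x") for sin(x).

    def diff(expr, trace):
        if expr == "x":
            trace.append("d/dx x = 1")
            return 1
        if isinstance(expr, (int, float)):
            trace.append("d/dx %s = 0 (constant rule)" % expr)
            return 0
        op, *args = expr
        if op == "sin":
            trace.append("chain rule: d/dx sin(u) = cos(u) * u'")
            return ("*", ("cos", args[0]), diff(args[0], trace))
        if op == "+":
            trace.append("sum rule: (f + g)' = f' + g'")
            return ("+", diff(args[0], trace), diff(args[1], trace))
        raise ValueError("no rule for %r" % op)

    trace = []
    print(diff(("sin", "x"), trace))   # ('*', ('cos', 'x'), 1), i.e. cos(x)
    for step in trace:                 # the explicit chain of rules it followed
        print(step)

The answer and the justification come from the very same rule applications, so printing the trace is trivial.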
Contrast that with the chair example. It is impossible right now to write a program that can show a human the chain of reasoning it went through to decide some image is a chair, because no such chain exists. There's a giant iterated polynomial with nonlinear threshold functions and a million coefficients, but there's no chain of reasoning.
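For contrast, a minimal sketch of what that looks like (the weights here are random stand-ins for learned coefficients, and the shapes are made up):

    import numpy as np

    # A tiny "chair detector": the whole decision is one pass of matrix
    # multiplies and nonlinear thresholds. There is no step in here that
    # could be read back out as a rule or a reason.
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(64, 784)), rng.normal(size=64)  # "learned" weights
    W2, b2 = rng.normal(size=(1, 64)), rng.normal(size=1)     # (random stand-ins)

    def is_chair(image):                     # image: flat vector of 784 pixels
        h = np.maximum(0, W1 @ image + b1)   # nonlinear threshold (ReLU)
        return (W2 @ h + b2).item() > 0      # a single number crosses a threshold

    print(is_chair(rng.normal(size=784)))    # True or False, with no "because"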
I'm not sure a human can explain how they know a chair is a chair, either. They can come up with a post-hoc rationalisation, but that's not guaranteed to really represent the decision-making process they went through.
At best you get an answer that describes one or more conscious decisions and leaves the unconscious decisions out, such as "it looks a lot like a stool because it's low to the ground and has three legs, but it has a back, so I think it's a chair"; when the real answer is that they have a bunch of pattern-matching visual neurons, and those neurons feed into other neurons that detect more complicated patterns, and the concept of a chair eventually emerges.
That lack of a chain of reasoning just doesn’t feel any more significant to me than the fact that a human also cannot endlessly regress upwards, explaining every bit of knowledge they have or every reason they made a decision. Likewise, the computer algebra software can only answer “how did you know that?” so many times in a sequence.
People can't explain how they know something either. They know it has something to do with their brains, but they don't know how exactly the mechanism works.
At a certain level, "knowledge" is baked into the execution hardware.
Hubert Dreyfus in his 1986 book "Mind Over Machine":
> The digital computer, when programmed to operate by taking a problem apart into features and combining them step by step according to inference rules, operates as a machine—a logic machine. However, the computer is so versatile it can also be used to model a holistic system. Indeed, recently, as the problems confronting the AI approach remained unsolved for more than a decade, a new generation of researchers have actually begun using computers to simulate such systems. It is too early to say whether the first steps in the direction of holistic similarity recognition will eventually lead to devices that can discern the similarity between whole real-world situations. We discuss the development here for the simple reason that it is the only alternative to the information processing approach that computer science has devised. [...] Remarkably, such devices are the subject of active research. When used to realize a distributed associative memory, computers are no longer functioning as symbol-manipulating systems in which the symbols represent features of the world and computations express relationship among the symbols as in conventional AI. Instead, the computer simulates a holistic system.
Further down, this is quite a good summary of Dreyfus' general argument:
> Thanks to AI research, Plato's and Kant's speculation that the mind works according to rules has finally found its empirical test in the attempt to use logic machines to produce humanlike understanding. And, after two thousand years of refinement, the traditional view of mind has shown itself to be inadequate. Indeed, conventional AI as information processing looks like a perfect example of what Imre Lakatos would call a degenerating research program. [...] Current AI is based on the idea, prominent in philosophy since Descartes, that all understanding consists in forming and using appropriate representations. Given the nature of inference engines, AI's representations must be formal ones, and so commonsense understanding must be understood as some vast body of precise propositions, beliefs, rules, facts, and procedures. Thus formulated, the problem has so far resisted solution. We predict it will continue to do so.
I think that Dreyfus has unfortunately set back the cultural understanding of computers by decades, by confidently declaring certain tasks impossible for computers to do, because minds have "insight" or "tacit knowledge" or are "holistic", each of which functionally lets a mind be a ghost in the machine.
A lot of the rhetorical momentum comes from pointing at the progress of technology at various stages in human history, especially the fits and starts of AI and language research in the mid 20th century, and remarking on how little progress has been made.
And the terms used to characterize what computers are were also vague.
>Given the nature of inference engines, AI's representations must be formal ones
When an AI trained on images of dog faces "dreams" on an image, and progressively twists flowers and purses into dog faces and noses, is the connection made between patterns and dog faces "formal"? Are the images generated by ThisPersonDoesNotExist informal? The ways computers work on data now deal with abstractions & fuzziness in a way that I think Dreyfus did not imagine to be possible. I think Dreyfus wanted to say that the higher-level methods that we now employ to generate images, produce human-like language, transpose art styles and create nearly photorealistic faces rest on a foundation of principles that are new and distinct from the characteristic principles that he understood to be central to computing. But all of our new progress is implemented on a foundation of silicon and bits, too, which simulate neural networks, meaning those are just as computational as the desktop calculator app. I think Dreyfus just couldn't imagine that 'computing' could include all this extra stuff, and, to take a term from Dennett, Dreyfus mistook his failure of imagination for an insight into necessity.
He is talking about AI as conceived at the time of his writing. The quote I posted has him explicitly imagining what you say he could not imagine. His critique was in fact INFLUENTIAL for the currently successful approaches.
It's him imagining things that he thought couldn't be done on computers under one definition, based on vaguely defined terms. Dreyfus was open to another, more expansive definition that included things like 'holistic' and 'tacit' knowledge, which he believed were outside the scope of what computers of a certain sort could do. That distinction turns out to be moot because all the 'new' stuff, e.g. neural networks, GANs, GPT-3 etc., while in some sense new and innovative, is ultimately running on the same old foundation of logic gates, zeros and ones, and really is computable in the classical Turing machine sense, which is exactly what he had spent his whole career denying. It was a limit of Dreyfus' imagination that he didn't understand that computation, even the kind he criticized, could model the higher-order conceptual structures he thought were inaccessible to classical computers. He's not wrong to think that something called 'tacit' knowledge would be important, and would call for specialized approaches and new concepts. Where he went wrong was in veering to the insane, overconfident extreme of denying that these were computable.
Computers can’t heal themselves. Our bodies do that on their own without conscious prompting.
Humans evolve the complexity of a computer's electron states, not the computer's own inherent properties. My use doesn't force transistors to evolve into better transistors. It has no self-regeneration.
You don’t see the issues in principle, but here you have an article by an expert pointing them out.
Perhaps be a better listener?
A computer has observable, literal limitations relative to a human’s mechanical functionality.
You can’t scrape away literal reality to arrive at some reductionist idea of what a consciousness is.
Our only known good model for a machine that can create our consciousness is us. It took billions of years of the universe churning at random to accidentally generate us. We have no clue how to replicate that scale.
A computer literally lacks a whole lot of literal information that’s embedded in the hardware and software of a person.
Watch this and tell me the last time your computer reconfigured its literal shape when you altered its electric field properties: https://youtu.be/RjD1aLm4Thg
There’s something to “life” we’ll never be able to jam into silicon.
This is exactly the sort of conflation of completely unrelated concepts that I found in the article. What on Earth has healing got to do with reasoning? You even say our bodies do it without conscious prompting; in other words, it’s a completely irrelevant issue.
Yes computers aren’t biological, they aren’t life, but so what? Why does that constrain their ability to interact with and learn from the world?
AGI is defined as human-like intelligence; humans go to the toilet; computers don’t go to the toilet; therefore AGI is impossible. See? It’s easy to “prove” AGI is impossible. That’s really all their argument boils down to. I kept reading the article expecting to hit some essential argument, only never to find one. Very disappointing.
For all the pontificating about souls as a means to discredit OP's beliefs, you all seem to avoid this statement:
> Our only known good model for a machine that can create our consciousness is us. It took billions of years of the universe churning at random to accidentally generate us. We have no clue how to replicate that scale.
Would you please provide a counterargument? Do you understand how difficult this really is? We can't even comprehend the physical constraints.
That quoted sentence argues that we do not have AGI now. I have no counterargument against that. We do not have AGI now. That sentence on the other hand fails to argue it is impossible to develop AGI.
Someone before the invention of the aeroplane could have said:
Our only known good model for flying is birds and insects. It took billions of years of the universe churning at random to accidentally generate birds and insects. We have no clue how to replicate that scale.
And yet we know that it's not impossible to create flying machines.
Flight and consciousness are not in any way comparable concepts. We literally cannot perceive the constraints of the latter's underlying physical system.
Sometimes I feel computer scientists choose to misunderstand physicists because it would make them feel stupid if they did understand.
>Flight and consciousness are not in any way comparable concepts.
For pete's sake, it's an analogy, not a direct comparison, and it is perfectly valid as such when interpreted with due charity.
You can say "Brains are complicated. They took time and evolution. That sure is hard. See how hard it is?" The same can be said of flight at a certain level of abstraction as a valid analogy, which can be charitably interpreted as such without the need for claiming anyone is purposely choosing to misunderstand physics.
The fact that the examples of brains and of flight given to us by nature sure seem complicated doesn't establish as a matter of principle that their salient properties can't be modeled in machines, and that's the real thing that's at stake. Disputing that requires a different kind of argument than saying "gosh it sure is complicated", and that's what the analogy is pointing out.
This is interesting. Is aeroplane flight akin to bird or insect flight? Rolling down the tarmac, peering out the window, the planes look more like elongated fish bodies than soft bird bodies, or compact insect bodies. Our planes rather swim in the air than fly in it, I think.
Our flight is some other kind of thing (whatever we uncovered the model-able, salient properties of flight to be). Computer consciousness might similarly be some other kind of thing. And that’d be ok.
But can they be equated? Only at some abstraction level. A plane is obviously not a bird or an insect or a fish. Aeroplane flight is not bird or insect flight either, nor is it swimming. But it is safe travel through the air, from one earth-bound destination to another.
Technological progress is many orders of magnitude faster than evolutionary change; assuming the 'magic' of the brain is indeed in the neural circuitry, hardware is expected to be powerful enough to simulate that on a timescale of O(100) years in the future instead of O(10^9).
Of course, it's not a given that simulating connected neurons with action potentials or whatever is sufficient to capture the relevant features of the brain (perhaps long-distance em interactions are relevant? quantum magic? do we need to drop to the molecular level?) - but without proof to the contrary, we'll just have to wait and see.
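For a sense of scale, here's a back-of-envelope version of that timescale claim; every number below is a rough, contestable assumption, not a measurement:

    # Rough arithmetic only; all three constants are assumptions.
    synapses = 1e15          # often-quoted order of magnitude for the human brain
    event_rate = 10.0        # assumed average synaptic events per synapse per second
    flops_per_event = 10.0   # assumed cost to model one synaptic event

    required = synapses * event_rate * flops_per_event   # ~1e17 FLOP/s
    exaflop_machine = 1e18                               # today's largest machines

    print("required: %.0e FLOP/s" % required)
    print("fraction of an exascale machine: %.0f%%" % (100 * required / exaflop_machine))
    # Under these assumptions the raw throughput is already within reach;
    # whether this level of abstraction captures the brain at all is the
    # open question raised above (em interactions? molecular detail?).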
This is a well-balanced take. Maybe we will see emergent properties at that scale. If that were the case, then we could catch enough of a glimpse of what is really happening...
But another part of me says that is silly. We had an expectation of what the Higgs was before we found it. Here, we are shooting in the dark.
> We had an expectation of what the Higgs was before we found it.
However, the standard model (which formed the basis for the prediction of the Higgs boson) was created to bring order to the chaos of unexpected experimentally discovered particles. In the words of I.I. Rabi on the discovery of the muon, "who ordered that?"
It's not entirely a given that consciousness is needed for intelligence.
It's also not clear what consciousness is. Plenty of animals are self-aware, but they don't have human level intelligence.
We only have a single example of general intelligence. It seems possible that there could be other kinds of general intelligence that don't require consciousness.
We don’t understand something. It’s a complicated phenomenon. So it’s impossible to replicate? I think the “argument” is so silly it doesn’t need to be disproved. If one wants to prove something impossible they’d better avoid logical fallacies. A priori we can’t say that it’s impossible, nor that it’s certainly doable.
Except that like flight it’s already been done - by evolution. I’m very confident that people will be ‘proving’ that it’s impossible like this article, right up to the day we actually do it.
I meant that it's dubious whether we can reproduce a brain-like machine until we understand the matter well enough to either prove it possible or impossible.
Anyway, I feel like you. If it's been done once, it can't be impossible in any meaningful sense.
Further, the argument from lack of understanding would work only if knowledge (and science) could not advance any further; but that's tough to prove -- not to mention that it's used over and over in faith vs. reason debates to the point it's become annoying -- so the argument is quite weak.
You are taking huge liberties in forming equivocations which mislead your conclusion.
The tacit knowledge description only talks about the acquisition process of the knowledge, not that the knowledge itself is outside the bounds of rationality or the physical world. By the same token, when you claim "humans have intelligence that is impossible to express through reason or codification", the entire argument hinges on the actual meaning of "impossible". Is it impossible in itself, forever, because human intelligence is non-reason-based? Or are we using a meaning of "impossible" under which, at this day and age, the process of doing that is still intractable for all intents and purposes? If you're claiming the former, that itself is such a strong claim that it requires its own strong proof. If it's the latter, well, we have been developing psychotechnologies for several millennia to be able to express ourselves and our cognitive processes, and we are getting better at it; you just need to be patient.
If you found a working x86 chip in the wild, but all documentation and knowledgeable people were wiped out for some reason, I bet the process of finding out how that x86 worked would look quite similar. It wouldn't make x86 otherworldly or in need of a soul.
This is a good point, but I will add, from close discussions with Dreyfus, his position was that it's impossible because it's fundamentally impossible, not because it's intractable. (see my other comment for more)
Interesting to learn about Dreyfus, I was aware of this line of thought from Francois Chollet's article. He is the creator of Keras and some pretty advanced research papers in the nature of intelligence.
He states that the environment and embodiment are crucial for the development of intelligence. Even for humans, 'our environment puts a hard limit on our individual intelligence'.
The implausibility of intelligence explosion (2017)
..
In essence we need simulators on par with reality to train human-like intelligence. BTW, take a look at ThreeDWorld, just came out: 'A High-Fidelity, Multi-Modal Platform for Interactive Physical Simulation'. We're getting closer, and AI scientists are aware of the environment problem.
I have been interested in this debate for perhaps a decade now, and to me one of the most important things to get clear is whether skeptics are just claiming X is really hard, or whether they are claiming it's impossible as a matter of principle, which are two very different things. I think this discussion is about the latter rather than the former, but that many people talk about the former as if it's relevant to the latter.
I don't think it has anything to do with souls. Most "knowledge" deep learning systems have seems tacit to me: it would be practically impossible for people to write programs that articulate and incorporate that knowledge without machine learning (people tried, for decades), and it certainly isn't "derived from formal principles of reasoning" in anything but a tangential mathematical sense.
(I too have not read the whole article; I'm just replying to this comment.)
I have read it and you’re not missing anything. One of the examples of tacit knowledge they give is walking, and they conclude that it is therefore impossible to teach a computer to walk. They should watch one of the videos from Boston Dynamics.
That's not teaching a computer to walk, that's building a walking machine. Subtle difference, but it's easy to prove: no matter how well you build a Boston Dynamics robot, it will never like walking, in the same way your TV will never like entertaining people. There's no "I" there to learn.
Way to change the goal posts. We built a machine that learned how to walk. Mission accomplished. The assertion in the article was either wrong or irrelevant; pick one.
Then a bald assertion, computers will never X. Says you. Just because we’re not there yet is no proof it’s impossible.
My son learned how to stand up at 8 months, pretty soon he was walking and even running. No one had to teach him anything, he did this by observing the world around him and drawing his own conclusions.
This is not about definitions; since we don't even know exactly what we're chasing, there is no way to express the difference unambiguously. What we get instead is one side trying (unsuccessfully) to define the difference and the other pretending it doesn't exist.
We're a long, long way from an AI learning how to walk by itself. Neural networks and machine learning are one piece of the puzzle; expert systems are probably in there somewhere as well. Perhaps one day we will identify all the pieces, but we're definitely not even close.
Cows can walk minutes after they are born. I guess they're even smarter. You might argue that the walking calf is not intelligent because it came preprogrammed to walk, but human babies will reflexively start making stepping movements when you hold them upright and let their feet touch the ground. I think it's ridiculous to argue that your son somehow learned to walk through observation and reasoning alone.
"Liking things" is just one part of a biological reward system not a metaphysical event, it is possible to create it just not with our current tech (so saying "never" is a stretch imo)
>That's not teaching a computer to walk, that's building a walking machine.
I think that's redefining teaching so that teaching, whatever it is, includes a subjective human 'ghost' inside of it.
But Dreyfus wasn't merely conceding that machines can do those things, only without a soul. Dreyfus was arguing that things such as walking are clever and subtle in ways that depend on tacit knowledge to execute successfully, and that tasks which depend on it simply aren't achievable by machines at all, because the nature of those tasks is such that they require a special magical soul. Being able to do the task at all, with or without a special magical soul, stands as a counterpoint to the argument Dreyfus had been making for half of the 20th century.
> it would be practically impossible for people to write programs that articulate and incorporate that knowledge without machine learning
That could be true, but I think that's a different argument. That's more like claiming that it is impossible for a computer to become intelligent unless it can experience its environment and remember those experiences. That seems like a much more plausible and less dualist claim.
> Most "knowledge" deep learning systems have seems tacit to me
It's an interesting question whether they're actually tacit in the human sense or not. At the base level, even deep learning systems certainly rely on digital manipulation. It's almost certain that the human brain, due to the speed constraints of biochemical processes, doesn't run on tons of matrix multiplication or loss functions, so it's an open question, I guess, whether deep learning systems really resemble the tacit capacity of humans or whether there's something fundamentally different in the architecture of organic systems that cannot be replicated, at least in today's machines. Which I think is actually fairly likely, to be honest, and I always wonder why it's disregarded.
It's funny that the top comment of this chain asserts dualism, but I think dualism is overwhelmingly common among CS folks, who almost seem to treat intelligence like some sort of platonic thing, completely ignoring the stuff it's made out of.
I'd argue that deep learning is derived from formal principles of mathematical reasoning in a very concrete sense. Deep learning learns a predictive function of the features that minimizes the loss function (with some caveats). If the loss function and training data are well chosen, that minimizes the probability of being wrong.
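A minimal sketch of that claim, with linear regression standing in for the deep net (the data and learning rate here are made up; the mechanics of "minimize a loss by gradient descent" are the same):

    import numpy as np

    # "Learning" here is concrete, formal mathematics: pick the weights
    # that minimize a mean-squared-error loss, by gradient descent.
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))                      # features
    w_true = np.array([2.0, -1.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.1, size=100)   # noisy labels

    w = np.zeros(3)
    for _ in range(500):
        grad = 2 * X.T @ (X @ w - y) / len(y)   # gradient of the loss
        w -= 0.1 * grad                         # step downhill on the loss

    print(w)   # close to w_true: the learned "knowledge" is the minimizer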
The Church-Turing thesis only states that the lambda calculus and Turing machines can compute the same functions. It has nothing to do with materialism or dualism and it certainly doesn’t state that the universe is a Turing machine.
"Every effectively calculable function is a computable function" [1]
The definition is a little terse so we have to expand on what "effectively calculable function" and "computable function" mean.
By a "computable function", we mean a Turing machine. The term "effectively calculable function" is a little unclear but one definition that I think is the closest to the intent is "it can be done by a human without any aids except writing materials." [2].
In other words, the Church-Turing thesis is saying:
"All physically computable functions are Turing computable"
That is, the physical world, including human cognition, can be realized by a Turing machine.
While one formulation might be cast in terms of lambda calculus, this is hiding the underlying assumption that lambda calculus is used as a proxy to simulation of the physical world and, as a subset, human cognition, effectively saying "if it can do lambda calculus, it can do the physical universe and can do human cognition".
"That is, the physical world, including human cognition, can be realized by a Turing machine."
How did you get to the conclusion that the whole physical world is computable? Rather a huge jump, I would say. Sure, some things are, but "all of it" would be a BIG assumption. Physical theories of the world are limited to our current state of observations and knowledge of the world; they AREN'T the actual world. Who is to say this continuous search, observation and refinement of theories will ever end, and that we'll have a FINAL theory of everything that we can then plug into a computer and simulate?
Sure you can now say "I don't require a theory of everything, I just need a 'sufficient' amount of theory to simulate the part of the world from which I can have my intelligence & cognition emerge". Sure you can say that, but that would again hinge on the assumption that such cognition is reducible to these "sufficient" laws.
Likewise, saying that the whole physical world can be realized by a Turing machine is a bit rich when we don't even know if such a complete reduction of the physical world is possible, and when such reduction to physical laws is surely not yet complete.
> How did you get to the conclusion that the whole physical world is computable?
That’s not the conclusion, that’s the whole thesis. The whole point of it is that yes, it’s not provable, but so far we haven’t seen anything to suggest the contrary. All of our current physical theories are very much computable, for example.
The Church-Turing thesis is that for every algorithm you can compute, you can define a Turing machine that computes it too. You still have to show that you can compute answers to your questions about the physical world or human cognition.

And we know that we can define an infinite number of problems that can never be computed (or else you could, for example, solve the halting problem). So there would be an infinite number of questions about the world that we can never answer.

And there would be more questions that we cannot answer (they are uncountable) than questions that we can (they are countable). So if you have a question, chances are it can never be answered.
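The halting-problem contradiction alluded to above, sketched in Python (the halts function is hypothetical; the point is precisely that it cannot exist):

    # Suppose someone hands us a decider for halting:
    def halts(program, argument):
        """Hypothetical: returns True iff program(argument) halts."""
        ...

    def troublemaker(program):
        if halts(program, program):   # if it would halt...
            while True:               # ...loop forever instead
                pass
        return "done"

    # Does troublemaker(troublemaker) halt? If halts says yes, it loops
    # forever; if halts says no, it returns "done". Either answer is
    # wrong, so no such halts function can exist: one concrete example
    # of a question no computer can answer.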
My point is not whether all physical laws are computable; I have no doubt they are, insofar as those laws are expressed mathematically. My point is rather that this search for laws might never finish, and a complete ruleset might never come about.
Like I said, all current theories are based on the current state of observations. Who is to say we won't observe something in the future for which these laws need to be revised? Who is to say this doesn't keep happening indefinitely? If such a "bottoming out" cannot even be conceived of, saying that the physical world, even a part of it, is exactly computable AS IT IS (all aspects of it) is utmost arrogance.
What do we know about the brain? How does it generate cognition? Why does the color 'red' look the way it does to you? Does it look the same to me? Unless the nature of such cognition, and of the being which embodies it, is known, one cannot say it is reducible to or "emergent" from the CURRENT set or even any FUTURE set of physical laws we know or will know about.
The map is not the territory, however minute in detail it becomes. Sure, this map can become the territory itself, but then we cease to call it a "map". Say you want to understand how pendulums work: you make a pendulum and play with it. No one's going to call this actual pendulum a "simulation".
This belief is widespread, but mistaken. It tries to slip in a conjecture as an axiom in the unstated leg of the enthymeme. It’s the worst sort of hand waving, but the conclusion apparently flatters the biases of those who wish to believe it such that they happily overlook the sloppiness.
Also our physical theories are only computable with an arbitrary halt thrown in at some level of accuracy deemed good enough. And there too theoretically questionable trickery like renormalization is used to reduce the problem to a tractable size.
It might be computable in principle but not in practice for a long time, similar to how it is possible in principle to simulate all the molecules of air in one cubic meter (neglecting quantum effects), but not actually feasible in practice. Any argument for computability also needs an argument for why a "coarse graining" or "effective model" of the underlying physical system exists.
In the case of transistors that is because all that matters are binary stable states, which reliably abstract over the complicated device physics. In the case of biological cells and neurons in the brain it is much less obvious what the reliable abstraction is. Right now a lot points towards "it is just a bunch of linear algebra and lots of data", but especially when we come to things like memory, online and few shot learning, the answer becomes far less obvious.
It's funny, because I'd argue that the belief that "the universe is a Turing machine" is a kind of secular religion. Hardcore physicalists often betray themselves as not only being bad materialists but actually just idealists in denial.
An amusing retort, to be sure, but the "creator" could be an entirely automated process, akin to instantiating a VM or container.
In other words, God has been replaced with a very small shell script.
So, even if an intelligent creator is ultimately responsible for our plane of existence, there may not be much in the way of intent or even observation associated with that responsibility at any scale we would find meaningful.
Heck, who is to say that our particular simulated universe isn't just a honeypot of some sort?
>So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world
No, in other words, humans have tactile, empirical, emotional, social, etc intelligence that is perfectly physical but not available to mere software in a computer.
It might be available to software running in humanoid robots, that can see, walk around, hang with other humans to learn, etc.
But even in that case, it won't be codified in any axiomatic way "through reason". Think more of neural networks and less of a 1960s AI program...
This seems like a weak argument to me. We don't really understand how human intelligence works yet so how can we claim that computers will never realize similar intelligence? We don't know for a fact that human intelligence depends on these things.
I'm personally skeptical that we'll see AGI any time soon but I don't think we know enough to say this definitively.
>This seems like a weak argument to me. We don't really understand how human intelligence works yet so how can we claim that computers will never realize similar intelligence?
My comment doesn't say that "computers will never realize similar intelligence".
It says that they will never realize it through reasoning - and rule based systems, 1960s-1990s AI style.
Which isn't the way we realize it either, even if we don't fully understand how we do realize it yet.
The complexity that makes human intelligence possible is already accessed by a brain-in-a-box through a limited set of interfaces; it's just a mushy organic brain and bone box rather than an electronic brain and steel box.
It's probably reasonable to argue that an AGI would require interfaces to all of the outside world's complexity to be self aware, but there's nothing stopping us from building it those interfaces.
There is a major league difference, and that is the closed loop of our body, the aspect of being, which is missing from everything we have made so far.
In a technical, not inclusive sense, I agree with you. Brain in a box is part of the story.
I do question "limited"
Again, in the technical sense, we do build interfaces that offer superior capability. But, they are nowhere near as robust and integrated.
I am not saying complexity itself makes us possible, though I do believe it is a part of the story.
Higher functioning animals display remarkable intelligence, yet they are simpler than we are in many ways, including the intelligence itself.
We feel, for example. Pain, touch, etc. And when we pay close attention to that, we can identify where, how, when, and map all that to US, what we are and know it is different from others, and the world overall.
Pain is quite remarkable. There are many kinds. Touch is equally remarkable as is pleasure.
Ever wonder why pain or pleasure is different depending on where we experience it? Why does a cut on my leg feel different from one on my foot, or my hand? The same goes for a tickle, or something erotic.
I submit these kinds of things are emergent, and happen when the whole machine has enough complexity to be self aware. Even simple creatures demonstrate this basic property.
Beings.
We have not made a being yet. We have made increasingly complex machines.
As we go down that road further, I suspect we will find emergent properties as we get closer to something that has the potential to be.
Not just exist.
I realize I am hand waving. That is due to simple ignorance. We all are sharing that ignorance.
Really, I am speaking to a basic difference that exists and how it may really matter.
Could be wrong too. Nobody is going to know for some time yet. Materials science, our ability to fabricate things: all are stones and chisels compared to mother nature's kitchen.
We are super good at electro-mechanical. We are just starting to explore bio-mechanical, for example.
The latter contains intelligence that we can see, even if we do not yet understand.
The former does not. Period.
Could. Again, nobody knows.
There are things stopping us, and I just articulated them.
But not completely!
Scale may help. If we did build something more on par with a being, given our current tech, it would end up big.
And every year that passes lowers the bar too.
We can make things today that were science fiction not so long ago.
One other pesky idea out there too:
There may be one consciousness.
A rock, for example, literally is an expression. It has a simple nature, no agency due to low complexity. But its current state is what happened to it: how it formed, where it moved. And it is actually changing. The mere act of observing it changes it in ultra subtle ways.
Now, look at bees and ants. Bees know what zero is, and appear to present far more complexity in how they respond to the world, and in what they do, than their limited, small nature might suggest.
Why is that?
What we call emergent may actually be an aggregation of some kind. Given something is a being, perhaps a part of that is a concentration of consciousness.
I am not a believer in any of that. I just expressed our ignorance.
But, I find the ideas compelling and suggestive.
They speak to potential research, areas where we could very significantly improve our ability to create.
Doing that may open doors we had no idea even existed.
We may find the first intelligence we end up responsible for is an artifact, not a deliberate construct.
In fact we may find a construct is not possible directly. We may find it just happens when something that can BE also happens.
Anyway, I hope I have been successful in my suggestion there remains a lot to this we flat out do not know.
> So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world, that cannot be expressed in our physical world thus making any endeavour to create artificial intelligence impossible.
No, it just means there is no simple symbolic path that is human understandable towards human level intelligence. We need systems like neural nets and simulators to 'learn' things that can't be directly formalised. And we have tried for 50 years to formalise intelligence in symbolic representations.
>there is no simple symbolic path that is human understandable towards human level intelligence
I imagine that, piece by piece, if we really wanted to, we could look at the 175 billion parameters in GPT-3, test which are 'active' when this poem is written, which are active when that imitation of copypasta is active, and through a torturous interrogation, find that one particular parameter, say, is for weighting the 0.001% likelihood that you would use an accented é during a particular rhetorical flourish in certain contexts. Perhaps another parameter codes a meta-meta-meta abstraction about how a meta-meta rule governs a meta-rule for how to use a sometimes-used linguistic rule.
And the totality of those 175 billion parameters could in principle be uncovered and described in ways that are satisfactory to humans. It would be tedious and unproductive, and akin to the project of archeologists patiently, tediously uncovering a dinosaur.
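A toy version of that interrogation (GPT-3's weights aren't public, so this probes a made-up random network, and the "prompts" are random vectors, but the mechanics are the same):

    import numpy as np

    # Which hidden units are "active" for one input but not another?
    rng = np.random.default_rng(2)
    W = rng.normal(size=(16, 8))            # stand-in for learned parameters

    def hidden_activations(x):
        return np.maximum(0, W @ x)         # ReLU hidden layer

    poem = rng.normal(size=8)               # stand-ins for two different prompts
    copypasta = rng.normal(size=8)

    only_poem = np.flatnonzero((hidden_activations(poem) > 0)
                               & (hidden_activations(copypasta) <= 0))
    print("units active only for the 'poem' input:", only_poem)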
But the point is it would be practically difficult, not something forbidden as a matter of principle.
More importantly though, is that I don't think the supposed incomprehensibility to humans has relevance to anything. What's the argument supposed to be? Humans don't depend in any explicit, conscious way, on having conscious grasp of our own tacit knowledge. I don't know why I unconsciously shift my weight a certain way when going up stairs. This doesn't stop me from walking up stairs. And it doesn't stop us from making machines that could walk up stairs.
We need neural nets? Okay, sure, we need them. But we can run those on machines that, at the end of the day, are silicon and 0s and 1s, which are every bit the brute, formal systems that supposedly can't model intelligent things. Weren't we supposed to have encountered a barrier to what computers can do at some point in this thought exercise? Because it appears that what began as an extremely bold claim, that computers can't do X, Y, and Z, ends in a whimper, as a vague exhortation to appreciate that neural nets are in some sense structurally different from logic design. Nothing about that latter claim is making any bold statements about the limits of what computers can or can't do, which makes me feel like the argument forgot what it was supposed to be about halfway through.
That's a bit of a straw-man. It does not follow that the existence of knowledge that isn't derived from formal principles of reasoning proves the existence of a soul.
Replace the word soul with some essential quality of reason that humans have that isn’t teachable, and the article is arguing that whatever that is can never be acquired by a computer.
You don't need a soul to justify the existence of knowledge that can't be reasoned. Qualia, emotion, feeling and knowledge solely from sensory observation fill that gap too. All of these things are perceived as meaning by our brains well before they are rationalized. Feelings and emotions in particular break down under direct reasoned examination.
The mission to simulate a human mind seems more like a cultural precept for programmers. The physical task of creating a human mind simulation is a fool's errand. The human body is incredibly complex, and I doubt we'll stumble over the ability to synthesize that system any time this century or next.
Why does general AI have to be like human intelligence?
Free will might have something to do with it as well. Self-awareness is the basis of our desire to find our place in the world and the meaning of reality relative to self. This desire might be a core driver of human intelligence; the initial lack of purpose, self-awareness and the need to survive by solving arbitrary problems are important as well.
Computers may not live in the real world, but a virtual world of games, coupled with a self-aware program that trains subprograms to adapt and find solutions in order to survive, might be an interesting approach.
If by "soul" you mean an emotional core, then you'd be correct. Emotions allow us to short-circuit the processing required to ascribe value to a thing or situation.
But that has nothing to do with dualism or theology.
I don't think it is dualist, or at least I didn't get that impression reading Dreyfus. It is more that knowledge is part of a system as a whole and cannot be captured piecemeal, which is how we'd have to do it if we created AI by hand.
But, I don't think that argument is especially strong. Knowledge doesn't seem to be tied up in the system, otherwise we wouldn't have abstract subjects like math. Additionally, if knowledge is a function of a certain process of development, we can at least in theory reproduce any physical process computationally.
Yes, completely agree and was thinking along these lines as I read. I'm really interested in a good argument against AGI. Turing had the right intuition in thinking that if our brains are processing information then they can be simulated by a Turing machine. Everything points to our brains doing information processing at all levels.
The article still has a point about general intelligence without a body, but I think this can be solved by developing robotics and AI together, and I've seen some research in this area (I think Japan).
> So the main argument is that humans have intelligence that is impossible to express through reason or codification
I think what it means is not about codification, but that to achieve GI machines would need to go through human experiences that are not possible for a machine.
I don't agree, but I think that's the point the paper makes.
I'm not sure. If we think about it mechanically, and the output of a computer system is a function of its input, then perhaps we are woefully underestimating the role of the body as both a monitor and an input system.
I don't think an artificial intelligence has to resemble anything we would recognize as human intelligence, while still being "intelligent". The motivations of humans and the motivations of computers are very different. If we can give a computer motivation, as well as a way to act and react within an environment, and a framework for learning (neural networks?) then what it "thinks" will be determined by the experience of its own existence. I'm not sure we're there yet.
At this stage at least, the recreation of the interaction between that artificial body and the environment will be impressionistic at best...
Like we don't even have a full conscious understanding of what we are made from, or what we need to survive.
How do we 'install' those ideas and that imperative in an agent, when we don't fully understand it ourselves?
I'm not entirely supporting one side or another, but I think it's reasonable to bet against the imminent arrival of AGI at this stage, unless some radical discovery comes to light soon.
And if (when?) that discovery eventually comes, I'd suspect it to be a biological one. But then, who knows...
What is "a body" other than inputs and outputs to interface with the environment, and perhaps a mechanism of perceiving the environment and remembering what was perceived?
The capital T-truth is that no one really knows since we're all observing the system subjectively from the inside.
Many who practice meditation and/or experiment with psychedelic drugs will tell you that there's something in there that's not a computer.
We could assume, as many do, they're fooling themselves; since we can't measure it. Or, we could trust our authentic experience of the real world even when it can't (yet) be measured.
The author of the article specifically says this: "I have earlier said that neural networks need not be programmed, and therefore can handle tacit knowledge."
Also, you are misunderstanding what tacit means. It doesn't mean anything mystical - merely that the knowledge is gained from observations rather than by logically reasoning about something.
You might be right but I'm not inclined to give the author the benefit of the doubt. From the abstract, the author clearly says:
"The article further argues that ... computers are not in the world."
The specific quote you mention is in a larger paragraph which says:
"Computers are not in our world. I have earlier said that neural networks need not be programmed, and therefore can handle tacit knowledge. However, it is simply not true, as some of the advocates of Big Data argue, that the data “speak for themselves”. Normally, the data used are related to one or more models, they are selected by humans, and in the end they consist of numbers."
My reading of this is that "tacit" is used as a kind of dog whistle to dualists. It's ambiguous enough so that the author can claim they meant "learned" while still suggesting an underlying dualism.
Regardless of the meaning of "tacit" in this context, the author pretty much flat out says they're a dualist by repeatedly claiming "computers are not in our world" and "in the end they consist of numbers".
I think you're giving the author too much credibility that they aren't making a plea to mysticism.
I don't think the author makes a strong argument, and I disagree with their conclusion.
But the author's argument actually appears to be mostly that science itself is insufficient to understand the world. This argument is outlined in the section starting "But the replacement of our everyday world by the world of science is based on a fundamental misunderstanding. Edmund Husserl was one of the first who pointed this out, and attributed this misunderstanding to Galileo."
I think it's a pretty weak argument and I'm surprised Nature published it - the author wouldn't last 2 minutes trying to defend it on HN.
I do think that a better articulated version of his argument would be something like this (which attempts to capture what he means by "not in the world"): "despite all the advances in neural networks encoding tacit knowledge, it still takes a deeper human-level set of tacit knowledge in a wider context to make these neural networks useful. While we have surpassed human skills in-the-small, science based benchmarks, we seem no closer to achieving embodied human-level intelligence from machines in-the-large."
Again, I think you're giving the author the benefit of the doubt when it's not warranted. Your paraphrasing "science itself is insufficient to understand the world" is code for dualism.
I forgot to add the reference in the comment above but tacit means what I said it meant. I quoted directly from Wiktionary [1]. I'll do so again here:
Adjective
tacit (comparative more tacit, superlative most tacit)
1. Expressed in silence; implied, but not made explicit; silent.
tacit consent : consent by silence, or by not raising an objection
2. (logic) Not derived from formal principles of reasoning; based on induction rather than deduction.
I chose the "logic" interpretation as it seemed the most appropriate given the context.
I don't have any strong opinion about if the author is a proponent of dualism. I'd note that Quantum Bayesianism[1][2] (discussed the other day on HN) seems much more mystical than this, and yet is usually considered within the realms of science.
I build neural networks in my day job. They encode tacit information because they are "based on induction rather than deduction". But that's not anything mystical - it's just learning from data, and it's not a dog whistle towards mysticism either.
I have a perspective on this, since I took a rather engaging philosophy class from the late Prof Dreyfus at Berkeley.
As the article hints at, his line of argument was a completely valid criticism of AI based on a set of rules written with symbolic logic. He arrived at this conclusion after studying Heidegger's concept of Dasein. The best way to describe Dasein is through a classic example [1]:
"...the hammer is involved in an act of hammering; that hammering is involved in making something fast; and that making something fast is involved in protecting the human agent against bad weather. Such totalities of involvements are the contexts of everyday equipmental practice. As such, they define equipmental entities, so the hammer is intelligible as what it is only with respect to the shelter and, indeed, all the other items of equipment to which it meaningfully relates in Dasein's everyday practices. "
In Heidegger's mind, meaning is distributed across the web of interrelationships between objects and their various uses and ideas to humans. Intelligence was thus a process of knowing those relationships, and being a part of them. Being-in-the-world (Dasein is a German word related to 'being') is a result of us as humans being 'thrown' into the web of meaning; in fact, we are born already finding ourselves in it.
Dreyfus' objections to AI stemmed from the idea that computers are thrown into the world differently from us. They are the hammer in the anecdote, and not the human. During the class, we argued a lot with him about whether humans are the only creatures that have 'being-in-the-world'. We asked about the idea of the soul, and why humans are unique in this paradigm. My feeling after the end was that his entire philosophy comes directly from Heidegger. And since Heidegger didn't mention animals, or the lack of souls, his conclusion was "dunno".
Related to this, Dreyfus had an understanding of physics that was quite antiquated and classical. One example was the concept of time. According to Dreyfus, the physicists' notion of time was a series of discrete observations along a timeline, whereas humans experience time in stretches and long segments. I remember telling him that the notion of a single instant of time is poorly defined in quantum physics, and that we do recognize that every event has some time uncertainty. I remember asking him why we couldn't model human experience with some complicated function. For him, everything came back to Heidegger. To him, we physicists were being too reductionist, and there is indeed something about humans that cannot be described using physics. He stopped shy of calling it a soul, but it was essentially that.
[ETA: I now remember talking to him about Conway's game of life, and how a simple set of rules could result in complex, often hard to model behavior. My point was emergent systems exist within physics, and the way we describe them is different from single particles. His reply suggested that he was certain that human experience wasn't just 'emergent' - it was fundamentally different from anything in physics, no matter what.]
The class was on physics vs philosophy, and we disagreed. I don't think he really understood what I was trying to tell him. This was in 2013, before most of the deep learning revolution, but I think he would have the same objections with what we have now. Here are two possible directions we can go from here:
1) Argue with continental philosophers about reductionism and whether humans have a unique essence that cannot be modeled.
2) Understand Heidegger's and Dreyfus' thoughts about being and time, and drive AI research in better directions.
I prefer (2), because I already tried (1) with Dreyfus and it wasn't successful or productive.
I think understanding and modeling the graph of inter-relationships between objects and humans is exactly where we need to improve when it comes to AGI. It's probably going to need a good degree of embodiedness, an idea of a computer being thrown-into-the-world. Dreyfus would tell us that it's fundamentally impossible, and that we should all just give up on it. I think he came to the wrong conclusion. What I get from Heidegger is not "AGI is impossible" but rather "hey, this is what we should be worrying about. This is how humans see the world".
TLDR: Dreyfus had some good ideas about AI. I think it's extremely insightful to pay attention to his thoughts. Don't waste time worrying about his arguments against physicalism.
Thanks for the comment and confirmation that Dreyfus is essentially a dualist/believes in souls/etc.
I've idly had similar thoughts about the web of interconnections of concepts. Our idea of "cat" is not a single unit but an array of different ideas of body shapes, fur colors, sensory touch experiences, sounds, and other concepts that are rolled into one word.
I've often argued that general artificial intelligence will probably look like a complex series of individual 'utilitarian' components working in concert to achieve what looks to us like consciousness. A kind of "Unix philosophy" of AI: small, targeted tools working in concert to create a larger "operating system" of consciousness.
I've taken a few undergraduate philosophy classes myself in addition to talking to many philosophy grad students. It pains me to see people waste so much time on this subject. Philosophy itself is wonderful. Academic philosophy is a hollow shell where all the significant subjects have branched off into their own disciplines, leaving the husk of theology parading as science. At least theologians have the honesty to admit they study theology.
I would rather learn from engineering lore and draw from their wisdom on how to build complex systems than listen to academic philosophers.
Maybe Dreyfus's course already did this, but it's worthwhile reading other perspectives on Heidegger. Rorty kind of shoehorns him into a pragmatist, while Graham Harman does, well, Graham Harman philosophy. His book "Tool-Being" cannot be recommended enough.
I was actually introduced to Heidegger via "Tool-Being" and read Dreyfus's "Being-in-the-World" later. I think honestly I didn't pay much attention to Dreyfus's arguments since I had just read Harman's book, which argues its points and anticipates objections to the extreme.
I'm not sure how you can say computers have no body and no childhood. I think someone who doesn't know computers very well just doesn't see it, but they actually have both.
Their body is their hardware, sensors, interfaces and means of changing the world through controlled devices which can include robot arms and legs or vehicles. Computers can learn and be trained to do things and get better at them, such as using genetic algorithms and machine learning. They’re rudimentary, but real.
Sociopaths can pretend to be like normal people. An AI with non-human intelligence can do its own thing and just mimic human behavior with sufficient fidelity to fool us. Dumb machines can already employ usable conversational interfaces. In 100 years, less dumb machines will be far more convincing.
"Hubert Dreyfus, who argued that computers, who have no body, no childhood and no cultural practice, could not acquire intelligence at all. One of Dreyfus’ main arguments was that human knowledge is partly tacit, and therefore cannot be articulated and incorporated in a computer program. "
I have not read the rest of the article but in the introduction it's stated:
"The article further argues that this is in principle impossible, and it revives Hubert Dreyfus’ argument that computers are not in the world."
Wiktionary defines tacit as "Not derived from formal principles of reasoning" [1].
So the main argument is that humans have intelligence that is impossible to express through reason or codification. In other words, humans have a literal soul, divorced from physical world, that cannot be expressed in our physical world thus making any endeavour to create artificial intelligence impossible.
This is a dualist line of reasoning and, in my opinion, is nothing more than theology dressed up in philosophy.
I would much rather the author just flat out say they are a dualist or that they reject the Church-Turing thesis.