Every couple of months we get hit with the same wave of AI stories, and every couple of months I post these two links, inspired by ideas from that book.
Gödel's Incompleteness Theorem and Turing's Halting Problem.
https://plato.stanford.edu/entries/goedel-incompleteness/
https://en.wikipedia.org/wiki/Halting_problem
Two different versions of the same idea.
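The Halting Problem side of the idea can be sketched in a few lines. This is a hypothetical sketch of Turing's diagonalization argument, not a real program: the function names (`halts`, `paradox`) are made up for illustration, and `halts` is the impossible oracle the argument rules out.

```python
def halts(func, arg):
    # Pretend this perfectly decides whether func(arg) eventually halts.
    # Turing's argument shows no such total decider can exist.
    raise NotImplementedError("no such perfect decider can be written")

def paradox(func):
    # Do the opposite of whatever the oracle predicts about func(func):
    # loop forever if it says "halts", halt if it says "loops".
    if halts(func, func):
        while True:
            pass
    return

# Now ask: does paradox(paradox) halt?
# If halts says yes, paradox loops forever; if halts says no, it halts.
# Either way the oracle is wrong, so the perfect machine can't exist.
```

The point isn't the code itself but the shape of the trap: any "perfect" checker can be fed a case built specifically to break it.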
You can't build a perfect machine: a machine that handles every case would imply a complete, consistent understanding of reality, which is exactly what these two results rule out.
Ugly reality is going to break your perfect machine, eventually. With long enough time horizons, the probability approaches 1.
When your machine breaks, you're going to need something else: either another, newer machine that can fix or replace it (in Gödel's terms, a new, stronger system of logic/truth), or something dumb like human wetware, just flexible enough to know the right answer is "unplug the machine and plug it back in."