What if we define adaptation and learning as the ability to concentrate? Our single-cell ancestor would have had to concentrate to deliberately store the first memory. Otherwise it would just have taken in the world with its sensors but never done anything with it.
Adapting and learning means it chose to concentrate on packing the world into retrievable storage.
When do we not adapt and learn? When we ignore our inputs and do nothing with them (don’t store them, don’t retrieve them).
In the example you gave, those classical programs cannot concentrate; it’s one and done.
> When do we not adapt and learn? When we ignore our inputs and do nothing with them (don’t store them, don’t retrieve them).
In the example I mentioned, the program clearly is taking inputs (load), storing them, querying for previous values that satisfy certain conditions (the value at a certain timestamp for each of the last 7 days), running computation (computing a mean), and operationalizing the result (pre-emptively scaling) to achieve a goal (avoiding insufficient capacity) that affects the world (whoever interacts with the system will have a different experience because of the actions the system takes). That seems to satisfy your criterion to me.
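For concreteness, here's a rough sketch of the kind of loop I'm describing (the names `record_load`, `forecast_load`, and `maybe_scale` are made up, and the in-memory dict stands in for whatever store and autoscaler a real system would use):

```python
from datetime import timedelta
from statistics import mean

# In-memory stand-in for the telemetry store: {timestamp: load value}
load_history = {}

def record_load(ts, value):
    """Sense: store one sampled load value."""
    load_history[ts] = value

def forecast_load(now, days=7):
    """Compute: mean load at this time of day over each of the last `days` days.
    (Exact-timestamp lookup is a simplification for the sketch.)"""
    samples = [load_history[now - timedelta(days=d)]
               for d in range(1, days + 1)
               if now - timedelta(days=d) in load_history]
    return mean(samples) if samples else None

def maybe_scale(now, capacity, headroom=1.2):
    """Actuate: pre-emptively raise capacity if the forecast exceeds it."""
    forecast = forecast_load(now)
    if forecast is not None and forecast * headroom > capacity:
        capacity = forecast * headroom  # stand-in for an autoscaler API call
    return capacity
```

Obviously a real implementation would persist the history and talk to an actual autoscaler, but the sense/store/compute/actuate loop is all there.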
C’mon now, you and I both know your program is discontinuous. A few null or unknown inputs that you didn’t even consider will break it. You’ll have to keep going in there and adding more if/else statements. Your program couldn’t survive 100 years without you debugging it.
Not sure I follow your usage of discontinuous: clearly the load inputs are continuous variables (albeit discretely sampled in time and magnitude, but I don’t see the significance of either of those sampling frequencies, each of which can be chosen to be effectively arbitrarily small). Also don’t understand the relevance of continuity at all since you didn’t reference it in the post I replied to.
> A few null or unknown inputs that you didn’t even consider will break it.
Null handling should be assumed to be a trivial problem here (i.e. just use the last 7 valid values for the time step of interest), and I’m not sure what “unknown” values would be here; can you give an example? I think it’s safe to assume that the inputs have already been normalized to some meaningful scale as an implementation detail of the load sensors. Even if the telemetry scheme changes because the instrumentation or the infrastructure changes, so long as the instrumentation produces values at the same scale and the actuators can still respond to the outputs, the core sense/compute/actuate kernel can be left unchanged.
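Just to show the null handling really is trivial, something like this would do (a hypothetical helper, assuming the per-timestep history arrives as a time-ordered list that may contain None):

```python
from statistics import mean

def mean_of_last_valid(samples, k=7):
    """Average the k most recent non-null readings,
    or return None if there are no valid readings at all."""
    valid = [s for s in reversed(samples) if s is not None][:k]
    return mean(valid) if valid else None

# e.g. mean_of_last_valid([3.0, None, 4.0, None, 5.0]) -> 4.0
```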
I’m assuming you are citing the failure to respond to null or unknown conditions as a lack of adaptation capability, especially since you reference its inability to last 100 years. Good luck finding any piece of software that can last 100 years, though; even more robust machine-learning systems have to deal with problems like data-distribution drift, hardware changes, and so on, and the same goes for wetware (besides turtles?). I’m strawmanning you a little on the appeal to time, since I assume your main point was just to emphasize that “it won’t deal with changing conditions/new scenarios well”, which I think is somewhat fair. On the other hand, I think it is safe to say that what I described does at least respond to the world and in turn affect it in a kind of cybernetic feedback loop, which includes adapting to shifting conditions to meet some desired state. So maybe it would help if you could define more precisely what you mean by adaptation? Not trying to be snarky, just genuinely trying to push you toward a clear definition.
> Also don’t understand the relevance of continuity at all since you didn’t reference it in the post I replied to.
A calculator is a discontinuous graph/program because there is no point to plot when you divide by 0 (undefined). The same is true for your classical program. Over time it won't know how to adapt its f(x) to handle something as crazy as dividing by zero (it can never do the division, and unless you put in an error output for that input, the program will never handle it on its own).
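To make that concrete, a hypothetical calculator's divide function only handles that input because a human anticipated it:

```python
def divide(numerator, denominator):
    # This branch exists only because a programmer anticipated the case;
    # the program will never grow this handling on its own.
    if denominator == 0:
        return "undefined"
    return numerator / denominator
```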
The belief is that these AI programs will be able to make that adaptation. All the changes you bring up that your program can apparently handle are still under your conceptual control (you can map in your mind: if this happens, then that happens). You can prove this to yourself when you read your own statement "inputs have already been normalized" - sure, in a world you control.
I'm suggesting a world where your telemetry monitoring function can change into a Reddit function if need be, so yeah, kinda batshit. Why would it need to do that? Remember, we can't imagine why; that's the whole point.
That's how a program could possibly live a hundred years in a world that is constantly changing. Your program can only exist in a very static and predefined world.
There are some that will tell you that you are an ever-changing function. I try to stay out of that simulation :)