I had a conversation with some friends earlier this week about the reduction of labor due to automation and artificial intelligence. What happens to our society when our Walmarts and McDonald's stop needing a significant human workforce?
My friend, who is a teacher, suggested that his field might be exempt from the trend. But I imagine the purpose and methods of education will be radically different in the coming years and decades.
Look at what's going on in US public education with the move to the Common Core. The focus is on reading and math, with outcomes measured via computer-based testing. Do you really think that when cash-strapped state legislatures realize they could reduce the number of teachers to a single proctor watching over self-paced computer-based instruction, they won't go for it, especially when "the test" (since that's what we're doing here... teaching to the test) is computer-delivered?
This isn't to say that teachers will go away any more than retail clerks, buggy-whip makers, or barrel makers before them went away, but their ranks will thin and where they are employed will change. There will still be people willing to pay for personal instruction, and there will still be courses that are difficult to deliver via CBT alone.
I always thought online / computer-based education, especially for grade school, sounded terrible. But once I saw how little my kids are actually getting in public school, especially with federal meddling a la No Child Left Behind and the Common Core, I came to embrace the idea - at least it would be self-paced and would let more advanced students move ahead rather than sitting bored, waiting on their peers.
Look at what's happening at the university level with MOOCs already. That's only going to accelerate, and university professors are teachers by another name - maybe calling it "automation" isn't exactly right, but we'll certainly see a thinning of the ranks.
Judging by the mistakes my daughter's teachers make, I would happily put her in a class like you describe: self-paced and continually graded. She brings home 40 math problems; it's obvious she has no trouble with some of them, yet they all need to be done. Meanwhile she struggles with subtracting negative numbers. A smarter system would have the software teacher recognise when she has understood something and gradually shift her toward the weaker areas.
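The mastery-based drilling described here can be sketched as a tiny selection loop: estimate a per-topic mastery score from recent answers and always serve the weakest topic until it crosses a threshold. Everything in this sketch (the topic names, the 0.9 cutoff, the moving-average update) is a hypothetical illustration, not any real product's algorithm:

```python
# Toy adaptive drill: track estimated mastery per topic and
# always serve the topic with the lowest estimate.

MASTERY_THRESHOLD = 0.9   # assumed cutoff for "understood"
LEARNING_RATE = 0.3       # weight given to the newest answer

def update_mastery(mastery, topic, correct):
    """Exponential moving average of recent correctness per topic."""
    old = mastery.get(topic, 0.0)
    mastery[topic] = old + LEARNING_RATE * ((1.0 if correct else 0.0) - old)

def next_topic(mastery):
    """Pick the weakest unmastered topic, or None when all are mastered."""
    below = {t: m for t, m in mastery.items() if m < MASTERY_THRESHOLD}
    if not below:
        return None  # everything mastered; stop drilling
    return min(below, key=below.get)

mastery = {"addition": 0.95, "subtracting negatives": 0.2}
print(next_topic(mastery))          # serves the weak topic, not the mastered one
update_mastery(mastery, "subtracting negatives", correct=True)
```

The point of the moving average is exactly the complaint above: a student who answers a topic correctly a few times in a row crosses the threshold and stops being drilled on it, instead of grinding through all 40 problems.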
In the early years, when teaching is more like supervised play, I think teachers serve children better; but in middle school and beyond, I'd vote to let go of all the useless teachers, keep the few that are good, and switch massively to computerised learning.
In California, education has sunk so low that one of my daughter's teachers described the Common Core standards now being taught as a last-ditch attempt to get most of the children who leave school able to do something as basic as read an instruction manual. Half her class are children of parents with no more than a 3rd-grade education themselves, and the other half have parents who work at Google, Twitter, Oracle, etc. When parents were recently surveyed on how to improve the school district, less than 2% of the immigrant parents completed the survey, citing that they couldn't understand the questions, even though the survey had been translated (at an unfathomable cost of $24k) into Spanish. With such a huge range of abilities, a self-paced computerised learning scheme would at least mean that the children who are capable of learning will do so, instead of sitting twiddling their thumbs while the children struggling with last year's work soak up the educators' time.
The question is, if Walmart and McDonald's are run by robots, who will their customers be? At some point we'll need to decide whether we're happy to be a society comprising a few shareholders in Mom's Friendly Robot Company living in a fortified compound, and a vast multitude of impoverished rabble foraging in the surrounding landfill.
"Having to decide" assumes we'll be given a choice in the first place. Technological advances have a mind of their own, as it were, and simply deciding not to be affected by them is not a strategy.
Over a century ago, stop signs and traffic lights were optional for some locations / situations / vehicles, and traveling long distances without using roads was permissible. Now, whether you choose to use a car or not, you must obey traffic laws or else. You don't get a choice in the matter.
Kevin Kelly wrote pretty well about this, a few years back:
...except that Marx was concerned with "mere" capital, while we're talking about automation, robotics, etc. It turned out that capitalism on its own did not create the social unrest Marx predicted (at least in societies which didn't already suck for other reasons), but it may be that automating workers out of being able to earn even a subsistence will in fact do so.
Another way of looking at it is that while Marx identified a real problem, his solution was essentially a bunch of wishful thinking which, when pitted against free-market economics, did rather poorly. But free-market economics is all about managing scarcity; automation points the way to a post-scarcity society (at least post-material-scarcity). Free-market economics doesn't have anything to say about how to handle this -- so we're all in the dark.
It seems to me that as the marginal value of material goods diminishes relative to say the marginal value of not being beaten and robbed we should see a natural progression towards some kind of redistribution of wealth -- driven as much by enlightened self-interest as anything. ("We'll pay you welfare as long as you keep taking your contraceptive pills...")
Marx also treated capital as a way to control the means of production. One could argue that with automation, especially software automation, controlling the means of production is possible without a significant amount of capital.
Seems to me we're already seeing the natural progression you described in your last paragraph - basic income initiatives. This really looks to me like the anti-segregation movement from the 60s - a radical idea at the time, then 50 years later people don't understand how it was possible to have a society without it.
You may be right. Certainly the gay rights movement has succeeded beyond my wildest hopes or expectations during a period of generally right-wing dominance and stalemate (the Democrats have only had two years to pass legislation since 2000), and now it looks like the war on drugs might collapse. We seem to be enjoying one or two tectonic shifts in politics every decade right now. Amazing times.
I'm not from the U.S., so as an outsider it seems to me socialism, which is the prominent political stance in some parts of Europe and a respectable opposition stance in others, was/is constantly demonized there, similarly to how gay people were demonized decades ago ("recruiting" young children etc.).
So a not-unreasonable guess as to "what's next" might be socialism, regardless of whether we're post-scarcity or not, and regardless of the success of basic income initiatives. Maybe ACA will be considered the first sign 50 years from now...
Maybe the idea of shareholders and companies will vanish completely. I'm talking about a time when (if) work will no longer be required of human beings: when automation is so widespread that even occasional work from those who want to do something will be enough to sustain the system. Anyone would have access to any food they want. Anyone could go anywhere. There would be millions of free empty houses all around the world waiting for someone to check in (and all your stuff, or any furniture you want, would be there within a couple of hours). So there would be plenty of resources for everyone, and nobody would have to work. I wonder if it's possible, and what would happen then.
The most interesting question to me is: if/when all menial tasks (and each step up in complexity thereafter) can be performed by robots/AI, are there people who aren't capable of performing any other useful tasks, and if so, what do we do about that?
I would propose we are somewhat in that situation already: many employees are nearly useless in the capacities they are supposed to be working in, because they just aren't capable, though in possibly all cases this is a temporary problem due to inefficiencies in the system. It is logical to assume that problem will only get worse. Hopefully this will lead to advancements in education, and in means of organizing and developing the abilities of those whose skills are obsolete so they can address the many problems we still need to tackle. Essentially, mining and engineering humanity for unrealized potential.
Moving beyond McDonald's workers, think about doctors. Many of the tasks currently performed by doctors will be automated in the next 50 years (and many of them sooner rather than later). Their job description will change drastically, which hopefully means their efforts will be redirected to the areas where we are currently deficient, leading to better care and health for everyone.
Advances in efficiency and productivity are of course a large net positive, as long as the benefits do not flow increasingly disproportionately to a small subset of humanity (as has been the case over the last 20 years in the US, for example), and as long as the obsoleted workers can be redirected effectively to other tasks. Similarly, I am concerned that many human skills could be increasingly devalued as natural resources become the limiting factor, as is already happening to some extent. I don't have an elegant solution, but we really need a better means of balancing the rewards for shrewd trading and management against maintaining a high level of parity between contribution to society and consumption of goods and services (including capital accumulation for eventual consumption). If we don't find one, I could certainly see a future where, despite vast increases in efficiency, wealth and power are far more concentrated in the hands of a few: natural-resource owners (which China seems to realize), shrewd businessmen and politicians, and engineers and scientists in frontier fields that haven't been mastered yet, like AI and biotech. I suppose this is really just more of the same situation we have now, but I am concerned about it being exacerbated by this trend. Hopefully political/economic/social dynamics will keep this effect in check.
Until recently, the forces of natural selection largely applied to the human race. Now that we have become more civilized and are trying to live in harmony, we attempt to respect everyone's right to exist and to ensure everyone is given opportunities to thrive. It will be very interesting to see how policies on reproduction evolve as we approach the manageable population limit. Certainly the current US policy, which offers substantial assistance in providing the basic needs and education for an unlimited number of offspring, cannot persist indefinitely. How will this be limited? How will it be enforced? Will we enact an equal limitation, as China did? Will we adjust policy to reintroduce elements of natural selection, such as requiring some proof of genetic utility to society (perhaps simply the ability to financially provide for the child without external support)? If so, will this trend in automation play a role in shaping the genetic evolution of the human race? I expect it will in one way or another once population growth becomes a problem, just as societal changes did for centuries before our recent socialist policies (and in many ways still do globally). Perhaps this will be part of how the question raised by the parent comment is addressed (though I am certain it wouldn't be McDonald's workers in question; it would be many, many years down the road, not to mention that "McDonald's workers" is a very non-homogeneous set).
AI + substantial assumptions = almost free labour force
You are assuming:
- The cost of materials and energy consumed by an AI manual laborer or scientist would be negligible compared to a human for every beneficial position, including the last position at the margin. That is, we would run out of uses for scientists and laborers before we ran out of resources to make AI ones.
- Zero R&D costs to build that AI, or that they have been sufficiently recouped so that the end-user cost is negligible
- AI can be equivalent (or superior) in every way to a human being in every possible task
- An AI scientist would be able to perform every function as well as a human, and yet would not demand an income, equal rights, etc. Essentially, disabling those parts of the human psyche would have no limiting effect on the capabilities of the AI system in any useful task. That seems unlikely, and at the very least is impossible to know at this point.
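The first assumption is really a break-even condition, and it can be made concrete with a toy calculation: an AI worker undercuts a human only while its amortized hardware cost plus energy cost per hour stays below the human wage. Every number below is an invented placeholder, purely to show the shape of the comparison:

```python
# Toy break-even comparison between an AI worker and a human worker.
# All figures are hypothetical placeholders, not estimates.

def hourly_cost_ai(hardware_price, lifetime_hours, kwh_per_hour, kwh_price):
    """Amortized hardware cost plus energy cost per working hour."""
    return hardware_price / lifetime_hours + kwh_per_hour * kwh_price

human_wage = 15.0  # assumed $/hour

ai_cost = hourly_cost_ai(hardware_price=50_000.0,   # assumed machine price
                         lifetime_hours=40_000.0,    # assumed service life
                         kwh_per_hour=2.0,           # assumed power draw
                         kwh_price=0.12)             # assumed energy price
# 50000/40000 + 2*0.12 = 1.25 + 0.24 = 1.49 $/hour

ai_is_cheaper = ai_cost < human_wage
```

The point of the assumption is that this inequality must hold not just for these rosy numbers, but for the marginal position too: if hardware or energy prices rise as deployment scales, the "almost free" labor force stops at whatever margin the inequality flips.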
And then, if your assumptions hold, and given appropriate population controls so resource contention isn't an issue, I imagine it would be a pretty remarkable upgrade in quality of life for everyone across the board. In that case, the sentiment is not that no one is exempt, but that no one is left out.