Depends on what people are motivated to do - if you just want to build cool things quicker, you will probably be excited by ML.
If you like writing algorithms and enjoy the mental problem-solving aspect of it, then you might not like it.
If your main motivation is to protect your job/livelihood and ensure your existing skillset is in demand, then you will probably be worried.
But in any scenario, the cat is out of the bag - you can't un-invent it, so you might as well get excited and be on the train, rather than be the person who gets left behind.
> But in any scenario, the cat is out of the bag - you can't un-invent it, so you might as well get excited and be on the train, rather than be the person who gets left behind.
there are plenty of other ways to deal with it:
politically push for AI output to be banned, made un-exploitable, or highly taxed
or take the Luddite approach
time will tell how the several billion people about to be made destitute will react
Pushing for AI output to be banned is not a long-term approach IMO unless all countries do it in unison - countries that do not ban AI output will be much more competitive, and those that ban it will be left in the dust. Any country that banned computers in 1979 would have really hurt itself.
Society will need to adapt - it doesn't necessarily make sense to have several billion people doing something that a computer can do more quickly, more accurately and more easily, just to keep people in employment (doing something that a lot of people hate).
> Pushing for AI output to be banned is not a long-term approach IMO unless all countries do it in unison - countries that do not ban AI output will be much more competitive, and those that ban it will be left in the dust. Any country that banned computers in 1979 would have really hurt itself.
this argument is extremely similar to the one that was used against proposals to ban slavery
"if we ban child labour/slavery, or impose workplace rights/strict environmental law/planning restrictions/data privacy law/..., then we'll be outcompeted by those that don't"
unregulated AI fits in there perfectly
these practices persisted for so long because economics trumped ethics... maybe we'll make the correct decision this time
(separately: if we do get to AGI then the slavery comparison suddenly gains an extra dimension... as that's literally what its operators will be doing)
I agree the last/separate point has interesting/complex ethical implications.
But in terms of the first point - the ethics of forcing someone into labour are just not the same as the ethics of automating people's work.
Although some people do genuinely enjoy their work, most people work because they are forced to (to pay for shelter, food and a good life for their kids). Not forced in the same way as slavery, but there is still largely a requirement to work in modern society.
It’s one thing asking people to do tasks that are valuable to society, but asking people to do jobs they hate, which could easily be automated, just for the sake of them “being in a job” seems pointless at a societal level to me. Why would we have people doing things they hate that don’t really deliver ‘real’ value, when we could automate those tasks and have them do something else? (Preferably something they hate less.)
I find the github employees (developers) being excited over copilot to be almost the definition of insane
short-term gain, but in the long term it destroys demand for their own product
developers that don't need to be employed don't need a subscription to github
automating your own company/business out of existence