Modern Artificial Intelligence has been taking jobs for over half a century. The main problem with appreciating the scope of job "losses" is that "AI" keeps getting redefined to exclude the things that AI can do, and only include the things AI can't yet do.
Once upon a time, a machine's ability to pick up a fragile object without crushing it was deemed the domain of AI. Now machines flip burgers and rotate eggs in incubators, but that, somehow, is no longer considered AI. Countless other examples exist.
Funding isn't as available for solved problems, so academics, researchers, and doctoral students are always slanting their theses towards unsolved problems, tagging them as AI, and in so doing the definition of AI shifts slowly along with the titles of those theses.
Folks who do not appreciate that the current definition of AI is an ever-changing aspirational target will keep pushing dates further and further into the future. Like an oasis or mirage, it will therefore never be reached. "AI is over-hyped" is the sort of thing such people will say.
If, on the other hand, you simply consider the most basic definition of AI, something along the lines of "mechanised human thought", then AI has existed for thousands, perhaps tens of thousands, of years… and has been taking (or redefining) jobs every step of the way. An abacus is AI.
Historically, AI tended to redefine labour rather than replace it. That notably changed during the Industrial Revolution, and since then outright replacement has grown as a fraction of the whole. With computing in the 20th century, replacement exploded. The limiting factor at the close of that century was that the vast majority of intelligence had to be explicitly programmed. That changed in the first two decades of this century.
Now we have "Machine Learning" as a field, and extraordinary advances have been, and continue to be, made. What is different, this time in history, is that until now we decided what the machines knew and how they thought. We created them in our own image. Now the machines are learning for themselves, and we increasingly do not know what they are thinking at any point in time. AI once clearly implied Artificial "Human" Intelligence. That is no longer the case. The AIs we are creating now are still artificial, undeniably more intelligent, but also decreasingly Human.
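The shift described above, from intelligence we author to intelligence derived from data, can be made concrete with a deliberately tiny toy sketch. Everything in it (the temperature task, the function names, the midpoint heuristic) is a hypothetical illustration, not anything from the essay or any real library:

```python
# Toy contrast: explicitly programmed behaviour vs. behaviour learned from data.
# The task, names, and "midpoint between classes" heuristic are all illustrative
# assumptions, not a real ML algorithm of record.

# 1. Explicitly programmed: a human author hard-codes the rule in advance.
def classify_by_rule(temp_c):
    return "hot" if temp_c > 25 else "cold"

# 2. Learned: the decision boundary is derived from labelled examples,
#    so the programmer never states the threshold directly.
def learn_threshold(examples):
    # examples: list of (temperature, label) pairs
    hots = [t for t, label in examples if label == "hot"]
    colds = [t for t, label in examples if label == "cold"]
    # Place the boundary midway between the coolest "hot" and warmest "cold".
    return (min(hots) + max(colds)) / 2

data = [(30, "hot"), (28, "hot"), (10, "cold"), (15, "cold")]
threshold = learn_threshold(data)  # 21.5 for this data

def classify_learned(temp_c):
    return "hot" if temp_c > threshold else "cold"
```

Note that the two classifiers already disagree at 25 °C: the hand-written rule says "cold" while the learned one says "hot", and nobody explicitly chose the learned boundary. Scaled up by many orders of magnitude, that is the essay's point about no longer knowing what the machines are thinking.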
With all of that as a backdrop, if your period of interest is 2050-2100, then realise that unless you are on the cutting edge of machine learning, your current understanding of AI is almost completely outdated and irrelevant.
Society is increasingly (already almost blindly) trustful of AI. We believe the results calculators spew out. We turn right at the next intersection when the SatNav tells us to. We believe that the primary function of search engines is to help us find answers to questions. Unless that changes, for some unpredictable and doubtful reason, we will increasingly submit ourselves to whatever AI decides for us.
When the AI was completely Human-like, the decision to trust what it did was a rational one. When the AI is no longer Human-like, does that decision remain rational?
"Will AI take jobs or create jobs?" was a good pre-2000 question. A better 2050-2100 question might be "What role, if any, does AI see for Humanity?"
The whole issue of jobs may be irrelevant if Humans have been deemed obsolete, and "goal-seeked" to zero.