Will we still have jobs after AI?

Technological progress and resistance to change

Public concern about the consequences of technological advancement is nothing new. As a matter of fact, humans are anthropologically far more averse to change than they are keen on it. Preserving the status quo can easily be read as a guaranteed permanence within one’s comfort zone, whereas a deliberate step towards change and innovation requires us to consider the chances of something not going as planned. This is what we commonly call “risk aversion”.

In this context, the first and second industrial revolutions are probably the greatest example to bring to the table. Progress in mechanics from 1760 onwards, and the invention of the combustion engine later on in 1854, enabled for the first time the creation of economies of scale, bringing a substantial increase in productivity and therefore lower prices and greater purchasing power for the middle and lower classes.

Although the long-run consequences of the first and second industrial revolutions are now clear from historical data and the overall development of society over the last two centuries, back then, just as now, automation and machinery were often seen as a threat to employment and social cohesion. Popular protests turned industrial machinery itself into an object of violence and vandalism, generating thousands of strikes around Europe and consequent repression by national governments and regimes. A fitting example comes from the first decades of the 19th century in Great Britain, when the “Luddites”, a secret organization of textile workers, came to embody this wave of protest. Their actions included attacking factories and destroying textile equipment to denounce the unfair competition embraced by companies, leading in some circumstances to direct and bloody clashes with the British army.

In line with what we discussed above, and without diving into the complexity of a specific case, violence and strikes can be read as symptoms of society’s resistance to change and risk aversion: people were – and still are – averse to the risk of losing their jobs to machines and their owners. Such reactions might be justified, or at least understood, considering that only a very small portion of the global population had access to education and, even more importantly, many breakthrough scientific discoveries had not yet been made. In particular, the absence of complex microeconomic and macroeconomic models made it impossible for states to provide society with realistic predictions, so technological progress was entirely economically driven, leaving the population in doubt, uncertainty and fear about the future.

Generally speaking, most of the groups striking against technology and automation were right: the vast majority of them really were going to lose their jobs. What they didn’t know is that their labour would soon be needed somewhere else. Taking the farming industry as an example, the share of the labour force working in agriculture fell dramatically, from 50-60% in 1700 down to approximately 2-3% today (Source: World Bank). This phenomenon, known as “labour displacement”, saw millions of workers worldwide shift from farms to mines, factories, retail and services, so that employment levels remained stable or even grew thanks to the rise in birth rates that followed the first industrial revolution.

Seen from a historical point of view, this suggests that the economy and the labour market were able to sustain and adapt themselves through several phases of disruptive innovation across the first, second and even third industrial revolutions, acting as complex ecosystems whose variables and balances are, at least in the long run, self-regulating.

So, can we trust history and assume the same will hold for the fourth industrial revolution? Hard to say, but many believe that this time will be a whole new story.

Is this time actually different?

The third industrial revolution came with unforeseeable and disruptive innovations. Throughout the 20th century, mathematics and physics exponentially improved our scientific understanding of the world, while statistics started providing us with significant predictive power – a tool to forecast the likelihood of future events. These factors, together with progress in electronics and the invention of computing, extended human skills beyond their technical and physical limitations, enabling relentless execution of repetitive and heavy tasks in the case of robotics, and fast, precise calculation in the case of computers. Lastly, the invention of the internet and its arrival in households at the end of the century boosted the commercial use of all internet-related technologies and services, rolling out the red carpet for the next industrial revolution.

As promising as this might sound, extending human capabilities is still a far cry from Hawking’s singularity and Asimov’s novels. What we are experiencing right now is more of a transition towards total automation, one in which robots complement people’s jobs by collaborating with them; this phenomenon is often described by the neologism “cobotics” – human-robot collaboration, where a “cobot” is a robot specifically designed to work alongside humans. In other words, our jobs are increasingly supported by robots and computers that help us achieve greater quality and efficiency in the production of value, which today is especially true of jobs requiring low-skilled workers. On the other hand, highly skilled judgement roles are not going to be taken over by computers any time soon, as they require creativity, ethics, philosophy and emotional understanding. But what does “soon” mean?

Ground-breaking artificial intelligence technologies have been gaining momentum over the last decade, provoking media hype and making it hard for society to get a fair and realistic view of the state of things. Complex technical notions such as neural networks, machine learning, deep learning, natural language processing and cognitive computing are just some of the terms standing under the AI umbrella and finding space in our daily lives through TV, social media and newspapers. Yet the average understanding of these topics is very limited and prone to misunderstanding, causing the public to build on exaggerated or underestimated interpretations.

On the other hand, what is easy to grasp for most people is what we already hold in our hands: virtual assistants like Siri and Alexa, face and voice recognition features, word suggestion on keyboards and so on. Although these commodities give the general public a concrete idea of the potential of such technologies, they are not very good at shaping the collective imagination about the consequences they will have on society and its development. One way to better understand the possible future implications is to look at less exposed contexts in which AI is currently being applied, and that is what we will do now.

The US start-up “LawGeex” recently succeeded in creating software for the review of legal and commercial contracts; thanks to AI, this online tool can perform a semantic analysis of an uploaded contract in just a few seconds, proposing a list of edits to the attorney, who then accepts or discards them. So far, the software has been shown to reduce the required workforce by 65 to 70%, but of course that is not the whole story. Each lawyer accepting or discarding the edits proposed by the platform is actually feeding a machine learning algorithm with precious data, training the system to become more accurate, efficient and independent over time, and hence potentially contributing to the elimination of their own job in the future.
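The feedback loop described here can be sketched in a few lines of code. The following is a minimal, purely illustrative model – LawGeex’s actual system is proprietary, and the feature names and data below are invented – but it shows the mechanism: every accept/reject decision by a lawyer becomes a labelled training example, and the model’s confidence grows with each one.

```python
import math

class EditReviewModel:
    """Tiny online logistic-regression learner over proposed-edit features.

    A hypothetical stand-in for the kind of model that learns from
    lawyers' accept/reject decisions.
    """

    def __init__(self, n_features, lr=0.5):
        self.w = [0.0] * n_features  # one weight per feature
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate

    def predict(self, x):
        """Estimated probability that a lawyer would accept this edit."""
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def record_decision(self, x, accepted):
        """Online update from one lawyer decision (accepted: 1 or 0)."""
        error = accepted - self.predict(x)  # gradient of the log-loss
        self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
        self.b += self.lr * error

# Invented features: [edit_touches_liability_clause, edit_touches_deadline]
model = EditReviewModel(n_features=2)
for _ in range(200):
    model.record_decision([1, 0], accepted=1)  # lawyers keep accepting one kind
    model.record_decision([0, 1], accepted=0)  # ...and rejecting the other

# The model is now confident about liability edits and wary of deadline edits.
print(model.predict([1, 0]), model.predict([0, 1]))
```

The point of the sketch is the last loop: the humans reviewing the output are, decision by decision, making the system better at doing their job without them.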

What about supermarkets? Something is happening in physical stores too. Competition from big online retailers is making it hard for local stores to obtain reasonable profit margins, forcing physical retailers to cut costs by implementing automated operational processes that require less human workforce. Self-checkout tills are a common example, even if they do not necessarily involve AI. Things are different at the first physical Amazon store, “Amazon Go”, opened in Seattle, which already takes advantage of AI – gathering real-time data from hundreds of cameras and proximity sensors – to let customers walk in with their smartphones, grab the products they want and simply walk out without checking out.

Another field in which AI is having profound repercussions – though often not evident to the general public – is transportation and logistics. Self-driving buses are being tested in Sweden, Australia and Singapore; autonomous trucks are already used in private industrial complexes and tested in low-traffic areas in several countries. Barcelona’s Metro line 9, together with metros in Singapore, Sydney and Copenhagen, already runs autonomously without onsite human supervision. By now even airplane autopilots can take off, fly and land from point A to point B, with military drones as the perfect example. Even self-driving civil vehicles are already on sale and have proved safer than human drivers; what we still lack is the infrastructure and a legal framework for this to work systematically.

Now, if we briefly reflect on the different types of professionals involved in the cases above, it becomes clear that not only clerks and low-skilled workers are threatened by technological progress. Lawyers need to study five to seven years and then pass the bar in order to review legal and commercial contracts. Licensed bus and truck drivers must undergo long, demanding training before starting work. Airplane pilots need to study a good amount of physics and mechanics before even stepping into a plane. And the list extends to surgeons, designers, pizza makers, testers and many more.

What we often tend to ignore is that technological progress does not unfold as a continuum, but rather as an exponential curve: it can take six years to develop 1% of a technology, and only six more years to develop the remaining 99%. Going back to self-driving cars, it is realistic to assume that further improvements in technology and infrastructure, together with growing adoption of self-driving cars, will reduce the complexity of the overall road transport system.
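The arithmetic behind that “1% then 99%” intuition is worth making explicit. The six-year figures are the article’s hypothetical numbers, not measured data, but under the assumption of a constant doubling time, the jump from 1% to 100% takes a fixed number of doublings regardless of how long the first 1% took:

```python
import math

# With exponential growth at a constant doubling time, going from 1% to
# 100% of a capability requires log2(100/1) doublings in total.
doublings_needed = math.log2(100 / 1)      # ~6.64 doublings

# If, as in the article's hypothetical, the remaining 99% takes six years:
years_for_last_99 = 6
doubling_time = years_for_last_99 / doublings_needed

print(f"{doublings_needed:.2f} doublings, one every {doubling_time:.2f} years")
```

In other words, under these assumed numbers the capability would be doubling roughly every eleven months, which is why the final stretch feels so abrupt to an observer who extrapolated linearly from the slow first years.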

As a matter of fact, autonomous vehicles currently operate in a very complex ecosystem in which thousands of variables come into play: the trajectories and speeds of nearby cars, traffic lights, road signs and lane lines, people crossing the street, asphalt humidity and so on. Among all these factors, the ones related to human behaviour are also the least likely to be reliably predicted by a computer: a driver falling asleep, one suddenly turning left, one exceeding the speed limit or one failing to give way at the stop line are just a few examples. In other words, the fewer human-driven vehicles we have on our roads, the more predictable our traffic system becomes for computers, the lower the number of car accidents, the lower the cost of car insurance, and so on in a chain reaction that will transform the whole ecosystem.

So, would you say this time is going to be different? The most realistic answer is that we just don’t know; but the risk of disruptive consequences exists, as several experts and pioneers of the technology sector have warned, and that risk is too big to be ignored.

Approaches to automation

Surveying the existing points of view – opinions that inevitably evoke political ideologies – three main approaches to automation and AI-driven technological progress emerge:

  • A strongly liberal and passive approach;
  • A liberal and proactive employment-oriented approach;
  • A socialist approach.

Interestingly enough, when talking about AI and technology, the conservative school of thought does not seem to have gained a relevant share of consensus; in practice, almost nobody wants to hold back technological progress on the grounds that it is dangerous or negative for our society. Conservatism gains a stronger position, however, when it comes to ethical and moral questions about AI: is it OK to have 100 cameras inside a supermarket? Is it OK to be constantly listened to by our smart home devices? And yet nobody seems to care enough about privacy to fight back.

The strongly liberal and passive approach trusts the cyclical nature of economies and markets, assuming that, just as in the past, the labour market will autonomously find its balance – often with an argument such as “old professions will disappear, and new types of jobs will be created”. Notably, the media and public opinion show concern about such passive, non-interventionist policies.

Moving on to the liberal and proactive approach, those who share this set of opinions assume that technological disruption will have profound consequences on the labour market, and that we therefore need to increase incentives to private businesses in order to create more jobs. Mostly, such incentives should come in the form of reduced taxation, which would allow more investment and consequently higher employment rates.

Finally, the socialist approach is based on a hypothetical transition from a work-based society to one that prioritizes welfare, quality of life, and collective and personal development. It involves the belief that AI technologies will irreversibly reduce the global need for labour to a level at which full employment is no longer a realistic, pursuable goal. For this transition to happen, its proponents argue, it is essential to gradually detach income from the act of working, linking it instead, as a right, to citizenship and existence itself: universal basic income (UBI) as a right to economic subsistence.

Countries embracing this vision, for example Sweden and Finland, have already started – at least experimentally – lowering daily working hours and providing some form of basic income. In such a model, cutting the number of working hours means redistributing the existing demand for labour among citizens while increasing the tax pressure on automation. This would give individuals more free time for further education and private life, enabling them to do more complex and specialized jobs and to contribute to the development of society. Among the experts supporting this vision we find prestigious names like Joseph Stiglitz, Amartya Sen, Elon Musk, Mark Zuckerberg and Richard Branson – endorsements that might explain the broad media coverage such opinions have received at a global level.

Personal reflection

In my opinion, forecasting the development path of AI-related technologies and their consequences on society is close to impossible. Yet the chances of a radical change in our socio-economic system over the next 10-20 years look real, and likely, in the opinion of a significant portion of the experts and academics working in this sector.

If this were the risk of an epidemic about which the world’s top epidemiologists had expressed their worries, national health systems would most likely – and legitimately – start ordering millions of vaccine doses from pharmaceutical companies as a matter of national social security. But wouldn’t a 10% drop in the national employment rate cause greater socio-economic damage than a flu epidemic? Probably it would, yet most of us and our governments are simply ignoring the risk and the catastrophic consequences that this scenario would bring with it. Going back to the initial question, “Have we gone too far?”, my answer is no, we haven’t.

Consciously approaching AI and its disruptiveness as a chance for prosperity rather than a risk for humanity might be the way towards a better society, and there is still time for this, although the clock hands are spinning fast. Computers and robotics could be our way out of tasks and jobs in which purpose and motivation are missing – and a way into advanced education, art, human relations, public participation, meaningful work and love for what we do.

Enrico P.