The single biggest question around AI isn’t really “How close are we to Artificial General Intelligence?”
It’s “Will AI take my job?”
And as with all tough questions, the truthful answer isn’t one that a great many people want to hear: “yes, and soon” if your job involves manual or repetitive cognitive work, and “potentially yes, eventually” for everything else.
Which is why the more common responses tend to be:
- an oblique “It’ll create more jobs than it destroys!” (unfortunately, enterprise-level AI is sold as a cost reduction tool, and it wouldn’t achieve its objective if it created a new job for every job it eliminated); or
- the gleefully inane “AI won’t take your job, a person using AI will!” which has echoes of the gun lobby’s “Guns Don’t Kill People, People Do” defence – one which, for some reason, judges in gun crime cases never really warmed to.
But you can’t fight progress, as we’ve learnt over generations of technological advancements, from the steam engine to the iPhone. And with every turn of the tech wheel, there will be winners and losers. All you can do is try to stay one step ahead of the wave, and hope that your government has the political will to pass legislation that mitigates its less desirable outcomes.
So, where are we in the job replacement cycle when it comes to AI? And how are governments responding to it? Ronin Legal takes a closer look.
THE AI WAVE
A recent paper published by the National Bureau of Economic Research (NBER) examined the dynamic relationship between labour and capital, and outlined how software is increasingly substituting for labour in production because of its efficiency, thereby reducing labour’s share of income. It found that while labour and equipment are not easily interchangeable, labour can be more readily replaced by software in the production process. The study reinforces the finding that, since the 1980s, investment in technology has reduced the share of income that goes to workers, and identifies software as the main driver of this decline in the labour share.
Another NBER paper, published in 2023, studied the impact of generative AI technology (specifically ChatGPT) on firm valuations and equity returns. It found that the introduction of generative AI and Large Language Models (LLMs) had created value for firms whose labour forces were more exposed to the technology, while significantly hurting firms in other industries. It also concluded that the occupations most affected by generative AI are those with a high share of non-routine cognitive analytical tasks or routine cognitive tasks, while occupations dominated by manual physical tasks have been relatively unaffected.
REGULATORY EFFORTS
United Kingdom
A 2021 report commissioned by the UK government found that 7% of jobs in the UK labour market were at high risk of automation within five years, rising to 30% after 20 years. Notwithstanding calls to pause AI development and adopt a step-change in regulation, the UK took a pro-innovation approach to AI, publishing a consultation paper in 2022 followed by a white paper in 2023 that contained several proposals for regulatory reform. The crux of these proposals was that the country should embrace AI innovation in order to keep pace with technological advances, while limiting its risks and building public trust through the existing regulatory framework. This would not, however, include the establishment of a new AI regulator.
European Union
In July 2024, the European Union formally adopted the EU AI Act, the first major piece of legislation dedicated to AI. It places obligations on organisations deploying AI systems to ensure human oversight, assigns risk levels to different kinds of AI, and has the underlying goal of promoting innovation without infringing fundamental rights, including those relating to employment. Recital 9 of the Act even states that it is complementary to Union law on employment and the protection of workers, seemingly promising that such rights are not to be transgressed.
United States
In October 2023, President Biden issued an executive order intended to promote AI development while establishing guidelines for federal agencies to follow when designing, acquiring, deploying, and overseeing AI systems. Section 6 of the order expressly includes protections for workers and directs the US government to identify the implications of AI deployment for the workforce, including job displacement. It also states that AI must be deployed in line with the ‘principles and best practices’ published by the Secretary of Labor, so as to secure the rights and well-being of workers.
As Ronin Legal has covered in a prior article, writers, artists, and performers in Hollywood have already begun to acknowledge the significant potential AI has for job replacement in their fields. Hollywood labour unions such as the WGA and SAG-AFTRA have even gone on strike to ensure that adequate protective clauses are included in their standard guild agreements to safeguard their members against replacement by AI.
CONCLUSION
Regulatory efforts related to AI are gathering pace but, outside the EU, remain at a nascent stage, as the world comes to terms with this groundbreaking technology. Perhaps the biggest challenge in regulating AI is deciding what to regulate. AI presents a cornucopia of legal issues that cut across the jurisdictions of individual governmental agencies and regulators: job replacement, discrimination and bias, data security and privacy breaches, pornographic deepfakes, copyright infringement, unauthorised commercial use of celebrity likenesses, social manipulation, political influence and election fraud, and so on.
Regulating such a multifaceted threat in a meaningful, co-ordinated manner, across various regulators and agencies, is difficult, time-consuming, and politically tricky. It does not help that the legal systems, structures, and regulatory frameworks of most countries were built to respond to the challenges of the industrial era, and it takes a certain imagination and legal dexterity to apply them to the rapidly morphing face of the digital era.
Authors: Varun Alase, Shantanu Mukherjee, Shruti Gupta