
AI And The Law: The Rapidly Evolving Legislative Landscape



The AI landscape is continuously, and rapidly, changing. New technologies pop up and disappear all the time. Sources of data are discovered and shut down on a regular basis. Billions of dollars are invested in start-ups, and thousands of job losses are announced seemingly every day. It can be difficult to keep up with all the changes (although following Aiifi is a good start!), but some changes deserve more attention than others. One such example is the recent string of announcements reshaping the legal environment surrounding artificial intelligence.


Safe, Secure, and Trustworthy AI


On October 30th, US President Joe Biden signed a sweeping Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. Its stated goal is to promote the safe development and use of AI in the "interests of the American people." That is a very broad aim, and there are obvious questions about how it will work in practice.


The Bletchley Declaration


Meanwhile, UK Prime Minister Rishi Sunak hosted the AI Safety Summit at Bletchley Park, bringing together industry leaders and government officials to discuss the potential for catastrophic harm to humanity from AI. The summit led to the signing of the "Bletchley Declaration", bringing signatories including the US, the EU, and China into alignment on how to monitor and control "frontier AI".


The EU View on AI Law


The EU, not wanting to be left behind in this race to legislate, is looking to bring in sweeping new laws to ensure AI is overseen by people rather than automated systems, to prevent harmful outcomes. The European Commission first proposed AI legislation in 2021, but that was essentially a framework for identifying and classifying AI systems. At the time of writing, there are no EU laws in force controlling the development or use of artificial intelligence. Rather, the first push is to register AI systems and ensure they follow existing legal frameworks within the bloc.


Why Legislate Artificial Intelligence?


So what does all this mean? In truth, it will probably do very little to change the trajectory of artificial intelligence. Business interests have long been happy to skirt their own governments by moving production offshore to lightly regulated jurisdictions, using complex legal structures to take advantage of tax havens, or simply cheating the system outright.


Why, then, are governments across the globe rushing to legislate AI? There are likely many answers to that question, but one thing is certainly true: the threat from "runaway AI" is very real. While the public focus has been on generative AI such as ChatGPT and the potential for job losses, many other applications are far more worrisome. Nefarious actors might use AI to develop biological weapons beyond our current ability to defend against them. Much of our strategic infrastructure (think power grids, air traffic control, and telecommunication networks) runs on software. While we celebrate the application of AI in these areas, with developments such as self-healing networks, we are equally vulnerable to autonomous AI designed to attack that same infrastructure.


Why Current AI Legislation Will Fail


These recent attempts by governments to protect society from harmful applications of AI are too little, too late. The advancements in AI over the last two years have been astounding, and these moves are akin to closing the stable door after the horse has bolted. Even if requiring companies in Europe and the US to register, authenticate, and share their AI with governments works for compliant firms, groups or individuals with bad intentions will, almost by definition, simply ignore the law. Even well-intentioned developers will likely move offshore to avoid the red tape associated with these laws. The development of AI does not require the laboratories, heavy equipment, or hard-to-obtain supplies that have historically made dangerous technologies simple to regulate. Anyone with a laptop and a rudimentary understanding of Python can leverage open-source libraries and build AI models from anywhere on the planet.
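To make that point concrete, here is a minimal sketch, assuming nothing more than a laptop with Python and the open-source scikit-learn library installed, of training and evaluating a working machine-learning model in about a dozen lines:

```python
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load a small dataset of handwritten digits that ships with the library:
# no downloads, no special hardware, no permission needed from anyone.
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a classifier on an ordinary laptop; this finishes in seconds.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Evaluate it on held-out data.
print(f"Accuracy: {accuracy_score(y_test, model.predict(X_test)):.2f}")
```

Nothing in this workflow involves a laboratory, a licence, or hard-to-obtain supplies, which is precisely why registration-based regulation struggles to reach it.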


Even if AI legislation were to succeed exactly as intended, it would not necessarily be a good thing. One obvious concern is stifling innovation. This applies both to specific nations and to humanity as a whole. If the US, for example, becomes a highly regulated jurisdiction for anyone looking to build artificial intelligence systems, the most likely outcome is that other countries become the global leaders in AI development. Already, many Western countries lag behind the likes of Russia and China in terms of technology development. Would more legislation help in that regard?


Moreover, some of the potential benefits of artificial intelligence lie in hugely complex areas such as biotechnology and healthcare. Research in such fields often requires access to academic institutions, medical equipment, and vast funding. This is something that would be much better served by a joined-up, cohesive, global approach. A nation-by-nation approach to legislation will almost certainly not help humanity achieve these AI benefits.


How Can We Protect Ourselves From AI?


Rather than legislating the outcomes of AI, a more holistic approach is urgently needed to address the legitimate concerns around artificial intelligence. Can we build strategic infrastructure that is not so vulnerable to AI hacking? Can we de-digitalize some areas that are particularly vulnerable to attack? Whatever we do, to have any chance of preventing catastrophic outcomes, a truly global response is needed. The USSR and the USA were able to agree on nuclear non-proliferation at the height of the Cold War; a similar approach may be needed to protect humanity from some of the more extreme outcomes of AI. I have no doubt that artificial intelligence will benefit humanity in the long run, but executive orders and registration frameworks will do little to offset the downsides.
