29 Feb

Assessing the Global Regulation Landscape, and How to Get Your House in Order

David Bartram-Shaw

The current Generative AI boom has led to many organisations racing to drive the adoption of Large Language Models (LLMs) within their business.

However, alongside this awareness and intrigue from business leaders, there is a heightened level of anxiety from regulators, legislative specialists, lawyers and governments about potential infringement in areas such as ethical standards and intellectual property rights.

In November 2023, 28 countries – including representatives from the US, EU and China – came together in the UK to reach a world-first agreement around the opportunities and risks posed by AI.

At the summit it was agreed that there was an urgent need to understand and collectively manage the potential risks to ensure AI is developed and deployed in a safe and responsible way. This momentous gathering highlights just how high the stakes are surrounding this fast-evolving area of technology and that with great power comes even greater responsibility.

In this blog we will take a look at the emerging regulatory landscape and explain what businesses can do to ensure they are ready as some key milestones approach.

Governments Around the World Are Taking Notice of AI Adoption

With the uptake of AI accelerating at an unprecedented rate across multiple sectors, it’s not surprising that governments are rushing to formalise their positions on how the technology can be implemented safely. Let’s take a look at some of the key milestones that are coming up:

UK

The UK government launched an AI white paper in March 2023 that aims to drive responsible innovation while maintaining the public’s trust. Recognising the real social and economic benefits that AI can deliver, the UK government wants to take an adaptable approach rather than introduce ‘heavy-handed legislation which could stifle innovation’. This has culminated in the latest paper, ‘A pro-innovation approach to AI regulation: government response’. With the UK aiming to be at the forefront of AI safety, the paper says, it wants to foster an environment where businesses can “remain agile while remaining robust enough to address key concerns around potential societal harms, misuse risks, and autonomy risks”.

It’s described as a more laissez-faire approach, with “ease of compliance” championed to a greater degree than in other countries. By the end of March 2024, UK regulators have been tasked with issuing practical guidance to organisations that is tailored to their sector, as well as resources such as risk assessment templates. Further down the line, legislation could be introduced to ensure consistency among regulators.

EU

The EU announced the details behind the EU AI Act at the end of 2023, billed as the world’s first comprehensive AI law. The framework, first unveiled in April 2021, proposes to classify AI systems according to the risk they pose to users, and to use that classification to determine how much regulation they require. Generative AI is addressed specifically, with the Act proposing that such systems comply with transparency requirements, are prevented from generating illegal content, and publish details of any copyrighted data used for training.
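
To make this tiered model concrete, here’s a minimal sketch (in Python) of how an organisation might triage its own AI inventory against the Act’s commonly cited tiers of unacceptable, high, limited and minimal risk. The system names and tier assignments below are illustrative assumptions, not legal determinations:

```python
from enum import Enum

# The Act's commonly cited risk tiers; the obligation summaries are
# paraphrased illustrations, not legal text.
class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment, registration and ongoing monitoring"
    LIMITED = "transparency obligations, e.g. disclosing AI-generated content"
    MINIMAL = "no obligations beyond existing law"

# Hypothetical triage of an internal AI inventory against the tiers.
use_case_tiers = {
    "social-scoring-engine": RiskTier.UNACCEPTABLE,
    "cv-screening-model": RiskTier.HIGH,
    "marketing-chatbot": RiskTier.LIMITED,
    "email-spam-filter": RiskTier.MINIMAL,
}

for system, tier in use_case_tiers.items():
    print(f"{system}: {tier.name} -> {tier.value}")
```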

US

US President Joe Biden signed an executive order on the safe deployment of AI at the end of October 2023, covering areas including privacy, consumer protection and workers’ rights. The order also addresses the issue of deepfakes, with guidance to watermark AI-generated content, and requires companies developing AI models that pose a threat to security or health and safety to share test results with the government before they are released.

China

China is currently rolling out AI regulations that are set to transform both how the technology is built and how it is deployed internationally. In August 2023, the first law that specifically targets Generative AI was launched with new restrictions for companies regarding both the training data used and the outputs. The Cyberspace Administration of China (CAC) stated that any language or data that could disrupt national unity would be banned from use in the training of LLMs, while data security and copyright were identified as top concerns.

These government approaches to AI highlight safety and privacy as key areas that need prioritising, alongside concerns over bias and explainability.

Explainable AI, which aims to make the workings and outputs of models transparent and understandable to humans, is key here, as it helps to make those outputs more trustworthy.

Being able to understand, explain and therefore trust the outputs of your AI solutions is almost as important as the results themselves.
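
As a rough illustration, the sketch below uses the open-source shap library on a stand-in scikit-learn model to show the kind of output explainability tooling provides: per-feature attribution scores revealing which inputs drove a given prediction. The model and dataset here are placeholders, not a prescribed stack:

```python
# A minimal explainability sketch using the open-source `shap` library;
# the model and dataset are stand-ins for your own AI solution.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribute each prediction to the input features that drove it:
# positive values pushed the model towards that class.
explainer = shap.Explainer(model.predict_proba, X.iloc[:100])
explanation = explainer(X.iloc[:5])
print(explanation.values.shape)  # rows x features x classes attribution scores
```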

It’s Important to Get Your House in Order Ahead of Regulation

These regulations’ definitions will become a lot tighter as they are cascaded down for regulators to create industry-specific standards. So, expect to see regulators for financial services, energy and utilities, manufacturing and more take this top-line guidance and apply their own interpretations. Once defined, these regulations will need to be harmonised at a global level, with cross-border considerations also taken into account for any businesses that operate in multiple territories.

We’ve already discussed some of the steps you can take before adopting Generative AI, including establishing an ethics framework. As these regulations come into force, it will be critical for businesses to have taken the time to break down the legal corpus of any regulation and map it to their existing control environment, ensuring it’s fit for purpose. And, if it isn’t, they will need to have determined how they will evidence these controls and demonstrate them end to end within their organisation.

Use an LLM to Build Your Control Environment

A really smart way of doing this is to use an LLM to cross-reference all of the regulations that apply to your business in the regions where you operate. This will surface instances where standards are higher in some regions and lower in others, and allow you to identify the common baseline you need to meet in order to put the requisite controls in place.
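
As a sketch of what that could look like in practice (assuming the OpenAI Python client, though any capable LLM would do; the model name, prompt and regulation excerpts are all illustrative assumptions):

```python
# Hedged sketch: ask an LLM to cross-reference regulatory excerpts and
# propose a common baseline. The excerpts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

regulations = {
    "EU AI Act (excerpt)": "Generative AI providers must publish summaries "
                           "of copyrighted data used for training...",
    "UK guidance (excerpt)": "Regulators will issue sector-tailored guidance "
                             "and risk assessment templates...",
}

prompt = (
    "Compare the following regulatory excerpts. For each obligation, state "
    "which region imposes the stricter standard, and propose the common "
    "baseline an organisation operating in all these regions must meet.\n\n"
)
prompt += "\n\n".join(f"## {name}\n{text}" for name, text in regulations.items())

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

In practice you’d run this over the full legal corpus in chunks and have a compliance specialist review every output, but the pattern is the same.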

In instances where there are multiple regulators in a sector with different jurisdictions, such as the FCA and PRA, organisations will need to make decisions based on their appetite for risk. In these cases, where the regulations intersect, an LLM can be used to map the requirements into a multi-layered control environment, which can then be ingested into a knowledge graph that maps regulations against systems, important business services and, perhaps most importantly, your data sets.
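
Concretely, that knowledge-graph layer might look something like the sketch below, here using the open-source networkx library; the node and edge labels are hypothetical examples of a schema, not a prescribed one:

```python
# Minimal knowledge-graph sketch mapping requirements -> controls ->
# systems -> data sets. All node names are hypothetical examples.
import networkx as nx

g = nx.DiGraph()

# A regulatory requirement (e.g. extracted by the LLM in the previous step)
g.add_node("REQ-transparency", kind="requirement", source="EU AI Act")

# Internal controls, systems and data sets they govern
g.add_node("CTRL-model-cards", kind="control")
g.add_node("SYS-customer-chatbot", kind="system")
g.add_node("DATA-support-transcripts", kind="dataset")

g.add_edge("REQ-transparency", "CTRL-model-cards", rel="satisfied_by")
g.add_edge("CTRL-model-cards", "SYS-customer-chatbot", rel="applies_to")
g.add_edge("SYS-customer-chatbot", "DATA-support-transcripts", rel="trained_on")

# Trace a requirement end to end: which systems and data sets does it touch?
print(sorted(nx.descendants(g, "REQ-transparency")))
```

Once the mapping exists as a graph, a question like ‘which data sets are in scope for this regulation?’ becomes a simple traversal rather than a manual audit.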

There isn’t a one-size-fits-all approach when it comes to setting up a control environment, so businesses should select a use case with a level of risk they are comfortable with, ensuring it’s one that will allow them to show how they are going to audit their processes and demonstrate that they are meeting industry standards.

Conclusion

As AI comes under increasing scrutiny, businesses will have to keep on top of regulations that will continue to adapt as regulators attempt to keep up with the fast-evolving technology. However, the specific measures they apply will depend on the organisation’s specific use case needs and its appetite for risk.

Once all the relevant regulations and their respective impacts have been taken into account, organisations will need to factor these considerations into a risk framework that is right for their company’s AI project.
