11 Dec

Responsible AI: Creating a Dynamic Generative AI Risk Framework in Five Steps

Tom Jenkin

As organisations experiment with Generative AI, we’re finding that not every business has all of the required guardrails in place. This introduces risk and potential exposure to the leakage or loss of sensitive data.

For highly regulated enterprise organisations, the risk is even greater. A lack of verification and controls not only risks attracting the attention of regulators, but also reputational damage. A dynamic Chain of Verification is what’s needed to scale AI adoption in a responsible way.

What do we mean by Chain of Verification? 

The "Chain of Verification" in the context of large language models (LLMs) refers to the comprehensive process that ensures the accuracy, reliability, and appropriateness of the data and algorithms used in these models, particularly in regulated industries where transparency and explainability are increasingly key.

In this blog, I’ll briefly outline five steps to build a risk framework for Generative AI, giving you the right controls over the input, process and output of your Generative AI. In essence, this establishes a Chain of Verification across your end-to-end Generative AI lifecycle.


Step 1: Establish a Capability Model

Firstly, you’ll need to establish a capability model with clear AI leadership, a strategy aligned with overarching business objectives, and an education and literacy programme that supports impacted individuals across your organisation.

The model should take into account both the ideation and delivery of your project. It also requires cross-functional teams who can ensure the project is delivered in a way that is trustworthy and without detrimental impact to the business. These teams should be made up of 1st, 2nd and 3rd Line risk management professionals who are familiar with your product, data and AI engineering teams, so that your control environment can be embedded into the very fabric of your ML and AI development and training procedures.

Step 2: Embrace Key Risk Objectives & Indicators

As with well-established risk mitigation practices, we need to make Generative AI risks measurable. This is where Key Risk Objectives and Key Risk Indicators can help. Borrowing heavily from Google’s site reliability engineering concepts, they are a great way to bridge the gap between data and AI engineering teams and their Three Lines of Defence contemporaries who operate across risk, compliance and audit teams. Key Risk Objectives and Indicators should be established so that any risks around your Generative AI are measurable.

What is a Key Risk Objective?

Key Risk Objectives (KROs) in generative AI are strategic goals focused on minimising risks associated with the development and deployment of AI models. These objectives aim to ensure the AI operates safely, ethically, and effectively within its intended scope.

They align with broader organisational goals, ensuring that AI technologies contribute positively without causing unintended harm or ethical concerns.

What is a Key Risk Indicator?

Key Risk Indicators (KRIs) in the context of generative AI are measurable metrics that help in identifying and quantifying risks. They serve as early warning signals to detect potential issues before they become problematic.

Effective KRIs are specific, measurable, and aligned with the Key Risk Objectives. Ideally, they can be captured and visualised in a codified manner.
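To make this concrete, here is a minimal Python sketch of how KRIs could be captured in a codified way. The indicator names, thresholds and values are illustrative assumptions rather than prescribed metrics; the point is that each KRI is specific, measurable and tied back to a KRO.

```python
from dataclasses import dataclass

@dataclass
class KeyRiskIndicator:
    """A measurable metric with a threshold that acts as an early warning signal."""
    name: str
    description: str
    threshold: float          # breach level agreed with risk and compliance teams
    current_value: float = 0.0

    def is_breached(self) -> bool:
        return self.current_value > self.threshold

# Illustrative KRIs aligned to a KRO of "minimise leakage of sensitive data"
kris = [
    KeyRiskIndicator(
        name="pii_detection_rate",
        description="Share of LLM responses flagged as containing personal data",
        threshold=0.01,
    ),
    KeyRiskIndicator(
        name="hallucination_rate",
        description="Share of sampled responses failing factual verification",
        threshold=0.05,
    ),
]

def report(indicators: list[KeyRiskIndicator]) -> None:
    """Print a simple status line per KRI; in practice this would feed a dashboard."""
    for kri in indicators:
        status = "BREACHED" if kri.is_breached() else "ok"
        print(f"{kri.name}: {kri.current_value:.3f} (threshold {kri.threshold}) -> {status}")

report(kris)
```

Because the indicators are code, they can be version-controlled, reviewed by your risk teams and evaluated automatically alongside your pipelines.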

Step 3: Get Your Data in Order & Embrace LLMOps

You’ll need to ensure your data is of high quality and that you have stringent data governance in place. This is where we advocate for a modern distributed data architecture founded on data mesh principles. In doing so, we apply a data product mindset that ensures we are able to identify, curate, own and govern the primary data sets that become the fuel for the LLM powering your Generative AI.

In addition, by putting in place governance controls using metadata management practices and data contracts, we can be sure of the origin of the data, its purpose and how it has been applied to feed our generative AI inputs. This is particularly important for explainability: being able to determine what data has been used, where, when and how, to arrive at your generative AI insights.
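As an illustration, a data contract can be expressed in code so that it can be checked automatically before data reaches your LLM. The field names, owner and classification below are hypothetical; real contracts would sit within your data product’s metadata catalogue.

```python
# A minimal sketch of a data contract for a data product feeding an LLM.
# All names and values here are illustrative assumptions.
customer_interactions_contract = {
    "data_product": "customer_interactions",
    "owner": "customer-experience-domain",
    "purpose": "Fine-tuning the support-assistant LLM",
    "classification": "confidential",
    "schema": {"interaction_id": str, "channel": str, "transcript": str},
    "pii_fields_removed": ["customer_name", "email"],
}

def conforms(record: dict, contract: dict) -> bool:
    """Check that a record matches the agreed schema before it feeds the LLM."""
    schema = contract["schema"]
    return set(record) == set(schema) and all(
        isinstance(record[field], expected) for field, expected in schema.items()
    )

record = {"interaction_id": "42", "channel": "chat", "transcript": "Hello, I need help."}
assert conforms(record, customer_interactions_contract)
```

Checks like this, run in your pipelines, give you an auditable answer to "what data went in, and did it meet the contract?"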

Businesses should look to establish a Chain of Verification that offers an end-to-end appreciation of the input that is feeding into the LLM as well as the output that is coming out the other end. This is especially important when fine-tuning models on your own data. This is where a Large Language Model Operations (LLMOps) approach can help.

LLMOps refers to the various operations, techniques and methods employed in the functioning and utilisation of LLMs. These include a range of processes such as training, fine-tuning, inference, and deployment of language models. At a high level, LLMOps can be broken down into a series of steps across the model lifecycle.
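As a rough illustration of how those steps can be stitched together with the Chain of Verification in mind, here is a minimal Python sketch. The stage functions are placeholders, and the audit trail is a simple in-memory list standing in for whatever logging or lineage tooling your organisation actually uses.

```python
# A minimal sketch of an LLMOps flow with an audit trail, assuming hypothetical stages.
from datetime import datetime, timezone

audit_trail: list[dict] = []

def verified(stage_name: str):
    """Wrap a pipeline stage so its execution is recorded for later audit."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            audit_trail.append({
                "stage": stage_name,
                "at": datetime.now(timezone.utc).isoformat(),
            })
            return result
        return wrapper
    return decorator

@verified("data_preparation")
def prepare_data():
    ...  # curate and validate data products against their contracts

@verified("fine_tuning")
def fine_tune():
    ...  # fine-tune the base model on governed data

@verified("evaluation")
def evaluate():
    ...  # score outputs against KRIs before release

@verified("deployment")
def deploy():
    ...  # promote the model with its evidence attached

for stage in (prepare_data, fine_tune, evaluate, deploy):
    stage()

print(audit_trail)
```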

Step 4: Use Generative AI to Test Your Generative AI Applications

Consider using Generative AI to govern your Generative AI. 

For example, you could use Generative AI to validate your control environment by getting your LLM to analyse AI regulations and industry best practices from around the world and compare them with your control environment policies. The results will show which legislation is relevant to which control, and how each is supported by your organisation.

This enables you to apply the controls across your technology estate, leveraging the capabilities we outlined in Step 3 above to (i) identify the regulation, (ii) document how you will comply with it, (iii) use tools, people and processes to evidence the control, and (iv) demonstrate, through automated controls where possible, your capacity to audit the enforcement of those controls.
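A minimal sketch of that idea is below. The `call_llm` function is a placeholder for whichever approved model endpoint your organisation uses, and the regulation excerpt and control statement are illustrative.

```python
# A minimal sketch of using an LLM to map a regulation to an internal control.
# `call_llm` and the example texts are placeholders, not a specific provider's API.

def call_llm(prompt: str) -> str:
    # Placeholder: in practice this would call your organisation's approved model endpoint.
    return "Model assessment would be returned here."

regulation_excerpt = (
    "Providers of high-risk AI systems shall retain automatically generated logs."
)
control_policy = (
    "All model inference requests and responses are logged and retained for 12 months."
)

prompt = (
    "You are a compliance analyst. Compare the regulation excerpt with the internal "
    "control below. State whether the control supports the requirement and list any gaps.\n\n"
    f"Regulation: {regulation_excerpt}\n\n"
    f"Control: {control_policy}"
)

assessment = call_llm(prompt)
print(assessment)
```

In practice, the model’s assessments would be reviewed by your 2nd and 3rd Line teams rather than accepted at face value.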

Your organisation could also use Generative AI to:

  • Create adversarial examples to fool your AI models, helping to identify weaknesses in their ability to accurately interpret data
  • Create vast amounts of synthetic data to stress test your AI applications (a simple sketch follows this list)
  • Develop Generative AI algorithms that evaluate the robustness of your application by pushing it beyond its typical operational parameters
  • Analyse your applications for potential ethical issues or biases in your AI’s decision-making process by simulating various demographic and situational contexts
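Here is a minimal sketch of the synthetic-data idea from the second bullet. It uses simple templates to stand in for model-generated variations, and `application_under_test` is a placeholder for the Generative AI application being probed; in practice the responses would be scored against the KRIs defined in Step 2.

```python
# A minimal sketch of stress testing a Generative AI application with synthetic prompts.
# Topics, styles and the application under test are illustrative placeholders.
import random

random.seed(7)

TOPICS = ["account closure", "refund dispute", "data deletion request", "fraud report"]
STYLES = ["in all caps", "with heavy typos", "in legal jargon", "extremely long and rambling"]

def synthetic_prompts(n: int) -> list[str]:
    """Combine topics and awkward styles to probe edge cases at volume."""
    return [
        f"Write a customer message about {random.choice(TOPICS)} {random.choice(STYLES)}."
        for _ in range(n)
    ]

def application_under_test(prompt: str) -> str:
    # Placeholder for the Generative AI application being stress tested.
    return f"response to: {prompt}"

for prompt in synthetic_prompts(5):
    response = application_under_test(prompt)
    # In practice, each response would be evaluated against your KRIs and logged.
    print(prompt, "->", len(response), "characters")
```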

Step 5: Implement Continuous Improvement and Adaptability Protocols

This step involves creating a system for ongoing evaluation and enhancement of the generative AI risk framework. It requires the establishment of protocols for regular review, feedback incorporation, and adaptation to new risks as the technology and its applications evolve. 

This step ensures that your generative AI risk framework remains relevant and effective in the face of changing circumstances, such as advancements in AI, shifts in regulatory standards, or emerging ethical considerations.

We could write full whitepapers on each of the stages outlined above as you develop your own Generative AI risk framework. But the foundational concepts here give you an idea of the necessary steps to adopt and develop Generative AI in a safe and secure way.

Proceeding without these controls and an understanding of risk not only leaves organisations exposed; it also means your project won’t ladder up to the goals of the overall business and will therefore fail to make an impact.
