In order to better understand the use and impact of AI/ML, including benefits, constraints and risks, the Financial Conduct Authority (FCA) and the Bank of England (BoE) have established a forum to facilitate discussion between the public and private sectors.
The Artificial Intelligence Public-Private Forum (AIPPF)—consisting of financial companies actively developing AI, public authorities and academics—held four meetings and a series of workshops to explore the best path to safe adoption of these technologies within financial services.
They recently released their final report, which is packed full of juicy insights into how AI is evolving and where the main opportunities and risks are for financial services enterprises. You can also read a summary of the minutes from each of these meetings here.
In this article, I’m going to summarise the most important and intriguing takeaways as well as highlight the key actionable next steps.
The forum held its first meeting on 12 October 2020 and sought to understand the challenges and risks of using AI in financial services.
We read through the report and here are the top 10 most relevant takeaways for financial services enterprises looking to adopt AI in 2022.
Despite the constraints of the pandemic, the use of AI, ML and data science in financial services is stable or increasing across the board.
The reason for this expansion is that AI can bring significant benefits to consumers, firms and the financial system. It enables more personalised financial products and services as well as seamless customer journeys and experiences, supported by natural language processing, voice, document image and facial recognition.
Action Point: consider not only how your growing use of AI may create opportunities for your customers and business, but where it can be applied to new use cases and create a more intelligent enterprise that is operationally resilient.
Many of the benefits and risks of AI can be traced back, not to AI systems or algorithms, but to the data that underpin them.
As such, data quality should be the number one priority when it comes to delivering successful data science programs. But there are many challenges here given the complexity of the sources and structures used, as well as the increased importance of documentation and versioning for data and code.
Action Point: pre-existing ways of managing and governing data quality may not always be appropriate for AI models, given the scale and complexity involved. New, systematic data quality processes will likely be needed that are transparent, reproducible, auditable and able to integrate with other processes.
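Such a process can be sketched in a few lines. The checks, field names and thresholds below are hypothetical illustrations, not anything prescribed by the report; the point is that each run hashes its input (so results are reproducible and traceable) and emits a structured log (so results are auditable):

```python
import hashlib
import json

# Hypothetical data-quality gate for records feeding a model.
# Check names and thresholds are illustrative assumptions only.
CHECKS = {
    "no_missing_income": lambda r: r.get("income") is not None,
    "age_in_range": lambda r: r.get("age") is not None and 18 <= r["age"] <= 120,
}

def run_quality_checks(records):
    """Run every check on every record and return an auditable report."""
    failures = []
    for i, record in enumerate(records):
        for name, check in CHECKS.items():
            if not check(record):
                failures.append({"row": i, "check": name})
    # Hash the input so each run is reproducible and traceable to its data.
    digest = hashlib.sha256(
        json.dumps(records, sort_keys=True).encode()
    ).hexdigest()
    return {
        "input_sha256": digest,
        "rows": len(records),
        "failures": failures,
        "passed": not failures,
    }

records = [
    {"age": 34, "income": 52_000},
    {"age": 17, "income": None},  # fails both checks
]
report = run_quality_checks(records)
print(report["passed"], len(report["failures"]))  # False 2
```

In practice the same idea scales up via dedicated validation tooling, but the essentials carry over: declarative checks, versioned inputs and a machine-readable audit trail.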
A distinguishing feature of AI is its ability to process large volumes of ‘alternative’ data from many different sources, often including third parties: satellite images, biometrics and so on.
As the volume of data intended for AI consumption grows, and its sources become more diverse and harder to control, data quality issues are bound to increase. There is also the question of how data ownership and AI ownership are split.
Firms will have to develop AI-specific data standards to help them develop the appropriate governance structures in their organisation to accommodate these new trends.
Action Point: while the financial sector already has data standards and regulation in place, additional incremental standards may be needed. For example, the Alternative Data Council has started to produce standards for the use of third-party alternative data by investment firms.
Most of the risks related to AI models already existed with non-AI models; the challenge is the scale, speed and complexity with which AI is beginning to be used.
There are so many different factors to consider when it comes to AI that the complexity of the task at hand skyrockets, along with the risk: complex inputs, relationships between variables, sophisticated models (e.g. deep learning) as well as a wide array of possible outputs from actions to algorithms to images and text.
As the number of AI models within a given network increases, the complexity increases further.
Action Point: model risk management functions and processes must adapt to the various challenges that AI models introduce or increase. Also consider that risks can arise from how a model is used, rather than the model itself (e.g. customer perception).
Given the complexity and risk, managing AI must consider a wide range of factors.
On the one hand, to ensure that models behave as expected and do not drift, their performance must be monitored and reported on frequently, so that changes signalling the need for retraining are caught early.
On the other hand, to ensure that the models serve the business and users, firms must focus also on consumer engagement and being able to clearly communicate and explain model outputs.
Action Point: existing change management processes in your organisation may be too slow for rapidly changing AI models. Adapt them to meet the need for faster retraining and validation.
You need two key documents, and ideally a method for capturing them dynamically:
- A monitoring plan covering frequency, metrics, mitigating actions etc.
- A change plan setting guardrails, how models are allowed to adapt over time and a documented change process.
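As a concrete illustration of the kind of metric a monitoring plan might specify, here is a minimal sketch of a drift check using the population stability index (PSI), a common measure of distribution shift between training-time and live model scores. The bins, thresholds and data are hypothetical:

```python
import math

def psi(expected, actual, bins):
    """Population stability index between a baseline and a live
    distribution, computed over fixed bins."""
    def proportions(values):
        counts = [0] * (len(bins) - 1)
        for v in values:
            for i in range(len(bins) - 1):
                if bins[i] <= v < bins[i + 1]:
                    counts[i] += 1
                    break
        total = max(sum(counts), 1)
        # Floor each proportion to avoid log(0) on empty bins.
        return [max(c / total, 1e-6) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

bins = [0, 25, 50, 75, 101]        # illustrative score buckets
baseline = [10, 30, 55, 80] * 25   # training-time score distribution
live = [60, 70, 80, 90] * 25       # skewed live distribution

# A common rule of thumb: PSI > 0.25 signals material drift
# and flags the model as a candidate for retraining.
print(psi(baseline, live, bins) > 0.25)  # True
```

A monitoring plan would pin down exactly this kind of detail: which metric, computed how often, against which baseline, and what mitigating action a breach triggers.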
One of the distinguishing features of AI is its capacity to make autonomous decisions. This means that it can limit or even eliminate human judgement and oversight from key decisions, which is a potent and novel challenge for data governance.
AI systems also touch upon various governance functions, which makes it difficult to carve clear lines of accountability around the technology. Skill and knowledge gaps exacerbate this challenge.
Action Point: make a plan for ensuring effective accountability and responsibility for these novel aspects of AI: both its capacity to make autonomous decisions and the blurred lines around who is responsible for the data and AI systems themselves.
Because AI models interact with other risk and governance processes (data governance, model risk management, operational risk management), existing governance structures are a necessary starting point for AI models and systems.
There are different approaches, but overall governance frameworks and processes should be tiered and aligned with each individual use case.
For example, high-risk and high-impact use cases (such as consumer credit) will require more due diligence and more time and resources. Comparatively low-risk use cases like chatbots could be suitable for a more streamlined approach.
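To make the tiering idea concrete, here is a hypothetical sketch of how a firm might map use cases to governance tiers; the risk factors, tier names and review requirements are illustrative assumptions, not taken from the report:

```python
# Hypothetical governance tiers: effort scales with risk.
TIER_REQUIREMENTS = {
    "high": ["model risk review", "bias audit",
             "senior sign-off", "annual revalidation"],
    "medium": ["model risk review", "peer review"],
    "low": ["self-assessment checklist"],
}

def classify_use_case(consumer_impact, decision_autonomy):
    """Map a use case to a governance tier from two coarse risk factors."""
    if consumer_impact == "high" or decision_autonomy == "full":
        return "high"
    if consumer_impact == "medium" or decision_autonomy == "partial":
        return "medium"
    return "low"

# Consumer credit: high impact, fully automated -> full due diligence.
print(classify_use_case("high", "full"))  # high
# FAQ chatbot: low impact, human fallback -> streamlined approach.
print(classify_use_case("low", "none"))   # low
```

A real framework would use many more risk factors, but the principle is the same: the classification is explicit, documented and applied consistently across use cases.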
Action Point: identify and focus on potential risks that are not already covered by your existing governance structures, including where staff may need to be trained. You could develop a cross-functional body with representatives from compliance, audit, data, etc. to address the need for a broader and more diverse set of skills to ensure risks are not overlooked.
A key question is whether an organisation should centralise or decentralise responsibility for AI. There are two aspects to this question: firstly, setting the standards and, secondly, implementing them.
A centralised body for setting governance standards is most likely the best option. This should have a complete view of all AI models and projects so it can set standards for managing the associated risks.
This kind of centralised accountability can be more effective and comprehensive than having multiple business areas setting different (and possibly conflicting) standards within a firm.
However, responsibility for the AI models themselves could be decentralised, with the relevant business area that is using the model being accountable and responsible for the outputs as well as the execution of centrally-defined standards.
Action Point: a federated two-pronged approach to governance is recommended. Firstly, a strategic level that develops overarching standards centrally and, secondly, an implementation level that applies those standards based on use cases at the local level.
Accountable executives in business areas need sufficient understanding of the AI models in use. At the same time, the management functions above them need clear parameters for the performance and outcomes of AI models, including the ability to measure and monitor them.
Given the complexity of the task at hand, a cross-functional approach is required, seeding a diversity of skills and perspectives into the AI governance body and ensuring that a wide range of business functions and units are represented.
Action Point: ensure that your centralised governance body contains representatives from different areas of the business and that it has all the skills needed to cover all the bases of AI governance.
The AIPPF marks only the beginning of public-private collaboration on this topic. Next steps that would support the professionalisation of data science include voluntary codes of conduct and auditing regimes to foster wider trust in AI systems, as well as ongoing regulatory support for the adoption of AI.
Action Point: stay alert to developments in AI regulation and adoption so you can act on new data and recommendations as they emerge, maximising the opportunity for AI in financial services while minimising the risk.
The opportunity with AI and ML is unbounded, but without the right skills in place, and the right focus on the business opportunity, many of these initiatives never make it to production.
Our AI & ML Assessment & Strategy Accelerator is our tried-and-tested framework that will enable your organisation to boost adoption of AI and ML at scale. You can download our free accelerator e-book here.
If you want to be competitive, you need to sort your data constraints, and that's where Mesh-AI can help. Identify the areas in your organisation that require the most attention and solve your most crucial data bottlenecks. Download our Data Maturity Assessment & Strategy Accelerator eBook.
Interested in seeing our latest blogs as soon as they get released? Sign up for our newsletter using the form below, and also follow us on LinkedIn.