With the adoption of ML & AI increasing in financial services, there is a growing level of scrutiny from governments and regulators as to how and where these capabilities are being deployed. As with any use of technology, it is imperative that those who deploy ML & AI govern and control their use cases appropriately, protecting against financial, legal and ethical misuse, with the overall aim of avoiding undue bias. Ultimately, whenever ML & AI is deployed, the outcome needs to be fairly balanced between the customer and the financial services provider.
What exactly is meant by fair is very much a shade of grey. However, firms need to demonstrate that they have been responsible in their creation of ML & AI systems, which inevitably means maintaining an audit trail of events, hypotheses, tests, decisions, impacts and outcomes. Regulation often provides the guiding principles for the controls that firms need to demonstrate in their use of technology, and this is not too dissimilar in the financial services sector and its use of ML & AI.
As discussed at the top of our blog series, the Bank of England has established a public-private forum aimed at answering many of the open-ended questions around how ML & AI can be deployed and governed in financial services. Meanwhile, the PRA has defined model risk management (MRM) regulation built around four key principles of optimal MRM. They are:
1) Model Definition: Define a model and record such model in inventory.
2) Risk Governance: Establish model risk governance framework, policies, procedures and controls.
3) Lifecycle Management: Create robust model development, implementation and usage processes.
4) Effective Challenge: Undertake appropriate model validation and independent review.
Indeed, when organisations hear the words governance and compliance, the earth shudders and hell freezes over. More often than not, governance is associated with slow, cumbersome, box-ticking exercises. However, that does not need to be the case!
This is where MLOps can supercharge ML & AI in financial services… without exceeding governance thresholds and risk appetites!
Whilst it is still a young and evolving concept, MLOps or ML Ops is a set of practices that aims to deploy and maintain machine learning models in production reliably and efficiently. The word is a compound of "machine learning" and the continuous development practice of DevOps in the software field. At its core, MLOps is the standardization and alignment of machine learning lifecycle management practices.
By applying an MLOps approach, firms can take a business problem, identify how data and machine learning can address it, and execute a series of complex, interrelated tasks in a transparent and governed way, with the intention of deploying the result in production to turn business challenges into measurable outcomes.
It is rapidly becoming an essential part of successful data science, ML & AI projects across the enterprise. As a concept, approach and framework, it enables business and technology leaders to demonstrate that there are guardrails in place to govern their development and deployment of ML & AI, whether this be across internal or externally facing business applications and products.
In the context of financial services, we believe there are three key themes as to why MLOps is a logical solution to support the scaled adoption of ML & AI across the sector.
The financial services industry sits on constantly shifting tectonic plates, and it is responsible for managing the global economy. With trillions of dollars moved and transacted each day, even a minor distortion of financial balance can send shockwaves through the economic system.
A poorly configured model operating in production could introduce significant bias against customers applying for a new mortgage product. Alternatively, it could distort financial markets and cause seismic disruption within seconds of being deployed at scale in production.
As such, an audit trail of events explaining the following evidence points can be critical to control and govern risk:
1) Why was the model developed?
2) What was the hypothesis of its impact and intent?
3) How was the model developed?
4) Who created it?
5) What components were used to build it?
6) Which data was used to create and train the model?
7) Where are the testing reports and what did they tell us?
8) How and where was that data accessed from?
9) What was the state of the data?
10) When did all of these events take place?
11) Who approved its deployment into production?
12) Did the model perform as expected in production?
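Evidence points like the twelve above can be captured programmatically rather than in spreadsheets. As a minimal sketch (the record fields, model name and team names below are illustrative assumptions, not any specific vendor's API), an immutable audit record in Python might look like this:

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries cannot be altered after the fact
class ModelAuditRecord:
    """One append-only audit entry answering the who/what/why/when questions."""
    model_name: str
    version: str
    purpose: str            # why the model was developed
    hypothesis: str         # what impact it was intended to have
    developed_by: str       # who created it
    training_data: tuple    # which datasets were used to train it
    approved_by: str        # who signed off its deployment into production
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )                       # when the event took place (auditable timestamp)

# Hypothetical example entry for a mortgage-lending model
record = ModelAuditRecord(
    model_name="mortgage-affordability",
    version="1.3.0",
    purpose="Assess affordability for new mortgage applications",
    hypothesis="Reduce manual underwriting time without increasing bias",
    developed_by="credit-risk-ml-team",
    training_data=("applications_2021_q1", "bureau_scores_2021_q1"),
    approved_by="model-risk-committee",
)

# asdict() serialises the record for an append-only audit log
audit_log_entry = asdict(record)
```

In practice this metadata would be logged automatically by the MLOps pipeline at each stage, rather than filled in by hand.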
Executing these activities as checklists and manual processes is not scalable. As such, MLOps borrows heavily from DevOps concepts and requires the end-to-end build, test and release process to apply the following principles:
1) Stringent version control and configuration management practices.
2) Robust automation across tooling and high levels of trust between teams.
3) Real-time insights to support rapid feedback loops.
4) Cross-functional, multidisciplinary teams that collaborate iteratively.
5) End-to-end traceability.
6) A quality-first approach, with small batches of change.
In addition to the above, MLOps aims to establish an approach founded on inclusiveness: the best of machine-driven automation is harnessed alongside human-in-the-loop (HITL) processes, with approval gates and sign-off procedures established to evidence testing results, sign off releases and support on-demand deployments. All of this can be governed in a way that separation of duties is adhered to, whilst date and time stamps provide further evidence of auditable events.
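A HITL approval gate of this kind can be sketched in a few lines. This is a simplified illustration under assumed rules (independent approver, test report on file); real gates would sit inside a workflow or CI/CD tool:

```python
from datetime import datetime, timezone

def approve_deployment(model_id: str, developer: str, approver: str,
                       test_report_attached: bool) -> dict:
    """Human-in-the-loop sign-off gate for a production deployment."""
    if approver == developer:
        # Separation of duties: a developer cannot sign off their own model
        raise PermissionError("approver must be independent of the developer")
    if not test_report_attached:
        # Evidence requirement: no release without test results on file
        raise ValueError("a test report must be attached before sign-off")
    return {
        "model_id": model_id,
        "approved_by": approver,
        "approved_at": datetime.now(timezone.utc).isoformat(),  # auditable timestamp
    }
```

The returned record can be appended to the same audit log as the rest of the pipeline's events, giving regulators a date- and time-stamped trail of who approved what.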
The deployment and use of AI needs to be responsible in nature. As referenced already, any disruption in the equilibrium of how a model behaves can have disruptive consequences. With MLOps, financial services organisations can ensure two things:
Ensuring that the purpose and use of the model is understood, documented, developed and tracked in a way that derives the intended business outcome. By ensuring the right intentions are tracked and monitored, firms can be more informed about their initial hypothesis and, in turn, be in a richer position to explain "why" and "how" their models are behaving.
By understanding intentionality and having it documented in an auditable manner, firms can address their explainability anxieties.
In financial services, accountability when things go wrong is essential. This is often managed through the assignment of material risk takers who, in the event of things breaking, are held to account and often pulled in front of regulators and government bodies when customers are impacted!
With MLOps, firms can centrally govern and maintain an overall view of what data has been used, how, and in which models. Furthermore, with these audit-driven events in place, those who consume and interact with the data need to be aware of any regulatory controls and policies that must be complied with. This drives better awareness of, and accountability for, people's actions, because if something goes wrong it is much easier to identify the root cause across the MLOps pipeline.
By addressing risk management and responsibility requirements, financial services organisations have ticked two major boxes. However, for ML & AI to work in an enterprise setting, it needs to work at scale. This is another area where MLOps is suitably positioned to address the adoption challenges of ML & AI across the financial services sector.
Stringent MLOps practices enable organisations to scale adoption through:
1) Strong version control behaviours, particularly when teams are experimenting with new models to prove or disprove their hypotheses.
2) Validation of whether retrained models are better than their current incarnations operating in production.
3) Real-time monitoring of the ongoing performance of models, with notifications and alerts in place to trigger awareness when expected behaviours degrade beyond acceptable thresholds of risk.
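The validation and monitoring checks above can be reduced to simple, automatable rules. As a minimal sketch (AUC as the tracked metric and the threshold values are illustrative assumptions; each firm would set its own within its risk appetite):

```python
def should_promote(champion_score: float, challenger_score: float,
                   min_uplift: float = 0.01) -> bool:
    """Promote a retrained challenger model only if it beats the
    production champion by at least min_uplift (e.g. in AUC points)."""
    return challenger_score >= champion_score + min_uplift

def breaches_threshold(live_score: float, baseline_score: float,
                       max_degradation: float = 0.05) -> bool:
    """Flag an alert when live performance degrades beyond the agreed
    risk threshold relative to the score recorded at deployment."""
    return (baseline_score - live_score) > max_degradation
```

Wired into a scheduler or monitoring service, `breaches_threshold` becomes the trigger for the notifications and alerts described above, while `should_promote` gates automated retraining so that a weaker model never silently replaces the one in production.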
By having automation at its heart, firms will be able to scale an MLOps approach across multiple business units, products and services. Ultimately, scaling ML & AI use cases into the thousands, running in production at any one time.
Over the course of this blog series we set out to discuss a number of interrelated areas. By taking a use-case-driven approach, we have been able to demonstrate many of the interwoven challenges facing financial services businesses, whilst illustrating the unbounded opportunities that ML & AI can bring to life: improving customer engagement, optimising how firms operate by reducing manual overheads and ultimately driving better decision making to improve financial returns for their customers.
Whilst barriers persist that hold back full-blown adoption of ML & AI, regulation is not seen as a hindering factor. Data quality, accessibility, antiquated governance practices and legacy systems are seen as the biggest burdens for firms who wish to apply ML & AI capabilities at scale.
In order to address these concerns, we have suggested that firms take a data mesh approach to resolve their data quality, governance and accessibility issues. By combining data mesh with an MLOps framework and modern engineering practices, firms will be able to:
1) Demonstrate the requisite governance and oversight to ensure firms don’t add risk when ML & AI capabilities are deployed.
2) Deliver speed of execution, whilst being able to maximise business value in an accelerated manner.
ML & AI can no doubt enable firms to build innovative products and services that differentiate them from the competition, whilst the business efficiencies it enables introduce possibilities for financial services businesses to further optimise their balance sheets and operating costs. Ultimately, if firms are able to reduce their cost to serve and stand out from the competition, then ML & AI can be a win-win scenario for customers, the wider markets and financial institutions across the world. However, ongoing oversight, governance and risk management practices are essential to ensure its deployment is fair, transparent and explainable.
If you want to be competitive, you need to sort your data constraints, and that's where Mesh-AI can help. Identify the areas in your organisation that require the most attention and solve your most crucial data bottlenecks. Get in touch with us at email@example.com for a Data Maturity Assessment.
Interested in seeing our latest blogs as soon as they get released? Sign up for our newsletter using the form below, and also follow us on LinkedIn.