27 Oct

How a Data Mesh Approach for a Financial Services Organisation Put Data at the Core of Innovation

Tareq Abedrabbo

I’m just going to go ahead and make a bold claim: data mesh is absolutely brilliant in financial services organisations.

Now to back up my claim.

Given the mind-bending data challenges that financial services organisations face (tons of data, highly siloed, complex latency/compliance requirements, hybrid infrastructure etc.), they need an approach that can create flexibility and coherence across their entire estate.

Data mesh is such an approach.

I say this on the basis of having worked in many such organisations. On the one hand, I’ve seen time and time again how monolithic data approaches fail to provide these things. On the other, I have helped these organisations introduce data mesh-style approaches that bridge technology divides and connect siloes, all within a domain- and product-centric mindset and operating model that puts the consumer at its heart.

I’d like to summarise a recent data-mesh-in-financial-services experience to give you an idea of how we think about data mesh and the deep-seated transformation that it can effect when properly supported.

We helped the organisation in question put its data at the very core of its business in a scalable and resilient way, making it widely available for product innovation. This was a massive change from how they had been operating and has really opened the door to rapid innovation.

In this blog, I’ll be explaining the challenges the company was facing, why they opted for data mesh, the principles we applied to ensure that the implementation was a success and what the business outcomes were.

Core Challenges

The company was a very successful online financial trading platform that hosted thousands of financial instruments.

It’s a cliché, but in this case data truly is the core of their business. Everything the business does is centred around trading data, pricing data, risk data and so on.

They had been undergoing a broader digital transformation with the goal of diversifying their offerings by leveraging modern technology to accelerate innovation and experimentally develop a wide range of fabulous new products.

Given the centrality of data to any potential product they could produce, the plan was to put data at the core of this digital transformation such that it was the starting point for all technological and innovation efforts going forward.

In this they came up against a variety of significant pain points.

Firstly, the data, while decentralised across the organisation, was heavily siloed within different teams. These data sets were shaped according to each silo’s requirements, rather than by what made sense for the consumer. In the end, the main way that business users discovered data was just by asking around and hoping they got lucky. As a result, consumers of the data struggled to find, identify, understand and then use the data they needed.

Secondly, the data was too complex and simply too voluminous to be moved to a whole new infrastructure. At the same time, the data was constantly in use, which would have made such an operation much like trying to change the engine of a moving car.

Thirdly, their data strategy had until that point been focused much more on the producers of data, with very little thought given to how data was consumed in the organisation (which is of course the end goal!).

Fourthly, as a result of the previous issues, much of their data was very hard to discover. Consumers would have to queue to make last-minute impromptu requests of the relevant data teams for certain data sets. Even if they succeeded with this, there was no way of knowing if they had the most up-to-date data and the ad hoc nature of the system was highly inefficient, leading to huge amounts of unplanned work for data teams.

Fifthly, much of their most valuable data was hosted on-premises for compliance and latency reasons, yet their development environments (for innovation and new products) were hosted in the cloud. So they faced a major challenge in not only linking the various siloed datasets but also bridging the gap between on-prem and the cloud. This is simple enough to do in a basic, lipstick-on-a-pig kind of way, but to do it in a way that is secure, streamlined, sustainable, highly scalable and highly available is very difficult.

So how did we tackle these issues?

Our Approach

My team and I wanted to come up with a foundational solution that would provide business value in the long term, not just put up some shiny dashboards and be done with it.

There is no simple recipe for taking on such a range of challenges. Instead, we put together a few key principles that we used to guide us through the implementation.

Firstly, we wanted to avoid the ‘low-hanging fruit’ approach that so many consultancies take, which prioritises easy wins, but kicks the can of foundational transformation ever further down the line. Instead, we knew we had to come up with a solution that would address the root of the challenges and would be sustainable and scalable over time.

Secondly, we knew we had to address the lack of focus on the consumers of data and instead put them front and centre of any solution. The whole point is to enable consumers to find data that, by definition, they do not know exists. Making this data easily discoverable and highly available, then, was a major concern.

Thirdly, we knew that scalability was non-negotiable. The way the company had been doing innovation was to speak to half a dozen teams to get tens of data sets, do lots of manual work and then copy the data to the cloud. That is not sustainable for one product, never mind ten or twenty.

Given the company’s challenges, our previous experience of what does (and does not) work in these massive financial institutions and recent evolutions in thinking in the data space, a data mesh approach was the obvious choice.

Data mesh is a decentralised yet federated approach to data. It aims to strike a balance between flexibility and consistency.

This allows for common data capabilities across the business on the one hand, while keeping the data decentralised (i.e. where it already is) on the other.

Critical to its success is its focus on domains: each business domain becomes accountable for producing its own data (in line with centralised governance standards), which reduces bottlenecks and delays produced by more centralised approaches.

Done well, a data mesh is a super-scalable, highly available web of nodes that are each producing and consuming data for and from each other.
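
To make that web of nodes concrete, here is a minimal sketch of what a data product descriptor could look like. Everything in it (the field names, the example product, the in-memory registry) is an illustrative assumption rather than the schema we actually used; the point is that each domain publishes a small, well-described contract that the rest of the mesh can discover.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataProduct:
    """Illustrative data product descriptor: the contract a domain publishes."""
    name: str               # e.g. "prices.fx.spot"
    domain: str             # owning business domain
    owner: str              # accountable team or contact point
    schema_ref: str         # pointer to the published schema
    freshness_sla: str      # e.g. "sub-second", "hourly"
    tags: List[str] = field(default_factory=list)

# Each domain registers its products; consumers browse the same registry.
REGISTRY: List[DataProduct] = [
    DataProduct(
        name="prices.fx.spot",
        domain="pricing",
        owner="pricing-squad@example.com",
        schema_ref="schemas/prices_fx_spot_v1.avsc",
        freshness_sla="sub-second",
        tags=["pricing", "real-time"],
    ),
]
```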

The process of decentralising, democratising and productising data is a quantum leap in enterprise data architecture that opens the door to massive experimentation and innovation.


How Did We Introduce Data Mesh?

The first port of call was to get buy-in for the proposed data mesh approach.

We used value stream mapping to highlight the inefficiency of their existing system and the time it wasted. Whenever somebody needed a new data set, or even an adjustment to an existing one, they would spend a long time queuing to put ad hoc questions to the few central data teams, which created loads of unplanned work for those teams (because they had no way of anticipating the requests).

Then we compared this to our proposed data mesh model, demonstrating that providing clarity around data discovery and making different domains accountable for their own data (rather than a few central teams being bogged down with all data-related requests) would massively enable consumers while freeing producers from much of their unplanned work.

With an enthusiastic mandate secured from the leadership, we then proceeded to identify those datasets that would be fundamental to product innovation.

We could have just strung together a few data silos and stuck a dashboard on top, but this would have glossed over the problem. Instead, we made a conscious decision to start at the source: fundamentally changing how data is produced, managed and consumed from the earliest possible point.

Think about it: if you are a product owner who wants to build a new financial services product, what is the first thing you will do? You need to find the data that will enable it and understand its attributes, form, quality, lineage, relationships to other data sets and so on. So we knew we needed to enable these product owners by making the fundamental data sources highly discoverable and available.
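
As a toy illustration of that first step, discovery over such a catalogue can be as simple as filtering published descriptors by domain or tag. Plain dicts and hypothetical product names here to keep the sketch self-contained; a real catalogue would also expose lineage, quality metrics and sample data.

```python
# Hypothetical catalogue entries; a real catalogue would hold full descriptors.
CATALOGUE = [
    {"name": "prices.fx.spot", "domain": "pricing", "tags": ["real-time"]},
    {"name": "risk.var.daily", "domain": "risk", "tags": ["batch"]},
]

def find_products(domain=None, tag=None):
    """Return the products matching an owning domain and/or a tag."""
    return [
        p for p in CATALOGUE
        if (domain is None or p["domain"] == domain)
        and (tag is None or tag in p["tags"])
    ]

print(find_products(domain="pricing"))  # -> the spot FX prices product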

We conducted a data mapping exercise to identify the datasets the business would require, so they could be made available in the data mesh as a priority.

With the core data sets identified, the next task was to bridge the (largely) on-premises data sets with the cloud-based development environment.

In order to make the fundamental data sets highly discoverable, we needed to break the dichotomy between on-prem and cloud while maintaining the company’s extremely demanding latency and quality requirements.

As a result, we put a lot of effort into a sophisticated solution for streaming data from on-prem to the cloud in a resilient, scalable and low-latency way. This was only possible because we started at the source: the data itself.
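
This post does not name our exact streaming stack, so treat the following as a flavour of the pattern rather than the solution we built: a minimal Kafka-style producer using the confluent_kafka client, with placeholder broker, topic and payload, showing the configuration settings that matter most for the resilience/latency trade-off.

```python
import json

from confluent_kafka import Producer  # pip install confluent-kafka

# Placeholder configuration: the broker address and topic are assumptions, and
# a real deployment also needs schema management, security and observability.
producer = Producer({
    "bootstrap.servers": "onprem-gateway:9092",
    "acks": "all",               # confirm only once fully replicated
    "enable.idempotence": True,  # retries cannot introduce duplicates
    "linger.ms": 5,              # tiny batching window to protect latency
})

def on_delivery(err, msg):
    # In production this would feed alerting rather than stdout.
    if err is not None:
        print(f"delivery failed: {err}")

tick = {"instrument": "EURUSD", "bid": 1.0841, "ask": 1.0843}
producer.produce(
    "prices.fx.spot",
    key=tick["instrument"].encode(),
    value=json.dumps(tick).encode(),
    on_delivery=on_delivery,
)
producer.flush()  # block until outstanding messages are acknowledged
```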

Along the way, we leveraged the fact that, as part of the broader digital transformation, the organisation was restructuring its teams to be cross-functional.

We recognised that this development was perfect for data mesh and jumped on the opportunity! Data projects are multi-faceted and require expertise spanning the business domain, the data itself, the cloud, the product and the end users. Cross-functional teams are the perfect way to bring all of this together, while encouraging cross-pollination and collaboration.

We put a lot of effort into enablement work, helping data people to learn about the business and vice versa as well as helping squads to work together effectively and take ownership of data domains from end-to-end.

Underpinning all of the above changes was a shift in operating model to maximise data enablement.

Previously, data was a core requirement for every single person in the business and, as a result, was the critical bottleneck. The existing operating model had proven to be completely unscalable and, without real data that is widely available, new products would never go beyond the prototype stage.

By making the teams cross-functional, demonstrating the value of data mesh and enabling teams to operate in the right way, we laid the organisational foundation for fundamental data capabilities that are massively scalable.

A Business Example: Pricing

An example of how all of this came together is our work with one of the organisation’s most fundamental datasets: pricing.

We streamed the company’s pricing data to the cloud in a resilient and low-latency way. This is a fundamental data source that any team in the company could then easily use for whatever purposes they want.

Any and all product owners were then able to directly feed high-quality pricing data into new innovation projects in an autonomous, self-serve fashion without having to rely on successfully badgering a central data team.
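
As a hypothetical sketch of what that self-serve consumption could look like (again assuming a Kafka-style transport, with placeholder names throughout), a product team simply subscribes to the pricing stream without filing a ticket with anyone:

```python
import json

from confluent_kafka import Consumer  # pip install confluent-kafka

# Placeholder names throughout; not the client's actual topology.
consumer = Consumer({
    "bootstrap.servers": "cloud-broker:9092",
    "group.id": "new-product-prototype",
    "auto.offset.reset": "latest",  # a prototype usually only needs live prices
})
consumer.subscribe(["prices.fx.spot"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None or msg.error():
            continue
        tick = json.loads(msg.value())
        print(tick["instrument"], tick["bid"], tick["ask"])
finally:
    consumer.close()
```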

What Was the Business Outcome?

By purposefully opting for a deep, fundamental data transformation, we were able to generate far better business outcomes than if we had taken an easier but more superficial approach.

Here are some of the key outcomes:

Highly available, super-scalable, consumer-friendly data sets

The previously bottlenecked, unscalable data operating model was replaced with widespread access to data that is designed from the outset to meet the needs of consumers and that is high-quality, resilient and scalable.

Enabling a cohesive on-prem/cloud hybrid environment

We created an on-prem/cloud hybrid environment that functioned as a single, cohesive environment. It was not two environments stitched together. This cohesive environment featured a single centralised view with centralised observability and controls.

This is super important as it allows the organisation to keep some data on-prem, while at the same time being able to leverage that data to build new products at speed and scale in the cloud.

Reduce unplanned work and accelerate innovation

Data sets that are critical to product innovation are easily discoverable and usable without delay, meaning they are no longer a blocker on product experimentation.

Because the responsibility for producing and preparing the data has been spread out among cross-functional teams in their specific domains, the amount of unplanned work due to ad hoc data requests has fallen through the floor.

Consumers can autonomously access trusted data sources

At the same time, consumers no longer have to ask around and hope that they can find the most up-to-date datasets; they are empowered to discover for themselves data that they know is up to date.

Data now at the core of innovation!

These changes have enabled the organisation to put data in its rightful place: at the core of rapid product innovation in the public cloud. Without this, product teams looking to innovate would have to go round the whole organisation asking for data and hoping that it was what they needed!

Siloes are out, inclusivity is in

A critical but underappreciated benefit is how data mesh fosters inclusivity. Data is no longer a separate silo where magic gurus wave their wands to mysteriously produce what the business needs. Instead, the whole organisation is involved, resulting in a web of nodes that are constantly producing and consuming data for and from each other.

Achieve what was previously impossible

Ultimately, the business can now achieve things that previously would have been impossible.

With data instantly discoverable and available, the floodgates of product innovation have been opened. At the same time, data engineers have much more time and energy to devote to improving their corner of the business, rather than fighting ad hoc fires.

What Were the Key Requirements for Success?

During the project, several key requirements emerged that were necessary at various points to keep things on track.

Automated, self-serve infrastructure

In order to facilitate a truly democratic data mesh, it’s important that the underpinning technology be as seamless as possible, almost invisible.

Using automation to give producers the infrastructure they need to look after their own domains is critical; it is what frees consumers to find the data they need by themselves.
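
To make ‘self-serve’ tangible, here is a toy sketch of the declarative pattern: a producing team files a small request describing the data product it wants to own, and automation derives the concrete resources from it. Every field and naming convention here is an illustrative assumption.

```python
from dataclasses import dataclass

@dataclass
class ProvisionRequest:
    """Declarative request a domain team files; automation does the rest."""
    domain: str
    product: str
    retention_days: int = 7

def plan_resources(req: ProvisionRequest) -> dict:
    """Derive concrete resources from a request.

    A real pipeline would hand this to Terraform or CI rather than return a dict.
    """
    base = f"{req.domain}.{req.product}"
    return {
        "topic": base,
        "schema_subject": f"{base}-value",
        "dashboard": f"dashboards/{req.domain}/{req.product}",
        "retention_days": req.retention_days,
    }

print(plan_resources(ProvisionRequest(domain="pricing", product="fx.spot")))
```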

Strong sponsorship

Any deep transformation requires commensurately enthusiastic sponsorship from the powers that be. We put a lot of effort into demonstrating the value a data mesh could deliver so that we could get the mandate we needed to make fundamental changes.

Cross-functional teams

Cross-functional teams are critical for addressing the problem of data silos within an organisation. Horizontal structures instead need to be verticalised so that each domain has the skills and resources to look after its own data and make it available to the rest of the organisation.

Willingness to change!

It’s important to note that moving to a data mesh is not a technology exercise. It’s not just linking a bunch of data lakes together.

Data mesh is primarily an exercise in organisational change. While new tech is a part of it, the willingness to change from the ground up how an organisation operates is by far the most important success factor.

Business participation

Because a data mesh is decentralised, it requires high-quality input from all areas of the business. Good data citizenship must be encouraged so that each business unit knows what it stands to gain and what it is responsible for.

Final Thoughts 

Data mesh is an incredibly powerful approach to getting the maximum business value out of your data.

But know that technology is not the answer!

The secret sauce is simply a deep willingness to change at a fundamental level how the organisation is structured and how people work.

Without that, no amount of technology will make anything more than a small dent in your data issues.



Latest Stories

See More
This website uses cookies to maximize your experience and help us to understand how we can improve it. By clicking 'Accept', you consent to the use of these cookies. If you would like to manage your cookie settings, you can control this in your internet browser. Find out more in our Privacy Policy