In 2022, we saw many enterprises start to shift away from the monolithic, centralised data architectures that have dominated the last decade.
In the face of the sheer complexity and diversity of data, we are seeing the continued adoption of dynamic, decentralised and federated systems. These offer new perspectives on data that are capable of turning this challenging complexity into valuable new business use cases.
In 2023, these new perspectives will continue to adapt and evolve, especially as new regulations and economic pressures are forcing the need for trustworthy, transparent and accessible data.
Let’s take a look at some of these new perspectives and use cases that we expect to see as the year plays out!
The data demands being made on enterprises are starting to grow beyond the capacities of the traditional, centralised, monolithic data architectures that are in place.
Levels of data wizardry that would have been unthinkable a few years ago are being routinely asked of enterprise data teams, while the volume, complexity and diversity of data are exploding in all directions. On top of that, critical business decisions are increasingly being made on the basis of these challenging data sets.
A step change in how enterprises work with data is required. And that’s exactly what the data mesh will be delivering in 2023.
Data mesh decentralises, democratises and productises data in a federated model so that it can be accessed by anyone as and when they need it. Treating data as a product is a fundamental shift that opens the door to widespread data innovation and advanced use cases (AI/ML) across the organisation.
As more data mesh success stories are shared, we are seeing more and more enterprises embark on, or consolidate, their data mesh journeys.
We expect that the data mesh will build a reputation as an indispensable foundation for a modern, first-class enterprise data outfit.
Data mesh has shown that unlocking data innovation revolves around being able to share high-quality data at scale.
The first step towards this is making that data discoverable. Enterprises have historically relied on a single, technical view for this, which tends to get messy at scale. It’s hard to determine what data is available and how to make sense of it. And discoverability takes a hit as a result.
So, along with the adoption of data mesh, in 2023 we are seeing a shift in how enterprises make their data discoverable. Rather than relying on traditional data catalogues, they are shifting to data product catalogues: an organised inventory of all the data products available in a meaningful, secure, private and easily consumable way. This means that users can access and make use of the data straight away, without having to worry whether it's trustworthy, up-to-date, high-quality and so on.
This trend is evidenced by the rapid growth and modernisation of data discovery tooling, and it suggests that data product catalogues will become an inevitable part of any modern data architecture, accessible and available to all data consumers across the business.
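To make the idea concrete, here is a minimal sketch of what a data product catalogue entry and a discovery lookup might look like. All field names, class names and example values here are illustrative assumptions, not a reference to any particular catalogue tool.

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """One entry in a data product catalogue (all fields are illustrative)."""
    name: str
    domain: str            # owning data domain, e.g. "payments"
    owner: str             # accountable product team
    description: str
    schema_version: str    # consumers can pin to a known-good version
    freshness_sla_hours: int
    tags: list = field(default_factory=list)

class DataProductCatalogue:
    """A toy in-memory catalogue supporting registration and keyword discovery."""

    def __init__(self):
        self._products = {}

    def register(self, product: DataProduct):
        self._products[product.name] = product

    def search(self, keyword: str):
        """Find products whose name, description or tags mention the keyword."""
        kw = keyword.lower()
        return [
            p for p in self._products.values()
            if kw in p.name.lower()
            or kw in p.description.lower()
            or any(kw in t.lower() for t in p.tags)
        ]

catalogue = DataProductCatalogue()
catalogue.register(DataProduct(
    name="customer-transactions",
    domain="payments",
    owner="payments-data-team",
    description="Cleansed, deduplicated card transactions",
    schema_version="2.1.0",
    freshness_sla_hours=1,
    tags=["transactions", "pci"],
))
print([p.name for p in catalogue.search("transactions")])  # ['customer-transactions']
```

The key design point is that each entry carries ownership and quality metadata (owner, SLA, schema version) alongside the data itself, which is what lets a consumer trust and use a product without chasing down its producer.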
Product thinking and data are increasingly converging, with a data-as-a-product mindset spreading rapidly throughout the industry.
Traditionally, technology organisations have viewed data as a technical liability to be managed by the IT department. This view misses the business value of the data!
A data-as-a-product paradigm is different in that products persist over time — they are never ‘done’ and their value is measured by the outcomes they enable. Product thinking is about solving customer problems in a way that creates value for the organisation.
Data products are created and owned by a product team that focuses on meeting the needs of its users, reducing complexity and creating simple paths to adoption.
The data-as-a-product approach will continue to gain momentum because it allows us to generate scalable business value from data. As a result, the understanding of data products, and of how they can be developed and incorporated into large, complex enterprises, will mature.
The trustworthiness, quality and reliability of data is a major limiting factor in generating value from data at scale.
As the above data-mesh-centric trends accelerate, with data increasingly organised by decentralised domains and thought about in product terms, greater ownership of data can be taken within each individual data domain.
This approach empowers domain teams to produce more trustworthy, higher-quality and more reliable data. This will be reinforced by data contracts, which are set to become even more important as enterprises adopt increasingly interconnected approaches and therefore need ways to guarantee interoperability.
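As a rough sketch of the idea, a data contract pins down the fields, types and constraints a producing domain guarantees to its consumers, so that records can be checked mechanically at the boundary. The contract structure, field names and example records below are all illustrative assumptions.

```python
# An illustrative data contract: the producer guarantees these fields and types.
CONTRACT = {
    "fields": {
        "order_id": str,
        "amount_pence": int,
        "currency": str,
    },
    "required": ["order_id", "amount_pence", "currency"],
}

def validate(record: dict, contract: dict) -> list:
    """Return a list of contract violations (an empty list means the record conforms)."""
    errors = []
    for name in contract["required"]:
        if name not in record:
            errors.append(f"missing required field: {name}")
    for name, expected in contract["fields"].items():
        if name in record and not isinstance(record[name], expected):
            errors.append(
                f"{name}: expected {expected.__name__}, "
                f"got {type(record[name]).__name__}"
            )
    return errors

good = {"order_id": "A-100", "amount_pence": 1299, "currency": "GBP"}
bad = {"order_id": "A-101", "amount_pence": "12.99"}  # wrong type, missing currency
print(validate(good, CONTRACT))  # []
```

Running such a check in the producer's pipeline, before data is published, is what turns interoperability from a hopeful convention into an enforceable guarantee.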
With data that is discoverable, trusted and understood comes the ability to use it correctly.
A product-based approach provides the mindset and practices to build fit-for-purpose digital products that deliver long-term value and are aligned with the needs of the data's users (or consumers), including those adopting AI.
Until recently, data has been seen as a mandatory technical exercise, while AI has been viewed as a niche approach to problem solving. With reliable data of guaranteed quality, enterprises can solve problems from the ground up: taking advantage of the tailored nature of data domains, identifying the right issues and then solving them in the appropriate way. This will open the door to more sophisticated data capabilities, such as AI and ML, that can truly change the way business processes are implemented.
In the same way the adoption of a data-as-a-product mindset will lead to a reimagining of data, we will see a revised understanding of what a data engineer actually is and what their capabilities should be.
Data engineering is often viewed as a technical plumbing capability — moving data around in a hyper-specialised way across siloed environments. However, as data is everywhere, building reliable and scalable data systems will become essential in supporting the various data sources as well as the data behind any modern enterprise capabilities (streaming, graph, AI, ML etc.).
Enterprises will need to help their data engineers upgrade their skill sets to include architectural, product and system thinking and move away from a purely technology-focused, mechanical view of data engineering.
Traditional data governance has been pushed to its limits by the emergence of novel data use cases that combine diverse sets of data sources, and by the agility enterprises must adopt to respond to the needs of their customers.
Accordingly, the regulatory and security pressure on data will continue to evolve data governance into a pragmatic, holistic and practical discipline.
The adoption of less centralised data models will allow enterprises to scale their solutions as their ambitions and prospects grow, with the resulting scenario being that data governance will become much more dynamic.
Rather than being seen as a limiting factor, data governance will become a powerful enabling force, the value of which is recognised by everyone in the business.
Digital twins allow enterprises to build virtual representations of their most valuable assets using real-time data. Any change in the physical system leads to a change in its digital representation, and such changes can be tracked down to the most minute level.
Digital twins enable in-depth exploration of data and support various real-world activities. Whether it be industrial sectors such as manufacturing or energy, or in business critical domains such as supply chains, risk management, impact analysis, optimisation, and maintenance planning — a digital twin can bring greater efficiency to systems and processes. They do this by being fed high-quality data, which allows them to predict what might go wrong and stress test against any challenges that are set to occur.
The creation of higher-value, more transparent datasets will supercharge digital twins, allowing enterprises to visualise, predict and track data better than ever. This will enable enterprises to rethink many of their business-critical processes that are currently delivered through manual, labour-intensive approaches.
We are now reaching a point where organisations can begin treating digital twins as a first-class capability, fuelled by data that is accurate and applicable, making digital twins both practical and valuable moving forward.
While we understand that these trends are not always black and white and that many of them are part of a longer journey, this year will be one that is characterised by a shift in how organisations deal with their data — the protagonist of their digital transformation stories.
Here at Mesh-AI, we take an end-to-end view of how enterprises can take advantage of these new data methodologies. Our specialist expertise means we have an eye for what’s now and better still for what’s next, and we are here to help enterprises set the right technological foundations and install the correct data governance in order to ensure that everything is delivered in the right way and to full effect.
Contact us today to find out more about how we can help begin your data journey.
Interested in seeing our latest blogs as soon as they get released? Sign up for our newsletter using the form below, and also follow us on LinkedIn.