23 Jun

Why You Need a Knowledge Graph NOW For Next-Generation Business Decision-Making

Tareq Abedrabbo

For over a decade now, graph technology has promised to enable enterprises to leverage the connectedness of data to deliver deep new insights into their business, products and customers.

The last few years have seen knowledge graphs, in particular, rise to prominence. While much has been written on this topic, in this blog I want to take an especially pragmatic look at why enterprises should be jumping on knowledge graphs NOW as a way of enabling the business decision-making of the next generation!

Let’s jump in. So, what is a knowledge graph?

A knowledge graph is a way of organising data from multiple sources and then establishing connections between these data sources to derive new human knowledge from the connectedness of that data.

There are many technologies that can summarise and compare data points, but only a knowledge graph represents them in a way that exposes the deep interconnections between them.
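To make that concrete, here is a minimal sketch in Python using the networkx library (the node names, sources and relationships are invented for illustration) of how two previously separate data points become connected knowledge once they live in the same graph:

```python
import networkx as nx

# One graph, fed from two different source systems.
g = nx.DiGraph()

# Source 1: the CRM knows which customer bought which product.
g.add_edge("Customer:Acme Ltd", "Product:Widget-X", relation="PURCHASED")

# Source 2: the support system knows which product a ticket is about.
g.add_edge("Ticket:4711", "Product:Widget-X", relation="ABOUT")

# Neither source alone can answer "which customers does this ticket affect?"
# The answer only exists once the two sources are connected in one graph.
affected = [
    node for node, _ in g.in_edges("Product:Widget-X")
    if node.startswith("Customer:")
]
print(affected)  # ['Customer:Acme Ltd']
```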

Now, you would think that this is a common concern for any business today, but the knowledge graph is often seen as a far-out, niche, experimental technology.

As a result, many companies tend to overlook it.

Which I think is a HUGE wasted opportunity!

Because the technology is really very mature and can be deployed under enterprise conditions far more readily than most people would assume.

So I’m here to say that—used well—the knowledge graph has absolute game-changing potential and revolutionary power!

In this blog, I’m going to explain why you need the knowledge graph NOW, demonstrate this through a couple of use cases, explain the main barriers to adoption and give you a few tips on how to start building your knowledge graph now.

Why You Need The Knowledge Graph NOW

Any business that isn’t using the knowledge graph is missing out on an absolute gold mine of business-critical knowledge.

Not only that, they are left stuck trying to make their business decisions using inadequate data and suboptimal approaches.

How so?

Well, imagine a spreadsheet: it can show you a single, shallow layer of data.

And you can query this data to produce insights that will support you in making business decisions.

Fine.

Adding any more dimensions to a spreadsheet is usually painful because you’re going against the grain (they are literally two-dimensional!).

But a knowledge graph isn’t just a set of insights; it’s a rich, interactive web of connections from which novel insights keep emerging. It is one of the few approaches that can generate entirely new insights from already-existing data.

This is because the KG can create new knowledge by bringing datasets together that otherwise would never have gone anywhere near each other.

Many datasets are siloed and separate, or people simply cannot foresee what insights would come from connecting them (so they don’t).

There are profound, but unknown, insights that are lurking in the interstices of your datasets, waiting to be discovered. But this will only happen if you use an approach like the KG to bring the data sources together and deeply query the connections between them.

(The issue is that many enterprises are structured as functional silos and under these conditions these datasets will never come together naturally. Which is why you need to change how you design your data architecture. I address this below in the section about data quality and the data mesh).

So, while spreadsheets tend to be rigid, a knowledge graph is a living system that captures complex multi-dimensional knowledge. It can show you things connected to things connected to things connected to things.
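As a small illustration of that multi-hop power, here is a tiny, invented supply-chain graph (again in Python with networkx; every entity is hypothetical). A spreadsheet row comfortably answers the one-hop question; the graph answers a question several hops away, and can show you the chain of connections it followed:

```python
import networkx as nx

# Things connected to things connected to things: a tiny, invented chain.
g = nx.DiGraph()
g.add_edge("Supplier:NordSteel", "Component:Gearbox-7", relation="SUPPLIES")
g.add_edge("Component:Gearbox-7", "Product:Turbine-A", relation="PART_OF")
g.add_edge("Product:Turbine-A", "Customer:WindCo", relation="SOLD_TO")

# Multi-hop question: which customers are ultimately exposed to this supplier?
exposed = [n for n in nx.descendants(g, "Supplier:NordSteel") if n.startswith("Customer:")]
print(exposed)  # ['Customer:WindCo']

# And the graph can show the chain of connections it followed to get there.
print(nx.shortest_path(g, "Supplier:NordSteel", "Customer:WindCo"))
# ['Supplier:NordSteel', 'Component:Gearbox-7', 'Product:Turbine-A', 'Customer:WindCo']
```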

And these can be used to powerfully augment human decision-making to optimise your business over time at a very fundamental level.

Importantly, this isn't about automating decisions in a simplistic way. It’s about supporting human expertise with rich, real-time knowledge when it comes to making business-critical decisions (that are otherwise made in a shallow and labour-intensive way).

What I mean can best be grasped through some example use cases. So let’s have a look at a couple from the real world.

Knowledge Graph: Two Example Use Cases

The problems that a knowledge graph can help to solve do not need to be niche or obscure. In this section, I want to show how it can be applied to common business problems by taking two example use cases from my own experience: compliance in a financial services company and investment decisions in an energy company.

1) Compliance in Financial Services

Firstly, let’s take the example of a financial services company trying to stay compliant with various financial regulations (examples include GDPR and DORA).

Figuring out how compliant they are involves many different datasets.

Firstly, you have the individual stipulations of all the regulations and what they mean for your business.

Secondly, you have the data points relating to all the business activities that come under the remit of the regulations.

Thirdly, you have all the data around who is responsible for compliance in the different areas of the business and what the processes are.

Determining compliance manually would involve months of difficult work poring over regulatory documents, determining which articles apply, then comparing these to large spreadsheets of business data to determine if you’re compliant or not. If you aren’t, you then need to find the people responsible and work to fix things.

Using a knowledge graph, you can model the regulatory documents (you can even use natural language processing to help interpret them), along with the compliance data points from across your business, as well as your internal processes and employee data, and instantly derive the connections between all of these data points.
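To make this less abstract, here is a hypothetical sketch (Python with networkx; the regulation article, process names and owner are all invented) of the kind of model this describes, with regulatory articles, business processes and responsible people linked in one graph and queried together:

```python
import networkx as nx

# A hypothetical compliance graph built from three separate sources:
# regulatory articles, business processes and the people who own them.
g = nx.DiGraph()

g.add_node("Process:CustomerOnboarding", compliant=False)
g.add_node("Process:PaymentsPlatform", compliant=True)

g.add_edge("Regulation:DORA:Art.X", "Process:CustomerOnboarding", relation="APPLIES_TO")
g.add_edge("Regulation:DORA:Art.X", "Process:PaymentsPlatform", relation="APPLIES_TO")
g.add_edge("Process:CustomerOnboarding", "Person:j.smith", relation="OWNED_BY")

# One traversal answers what the manual process takes months to establish:
# which articles are we currently breaching, and who do we need to talk to?
for article, process in g.out_edges("Regulation:DORA:Art.X"):
    if not g.nodes[process].get("compliant", False):
        owners = [p for _, p in g.out_edges(process) if p.startswith("Person:")]
        print(f"{article} applies to {process}, which is non-compliant; owners: {owners}")
```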

You can keep the knowledge graph up-to-date by feeding it data in near-real-time, which will enable you to understand the regulations you need to abide by, how compliant the different aspects of your business are and who is responsible for each data point on the graph.

In this way, you can move from an inaccurate, point-in-time compliance view to a continuous data-driven paradigm.

You not only save vast amounts of time and effort, but the end result will be much more accurate, allowing you to reduce the risk of penalties, use your budget more efficiently and use the real-time aspect to ensure continuous compliance.

2) Investment decisions in an energy company

Secondly, let’s take the example of an enterprise in the energy industry that is trying to make important business decisions about which energy assets to invest in.

Energy companies have all sorts of data: real-time telemetry from their energy assets, geological studies, climatological data, economic predictions, historical data, market data, customer data, internal company data and so on. These sources span a whole range of structured and unstructured formats, including numbers, charts, descriptions, illustrations, text, images and so on.

Unfortunately, these rich data sources tend to be isolated and fragmented across multiple functional silos. But it is easy to see the amount of value you can deliver by leveraging their connectedness.

However, manually sifting through these to try to turn them into ‘knowledge’ is incredibly difficult.

Using the knowledge graph, it is possible to bring together all these data points (in all their different formats!) simultaneously in one cohesive ‘universe’ and highlight the connections between them all in order to tell a relevant story.

For example, photos of wind turbine sites can be combined with historical geological data and expert models in order to get a more accurate sense of their risk profiles. Economic projections could be adjusted in light of climatological predictions or emerging customer trends. The potential is endless.
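As a rough, hypothetical sketch of what bringing the evidence together around an asset could look like (Python with networkx; the site, sources and risk numbers are invented), each connected source contributes to a combined risk profile that is one query away:

```python
import networkx as nx

# A hypothetical asset graph: a candidate wind-farm site connected to the
# evidence that bears on it, each piece coming from a different silo.
g = nx.Graph()
g.add_node("Site:NorthRidge")
g.add_node("Survey:Geological-2021", risk_factor=0.4)    # geological study
g.add_node("Model:ExpertYield", risk_factor=0.1)         # expert model
g.add_node("Forecast:StormFrequency", risk_factor=0.3)   # climatological data

for evidence in ("Survey:Geological-2021", "Model:ExpertYield", "Forecast:StormFrequency"):
    g.add_edge("Site:NorthRidge", evidence, relation="ASSESSED_BY")

# With everything attached to the site, a combined risk profile is a single
# neighbourhood query, rather than a manual trawl through scattered documents.
combined_risk = sum(g.nodes[n].get("risk_factor", 0.0) for n in g.neighbors("Site:NorthRidge"))
print(f"Combined risk score for Site:NorthRidge: {combined_risk:.1f}")  # 0.8
```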

By connecting the disparate data points, we are able to turn the raw data into a story about how our business should operate under different circumstances.

And the consequences of the business decisions enabled can be massive. If you build your wind turbine in the wrong place, there’s not much you can do!

How to Build a Knowledge Graph

A knowledge graph isn’t some fantastical, Back-to-the-Future technology that is way out of reach of ordinary enterprise businesses.

The technology is undoubtedly mature enough to support enterprise-grade usage. Enterprises already have the data! There’s no reason to hold back.

It’s well within reach if you get a few basic pillars of your data program in place.

A knowledge graph allows you to turn your data into knowledge by bringing together three key ingredients:

1) Rich datasets and sources:

Your datasets will ideally need to be broad and deep in order to get the most out of your knowledge graph. Any complex or interesting domain will need multiple data sources brought together to fully capture the different facets that constitute it.

2) A cohesive yet flexible data model:

A cohesive model is needed to bring together different data sources under one umbrella so that we can connect different pieces of data into knowledge. This model needs to accommodate different data types and structures and give them space to evolve over time. It needs to allow the data to be queried to derive new insights. (This is exactly what the “graph” part of the knowledge graph provides; see the sketch after this list.)

3) Trustworthy and timely data:

You need to feed the model with up-to-date data so that it represents the situation now, not at some point in the past, and preferably in a way that people know they can rely on!
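Here is the sketch promised above: a minimal, hypothetical illustration (Python with networkx; all node names are invented) of what “cohesive yet flexible” means in practice. The model starts with one kind of data, later absorbs a completely new kind without a schema migration, and the same query mechanism keeps working across both:

```python
import networkx as nx

# The model starts life with one domain's data...
g = nx.DiGraph()
g.add_node("Product:Widget-X", type="Product", launched="2022-04")
g.add_node("Customer:Acme Ltd", type="Customer")
g.add_edge("Customer:Acme Ltd", "Product:Widget-X", relation="PURCHASED")

# ...and later absorbs an entirely new kind of data with no schema migration:
# new node and relationship types simply sit alongside the existing ones.
g.add_node("Regulation:GDPR:Art.X", type="RegulationArticle")
g.add_edge("Regulation:GDPR:Art.X", "Product:Widget-X", relation="APPLIES_TO")

# The same generic query mechanism works across old and new data alike.
print([n for n, attrs in g.nodes(data=True) if attrs.get("type") == "RegulationArticle"])
# ['Regulation:GDPR:Art.X']
```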

The barriers have much less to do with getting the right knowledge graph technology in place and everything to do with ensuring that high-quality, trustworthy data is flowing through your organisation in near-real-time and can be queried with confidence!

Once you have that in place, how would you get started?

How to Get Started with the Knowledge Graph

When getting started with your knowledge graph journey, the trick is to keep your focus on use cases and data quality.

Firstly, the use cases.

Rather than getting all the technology set up and then asking what use cases you should use it for, you want to flip your approach around: decide on a good business use case and then ask what technology, processes and skills you need to enable it.

For example, you might decide that you want to focus on improving compliance in one specific domain of the business, such as finance. You would then work on transforming that domain to make it knowledge-graph-ready.

You can then take your learnings from implementing the knowledge graph in that domain and apply them to the next most appropriate domain.

Secondly, you need to make sure that your data is on point. After all, your knowledge graph is only as good as the data you put into it.

In a traditional enterprise data architecture, data is collated in a giant, monolithic data lake; it is not particularly trustworthy and is quite difficult to access (the central data team functions as a critical bottleneck to the free flow of data throughout the organisation).

When data is centralised in this way, it creates a single point of failure, which means that introducing a knowledge graph can be incredibly risky and can easily disrupt the rest of the organisation if something goes wrong.

Under these circumstances, it’s difficult to use a knowledge graph and the company’s data is likely to remain as data and never be turned into knowledge.

In this context, by far the most effective way to counter these issues is the data mesh approach.

The Data Mesh and the Knowledge Graph

A data mesh is a new approach to data architecture that emphasises federated capabilities, cross-functional teams and end-to-end, domain-driven accountability. (You can check out our introduction to the data mesh approach).

The big shift that the data mesh enables is decentralising data, organising it instead along domain-driven lines, with each domain owning its own data and treating it as a product designed to be easy for the rest of the organisation to consume.

When each domain team makes its data highly discoverable to other teams in the organisation, we suddenly create a web (or mesh!) of high-quality, highly accessible datasets. In this way, the data mesh creates the perfect environment to bring together the datasets that the knowledge graph needs to capture and generate new knowledge and insights. In a siloed, centralised organisation, these datasets will never meet!
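As a purely illustrative sketch of what “data as a product” might mean in practice (Python; the field names and example values are invented, not a standard the data mesh prescribes), each domain could publish a small, machine-readable descriptor that makes its dataset discoverable and trustworthy for consumers such as a knowledge graph pipeline:

```python
from dataclasses import dataclass, field

# A hypothetical "data product" descriptor a domain team might publish so
# that other teams (and a knowledge graph pipeline) can find, trust and
# consume its data. The fields are illustrative, not a prescribed standard.
@dataclass
class DataProduct:
    name: str                      # e.g. "customer-transactions"
    domain: str                    # owning business domain
    owner: str                     # accountable team
    schema_url: str                # where consumers find the published schema
    freshness_sla: str             # how up-to-date consumers can expect it to be
    tags: list[str] = field(default_factory=list)

payments_transactions = DataProduct(
    name="customer-transactions",
    domain="payments",
    owner="payments-data-team",
    schema_url="https://example.internal/schemas/customer-transactions",
    freshness_sla="updated every 15 minutes",
    tags=["core", "pii"],
)
print(payments_transactions)
```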

The data mesh architecture is a perfect mirror of the knowledge graph: decentralised yet interconnected. Its design naturally lends itself to deploying knowledge graph technologies.

You develop your data mesh in a similar way: pick a business domain or use case and build your data mesh capabilities around it.


Final Thoughts

Enterprises have vast quantities of data that are being criminally underutilised.

And the knowledge graph is one of the key pieces of the puzzle that will allow businesses to start to use all the data they have at their disposal to powerfully upgrade their decision-making capability.

Your data has secrets to tell you! Let it!

Interested in seeing our latest blogs as soon as they get released? Sign up for our newsletter using the form below, and also follow us on LinkedIn.
