20 Nov

How Do We Build Trustworthy and Trusted AI Systems? Five Key Takeaways from our Data Scientists at Mesh-AI

Tom Jenkin

Trust in AI is crucial, both from an ethical standpoint and in terms of reliability as a tool. In 2018, it was reported that Amazon’s AI-powered recruitment software had been scrapped after the company realised the tool was biased against female candidates. From a business perspective, trust in the AI systems we use is crucial if we are to avoid, at best, poor performance and, at worst, highly damaging scandals.

In all cases, trust is needed both from the company producing or deploying the system and from the consumers using the AI models. The trust users have in an AI model is often closely linked to the trust they have in the company as a whole - the two can flourish together, but if trust in one is lost, it will damage the other. Building trustworthy AI systems isn’t just a way to mitigate risk; it’s a way to deliver more value.

In our latest episode of The Data & AI Podcast, hosts Deepak Ramchandani Vensi and David Bartram-Shaw sat down with Mesh-AI Senior Data Scientist Jakub Janowiak to discuss how we can build trustworthy AI systems, what this means, and how we can foster trust when it comes to AI. 

Listen to the podcast in full

AI isn’t just for data scientists

Perhaps the most important takeaway from this discussion was that companies need to take AI more seriously, adopt a more collaborative approach and improve communication.

Chief Risk Officers should be adding AI to their risk frameworks, assessing risk based on how AI is being used. Rather than leaving it to the data science teams to ask questions about risk and governance, organisations need a cross-domain approach, with plenty of open discussion across all stakeholders. When it comes to training AI models, every stakeholder needs to be involved in the process.

Typically, we see stakeholders contributing to the gathering and classification of data but leaving the building and development of the systems to the technical teams. To maintain a consistent thread of trust, this needs to change. Everyone should be involved, able to ask questions and to volunteer their own ideas about what they deem important in making a system trustworthy. As in all aspects of business, a well-rounded view is essential.

Users of the systems should also have the opportunity to ask questions and make suggestions, which businesses can then pass on to the data science teams to implement.

Increase and improve AI governance

While some AI regulation is coming into force - which will define the minimum requirements for businesses - a further layer of governance is also integral to instilling trust, both for users and for companies as a whole.

Creating a concrete set of rules and guidelines for the training, development and usage of AI models, shared across the company as well as with users, will underpin that trust.

“Trust comes from areas where people care, but knowing how to look is the hardest part, and that’s where trustworthy standards come in” - David Bartram-Shaw

Educate businesses and users more about AI

While AI is becoming an ever greater part of everyday life, we still have a lot to learn, both internally within the companies building AI and externally among their users.

Organisations must prepare for the AI revolution by educating their employees. Even for businesses which aren’t building or even buying AI models, informing users about the applications of AI within foundational business tools such as Google Workspace and Microsoft 365 will help make people aware of the risks and give them a better general understanding, so that they can ask the right questions.

“We’re at a stage where not using AI is a risk in itself” - Jakub Janowiak

Put fairness at the forefront

Research has shown that word embeddings trained on Google News data encode gender bias. For consumers of AI to be able to trust the product, how the models are trained and what data we feed them is key. The information we feed to models should be varied and come from a broad range of sources in order to prevent bias and ensure a well-rounded view. Using a single data source can easily result in skewed output that displays prejudice towards particular groups.
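As a minimal sketch of what checking for this kind of skew might look like in practice, the short Python snippet below computes a demographic parity difference: the gap in positive-prediction rates between groups. The column names and the tiny dataset are hypothetical, and a real fairness audit would go much further than this single metric.

```python
import pandas as pd

def demographic_parity_difference(df: pd.DataFrame,
                                  group_col: str,
                                  prediction_col: str) -> float:
    """Gap between the highest and lowest positive-prediction rates
    across groups. 0.0 means every group is treated identically."""
    rates = df.groupby(group_col)[prediction_col].mean()
    return float(rates.max() - rates.min())

# Hypothetical example: outputs from a hiring model.
candidates = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "F", "M"],
    "shortlisted": [0, 1, 1, 1, 0, 1],  # 1 = model recommends shortlisting
})

gap = demographic_parity_difference(candidates, "gender", "shortlisted")
print(f"Demographic parity difference: {gap:.2f}")  # large gap flags a skew
```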

Defining fairness should be up to the entire organisation producing the system, rather than being left to the data scientists, as that too can result in a narrow viewpoint. Ultimately, the product represents the beliefs held by the company as a whole, so it’s important that the machine learning teams work with the wider business to identify fairness requirements and assess them against their governance.

Explainability is everything

To assess the trustworthiness of an AI system, we must look at its overall performance, its output and its general robustness. Within AI, a key marker of this is the explainability of the results a model produces. To understand how an algorithm has learnt, we should be asking how it arrived at its answers, rather than focusing solely on their accuracy.

Jakub cited the famous example of an AI model that could differentiate dogs from wolves, but only by scanning the backgrounds of the images for snow. Applying explainability techniques can help us build more efficient and trustworthy models that capture the underlying causal relationships rather than simply latching onto correlations in the images.
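One simple technique for catching this kind of shortcut is permutation importance, which measures how much a model’s score drops when each feature is shuffled. The sketch below uses scikit-learn on a hypothetical tabular dataset where a spurious feature (the tabular analogue of snow in the background) leaks the label; the feature names and data are invented for illustration only.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 500

# Hypothetical dataset: "background_snow" is a spurious feature that
# happens to track the label, like snow in the wolf photos.
label = rng.integers(0, 2, size=n)                    # 0 = dog, 1 = wolf
background_snow = label + rng.normal(0, 0.1, size=n)  # leaks the label
ear_shape = rng.normal(0, 1, size=n)                  # uninformative noise
X = np.column_stack([background_snow, ear_shape])

model = RandomForestClassifier(random_state=0).fit(X, label)

# Shuffle each feature and measure how much accuracy drops.
result = permutation_importance(model, X, label, n_repeats=10, random_state=0)
for name, score in zip(["background_snow", "ear_shape"], result.importances_mean):
    print(f"{name}: {score:.3f}")  # the spurious feature dominates
```

If the model were genuinely learning what a wolf looks like, the importance would not be concentrated on a background artefact; seeing it concentrated there is the prompt to go back and fix the training data.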

The Department for Work and Pensions was recently warned of algorithmic bias in its deployment of AI to determine socio-economic backgrounds when reviewing benefits applications. The government vaguely described the process as using ‘advanced analytics’, and has been accused of secrecy.

We need to start thinking about how we can explain AI in a way that benefits users and helps them understand how systems reach their conclusions. Explainability of AI systems isn’t just important to users, but to wider stakeholders too, and each has unique requirements that should be considered. If AI is used in a fintech setting, for example, the data science team will want the full technical detail; the product owner will want to understand the overall model reasoning, with the ability to drill down into specific examples in a less technical way; and regulators might need to see specific aspects that the business wouldn’t want to share with users.
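To make the global-versus-local distinction concrete, here is a hedged sketch using a linear model, where both views fall out naturally: the coefficients give the product owner’s overall picture, and per-feature contributions explain one specific decision. The fintech feature names are hypothetical, and real systems would use richer explanation tooling.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical fintech features; names are illustrative only.
feature_names = ["income", "debt_ratio", "missed_payments"]
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3))
y = (X[:, 0] - X[:, 2] + rng.normal(0, 0.5, size=200) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Global view (e.g. for a product owner): which features matter overall?
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {coef:+.2f}")

# Local view (drilling into one applicant): each feature's contribution
# to this single prediction's log-odds.
applicant = X[0]
for name, c in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name}: contributes {c:+.2f} to the log-odds")
```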

Listen to the full episode on How Do We Build Trustworthy and Trusted AI Systems? from the Data & AI Podcast.

Read more insights from our consultants on data reliability in the enterprise.

Check out our research report on the state of AI in the enterprise.
