18 Jan

The Generative AI Landscape: What Enterprises Need to Know about the Value and Risks

Tom Jenkin

Generative AI is fast becoming the most powerful tool for enterprises when it comes to boosting profits, making savings and reducing risk. As its use becomes more ubiquitous, questions are increasingly being raised over the impact AI systems can have, with particular concerns around privacy, security and accountability.

As the technology grows in sophistication and becomes embedded in everyday life, its use and deployment should be guided by clear ethical principles. In this blog we will look at both the risks and ethical implications that organisations need to consider when adopting AI, and at the importance of having a well-governed AI ethics framework in place.

The Power to Transform your Business

The ability of Generative AI to lessen the tactical burden on the workforce is well known, in areas such as content creation, research and coding. In tightly regulated industries such as financial services, Generative AI can be used to transform regulatory reporting – streamlining operations and saving businesses millions. It can also play a critical role as regulators begin to cascade down the fast-evolving guidelines and legislation currently being issued by governments around the globe.

Consumers can feel the benefits too, such as in the energy sector where AI has been used to support vulnerable customers by allowing businesses to tailor engagement to match individuals’ needs.

Meanwhile, in the oil and gas industry, AI has been utilised to provide real-time insights from the North Sea that can be used to identify emissions leakage and determine safety performance.

There is a growing realisation that the real potential lies in two areas. Firstly, applying Generative AI to company data so that employees can interrogate it more effectively and derive greater value from it. Secondly, as predicted by our Chief AI Officer, David Bartram-Shaw, stacking Generative AI with other technologies, such as predictive models, to produce enhanced outputs and allow enterprises to do more.
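To make the stacking idea concrete, here is a minimal sketch of the pattern: a predictive model scores a customer, and that score is injected into a generative model's prompt so the generated output is grounded in quantitative data. The churn-scoring rules and prompt template below are hypothetical placeholders for illustration only, not a production design or any specific vendor's API.

```python
# Illustrative sketch of "stacking" a predictive model with Generative AI.
# The scoring rules and prompt wording are hypothetical placeholders.

def predict_churn_risk(customer: dict) -> float:
    """Toy stand-in for a trained predictive model (e.g. a classifier)."""
    score = 0.2
    if customer.get("complaints", 0) > 2:
        score += 0.4
    if customer.get("months_since_last_purchase", 0) > 6:
        score += 0.3
    return min(score, 1.0)

def build_retention_prompt(customer: dict) -> str:
    """Enrich a Generative AI prompt with the predictive model's output."""
    risk = predict_churn_risk(customer)
    return (
        f"Customer {customer['id']} has a churn risk of {risk:.0%}. "
        "Draft a short, personalised retention email that addresses "
        "their recent complaints without mentioning the risk score."
    )

prompt = build_retention_prompt(
    {"id": "C-1042", "complaints": 3, "months_since_last_purchase": 8}
)
print(prompt)  # this prompt would then be sent to a generative model
```

In a real deployment the prompt would be passed to whichever generative model the enterprise has approved, with the predictive model retrained and governed under the same framework discussed below.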


What to Consider Before Adopting Generative AI

While the benefits are obvious and have rightly generated an impressive number of headlines, so too have the occasional pitfalls.

Over the past few years, we have seen cases where AI has perpetuated and amplified existing bias in society and others where it has been used to spread disinformation and manipulate public opinion.

The trust users place in how a business utilises AI is often closely linked with their trust in the company as a whole; one will impact the other, so it's critical to get this right.

Creating robust governance when implementing an AI system builds trust from the ground up. It's also important for organisations to consider the quality of the input data before implementing an AI strategy, as it will have a determining impact on that strategy's success.

Data needs to be reliable, accurate and democratised. It’s also important that it can be found easily within your organisation and that your data source is traceable.

The conversation regarding Generative AI and Intellectual Property is gaining momentum so it’s key for organisations to have a thorough understanding of where their source data is coming from.

A strong ethical framework is a cornerstone that ensures AI is used in a manner that is transparent, accountable, fair and non-discriminatory.

An AI ethics framework is a set of guidelines, principles, and processes that govern the ethical use of the technology.

It’s used by organisations to ensure that AI aligns with core human values and that it won’t cause harm to individuals or society as a whole.

The framework should also consider the responsibility and sustainability of any AI-enabled capability, and should include pillars such as risk management and processes for intervention and resolution when systems behave erroneously.


What Should an AI Ethics Framework Look Like? 

Your AI Ethics Framework requires several key capabilities to ensure systems are being used in a responsible and transparent manner.

These should include:

  • A well-defined set of policies and procedures for the ethical use of AI that is aligned with corporate values and regulatory commitments.
  • A clear governance structure, with roles and responsibilities defined, documented and allocated.
  • Data Management & Governance practices that ensure data is collected, stored, and used in a responsible and ethical manner.
  • A robust set of risk assessment processes to ensure AI enabled applications are being used in a manner that is safe, secure, and compliant with relevant regulations.

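One lightweight way to make these four capabilities actionable is a readiness checklist that is reviewed before any AI-enabled service goes live. The structure below is a hypothetical sketch of such a checklist, not a prescribed standard; the field names are our own shorthand for the capabilities above.

```python
# Hypothetical go-live checklist covering the four framework capabilities.
from dataclasses import dataclass, fields

@dataclass
class AIEthicsReadiness:
    policies_defined: bool          # ethical-use policies aligned with corporate values
    governance_assigned: bool       # roles and responsibilities documented and allocated
    data_governance_in_place: bool  # responsible data collection, storage and use
    risk_assessment_done: bool      # safety, security and regulatory-compliance review

    def gaps(self) -> list[str]:
        """Return the names of any capabilities still missing."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

review = AIEthicsReadiness(
    policies_defined=True,
    governance_assigned=True,
    data_governance_in_place=False,
    risk_assessment_done=True,
)
print(review.gaps())  # any unmet capability blocks go-live
```

Because the capabilities are interdependent, an empty gap list is the sensible bar: a service with any unresolved item goes back to the relevant owner rather than into production.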
Each of these capabilities is equally important, and typically you can't address one without answering the others. Starting with policy definition, organisations can more easily define their operational procedures, their governance needs and the potential impact of new and unknown risks on their business, customers and the wider market when adopting AI.


Consider your Company’s AI Policy

For highly regulated enterprise organisations, an AI policy must include several key elements to ensure that the use of AI is compliant with relevant regulations and ethical principles. These include:

  • Legal and Regulatory Compliance to ensure the use of AI is compliant with relevant laws and regulations, such as the new EU AI Act, as well as data protection regulations such as the EU General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA).
  • Ethical Principles that cover areas such as transparency, accountability, fairness and non-discrimination. Your policy should also outline the acceptable use cases for AI and where in the business it can be deployed (e.g., customer-facing vs internal operational processing).
  • Data Management guidelines should outline the processes for collecting, storing, and using your data in a responsible and ethical manner, while ensuring that the privacy and security of individuals are protected.
  • Engineering Practices guidance that sets out the standards to be adhered to and the stages of validation, testing and acceptance that must be followed prior to production deployment.

However, this isn't a one-size-fits-all approach, and an organisation's standards and guardrails will vary based on the level of risk it is willing to take on. This should be based on the potential impact of an AI-enabled service on customers, the organisation or the wider market.


The Importance of a Strong Ethical Framework

‘Good’ AI ethics looks different for every organisation, but some common themes are consistent, such as ensuring that AI-enabled systems are transparent and explainable, and that data is collected, stored and used in a responsible and ethical manner.

By having a strong AI ethics framework in place, organisations can ensure that they are using AI in a way that not only drives revenue targets and strategic objectives, but also respects the rights of the individuals and the communities they belong to.

However, as you will see, regulations are evolving almost as quickly as the AI technology itself as governments attempt to keep up. In our next blog we will look at how some of the key nations are tackling AI regulation, and how you can create a robust framework for your business that will keep the regulators happy.

