24 May

Generative AI: Navigate the Hype, Deliver Impact and Mitigate Risk

David Bartram-Shaw

The AI sphere is moving at pace, and the explosion of Generative AI has taken the world by storm, in technical and non-technical circles alike. The sheer pace of adoption of offerings such as ChatGPT and Stable Diffusion has changed the face of the professional world, forever. But with this pace comes pressure to adopt and innovate before others eat your lunch.

At Mesh-AI, we believe that to remain relevant, companies have to innovate, but this has to be done in a considered way, with strong foundations that lead to long-term impact with minimal risk. When it comes to Generative AI, which is such a greenfield area, there are a lot of ethical and governance considerations to be made. This applies both to “off the shelf” API-based solutions, such as those from OpenAI, and to self-trained bespoke models.

In this article we outline the 5 key areas of consideration for you and your business in the use and adoption of Generative AI. Internally, we have built a framework and approach for each of these areas, designed to identify valuable, viable and safe applications of Generative AI, with a view to the longer-term transformational requirements needed to truly change the way your business operates. The 5 key areas we will cover are:

  1. Key Applications of Generative AI: Low stakes vs high stakes
  2. Generative AI Risk: Key areas of oversight by those adopting ChatGPT
  3. ML Development: Integration with traditional ML solutions to safeguard against the dangers and expand the applications of Generative AI
  4. Data Foundations: Incorporating Generative AI into your current data estate and platform
  5. Generative AI Assisted Development: How to optimise engineering practices

1. Key Applications of Generative AI: Low stakes vs High Stakes

The initial impact of Generative AI was felt in the creative space, with the ability to generate impressive artistic and photorealistic visuals from text-based prompts, and so a disruption of the creative industry began. The release of ChatGPT, which outputs text from a conversational input prompt, opened up a much wider range of outputs. After all, text is used to create documents, code, songs, webpages, maths and a plethora of other formats. We then began to see a much wider range of applications, which has driven both the belief and the reality that Generative AI will shake up a large number of industries and jobs. The list is long, but a few examples include:

  • Code Development: Engineering roles across the board, from Python to front-end
  • Communications: Automated content creation such as blog writing and storytelling
  • Information Retrieval: Our ability to search and find information quickly, impacting jobs like paralegals and researchers
  • The Education Sector: Syllabus and coursework creation

Recently, Goldman Sachs published a research paper indicating that over 300 million jobs could be impacted by AI-driven disruption. That said, the same paper also indicated that AI could eventually increase the total annual value of goods and services produced globally by 7%.

In short, this global disruption can be categorised into three key areas of opportunity:

  • Workforce Optimisation: How can we build and deploy Generative AI solutions that take on the low-value, repetitive tasks across a number of industries? In turn, this frees up the workforce’s minds and institutional knowledge for the higher-value tasks that will ultimately add more value to your business. For example, this could mean populating templates for regulatory documentation, responding to frequently asked customer requests, or documenting design patterns and blueprints for the products and services that drive your business (a minimal sketch of the template-populating pattern follows this list).
  • Knowledge Extraction & Centralisation: Indexing and retrieving information was one of the first key applications of these text-based Generative AI approaches. It first appeared as a threat to the way we search the internet. But it’s not just about the extraction of existing information; it’s about our ability to build systems that we feed over time and optimise with the internal knowledge and expertise that exists within our businesses.
  • Creative Ideation: Building on the early adoption across the communication and artistic communities, Generative AI has the ability to generate large numbers of relevant ideas that help humans be creative at a far greater pace, accelerating the iterative process of creativity.
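
To make the workforce optimisation example concrete, here is a minimal sketch of the template-populating pattern. The template text, field names and the OpenAI ChatCompletion call are illustrative assumptions based on the Python client available at the time of writing, and the output is a draft for a human to review, not something to publish unchecked.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # keep this in a secret manager, not in code

# Illustrative template and fields; your real documents and schema will differ.
TEMPLATE = (
    "Draft the 'Change Summary' section of a regulatory change notice.\n"
    "Product: {product}\n"
    "Change: {change}\n"
    "Effective date: {effective_date}\n"
    "Write two short, formal paragraphs suitable for review by a compliance officer."
)

def draft_section(product: str, change: str, effective_date: str) -> str:
    """Build the prompt from structured fields and ask the model for a first draft."""
    prompt = TEMPLATE.format(product=product, change=change, effective_date=effective_date)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.2,  # keep drafts relatively consistent between runs
    )
    return response.choices[0].message.content  # a draft for human review, never auto-published

draft = draft_section("Everyday Current Account", "Overdraft fee reduced from £5 to £3", "1 July")
```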

Depending on the industry you are in, the business you operate, the customers you serve and the data you deal with, each of these areas of opportunity will come with low-stakes and high-stakes options. We often talk about AI having the ability to make money, save money and reduce risk. The three areas above fall firmly into the making and saving money camps, but Generative AI, by its design and relative immaturity, actually creates more risk, and so it is a dangerous field to tread in those “high-stakes” situations.

2. Generative AI Risk: Key areas of oversight by those adopting ChatGPT

The capabilities of Generative AI are so impressive that it’s hard to argue against adopting them in some way. However, as impressive as they are, there are some key considerations that require a cautious approach.

  • Generative AI cannot easily be controlled: “Generative” means the algorithms create something new every time they make a prediction (a prediction is what the model returns when you send it a request, or what we call a prompt). This means the output cannot easily be controlled once deployed into the hands of users: ask the same generative model the same question and it may return any of thousands of potential responses.
  • The inability to distinguish between fact and fiction: These systems, although trained on vast amounts of data, do not know the difference between true and false. They make probabilistic inferences about what they think you are asking, based on what they have seen in the past. They also cannot link back to the original source of the information they are returning.
  • How they are trained: There is potential bias in the dataset, as well as a limit to the time frame these models cover. For example, ChatGPT was only trained on data up to 2021, and the way in which the dataset was sampled may exclude or misrepresent certain minority groups. Our data collection is growing exponentially year on year, and with it the geopolitical views of society continue to shift.
  • Where your data goes when you send your prompts: This has implications for data privacy, such as inadvertently sending PII or internal secrets/IP to an external service (see the redaction sketch just after this list).
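
On that last point, one simple mitigation is to redact likely PII before a prompt ever leaves your estate. The sketch below is a minimal, illustrative example using regular expressions; the patterns are assumptions, and a production system would use a dedicated PII-detection tool and rules agreed with your privacy and security teams.

```python
import re

# Illustrative, non-exhaustive patterns; a real deployment would use a dedicated
# PII-detection library and rules agreed with your privacy team.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "UK_PHONE": re.compile(r"\b(?:\+44\s?|0)\d{4}\s?\d{6}\b"),
    "CARD_NUMBER": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def redact(text: str) -> str:
    """Replace likely PII with placeholder tokens before a prompt leaves your estate."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this complaint from jane.doe@example.com, phone 07700 900123."
print(redact(prompt))  # "Summarise this complaint from [EMAIL], phone [UK_PHONE]."
```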

There are, of course, low-stakes situations where the impacts of the above are mitigated, such as only using publicly available data, or where making a wrong prediction will not negatively impact your business or customers.

Most of these risks go unconsidered because, at first, the outputs of these models are so impressive. But, and this is a big BUT… not all is as it seems. In high-stakes situations, where providing false or misguided information may cause harm to your customers or business, safeguards need to be put in place.

Each application of Generative AI must be assessed based on its risk and reward, alongside the potential impact and the effort of mitigation required to make the approach safe. This requires both deep technical knowledge of methodology risks and mitigation approaches, and a solid understanding of the business impact and need. Our extensive Generative AI application landscape and risk framework make it both easy and comprehensive to assess and mitigate risk whilst still moving quickly.

3. ML Development: Integration with traditional ML solutions to safeguard against the dangers

There are also ways in which we can combine the outputs of these models with downstream safeguards, such as restrictions, human quality checks or additional ML models.

This is something that is already done by many of the large Generative AI service providers, like OpenAI. They apply designed variation to the responses of their image algorithms, for example: when you ask DALL·E 2 for a certain image, e.g. “a picture of a business meeting”, they may force the underlying model to add diversity to the visual responses. Or, in ChatGPT, they will run a classification algorithm over the response to a request such as “tell me how to make a bomb” to prevent the model from returning harmful results.

Likewise, you can combine the predicted outputs from “out of the box” Generative AI services, like ChatGPT, with your own classification filters.
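
As a minimal sketch of that idea, the example below trains a toy scikit-learn classifier on a handful of invented “allow”/“block” examples and uses it to filter a generated response before it reaches a user. The training data and labels are assumptions purely for illustration; a real filter would be trained on a curated, regularly reviewed dataset and could route borderline cases to a human review queue instead of a canned fallback.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data; in practice this would be a curated, regularly reviewed
# dataset of responses your business does and does not want to surface.
texts = [
    "Here is a summary of your account terms.",
    "Our opening hours are 9am to 5pm on weekdays.",
    "Here is how to bypass the security checks on your account.",
    "Instructions for building a weapon are as follows.",
]
labels = ["allow", "allow", "block", "block"]

response_filter = make_pipeline(TfidfVectorizer(), LogisticRegression())
response_filter.fit(texts, labels)

def filter_response(generated_text: str, fallback: str = "Sorry, I can't help with that.") -> str:
    """Only pass the generated answer through if our own classifier allows it."""
    if response_filter.predict([generated_text])[0] == "block":
        return fallback
    return generated_text

safe_text = filter_response("Here is how to get around the checks on your account.")
```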

Another approach, which navigates the risk of putting your restricted personal or client data into ChatGPT, is to extract large numbers of synthetic examples from the system and then use these to “detect” certain scenarios within your own data. For example, if you are trying to detect vulnerable customers who call your contact centre, you might ask ChatGPT for a large number of examples of the kinds of things vulnerable customers may say, and then run more traditional Natural Language Processing (NLP) matching algorithms to find similar statements in your call centre logs. This avoids the dangers of sending PII into these systems.
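
A minimal sketch of that matching step is shown below, using TF-IDF and cosine similarity from scikit-learn. The example phrases and transcripts are invented for illustration; in practice you would likely use sentence embeddings and a decision threshold tuned on labelled calls.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative phrases a generative model might return when asked for examples
# of things vulnerable customers say (not real model outputs or real calls).
vulnerability_examples = [
    "I've just lost my job and I can't keep up with the payments",
    "I'm caring for my mother full time and money is very tight",
    "I don't really understand these charges, it's all very confusing",
]

call_transcripts = [
    "Hi, I'd like to update my address please",
    "I lost my job last month and I'm struggling to keep up with the payments",
]

vectoriser = TfidfVectorizer().fit(vulnerability_examples + call_transcripts)
scores = cosine_similarity(
    vectoriser.transform(call_transcripts),
    vectoriser.transform(vulnerability_examples),
)

# Surface the best-matching example phrase for each transcript; a production
# system would apply a threshold tuned on labelled data rather than printing.
for transcript, row in zip(call_transcripts, scores):
    best = row.argmax()
    print(f"{transcript!r} -> {vulnerability_examples[best]!r} (similarity {row[best]:.2f})")
```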

Finally, when you really do want to utilise the power of these underlying models, you always have the option of taking an openly available base model (like the GPT-3 davinci model) and fine-tuning it on your own data. This is of course much more expensive, but the capability it brings, alongside the mitigated risk, is something that should be considered in a large number of scenarios.
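
As an illustration of that route, the sketch below uses the OpenAI Python client and GPT-3 fine-tuning endpoints as they existed at the time of writing. The file name, data format and hyperparameters are placeholders, and a fully self-hosted open-source base model is an equally valid path.

```python
import openai

openai.api_key = "YOUR_API_KEY"  # keep this in a secret manager, not in code

# training_examples.jsonl is an illustrative file of {"prompt": ..., "completion": ...}
# pairs drawn from your own, appropriately governed, internal data.
upload = openai.File.create(
    file=open("training_examples.jsonl", "rb"),
    purpose="fine-tune",
)

# Kick off a fine-tune of the GPT-3 davinci base model on that data.
job = openai.FineTune.create(
    training_file=upload.id,
    model="davinci",
    n_epochs=2,  # illustrative hyperparameter
)
print(job.id)  # poll this job until the fine-tuned model is available for inference
```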

Overall, you need a holistic view on AI and the data that underpins it. Generative AI should not be considered in isolation but instead as an integrated part of your solution stack. Again, the knowledge and awareness of which risks exist will allow you to design the data or ML solutions required to mitigate them.

4. Data Foundations: Incorporating Generative AI into your current data estate and platform

Now that we’ve converged on a common view of what performant AI models look like, largely centred around transformer architectures, the value is shifting away from advancements in algorithmic development towards data. Data is your differentiator: the thing that will determine your ability to make decisions and optimise going forward.

Therefore it’s important that we look at Generative AI not only as something we can use, but as something we should build our business operations around, in order to collect more data and improve over time. This could mean storing amendments to regulatory documents produced by the generative system, or feedback on which outreach messages work for different audience groups. It could mean storing and feeding in all of your advisory discussions with clients and fine-tuning your own model over time to learn from real human interactions across service industries. Regardless of the application, storing the institutional knowledge created every day across your business is imperative, and with the advent of capabilities such as transcription, this need not be form- or spreadsheet-based data.

The future of Generative AI is a multi-modal one, which means a wider variety of input data will be possible. Putting in place the collection, storage and platform capabilities to start capturing that data now will set you up for success in the (very) near future.

As we mentioned in the previous section, it is really important to understand what data you feed into these algorithms, specifically if you are using an external service, many of which collect and store your data by default. But even if you have your own in-house generative algorithms, data lineage, tracking what data goes in and what comes out, is vitally important for both auditing and iterative improvement.
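
As a minimal sketch of that kind of lineage capture, the example below appends each prompt/response pair, with some basic context, to a JSONL log. The file name, fields and example call are illustrative; in an enterprise setting this record would land in a governed store on your data platform.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

LINEAGE_LOG = Path("genai_lineage.jsonl")  # illustrative; use a governed data store in practice

def log_interaction(prompt: str, response: str, model: str, user: str) -> None:
    """Append one prompt/response pair, with context, so it can be audited and reused."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model": model,
        "user": user,
        "prompt": prompt,
        "response": response,
    }
    with LINEAGE_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

log_interaction(
    prompt="Draft a summary of today's client call",
    response="...model output...",
    model="gpt-3.5-turbo",
    user="advisor-042",
)
```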

That’s why at Mesh-AI we focus on the integrated solution set required to deliver safe AI at scale. This starts with the data foundations, the platform solutions and the product thinking that enable continuous, validated learning.

5. Generative AI Assisted Development: How to optimise engineering practices

Finally, as an engineering-led consultancy, we’d be remiss not to mention the gargantuan impact that Generative AI is having on the way development is carried out. Since the release of GitHub Copilot, the adoption of Generative AI-powered coding assistants has grown significantly. Many reports claim benefits not only in efficiency but also in job satisfaction and creativity. This new development paradigm and toolset allows coders to get through the repetitive tasks more quickly, freeing them to focus on the important stuff: outcomes and innovation.

There are of course considerations, such as IP, ensuring you are not sending all your code directly to the company offering the coding-assistant service, and putting in place the guardrails to ensure appropriate quality checks are carried out. Indeed, historical challenges around open source licence rules and OWASP vulnerabilities still remain. But these are things that can be solved with a development framework, an acceptable usage policy and practice guidelines within your engineering groups. Staying up to date with releases, enhancements and risks through a governance-based approach will both protect and increase the pace of adoption and impact.

It’s time to move on Generative AI, in the right way

We believe the world has changed and that this change is here for good. The adoption of Generative AI within businesses is going to be essential to survival, but, given where we are on the maturity curve, it’s important this adoption is executed in the right way. Putting together a solid plan that covers both the opportunities that drive real business outcomes and the risks and guardrails required is key.

The wave of possibility for data-driven and ethical AI is massive. And we genuinely believe that any of our customers left behind by this wave will have been mis-served by us and the industry as a whole.

Mesh-AI has a proven track record in reimagining how enterprises operate, making data and AI their competitive advantage. Need help developing your AI Strategy? Get in touch with David Bartram-Shaw

Also, take a look at our AI & ML accelerator to supercharge your transformation efforts.

Have a listen to our latest Podcast: On generative AI and the role of Regulation and its future

Read our latest blog on The Most Powerful Data Trends Enterprises Need to know about
