9 Apr

The Long Tail of AI: A Moment in Time Review of AI Hype vs Reality

David Bartram-Shaw

TLDR: The media/social noise train is telling us impact is minimal, hype is fading and coding will soon be a non-requirement. In the background, AI research and releases still set an incredible pace. Enterprise adoption, however, is limited to non-scalable POCs and isolated, impact-limited areas. Data is still an oversight.

Generative AI Spend vs Revenue

Last week the news hit that "Generative AI is not generating returns", based on a Sequoia estimate that the AI industry spent $50 billion on Nvidia chips for GenAI but brought in only $3 billion in revenue. A sobering statistic, but it looks predominantly at companies "building" AI solutions, models and services.

There would be an even more staggering statistic if we were to look at the technology spend of companies attempting to implement these solutions at scale: it would be tiny. The majority of those who are making progress are doing so using "off-the-shelf" services that sit separately from their enterprise data and technology estate.

Coding is dead

This $3 billion statistic was also referenced in the article announcing "A Peter Thiel-Backed AI Startup, Cognition Labs, Seeks $2 Billion Valuation". Cognition Labs is the company that recently released the impressive-looking Devin, an agent-based coding solution they call "the first AI software engineer", one of the latest in a series of hype storms further fuelling the narrative that "coding is dead".

It is very true that coding has seen an efficiency boost from Generative AI, and it's one of the more successful areas of adoption and impact. That said, the need for strong developers, alongside the visionary builders who can bring the "stack" together, has never been greater. Otherwise we will be looking at another "AutoML" promise, but with 20x the impact. AutoML promised click-ops, auto-learning ML solutions, but these failed to scale because most problems are novel and the majority of the challenge lies in set-up and data.

Pace of AI Research Release

The general hype is reducing, which I think is a good thing, but with it comes a reduction in the "belief" of what AI can and will do. Meanwhile, in the background, research labs (both industry and academic) are pushing full steam ahead with mind-blowing developments and ever better productionised capabilities, which to the average onlooker do not significantly change the outlook. But behind the scenes these are the things that will bring us sustained scale. Here are some key areas that are pushing the button right now:

  • Huge context windows: Is RAG soon to be irrelevant? Thanks to Gemini (1m context window, 10m in the research version). Personal note: regardless of RAG's future, vector databases will always have a place. I'm glad their profile has been rightly raised. Thanks, LLMs.
  • Mixture of experts: Architectures that split the computation of a layer or operation across multiple "expert" subnetworks. This increases efficiency while still showing great performance. Key credit: Mistral's Mixtral model.
  • Small/Tiny language models (SLMs/TLMs), like Microsoft's Phi-2, continue to drive down the cost of LLM deployment. This will unlock privacy-safe and IoT/edge deployments.
  • Open-source challengers to "paywalled model endpoints" like GPT-4 continue to be released. The latest is Databricks' DBRX model.
  • Combining Generative AI with robotics. NVIDIA's latest release of GR00T is an impressive step that shows significant progress in bringing the digital and physical research areas of AI together.
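To make the mixture-of-experts idea above concrete, here is a minimal NumPy sketch of the routing mechanism: a gate scores each expert, only the top-k experts actually run, and their outputs are combined by the renormalised gate scores. This is illustrative only; the experts, dimensions and gate here are toy stand-ins, and real MoE layers such as Mixtral's sit inside transformer blocks with learned parameters.

```python
import numpy as np

def moe_layer(x, experts, gate_weights, top_k=2):
    """Route input x to the top-k experts chosen by a softmax gate,
    then combine their outputs weighted by the gate scores."""
    logits = x @ gate_weights               # one raw score per expert
    top = np.argsort(logits)[-top_k:]       # indices of the top-k experts
    scores = np.exp(logits[top])
    scores /= scores.sum()                  # softmax over the chosen experts only
    # Only the selected experts compute anything: this sparsity is
    # where the efficiency gain over a dense layer comes from.
    return sum(s * experts[i](x) for i, s in zip(top, scores))

rng = np.random.default_rng(0)
dim, n_experts = 8, 4
# Each "expert" is a tiny linear subnetwork with its own weights.
weights = [rng.standard_normal((dim, dim)) for _ in range(n_experts)]
experts = [lambda x, W=W: x @ W for W in weights]
gate = rng.standard_normal((dim, n_experts))

x = rng.standard_normal(dim)
y = moe_layer(x, experts, gate, top_k=2)
print(y.shape)  # (8,)
```

With top_k=2 out of 4 experts, only half the expert computation runs per input, yet the model's total parameter count is that of all four experts combined.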

We should also mention the public cloud providers who are, alongside all of their algorithmic research and investment, releasing the services that allow us to develop and deploy at pace.

The reality of enterprise adoption

Currently there are a few core areas where AI (or more specifically Generative AI) is seeing traction within enterprise. To me, these are the easy, short-tail areas: contact centre chatbots (not connected to customer databases, real-time transactions, web activity etc.), knowledge extraction (limited by its focus on unstructured content) and content creation.

The long tail, the areas that are core to the operation and services of most businesses, is relatively untouched. Think true decision making. Yes, a large number of POCs are taking place in other areas, but few have hit the scale to genuinely drive enterprise change.

This will not happen until companies start to approach Generative AI not as a separate "new" space, but as part of the wider Data & AI solution set, where it can reach full potential through integration and true orchestration. We published a cool piece about some of these long-tail projects here.

All this to say, I'm positive about where we are heading. If we focus and build in the right way, we really can utilise the amazing research and innovation being released to move far beyond the obvious, toward true change.
