How to turn AI ambition into revenue without the waterfall trap.
If you want to know why most enterprise AI programmes stall out, it’s not because people lack ideas, the models are not good enough, or the business “doesn’t get it”. It’s because the organisation mistakes activity for progress, and planning for execution. You end up with a mountain of pilots, a calendar full of steering committees, and a strategy document so long it needs its own onboarding process.
Meanwhile, shipping often becomes an afterthought rather than a driver. Something real gets pushed into a live workflow later than it should, adoption is patchy, and any KPI movement is either delayed or hard to point to with confidence.
The gap between ambition and production results usually comes down to one core issue: strategy that has drifted away from delivery. Strategy absolutely has a place, but it should serve shippable outcomes, not become a parallel workstream that teams orbit around for months.
When AI initiatives underperform, the explanations are often familiar: the data is fragmented and low quality, the platform is not ready, governance is not mature enough, or the business case is not yet clear.
There may be some truth in these, but the risk is that they will become a reason to pause delivery until the world is perfect, which rarely ends well. In fast-moving areas like AI, the landscape shifts while you prepare, and you can end up doing a lot of “getting ready” without getting anything real into people’s hands.
A healthier framing is: yes, constraints exist, but which constraints truly block value, and which ones can be improved incrementally as part of delivering something useful? If everything must be fixed upfront, the business can lose patience before the programme has a chance to prove itself.
Most “AI strategy” is presented like a story about the future. Real strategy is a decision system for the present. It’s the ability to consistently answer: what we are building next, why that workflow and not another, which KPI it will move, and when it will be live in front of real users.
If you can’t explain the answers without opening a slide deck, the strategy isn’t landing. If it mainly exists as a long document, it might still be directionally correct, but it hasn’t yet been translated into choices a team can execute this quarter.
The practical version is simple: pick an area where the cost and effort of applying AI are clearly smaller than the benefit it unlocks, then start small and increment once value is real. That last part matters more than most people realise. Big bang transformations don’t fail because they’re technically impossible. They fail because they’re hard to finance, hard to govern, and hard to keep sponsored without early wins.
If modern AI has a superpower, it’s not “answering questions”. It’s replacing parts of workflows that are slow, manual, and expensive because they require lots of human steps, handoffs, and rework.
So instead of brainstorming endless “use cases”, I prefer an audit process.
Look for processes that are slow, historically immovable, or expensive. The ones where everyone has learned to live with the pain.
Not everything should be solved with LLMs or agents. Sometimes the best fix is removing a step, changing a policy, or improving a form. AI should earn the right to exist.
This isn’t about making everything a spreadsheet exercise. It’s about maintaining sponsorship and momentum. In most organisations, continued investment depends on being able to show progress in a language executives already recognise.
There’s also a very practical reason to do this early: many organisations expect short payback periods for major tech investments. Research published by AWS Executive Insights (ESG) found 52% of respondents said their organisation targets a 7–12 month payback period for major IT investments. That expectation is not always realistic, but it is real, and it influences how quickly programmes lose air cover if outcomes are unclear.
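To make that concrete, here is a back-of-the-envelope payback check. The numbers are purely illustrative, not drawn from any specific programme, but the arithmetic is the conversation you want to be able to have with a sponsor.

```python
# Rough payback sketch with illustrative numbers (not from a real programme).
build_cost = 480_000          # one-off cost to deliver the first slice
monthly_net_benefit = 60_000  # monthly saving or uplift, after run costs

payback_months = build_cost / monthly_net_benefit
print(f"Payback: {payback_months:.1f} months")  # 8.0 months, inside a 7-12 month target
```

If that number lands well outside the window your executives expect, it is far better to know before the build starts than after the budget review.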
If you want a filter that keeps ambition alive while preventing low-value work from soaking up months of effort, a short checklist tends to work well: is the workflow genuinely slow, manual, or expensive; is AI actually the right fix, or would a simpler change do; can you tie it to a KPI leadership already tracks; can you ship a first slice into a live workflow within a quarter; and is the cost of getting it wrong one you can live with?
It doesn’t shut ideas down. It forces clarity early, which makes delivery faster and scaling easier. It helps to put numbers on the table, not because they’re perfect, but because they make prioritisation and trade-offs much easier.
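As a sketch of what “numbers on the table” can look like, here is a crude value-over-effort score. The fields, weights, and example candidates are assumptions for illustration, not a prescribed scoring model; the point is that even rough numbers make the trade-offs discussable.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    annual_cost_of_pain: float  # what the slow, manual workflow costs today
    delivery_cost: float        # rough cost to ship a first vertical slice
    ai_fit: float               # 0-1: how much of the work AI can credibly take on
    kpi_linked: bool            # can it move a KPI leadership already tracks?

def score(c: Candidate) -> float:
    """Crude value-over-effort ratio, zeroed if there is no KPI to move."""
    if not c.kpi_linked:
        return 0.0
    return (c.annual_cost_of_pain * c.ai_fit) / c.delivery_cost

candidates = [
    Candidate("claims triage", 1_200_000, 250_000, 0.6, True),
    Candidate("marketing copy drafts", 150_000, 80_000, 0.8, False),
]
for c in sorted(candidates, key=score, reverse=True):
    print(f"{c.name}: {score(c):.1f}")
```

The exact formula matters far less than the discipline of writing the estimates down and letting people challenge them.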
There’s a popular way enterprise AI gets explained, especially in strategy conversations: first you build the foundations, then you modernise the platform, then you build AI products, and only once those products exist do you worry about enablement and adoption. On paper, it’s tidy. It appeals to the part of the organisation that wants a clean roadmap, clear phases, and the sense that risk is being reduced before anything “serious” ships.
The logic usually goes like this: “Our data is fragmented and low quality, so we need a large transformation first.” Or “We should not build products until the platform is ready.” It’s not a ridiculous argument. It’s just an argument that quietly turns into a waterfall programme when you apply it to a large enterprise, because “foundations” becomes a multi-year initiative with no visible value until the end.
Organisations treat the model like a strict waterfall. Step 1, then step 2, then step 3, and only then do you get to build something people can use.
If you tell a large organisation, “You must complete a $200m data migration before you can build a single AI product,” you might be setting the programme up for a struggle to maintain momentum and sponsorship over a long period of time, especially if the business can’t see incremental wins along the way.
The more positive way to say it is this: foundations work best when they’re constantly being validated by real products. When delivery creates early value, it buys time, trust, and budget to keep improving the foundation in a sustainable way.
The collapse of this model usually happens right at the start, with the foundations. The way out is not skipping foundations, but building them through delivery using strategic vertical slicing. In practice, you start with an end-to-end vision, how the workflow should run, how systems talk to each other, what “good” data looks like, and the standards you want teams to follow. You just don’t try to build all of it upfront. Instead, you take one small, high-value use case and deliver it end to end, using the gaps you hit (data, access, controls) as the exact places you strengthen the foundations. Over time, those slices add up like a jigsaw puzzle, and each shipped outcome makes the next one faster and safer to deliver.
Strategic vertical slicing looks like this: agree the end-to-end vision first, pick one small, high-value use case, deliver it end to end into a live workflow, and treat every gap you hit along the way (data, access, controls) as the next piece of foundation to strengthen. Then repeat, letting each slice make the next one faster and safer.
This approach turns delivery into a flywheel: each shipped slice creates value, and the effort spent strengthening data, governance, and reliability accumulates over time rather than resetting with every new pilot. The programme stays credible because progress is visible, and the enterprise foundation improves because it is being shaped by real demand.
A useful way to think about this is a simple chain:
Value comes from adoption. Adoption comes from trust. Trust comes from data integrity and governance.
An AI product only generates ROI if people actually use it. People only use it if they trust it. Trust is created by repeatable, accurate, safe outputs inside a workflow that matters.
That foundation has three parts: context, data integrity, and governance that leaders can stand behind.
If the system does not have enough context, it tends to produce generic outputs. People might try it once, but without that context it rarely becomes a tool they rely on day to day.
Bad inputs create bad outputs. And in AI, a few wrong outputs can undo months of progress because trust tends to drop quickly once people feel they’ve been misled.
Users need accuracy. Leaders need safety. If security, legal, or compliance teams aren’t confident, the product will struggle to reach production, no matter how good the demo is.
One more important point: teams often hold AI to an impossible standard of perfection. That’s understandable, but it can become a blocker.
A more grounded standard is to treat AI like a person doing the same step in the process. It doesn’t have to be perfect. It needs to be cheaper or faster without sacrificing quality, or it needs to deliver a noticeable increase in quality that is worth the risk and change effort. The goal is credible usefulness.
Release checks should be proportional to the cost of failure.
If you’re building an internal tool used by five people and it goes down for five minutes, that’s annoying. You do not need heavyweight governance that turns a small release into a three-month process.
If you’re deploying something public-facing, regulated, or financially material, you do want stronger assurance, because the cost of being wrong is not “a bug”. It is reputational damage, with legal risk and financial implications.
And if your AI system touches personal data, GDPR should be a reminder that thinking “we’ll tidy up governance later” could be expensive: fines run up to €20m or 4% of worldwide annual turnover, whichever is higher, for certain infringements.
Governance needs to be part of shipping, not something bolted on after the fact.
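One way to make “proportional” operational is to write the tiers down, so the release checks are agreed before anyone is arguing about a specific launch. The tiers and checks below are illustrative assumptions, not a compliance framework.

```python
# Illustrative risk tiers mapped to release checks; adapt to your own context.
RELEASE_CHECKS = {
    "internal_low_risk": [
        "peer review",
        "basic evaluation against a small golden set",
    ],
    "customer_facing": [
        "peer review",
        "broader evaluation and red-teaming",
        "human-in-the-loop fallback",
        "security review",
    ],
    "regulated_or_financially_material": [
        "everything in the customer-facing tier",
        "legal and compliance sign-off",
        "audit logging and a documented rollback plan",
    ],
}

def checks_for(tier: str) -> list[str]:
    """Look up the agreed checks for a given risk tier."""
    return RELEASE_CHECKS[tier]

print(checks_for("internal_low_risk"))
```

The value is not in the dictionary itself. It is in having agreed, up front, which tier a product sits in and what shipping requires at that tier.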
A lot of AI programmes quietly fail at the point where people should be adopting them. They build something technically impressive, then try to force behaviour change through training, comms, and mandates. That approach rarely sticks.
People tend to take the path of least resistance at work. If using the AI tool makes the job harder, slower, or more complex, adoption will be performative and temporary. If it makes the job easier, faster, or higher quality, adoption becomes natural.
So enablement is not “how to use the tool”. Enablement is making the AI path the easiest path: redesigning the workflow around the tool, removing steps rather than adding them, and fitting it into the systems people already use.
If the product reduces friction, you don’t need to convince people. You need to put it where they already work.
And it’s worth saying plainly: you can build the most intelligent, elegant, cost-effective AI tool in the world. If no one uses it, it won’t change anything.
The reason pilots often don’t scale is that the organisation can’t prove the thing works in a way that matters to decision-makers.
If the thing works, it will usually scale. The hard part is proving it works.
This is why KPI alignment is not bureaucracy; it’s translation. It gives technical teams a shared language with stakeholders so scaling decisions don’t turn into opinion wars.
Define the KPI and baseline early, before you start building, and scaling becomes a business decision. Define it late, and scaling becomes a debate about whether anyone “feels” like it worked.
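A sketch of what “defining it early” can look like in practice: agree the KPI, the baseline measurement, and the improvement threshold with the sponsor before the build starts. The KPI and numbers below are hypothetical.

```python
# Hypothetical KPI agreement captured before the build starts.
baseline = {
    "kpi": "average handling time (minutes)",
    "value": 34.0,               # measured over an agreed period before launch
    "target_improvement": 0.20,  # 20% reduction agreed with the sponsor up front
}

def met_target(measured: float) -> bool:
    """True if the measured KPI clears the pre-agreed threshold."""
    return measured <= baseline["value"] * (1 - baseline["target_improvement"])

print(met_target(26.5))  # True: evidence for a scaling decision, not an opinion war
```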
If you want enterprise AI to pay back, stop treating strategy as a document and start treating it as an execution discipline: audit for a painful workflow, filter for AI fit, tie it to a KPI that matters, ship a vertical slice into a live system, build the minimum trust foundation that makes outputs credible, apply governance proportional to the cost of failure, make adoption the path of least resistance, then scale because you can prove the thing works.
Write fewer decks. Ship more outcomes.
Mesh-AI partners with enterprises to navigate this exact journey. Connect with us to explore how.