Artificial intelligence (AI) has quickly moved from experimental projects in data science labs to being embedded in nearly every facet of enterprise digital transformation. Over 90% of developers now use AI tools daily, according to the 2025 DORA State of AI-Assisted Software Development Report. Enterprises are spending billions to harness AI for innovation, productivity, and customer growth.
Yet the paradox is stark: While adoption is surging, most AI initiatives still fail to deliver meaningful business value. One report indicated that 95% of organizations are getting zero return from their AI investments. Further, at least 30% of AI projects will be abandoned after proof of concept, and only 25% achieve their expected ROI.
These statistics raise critical questions: Why do so many AI initiatives miss their intended business outcomes, and what can teams do to improve these results?
The answer lies not in abandoning AI but in reframing how we approach AI investments. The findings from the 2025 DORA report, and our experience working with enterprises, point to a similar conclusion: AI is not a silver bullet. It is an amplifier—of strengths or weaknesses. Without the right lifecycle discipline, organizational practices, continuous measurement, and alignment with business goals, AI initiatives introduce the risk of accelerating chaos instead of delivering value.
This blog explores why so many AI initiatives fail to deliver business value, how a value stream management (VSM) lifecycle approach changes the equation, why leading indicators beat lagging outcomes for course correction, and how automated measurement makes continuous outcome tracking feasible.
Across industries, the expectations for AI are sky high. In our August 2025 webinar, we outlined the four most common business outcomes that leaders expect from AI investments.
However, these expected outcomes rarely materialize, as the statistics cited above amply illustrate.
These results are echoed by what we hear in our own customer engagements. The 2025 DORA report adds a crucial nuance: Adoption itself is not the problem. In fact, over 90% of developers use AI, often for code generation, documentation, or testing. Developers report increased productivity and even quality improvements. But the DORA research shows a paradox: While AI boosts throughput, it also creates risks to stability and reliability.
This mismatch—between widespread usage and underwhelming business value—underscores that the barrier isn’t access to AI tools. It’s how enterprise teams manage AI initiatives, align them with strategic goals, and measure progress.
In our webinar, we explored the three most common root causes of AI initiative failure. The most critical of these is the lack of clear objectives and outcome alignment: Too often, AI initiatives focus on local optimizations (such as code generation or test automation) that improve efficiency but don’t move the needle on enterprise-level outcomes.
The DORA report reinforces this diagnosis. Their research shows that AI amplifies what’s already there. When teams have strong DevOps practices, high-quality internal platforms, and clear organizational alignment, AI can deliver real value. Weak foundations, by contrast, are magnified—with AI producing more change and more instability.
To break this cycle, enterprises must move from siloed AI initiatives to a holistic lifecycle view grounded in VSM. They must also improve the core AI capabilities identified in the DORA report.
Instead of treating AI as an isolated tool within development and testing, VSM looks at the end-to-end value stream, from initial idea through planning, development, testing, and deployment to realized business value. A local speedup in one stage matters only insofar as it improves flow across the whole stream, as the sketch below illustrates.
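To make the end-to-end framing concrete, here is a minimal sketch of why optimizing a single stage rarely moves enterprise-level outcomes. The stage names and timings are hypothetical, purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical stage timings (in days) for one work item moving through the
# value stream; the stage names and numbers are illustrative only.
@dataclass
class Stage:
    name: str
    active_days: float   # time spent actively working in the stage
    waiting_days: float  # time spent queued before the next stage

stages = [
    Stage("ideation", 2.0, 5.0),
    Stage("development", 4.0, 1.0),   # the stage AI code generation speeds up
    Stage("testing", 3.0, 4.0),
    Stage("deployment", 0.5, 2.0),
    Stage("value_realization", 1.0, 6.0),
]

total = sum(s.active_days + s.waiting_days for s in stages)
dev = next(s for s in stages if s.name == "development")

# Even if AI halves active development time, end-to-end flow time barely
# moves, because most elapsed time sits in waiting and in other stages.
improved = total - dev.active_days / 2
print(f"Flow time: {total:.1f} days -> {improved:.1f} days "
      f"({(total - improved) / total:.0%} faster end to end)")
```

With these numbers, halving development effort shaves only about 7% off end-to-end flow time, which is exactly the kind of local optimization that fails to register at the enterprise level.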
To support this lifecycle approach, the 2025 DORA report identifies seven core capabilities enterprises must focus on (see figure below): a clear and communicated AI stance, healthy data ecosystems, AI-accessible internal data, strong version control practices, working in small batches, a user-centric focus, and quality internal platforms. The report provides substantial evidence that when teams paired these seven capabilities with AI adoption, AI’s impact on important outcomes was amplified.
The DORA report strongly aligns here: The authors stress the importance of internal platforms and VSM practices as multipliers of AI’s impact. In organizations with robust platforms, AI adoption correlates with higher throughput and better outcomes; in organizations that lack them, AI merely accelerates delivery instability.
Another reason AI initiatives fail is over-reliance on lagging outcomes. Business outcomes like revenue growth, customer satisfaction, and time to value can take months or years to materialize. By the time results arrive, it’s too late to course-correct.
We call these “lagging surprises.”
Teams often track outputs, such as the number of models deployed or the percentage of code generated by AI. While useful for managing execution, these metrics don’t predict whether outcomes will be achieved.
The solution is to instrument leading indicators that map directly to lagging outcomes (see figure below). These indicators give teams early, actionable signals, so they can course-correct long before lagging results arrive.
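As a concrete illustration, here is a minimal sketch of what leading-indicator instrumentation might look like. The metric names, thresholds, and outcome mappings are hypothetical examples, not metrics prescribed by DORA or ValueOps Insights:

```python
# Each leading indicator maps to the lagging outcome it is meant to predict,
# with a threshold that should trigger course correction when crossed.
# All names and values here are illustrative assumptions.
INDICATORS = {
    "change_failure_rate":      {"outcome": "reliability", "max": 0.15},
    "rework_ratio":             {"outcome": "time_to_value", "max": 0.20},
    "ai_suggestion_acceptance": {"outcome": "developer_productivity", "min": 0.30},
}

def evaluate(snapshot: dict[str, float]) -> list[str]:
    """Return warnings for indicators drifting toward a bad lagging outcome."""
    warnings = []
    for name, rule in INDICATORS.items():
        value = snapshot.get(name)
        if value is None:
            continue
        if "max" in rule and value > rule["max"]:
            warnings.append(f"{name}={value:.2f} risks {rule['outcome']}")
        if "min" in rule and value < rule["min"]:
            warnings.append(f"{name}={value:.2f} risks {rule['outcome']}")
    return warnings

# Weekly snapshot computed from delivery telemetry (values are illustrative).
print(evaluate({"change_failure_rate": 0.22,
                "rework_ratio": 0.12,
                "ai_suggestion_acceptance": 0.25}))
```

The point of the design is that each metric is reviewed weekly against an explicit threshold tied to a named outcome, rather than being reported in isolation months after the fact.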
The DORA report itself emphasizes this same need for continuous measurement and trust. Developers are adopting AI, but trust remains low. Leading indicators, tied to real outcomes, can build confidence across teams and leadership.
Tracking leading indicators at scale requires automated data ingestion and mining. In practice, AI implementations produce huge volumes of telemetry across tools—code repositories, testing, deployment, product analytics, and customer systems. Manual collection is not feasible.
The answer is automated pipelines that ingest telemetry from each of these systems, normalize it into a common schema, correlate it across tools, and keep it continuously up to date.
This automation makes continuous measurement possible—giving leadership a single, unified view of delivery progress, investment alignment, and business outcomes.
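To show the shape of such a pipeline, here is a minimal sketch. The source systems, field names, and event structures are hypothetical; a real pipeline would pull from each tool’s API on a schedule rather than from in-memory lists:

```python
import json
from collections import defaultdict

# Per-source adapters normalize tool-specific events into one common schema
# of {"team", "metric", "value"}. All field names here are assumptions.
ADAPTERS = {
    "git":    lambda e: {"team": e["repo_team"], "metric": "commits", "value": 1},
    "ci":     lambda e: {"team": e["pipeline_owner"], "metric": "deploys",
                         "value": 1 if e["status"] == "success" else 0},
    "issues": lambda e: {"team": e["assignee_team"], "metric": "cycle_days",
                         # timestamps expressed in days, for simplicity
                         "value": e["closed_ts"] - e["opened_ts"]},
}

def ingest(source: str, raw_events: list[dict]) -> list[dict]:
    """Normalize raw events from one tool into the unified schema."""
    adapt = ADAPTERS[source]
    return [adapt(e) for e in raw_events]

def rollup(events: list[dict]) -> dict:
    """Aggregate normalized events into a single per-team view."""
    view = defaultdict(lambda: defaultdict(float))
    for e in events:
        view[e["team"]][e["metric"]] += e["value"]
    return {team: dict(metrics) for team, metrics in view.items()}

events = (ingest("git", [{"repo_team": "payments"}])
          + ingest("ci", [{"pipeline_owner": "payments", "status": "success"}]))
print(json.dumps(rollup(events), indent=2))
```

The adapter pattern is the key idea: each tool’s quirks stay isolated in one small function, so adding a new telemetry source never disturbs the unified view downstream.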
So, how do enterprises turn these concepts into practice?
From both our ValueOps Insights customer engagements and the DORA report’s findings, a consistent set of imperatives emerges: align every AI initiative with enterprise-level outcomes, manage AI across the end-to-end value stream, strengthen the core capabilities that amplify AI’s impact, instrument leading indicators, and automate the measurement behind them.
When these elements come together, AI stops being a gamble and becomes a disciplined, measurable driver of value.
The 2025 DORA report makes one thing clear: AI is now embedded in software delivery. The question is not whether organizations will adopt AI, but whether they can harness it effectively.
Left unmanaged, AI initiatives will continue to fail, accelerating instability, creating technical debt, and leaving leadership frustrated by a lack of ROI.
But with a VSM-based lifecycle approach, leading indicator instrumentation, and automated measurement, enterprise teams can shift from reactive disappointment to proactive realization of outcomes.
In practical terms, this means treating AI as part of the end-to-end value stream rather than an isolated tool, defining the business outcomes each initiative is meant to serve, instrumenting leading indicators against those outcomes, and automating the collection and correlation of delivery telemetry.
By rethinking how we approach AI initiatives, tying them to outcomes, and measuring what matters continuously, enterprises can finally close the gap between AI ambition and business reality.
As the DORA report rightly observes, AI is an amplifier: It magnifies what’s already in place. For enterprises, the choice is clear: Let AI amplify weak practices and chaotic delivery, or use VSM and outcome assurance to turn AI into a catalyst for achieving meaningful business value.
The organizations that embrace the latter approach will not only outpace their competitors but also ensure their AI investments deliver real results—and have the data to prove it.
For a more detailed review of this subject and a demo of how ValueOps Insights can be used to predictively track AI outcomes, please view our webinar.