Introduction
Artificial intelligence (AI) has quickly moved from experimental projects in data science labs to being embedded in nearly every facet of enterprise digital transformation. Over 90% of developers now use AI tools daily, according to the 2025 DORA State of AI-Assisted Software Development Report. Enterprises are spending billions to harness AI for innovation, productivity, and customer growth.
Yet the paradox is stark: While adoption is surging, the majority of AI initiatives still fail to deliver meaningful business value. For example, one report indicated that 95% of organizations are getting zero return from their AI investments. Further, at least 30% of AI projects will be abandoned after proof of concept, and only 25% achieve their expected ROI.
These statistics raise critical questions: Why do so many AI initiatives miss their intended business outcomes, and what can teams do to improve these results?
The answer lies not in abandoning AI but in reframing how we approach AI investments. The findings from the 2025 DORA report, and our experience working with enterprises, point to a similar conclusion: AI is not a silver bullet. It is an amplifier—of strengths or weaknesses. Without the right lifecycle discipline, organizational practices, continuous measurement, and alignment with business goals, AI initiatives introduce the risk of accelerating chaos instead of delivering value.
This blog explores these topics:
- Why most AI initiatives fail to meet business outcomes.
- The need for a Value Stream Management (VSM)-based lifecycle approach.
- How leading indicators can be used to continuously deliver lagging business outcomes.
- Practical ideas for how enterprises can improve AI success rates by integrating these practices.
The harsh reality: Most AI initiatives miss the mark
Across industries, the expectations for AI are sky high. In our August 2025 webinar, we outlined the four most common business outcomes that leaders expect from AI investments:
- Accelerate time to value: Enabling teams to speed innovation and realize benefits sooner.
- Improve efficiency and productivity: Fueling gains such as automating work, optimizing resources, and boosting throughput.
- Enhance quality and reliability: Using AI to catch defects earlier, increase resilience, and maintain consistent service.
- Deliver business results: Generating improvements in customer growth, revenue, and competitive advantage.
However, these expected outcomes rarely materialize. Here are just a few findings that amply illustrate this point:
- 95% of organizations are getting zero return from their AI investments.
- At least 30% of AI projects will be abandoned after proof of concept.
- Only one in four achieve their expected ROI.
These results are echoed by what we hear in our own customer engagements. The 2025 DORA report adds a crucial nuance: Adoption itself is not the problem. In fact, over 90% of developers use AI, often for code generation, documentation, or testing. Developers report increased productivity and even quality improvements. But the DORA research shows a paradox: While AI boosts throughput, it also creates risks to stability and reliability.
This mismatch—between widespread usage and underwhelming business value—underscores that the barrier isn’t access to AI tools. It’s the way teams in enterprises manage AI initiatives, align them with strategic goals, and measure progress.
Why AI initiatives fall short
In our webinar, we explored the three most common root causes for AI initiative failure:
- Poor data quality and management. Without high-quality, well-managed data, AI models produce flawed insights. Data remains the foundation, and weak governance undermines even the most advanced algorithms.
- Lack of clear objectives and ROI alignment. Many projects start as experiments without a well-defined problem or measurable outcome. If you can’t articulate how AI will accelerate time to value or drive business growth, it’s impossible to prove impact.
- Skills and resource challenges. AI expertise is scarce, and scaling initiatives requires more than data scientists. It demands delivery leaders, product managers, platform engineers, and domain experts, who must all be aligned around business outcomes.
Among these, the lack of clear objectives and outcome alignment is the most critical. Too often, AI initiatives focus on local optimizations (such as code generation or test automation) that improve efficiency but don’t move the needle on enterprise-level outcomes.
The DORA report reinforces this diagnosis. Their research shows that AI amplifies what’s already there. When teams have strong DevOps practices, high-quality internal platforms, and clear organizational alignment, AI can deliver real value. Weak foundations, by contrast, are magnified—with AI producing more change and more instability.
The need for a VSM lifecycle approach and core AI capabilities
To break this cycle, enterprises must move from siloed AI initiatives to a holistic lifecycle view grounded in VSM. They must also improve the core AI capabilities identified in the DORA report.
Instead of treating AI as an isolated tool within development and testing, VSM looks at the end-to-end value stream, including these aspects:
- Value identification: Using AI to uncover unmet needs, analyze customer behavior, and prioritize features aligned with strategy.
- Alignment: Translating business goals into epics and features with AI-assisted analysis.
- Planning and execution: Automating requirements, generating code/tests, and facilitating AI-assisted reviews.
- Validation: Using AI for test prioritization and defect prediction.
- Deployment: Automating AI-enabled deployments, canary releases, and rollback.
- Operation and value realization: Establishing improvements in AIOps, personalization, user analytics, and customer support.
By mapping AI across the full lifecycle, enterprise teams can identify where it will have the greatest business impact—so they’re not just focusing on local efficiency gains.
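To make this lifecycle view concrete, here is a minimal sketch of how a team might encode its value stream map as data so that AI touchpoints and their candidate indicators can be audited end to end. The stage names, use cases, and metric labels below are illustrative assumptions, not a prescribed taxonomy:

```python
# Minimal, hypothetical sketch: a value stream map encoded as data so AI
# touchpoints and their candidate indicators can be audited end to end.
# Stage names, use cases, and metric labels are illustrative assumptions.
VALUE_STREAM_AI_MAP = {
    "value_identification": {
        "ai_use_cases": ["customer-behavior analysis", "feature prioritization"],
        "leading_indicators": ["% of features prioritized by AI"],
    },
    "planning_and_execution": {
        "ai_use_cases": ["code generation", "AI-assisted PR review"],
        "leading_indicators": ["AI-assisted PR review rate"],
    },
    "validation": {
        "ai_use_cases": ["test prioritization", "defect prediction"],
        "leading_indicators": ["defect detection ratio", "flaky test detection"],
    },
    "deployment": {
        "ai_use_cases": ["canary analysis", "automated rollback"],
        "leading_indicators": [],  # gap: AI in use, but nothing measured yet
    },
}

def unmeasured_stages(vsm: dict) -> list[str]:
    """Return stages where AI is applied but no leading indicator is tracked."""
    return [stage for stage, cfg in vsm.items()
            if cfg["ai_use_cases"] and not cfg["leading_indicators"]]

print(unmeasured_stages(VALUE_STREAM_AI_MAP))  # ['deployment']
```

Even a trivial audit like this surfaces stages where AI is in use but nothing is being measured, which is exactly the local-optimization blind spot described above.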
To support this lifecycle approach, the 2025 DORA report identifies seven core capabilities enterprises must focus on (see figure below). The report provides substantial evidence that when teams paired these seven capabilities with AI adoption, AI’s impact on important outcomes was amplified.

The DORA report strongly aligns here: The authors stress the importance of internal platforms and VSM practices as multipliers of AI’s impact. In organizations with robust platforms, AI adoption correlates with higher throughput and better outcomes. In businesses that lack these capabilities, AI merely accelerates delivery instability.
Lagging versus leading indicators: Moving beyond “lagging surprises”
Another reason AI initiatives fail is over-reliance on lagging outcomes. Business outcomes like revenue growth, customer satisfaction, and time to value can take months or years to materialize. By the time results arrive, it’s too late to course-correct.
We call these “lagging surprises.”
Groups often track outputs—such as the number of models deployed or percentage of code generated by AI. While useful for execution, these metrics don’t predict whether outcomes will be achieved.
The solution is to instrument leading indicators that map directly to lagging outcomes (see figure below). Here are some examples:
- Lagging outcome (such as customer growth) → leading indicators: Percentage of features prioritized by AI, adoption flow time, and click-through rates on AI-generated campaigns.
- Lagging outcome (such as improved reliability) → leading indicators: Defect detection ratio, change failure rate (CFR) of AI-assisted deployments, and flaky test detection.
- Lagging outcome (such as faster time to value) → leading indicators: Percentage of AI-prioritized features, AI-assisted PR reviews, and AI-generated test coverage.

These indicators…
- Are measurable on a daily or weekly (not yearly) basis.
- Provide continuous confidence signals about whether long-term outcomes are on track.
- Enable predictive analytics to forecast outcome achievement.
- Allow adaptive actions (for example, shifting focus or re-prioritizing investments) before it’s too late.
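As a minimal sketch of what instrumenting two of these indicators might look like, here is a hedged example; the field names (ai_assisted, caused_failure) are illustrative assumptions, not any specific tool’s schema:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    ai_assisted: bool
    caused_failure: bool  # e.g., triggered a rollback or incident

def change_failure_rate(deployments: list[Deployment], ai_only: bool = True) -> float:
    """CFR = failed deployments / total deployments, optionally scoped to AI-assisted ones."""
    scoped = [d for d in deployments if d.ai_assisted] if ai_only else deployments
    return sum(d.caused_failure for d in scoped) / len(scoped) if scoped else 0.0

def defect_detection_ratio(found_pre_release: int, found_post_release: int) -> float:
    """Share of defects caught before release; higher is better."""
    total = found_pre_release + found_post_release
    return found_pre_release / total if total else 0.0

# A weekly snapshot a team might log alongside its lagging outcomes.
week = [Deployment(True, False), Deployment(True, True), Deployment(False, False)]
print(f"AI-assisted CFR: {change_failure_rate(week):.0%}")             # 50%
print(f"Defect detection ratio: {defect_detection_ratio(18, 2):.0%}")  # 90%
```

Because both values can be recomputed from each week’s records, they provide the kind of frequent, trendable signal these indicators require.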
The DORA report itself emphasizes this same need for continuous measurement and trust. Developers are adopting AI, but trust remains low. Leading indicators, tied to real outcomes, can build confidence across teams and leadership.
Automating data capture and analytics
Tracking leading indicators at scale requires automated data ingestion and mining. In practice, AI implementations produce huge volumes of telemetry across tools—code repositories, testing, deployment, product analytics, and customer systems. Manual collection is not feasible.
The answer is automated pipelines that:
- Ingest and normalize data from across the value stream.
- Ensure quality and hygiene, eliminating noise and inconsistencies.
- Apply machine learning and AI techniques (such as regression, ensemble models, and large language model (LLM)-powered natural language processing) to enable confident outcome forecasting and early risk detection.
This automation makes continuous measurement possible—giving leadership a single, unified view of delivery progress, investment alignment, and business outcomes.
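To illustrate the forecasting step, here is a minimal sketch that fits a linear trend to a synthetic weekly series for one leading indicator and checks the projection against an assumed target. A production pipeline would draw these values from the normalized telemetry described above and would likely use the richer models mentioned (ensembles, LLM-powered NLP):

```python
import numpy as np

# Synthetic weekly values for one leading indicator: the share of PRs that
# received an AI-assisted review. Real values would come from the pipeline.
weeks = np.arange(1, 9)
ai_pr_review_rate = np.array([0.20, 0.24, 0.31, 0.35, 0.41, 0.46, 0.52, 0.55])

# Fit a linear trend and project it forward to a decision point.
slope, intercept = np.polyfit(weeks, ai_pr_review_rate, deg=1)
forecast_week = 12
projected = slope * forecast_week + intercept
print(f"Projected rate in week {forecast_week}: {projected:.0%}")

# Early-warning check against a target assumed to correlate with the
# time-to-value outcome (the 70% threshold is an illustrative assumption).
TARGET = 0.70
if projected < TARGET:
    print("At risk: adapt now, before the lagging outcome is missed.")
else:
    print("On track toward the target.")
```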
Bringing it together: Ensuring AI outcomes
So, how do enterprises turn these concepts into practice?
From both our ValueOps Insights customer engagements and the DORA report’s findings, these imperatives emerge:
- Take a lifecycle approach. Move beyond isolated AI pilots in development and testing. Map AI’s role across the full value stream, from ideation through value realization.
- Strengthen core capabilities. Focus on improving the seven core AI capabilities identified in the 2025 DORA report.
- Instrument leading indicators. Link AI efforts to measurable, predictive signals tied to business outcomes. Shift from lagging surprises to proactive outcome attainment.
- Automate data capture and analytics. Build robust, automated pipelines to track indicators continuously and at scale. Use predictive models to anticipate risks and intervene early.
When these elements come together, AI stops being a gamble and becomes a disciplined, measurable driver of value.
The enterprise opportunity: From AI chaos to AI confidence
The 2025 DORA report makes one thing clear: AI is now embedded in software delivery. The question is not whether organizations will adopt AI, but whether they can harness it effectively.
Left unmanaged, AI initiatives will continue to fail, accelerating instability, creating technical debt, and leaving leadership frustrated by a lack of ROI.
But with a VSM-based lifecycle approach, leading indicator instrumentation, and automated measurement, enterprise teams can shift from reactive disappointment to proactive realization of outcomes.
In practical terms, this means:
- CIOs can confidently justify AI investments.
- Product leaders can see early signals of customer impact.
- Delivery teams can tie their AI work to business value.
- Executives can align strategy and execution with a common language of outcomes.
AI doesn’t guarantee transformation, but disciplined enterprises can.
By rethinking how we approach AI initiatives, tying them to outcomes, and measuring what matters continuously, enterprises can finally close the gap between AI ambition and business reality.
Final word
As the DORA report rightly observes, AI is an amplifier. It magnifies what’s already in place. For enterprises, the choice is clear: let AI amplify weak practices and chaotic delivery, or use VSM and outcome assurance to turn AI into a catalyst for achieving meaningful business value.
The organizations that embrace the latter approach will not only outpace their competitors but also ensure their AI investments are delivering real results—and have the data to prove it.
For a more detailed review of this subject and a demo of how ValueOps Insights can be used to predictively track AI outcomes, please view our webinar.