In recent findings shared by VentureBeat, AI product teams are facing a stark reality: despite massive investments, an estimated $30 billion to $40 billion poured into generative AI, 95% of organizations report zero measurable return, and only 5% have effectively crossed the so-called “gen AI Divide.”¹ Even though 94% of businesses are increasing AI investments, only 21% have operationalized meaningful results, according to a Qlik and ESG study.² Informatica’s CDO Insights offers further corroboration, highlighting major barriers such as poor data quality, insufficient readiness, and technical immaturity.³ Meanwhile, outdated frameworks such as RICE (Reach, Impact, Confidence, Effort) fall short of capturing AI’s complexity, omitting critical dimensions like model maturity, hallucination risk, data governance, and alignment with human workflows.⁴
Sources: Times of India, Qlik.com, Informatica
Key Takeaways
– Investment Overshadows Impact: Enterprises are pouring billions into generative AI, yet only a small fraction see real-world results.
– Operational Bottlenecks Stall Progress: Factors like poor data quality, technical immaturity, and lack of AI readiness hinder implementation.
– Legacy Frameworks Don’t Fit AI: Traditional models like RICE fail to account for AI-specific challenges such as hallucination risk, governance, and human-AI alignment.
In-Depth
Over the past year, the AI gold rush has captured executive attention—and investment. With $30–40 billion poured into generative AI efforts, organizations hoped for revolutionary productivity gains. But the numbers don’t lie: 95% of companies report no measurable return; just 5% have broken through from experimentation to value realization. This disparity isn’t for lack of ambition; rather, it underscores a mismatch between AI’s promise and the systems supporting it.¹
A survey by Qlik and ESG tells a similar story: nearly all (94%) of businesses are ramping up AI spending, but a mere 21% can point to meaningful operational deployment.² On top of that, Informatica’s CDO Insights reveals that stalled progress often stems from foundational weaknesses: insufficient AI readiness, technical immaturity, and poor data hygiene.³ In short, AI projects aren’t failing because models are bad; they’re failing because everything else around them is.
Enter the frameworks—take RICE, a once-trusted model that scores initiatives on Reach, Impact, Confidence, and Effort. Elegant on paper, perhaps. But disastrously misaligned with AI’s complexity. For example, Reach assumes user numbers scale cleanly—an ill fit when AI outcomes depend on contextual adaptation rather than raw volume. Confidence often amounts to gut feeling—yet AI models demand maturity, governance, and data rigor. Effort traditionally tracks code work—ignoring the grueling toil of obtaining, cleaning, and governing AI-ready data. And Impact presumes consistency and measurability—whereas AI may yield subtle enhancements, augment decisions rather than replace humans, or fail silently through hallucination.⁴
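For readers who haven’t used it, RICE is typically computed as (Reach × Impact × Confidence) ÷ Effort. The minimal Python sketch below, with hypothetical initiatives and illustrative numbers that are not drawn from any cited source, shows how little of AI-specific risk that single formula can see:

```python
from dataclasses import dataclass

@dataclass
class RiceScore:
    """Classic RICE inputs for a product initiative."""
    reach: float        # users or events affected per quarter
    impact: float       # e.g. 0.25 (minimal) up to 3.0 (massive)
    confidence: float   # 0.0-1.0, often little more than a gut feel
    effort: float       # person-months, usually counting only build work

    def score(self) -> float:
        # The standard formula: (Reach x Impact x Confidence) / Effort.
        return (self.reach * self.impact * self.confidence) / self.effort

# Two hypothetical initiatives: a simple UI tweak and a gen-AI copilot feature.
ui_tweak = RiceScore(reach=50_000, impact=0.5, confidence=0.9, effort=2)
ai_copilot = RiceScore(reach=10_000, impact=2.0, confidence=0.7, effort=6)

print(f"UI tweak:   {ui_tweak.score():,.0f}")
print(f"AI copilot: {ai_copilot.score():,.0f}")
# Nothing in these four numbers encodes model maturity, data readiness,
# hallucination risk, or governance burden -- the very dimensions that decide
# whether the copilot ever reaches production.
```

The formula itself is standard RICE; the initiatives and values are purely illustrative.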
So what’s going on beneath the hood? Legacy frameworks simply don’t capture the reality of AI development. They lack critical considerations: Is the model mature and tested across diverse contexts? Are hallucination and bias risks controlled? Do humans trust and have agency over the AI’s decisions? Is there governance tracking change, feedback, and compliance? Without these in view, prioritization veers toward “flashy” experiments that fail to land.
Moving forward, product teams need to integrate new dimensions into decision-making. Metrics should include model maturity, data readiness, hallucination and bias risk, alignment with human workflows, and governance structures. Teams should adopt a holistic, outcome-driven strategy that considers whether AI can adapt, learn, and reliably support real tasks—not just whether it looks cool in a slide. Tight feedback loops, staging for safe rollout, and cross-functional participation (modelers, product managers, UX, stakeholders) become essential.
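One way to operationalize those dimensions is to treat them as a discount on a conventional priority score, so that weak data readiness or poor governance drags down an otherwise flashy initiative. The sketch below is a hypothetical extension under that assumption, not a framework the article prescribes; the field names and the multiplicative combination rule are illustrative choices.

```python
from dataclasses import dataclass

@dataclass
class AiReadinessCheck:
    """Hypothetical AI-specific dimensions layered on top of a RICE score.

    Each field is a 0.0-1.0 self-assessment; the combination rule below is
    one illustrative choice, not a prescribed method.
    """
    model_maturity: float      # tested across diverse contexts?
    data_readiness: float      # is AI-ready data obtainable and governed?
    hallucination_risk: float  # 1.0 = severe, 0.0 = negligible
    workflow_alignment: float  # do humans trust and retain agency?
    governance: float          # change tracking, feedback, compliance

    def discount(self) -> float:
        # Multiply the readiness factors and penalize hallucination risk,
        # so any single weak dimension pulls the whole score down.
        return (self.model_maturity
                * self.data_readiness
                * (1.0 - self.hallucination_risk)
                * self.workflow_alignment
                * self.governance)

def adjusted_priority(rice_score: float, check: AiReadinessCheck) -> float:
    """A RICE score discounted by AI readiness; low readiness sinks flashy ideas."""
    return rice_score * check.discount()

# Continuing the hypothetical copilot example from the earlier sketch.
copilot_check = AiReadinessCheck(
    model_maturity=0.6, data_readiness=0.4,
    hallucination_risk=0.3, workflow_alignment=0.7, governance=0.5,
)
print(f"Adjusted priority: {adjusted_priority(2_333, copilot_check):,.0f}")
```

The exact weighting matters less than the habit it enforces: no initiative gets prioritized without an explicit, reviewable answer for each of these dimensions.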
In 2025, the AI story isn’t about tools—it’s about infrastructure, readiness, and strategy. Teams that adapt their playbook—not just their tech—stand a chance of bridging the divide between hype and real returns.

