February 20, 2026
5 min read
AI & Data Strategy

88% of companies have adopted AI. Only 6% see financial impact. The gap isn't what most CEOs think.
Almost every organization has jumped on the AI bandwagon. Few can point to what they're getting for it.
According to McKinsey's 2025 State of AI report, 88% of organizations now use AI in at least one business function. That number is up from 78% the year prior. Microsoft's research tells a similar story. Enterprise AI adoption climbed from 55% in 2023 to 75% in 2024, with 78% of large enterprises having initiated AI projects.
But here's the number that should stop every CEO in their tracks: only 6% of organizations are achieving meaningful financial impact from those investments.
McKinsey found that just 6% of companies qualify as "AI high performers," defined as those generating 5% or more of their EBIT from AI. Independent research from MIT's Media Lab landed on nearly the same number from a different direction: 95% of enterprise generative AI projects have failed to deliver measurable financial returns. Cisco's 2025 AI Readiness Index puts it at 13% achieving consistently measurable returns. Three different methodologies, same conclusion: single-digit to low-teens success rates.
Deloitte's 2026 State of AI in the Enterprise report reinforces the point: only 20% of organizations are actually growing revenue through AI, despite 74% aspiring to do exactly that.
That gap between doing AI and getting value from it? That's the problem worth solving.
The Barriers Aren't What Most Leaders Think
When CEOs hear that AI isn't delivering, the instinct is to question the technology. Maybe we picked the wrong platform. Maybe we need better models. Maybe we need more data.
The research says otherwise.
Boston Consulting Group surveyed 1,000 C-suite executives across 59 countries and found that 70% of the barriers to AI value are people- and process-related. Only 20% are technology problems. The remaining 10% are algorithmic. The skills gap is consistently the top barrier: McKinsey finds 46% of organizations citing talent shortages, and Microsoft's data shows 30% lack specialized AI skills in-house.
Deloitte found something even more telling: perceptions of data and infrastructure readiness actually declined year-over-year, even as adoption surged. Companies are buying the tools faster than they're building the capability to use them.
As KeyAnna Schmiedl of Workhuman put it at the WSJ Leadership Institute: "What AI does is show you how broken the org structure truly is." The technology isn't failing. It's exposing problems that were already there.
What the Top 6% Do Differently
If the barriers are organizational rather than technological, what separates the companies that are getting results?
Three things stand out across every major report.
They redesign workflows instead of bolting AI onto existing ones. McKinsey's research identifies workflow redesign as the single biggest factor determining whether a company sees EBIT impact from AI. Their high performers are three times more likely to redesign how work gets done than to simply add AI to the way things already operate. Yet Deloitte found that 37% of organizations are still using AI at a surface level with little to no process change. Adding AI to a broken workflow doesn't fix the workflow. It just makes it faster at being broken.
The CEO owns AI strategy, not IT. Microsoft, McKinsey, and Deloitte all point to the same conclusion: executive leadership is the strongest predictor of AI success. Microsoft's research is direct on this point: "Strong senior leadership is as crucial as the right technology. For AI to deliver material impact, C-suite sponsorship is non-negotiable." McKinsey identifies CEO oversight as the single element with the most impact on EBIT from generative AI. Deloitte states that organizations where senior leadership actively shapes AI governance achieve "significantly greater business value" than those delegating the work to technical teams alone. Wharton's 2025 adoption study found that 61% of enterprises have now established Chief AI Officer roles. Accountability is moving to the C-suite because that's where the lever is.
They measure from day one. Wharton's longitudinal research found that 72% of companies now track structured, business-linked ROI metrics. Those that build measurement into their AI programs from the start are far more likely to realize value. They don't retrofit measurement after the fact. Microsoft's data shows an average return of $3.70 for every dollar spent on AI, but top performers hit $10.30. That nearly threefold difference in returns comes down to execution discipline, including the rigor with which outcomes are defined and tracked before a project begins.
The Proof-of-Concept Trap
There's a related pattern in the data that deserves attention. Two-thirds of companies remain stuck in pilot or experimentation phases, according to McKinsey. Deloitte found that only 25% of organizations have moved 40% or more of their AI pilots into production.
This is what we call the proof-of-concept trap. A small team builds a pilot with clean data and limited scope. It works in the demo. Leadership gets excited. Then the project hits the wall of integration, security, governance, and scale. And it stalls.
BCG's data bears this out: 74% of companies struggle to move beyond proof-of-concept to tangible value, and 66% have difficulty establishing ROI even on the opportunities they've already identified. S&P Global's 2025 survey found that 42% of companies abandoned most AI initiatives during the year, up from just 17% in 2024. The average organization scrapped 46% of proofs-of-concept before they reached production.
MIT's research identifies the root cause: "brittle workflows, lack of contextual learning, and misalignment with day-to-day operations." It also found that in-house AI tools succeed only one-third of the time, compared to two-thirds for third-party vendor tools. This suggests that most organizations lack the internal capability to build AI solutions that integrate with how work actually gets done.
The pilot didn't fail because of the technology. It failed because the organization wasn't ready.
Where Does a CEO Start?
If 88% of companies are already doing AI and only 6% are getting real value, the question isn't whether to adopt AI. It's how to become one of the 6%.
The reports converge on a practical sequence.
First, establish governance. Only 18% of organizations have an enterprise-wide AI governance council, per McKinsey. Only 21% have mature AI governance, per Deloitte. Cisco's readiness data is even more stark: only 23% consider their governance processes primed for AI. Yet governance is the foundation that everything else depends on. It determines how risks are managed, how investments are prioritized, and how accountability works. Without it, every AI initiative is an isolated experiment. The market is catching up to this reality. Forrester predicts 60% of Fortune 100 companies will appoint a head of AI governance in 2026.
Second, identify where workflow redesign can create measurable value. Microsoft's research recommends starting with processes that have "measurable friction where AI improves cost, speed, quality, or customer experience." Productivity use cases are the clearest starting point: 92% of AI-using organizations deploy for productivity, and 43% say those use cases deliver the greatest ROI.
Third, define success metrics before you start building. The organizations getting real returns don't launch AI projects and hope for the best. They tie every initiative to a specific business outcome with clear measurement criteria, then track against those benchmarks from day one.
The Competitive Stakes Are Real
This isn't an academic exercise. BCG's research shows that AI leaders achieve 1.5x higher revenue growth and 1.6x greater shareholder returns compared to their peers. Microsoft's Frontier Firm research documents 4x better outcomes in brand differentiation, cost efficiency, revenue growth, and customer experience among organizations that deploy AI comprehensively.
The window for catching up is narrowing. As Wharton's research notes, 88% of organizations plan to increase AI spending this year. But patience is running out. Kyndryl's 2025 Readiness Report found that 61% of senior business leaders feel more pressure to prove AI ROI than they did a year ago. A Teneo survey revealed a telling disconnect: 53% of investors expect positive ROI in six months or less, while 84% of CEOs predict it will take longer. Forrester estimates 25% of planned AI spend may be deferred into 2027 as enterprises demand to see returns.
The companies that figure out the organizational side of AI first will compound their advantages. Those that don't will find themselves defending budgets that produce less and less relative to their competitors.
Open Questions the Research Hasn't Answered
The data is clear on what separates the top performers from the rest. But several questions remain unresolved, and they matter for any CEO making decisions right now.
1. How long should you wait for ROI before pulling the plug? Investors expect returns in six months. CEOs say it takes longer. MIT found that 95% of projects show no measurable returns within six months. But some of those projects may deliver value at month nine or twelve. There's no consensus on the right timeline, and premature abandonment (42% of companies in 2025) may be as costly as staying the course on a failing initiative.
2. Build or buy? MIT's data shows in-house AI tools succeed one-third of the time versus two-thirds for vendor tools. But vendor solutions don't always fit enterprise-specific workflows. The right balance between custom-built and off-the-shelf AI is still an open question for most organizations, and the answer likely varies by use case.
3. Who actually owns AI in the org chart? The research says the CEO should own AI strategy. Forrester says 60% of Fortune 100 will appoint a head of AI governance. Wharton documents the rise of Chief AI Officers. But the reporting lines, budget authority, and operating model for AI leadership remain undefined at most organizations. Governance without clear ownership is just a committee.
Where to Start: Five Steps for the Next 90 Days
The research points to a practical sequence. These steps aren't aspirational; they're the common thread across every major report on what the top performers did first.
1. Audit your current AI initiatives against business outcomes. Most organizations can't answer a simple question: which of our AI projects are tied to a specific P&L metric? Start there. If a project doesn't have a defined business outcome and a measurement plan, it's an experiment, not an initiative.
2. Stand up a governance framework before you scale anything. Only 23% of organizations have governance processes primed for AI (Cisco). Every report identifies governance as the prerequisite for scaling. This doesn't require a 12-month effort. It requires clear policies on data usage, risk tolerance, accountability, and decision rights for AI investments.
3. Pick one workflow to redesign, not ten to pilot. McKinsey's high performers focus on workflow redesign, not tool deployment. Identify one process with measurable friction, one where AI can demonstrably improve cost, speed, quality, or customer experience, and redesign it end-to-end. One successful redesign builds more organizational capability than ten pilots.
4. Close the skills gap with structured enablement, not training decks. The 46% talent shortage (McKinsey) won't be solved by hiring alone. Cisco found that 99% of companies seeing AI value have formal employee training programs. Put AI tools in the workflow where people actually work and build capability through practice, not presentations.
5. Set a 90-day decision point with your board. The CEO-investor expectation gap (53% want ROI in six months, 84% of CEOs say it takes longer) creates a governance risk. Align your board on realistic timelines, define what "progress" looks like before full ROI, and schedule a structured review at 90 days. This prevents both premature abandonment and indefinite experimentation.
Getting From Ambition to Activation
The gap between knowing AI matters and knowing what to do about it is where most organizations are stuck today. The research is clear that closing this gap requires more than technology. It requires leadership, governance, and a disciplined approach to implementation.
This is exactly where Nova Group works with organizations. As a consulting firm providing Fractional CIO, Advisory, and Implementation Program Management services, Nova Group helps leadership teams move from AI ambition to AI activation. That starts with an AI Readiness Assessment to understand where your organization stands, continues with an AI Governance framework tailored to your business, and extends to managing the implementation program that turns strategy into measurable outcomes.
If you're a CEO who knows AI matters but isn't sure where to start, or you've started and aren't seeing the results, that's the gap Nova Group was built to close.
Sources
McKinsey & Company, "The State of AI: How Organizations Are Rewiring to Capture Value" (2025). mckinsey.com
Microsoft / IDC, "Generative AI Delivering Substantial ROI to Businesses" (January 2025). news.microsoft.com
Microsoft Work Lab, "2025: The Year the Frontier Firm is Born" (2025). microsoft.com/worklab
Deloitte, "The State of AI in the Enterprise: From Ambition to Activation" (2026). deloitte.com
Boston Consulting Group, "Where's the Value in AI?" (2024). bcg.com
Wharton School, "2025 AI Adoption Report: Gen AI Fast-Tracks Into the Enterprise" (2025). knowledge.wharton.upenn.edu
MIT Media Lab / Nanda, "The GenAI Divide: State of AI in Business" (2025). entrepreneur.com
Cisco, "AI Readiness Index 2025." Referenced in CIO.com
WSJ Leadership Institute / Workhuman, "AI Is a People Challenge, Not a Tech One" (2025). unleash.ai
CIO.com, "2026: The Year AI ROI Gets Real" (2025). cio.com