WRITER's April 2026 AI Adoption Survey put two numbers on the same page that do not obviously fit together.
87% of leaders report that their AI super-users, the top quartile of adopters, are at least 5x more productive than their colleagues in the same role. Only 29% of the companies employing those super-users report significant ROI from generative AI.
Both numbers come from the same 2,400-person sample. Both are directionally corroborated by independent research. They are not contradictory findings; they are the two halves of what is now clear enough to treat as a structural problem rather than a transitional one. Individual AI productivity is real, measurable, and significant. It is not compounding into enterprise value.
The problem goes deeper than one survey. McKinsey's 2025 State of AI puts the harder denominator more starkly: only 6% of companies qualify as AI "high performers" where the technology contributes meaningfully (more than 5%) to EBIT. Its 2026 State of Organizations report makes the organisational version even plainer: 81% of organisations experimenting with AI do not report meaningful bottom-line gains. Gartner, in April 2026, forecast that over 40% of agentic AI projects will be cancelled by 2027, citing cost and unclear business value.
The super-user paradox
Individual AI productivity is rising sharply for a small cohort of power users, but most firms are not seeing those gains become measurable revenue, cost, or operating performance. The gap is not a model problem. It is an organisational design problem.
The AI productivity paradox is that individual employees can save hours and produce better task-level output while the firm still records little or no financial return. The missing layer is workflow redesign: decision rights, governance, incentives, and ownership that convert faster individual work into collective operating performance.
This is the super-user paradox. What follows is what the numbers actually say, why the individual gains do not reach the P&L, and what the small group of organisations capturing ROI did differently.
The individual win is real
The WRITER survey, published on 7 April 2026, was fielded between 17 December 2025 and 25 January 2026 by independent research firm Workplace Intelligence. The sample was 2,400: 1,200 C-suite executives and 1,200 non-technical employees actively using AI at work.
Super-users, defined as the top quartile of AI adopters, save about nine hours a week on repeatable work versus roughly two hours for laggards, a 4.5x gap in measured time saved. They represent about 40% of the workforce in marketing, sales, HR and customer-support functions. They are 3x more likely to have received a promotion or a raise in the past year. One in nine has built their own AI agent, tool or workflow. Ninety-two percent of the C-suite say they are actively cultivating a new class of "AI elite" employees. Sixty percent plan to lay off those who cannot or will not adopt AI.
One caveat is worth stating directly: WRITER is a vendor with a commercial interest in positioning AI tooling favourably, and the headline 5x multiple comes from leaders' perception of their super-users, not a direct measurement of output. The 4.5x gap in hours saved is the independent anchor. The directional finding is corroborated elsewhere.
Stanford's 2026 AI Index documents task-level productivity gains of 14-15% in customer support, 26% in software development, and 73% in marketing output. The landmark Brynjolfsson-Li-Raymond customer-support study found a 14% average gain with novice workers improving 34%. Dell'Acqua and Mollick's Harvard-BCG experiment found consultants using GPT-4 completed tasks 25% faster with 40% higher quality, but also performed 19% worse on tasks outside the model's "jagged frontier." The individual productivity effect is real and measurable. It is also uneven, task-dependent, and concentrated in specific kinds of structured work.
The AI productivity paradox is organisational
Zoom out to the organisation and the picture inverts.
Only 29% of companies in WRITER's sample report significant ROI from generative AI. For AI agents specifically, the number is 23%. That is despite 59% of companies investing over a million dollars a year in AI technology, and 97% of executives saying they deployed AI agents in the past 12 months. Seventy-five percent admit their AI strategy is "more for show" than an actual operating document. Forty-eight percent call their adoption so far a "massive disappointment."
WRITER is not the only source pointing at this gap, and WRITER's 29% may overstate the reality.
MIT's NANDA initiative published its GenAI Divide report in August 2025, based on 150 leader interviews, a 350-person employee survey, and analysis of 300 public AI deployments. The headline finding: 95% of enterprise generative AI pilots deliver no measurable P&L impact, against a cumulative $30-40 billion in enterprise spend. NANDA's implementation split is sharper than the headline figure: AI tools bought from vendors and rolled out with external partners succeeded 67% of the time. Internal builds succeeded roughly a third as often.
An NBER working paper published in February 2026 by Ivan Yotzov, Jose Maria Barrero, Nicholas Bloom and colleagues (w34836) surveyed almost 6,000 executives across the US, UK, Germany and Australia. Sixty-nine percent of firms actively use AI, but 89% of managers reported no change in productivity from AI over the past three years, and 90% reported no change in employment. Average executive AI usage in the sample sat at 1.5 hours a week. A quarter of respondents did not use AI at work at all. The same firms, asked to forecast AI's effect on productivity over the next three years, returned an average prediction of just 1.4%.
Stanford's 2026 AI Index reinforces the point. Organisational adoption has reached 88%, but agent deployment, the thing 97% of WRITER's executives say they have done, sits in the single digits across business functions when measured by actual usage rather than procurement. Gartner's April 2026 research on AI in infrastructure and operations found projects stalling ahead of meaningful ROI; its forecast is that over 40% of agentic AI initiatives will be cancelled by 2027. McKinsey's 2025 State of AI puts the hardest denominator of all on the question: only 6% of surveyed companies are "high performers" where AI contributes more than 5% to EBIT. Its 2026 State of Organizations report adds the operating-model read: less than 20% of companies that have tried AI have seen significant tangible bottom-line impact.
The paradox, then, is not subjective. One well-defined population is getting enormously more productive. Another well-defined population, the aggregate firm, is not seeing that productivity arrive at the revenue line.
The shape of this is not new
In a New York Times Book Review essay in July 1987, Robert Solow observed that despite two decades of massive corporate IT investment, US labour productivity growth had slowed from 2.9% annually between 1948 and 1973 to 1.1% after 1973. He summed it up in a line economists have been quoting ever since: "You can see the computer age everywhere but in the productivity statistics." The IT productivity dividend did not arrive in aggregate data until 1995-2005, roughly fifteen years after the investment wave peaked.
What is interesting about today's AI moment is not that the paradox is repeating. It is that the mechanism driving the gap is legible in real time.
MIT NANDA's framing of that mechanism is the sharpest I have seen. The issue, they argue, is not model quality. It is the "learning gap" for both tools and organisations. Generic AI tools excel for individuals because they are flexible. A consultant uses Claude to draft a deck; an analyst uses ChatGPT to summarise a report; a marketer uses Midjourney to produce a thousand ad variants. The tool adapts to the user.
The same tools stall in enterprise deployments because they do not learn from or adapt to the workflow. They cannot. What AI amplifies, then, is the organisation underneath it. If decision rights are unclear, AI exposes it: faster drafts arrive, and nobody knows whose approval matters. If data governance is weak, AI magnifies it: retrieval-augmented systems surface inconsistent ground truth, and the error rate compounds. If incentives are misaligned, AI accelerates the misalignment: super-users get promoted, laggards get laid off, and the individual productivity gain never flows to collective output.
This is why 67% of executives in WRITER's sample believe their company experienced a data breach via unapproved AI tools, and 35% say they could not immediately shut down a rogue agent. Individual adoption is a governance problem before it is a productivity one. Enterprises are only now starting to realise it.
The strongest counterargument
There is a credible objection to all of this.
Erik Brynjolfsson, Daniel Rock and Chad Syverson developed the "productivity J-curve" framework almost a decade ago, drawing on historical evidence from electricity, personal computers and the internet. General-purpose technologies depress measured productivity during an investment phase, then deliver gains with a substantial lag. Firms spend on systems, retraining and reorganisation; the output dip comes first, the harvest comes later.
In a February 2026 Financial Times op-ed titled "The AI productivity take-off is finally visible," Brynjolfsson published an updated reading. US labour productivity grew roughly 2.7% in 2025, nearly double the 1.4% decade average. His argument: the economy is transitioning "out of this investment phase into a harvest phase where those earlier efforts begin to manifest as measurable output." Q4 GDP tracking at 3.7% supports the read. The AI productivity dividend, on this view, is showing up in the macro data precisely now.
He is partly right, and the point has to be conceded without hedging.
But the distribution underneath the 2.7% jump is the problem, and, tellingly, Brynjolfsson flags it himself. In the same analysis, he notes that "a small cohort of power users" is automating end-to-end workstreams with AI agents and completing tasks in hours rather than weeks. The harvest, in his own framing, is concentrated. The Yotzov-Barrero-Bloom NBER survey says the same thing at the firm level: nearly 90% of managers still report zero productivity change over three years. Daron Acemoglu, Nobel laureate in economics, put it drily: "I don't think we should belittle 0.5% in 10 years. That's better than zero. But disappointing relative to promises." Apollo's Torsten Slok has made the same observation from the macro desk: "AI is everywhere except in the incoming macroeconomic data, productivity data, or inflation data."
Deloitte's 2026 State of AI in the Enterprise, a larger and more recent dataset than WRITER's (n=3,235, fielded August-September 2025), adds a useful partial counter: 66% of organisations report productivity and efficiency gains from AI, and 25% say the impact on their company is material, up from 12% the year before. But even in that more optimistic frame, only 20% report revenue growth attributable to AI, against 74% who are still hoping for it.
The harvest is real. It is just not broad-based. A harvest concentrated in a narrow cohort of super-users within a narrow set of firms is, for the other 70% or 80% of the enterprise economy, not a harvest at all. It is a structural competitive disadvantage.
What the 6% did differently
So what separates the McKinsey 6%, the companies where AI contributes meaningfully to the P&L, from everyone else?
WRITER's report points in a consistent direction. The organisations capturing ROI share four operational patterns: they connect AI use directly to revenue growth, cost efficiency, productivity or risk reduction; they define priority use cases rather than letting a thousand tools bloom; they assign executive owners with real authority; and they set KPIs that track against those use cases rather than against adoption rates.
MIT NANDA's finding adds a second operational vector: vendor-partnered deployments succeed 67% of the time; internal builds succeed roughly a third as often. The implication is that buying a learning-capable system and integrating it with an existing workflow beats building a generic tool and hoping the workflow reshapes itself around it.
McKinsey's 2026 State of Organizations report puts the operating change plainly. Capturing AI value "depends as much on people as on technology investments," and the report cites one executive's rule of thumb: for every dollar spent on AI technology, five should be spent on people, training, workflow redesign, and organisational change.
BCG's April 2026 research adds one more complication worth sitting with: top performers without AI do not automatically become top performers with AI, and non-top performers can become top performers with AI. The super-user population, in other words, is not simply the old high-performing cohort with better tools. It is a reshuffled population. Which means organisations that index their AI strategy on "give the tools to the best people" are indexing on the wrong signal. The best people at the new workflow are not reliably the best people at the old one.
May Habib, WRITER's CEO, put it as bluntly as the data would let her: "Layoffs are not a viable AI strategy. The leaders putting in the work to radically redesign operations with human-agent collaboration at the centre are the ones compounding their advantage in ways competitors cannot replicate."
This is the operator translation the data has been waiting for. Redesigning operations around human-agent collaboration is a harder, slower, more political project than buying more seats of whatever agent platform happens to be in vogue this quarter.
What this means, by audience
For professionals inside organisations, individual AI adoption is, at the current moment, a career advantage disproportionate to the work it takes to acquire. WRITER's 3x promotion multiple for super-users is significant. The BCG finding, that the super-user cohort is reshuffled rather than inherited from prior top performers, makes it more significant still: the return to early adoption is genuinely democratic inside a given company. But the ceiling on individual productivity is the ceiling of the organisation's workflow. Beyond a point, a super-user in a flat organisation is producing output the organisation cannot absorb. The decision worth making is whether to keep building inside an organisation that is redesigning the workflow, or to find one that is. That is the professional version of the argument in "Everyone Has AI. What Still Makes You Different?": AI makes baseline output cheaper, so judgement and operating context matter more.
For allocators underwriting AI-exposed companies, the 95% pilot failure rate, the 29% WRITER figure, and McKinsey's 6% high-performer denominator are not softening numbers. They are the true denominators, and they get tighter the closer you hold the ruler to actual P&L. Any thesis on an AI-adjacent company's TAM or adoption curve that assumes uniform enterprise uptake is materially mispriced. The harvest arrives for the 6% that are actually earning meaningful EBIT contribution, and plausibly for the broader 29% that are capturing some ROI. The other 70%-plus are a different asset class, and many will never cross the valley.
For operators building AI-native products, the implication is about GTM, not product. The minority of buyers that are structurally ready to capture ROI are a real market; the rest will churn regardless of product quality. The structural-readiness screen (does the buyer have executive ownership, KPIs aligned to use cases, a serious people-and-workflow budget, a vendor-partnership mindset rather than an internal-build reflex?) is the single most under-indexed input in the AI-SaaS sales model right now. This is why AI for Executives starts with strategic reasoning rather than tool access. The product-led motion that got early-stage AI companies to their first $10M in ARR will stop working at the point the buyer base has to include the structurally unready.
The last fifteen years
Solow's 1987 line took fifteen years to resolve. The productivity statistics caught up with the computer age in the mid-1990s, and the economy spent another decade compounding on those gains. If Brynjolfsson is right and the J-curve is turning, this cycle's version could move faster, because the technology is diffusing faster than either PCs or the internet did.
But the curve does not turn for everyone. It turns for the firms that took the intervening years to redesign their workflows. The others do not just miss the upside. They cross into the next cycle carrying the full weight of the organisational debt that AI exposed.
The two numbers at the top of this piece are both true, and both will remain true for the foreseeable future. The 5x multiple belongs to the super-user. The flat revenue line belongs to the firm. That gap is not a measurement error. It is the visible edge of the organisational debt AI is surfacing: decision rights, data governance, incentive design, the parts of a company that were built in the pre-AI era and now sit in the path of the tools trying to run through them.
Translating one number into the other is now the only thing that matters for most of the enterprise economy. The firms that treat that as a workflow problem rather than a procurement one are the 6% earning meaningful EBIT from AI, and the broader 29% capturing some ROI. The rest are, on current form, the next decade's losers, paying for the tools, absorbing the governance risk, and watching the productivity accrue to individuals they will eventually lose.
Sources
WRITER 2026 AI Adoption in the Enterprise report, published 7 April 2026, fielded 17 December 2025 to 25 January 2026 by Workplace Intelligence.
NBER Working Paper 34836, "Firm Data on AI", Yotzov, Barrero, Bloom et al., February 2026.
McKinsey, The State of Organizations 2026, 19 February 2026.
BCG, "AI Transformation Is a Workforce Transformation", February 2026.
Brynjolfsson, Rock, and Syverson productivity J-curve framework, 2018; Erik Brynjolfsson, "The AI productivity take-off is finally visible," Financial Times, February 2026.
Dell'Acqua, McFowland, Mollick et al., "Navigating the Jagged Technological Frontier," HBS/BCG, 2023.
Gartner April 2026 research on AI in infrastructure and operations; Bureau of Labor Statistics productivity data; Robert Solow, New York Times Book Review, 12 July 1987; Acemoglu and Slok via Fortune.
