Technology · Codehouse · 5 min read

James Mayhew, Commercial Director

A single piece of fiction briefly moved financial markets this February. What that tells us about AI, digital teams, and the gap nobody is being honest about.
In February 2026, a single Substack post from research firm Citrini Research briefly shook financial markets. Framed as a memo written from June 2028, it imagined an economy devastated by AI-driven white-collar displacement - a "human intelligence displacement spiral" with, as the authors put it, "no natural brake." Michael Burry amplified it on X with four words: "And you think I'm bearish." Discussion racked up roughly 16 million views. IBM fell 13% in a single session. Several other named companies followed.
The piece was, by its own admission, a thought exercise. Citadel Securities subsequently published a rigorous rebuttal arguing the scenario rested on flawed macro assumptions. But that almost misses the point. The post didn't move markets because the analysis was bulletproof. It moved markets because the narrative was compelling and the fear was already there. Sentiment coordinated around a story.
A few weeks later, we ran an event, "Who brought the Monster?" The "Monster", of course, was AI. We hadn’t heard of Citrini Research when we planned the event, but it became the perfect framing.
The post didn't move markets because the analysis was bulletproof. It moved markets because the narrative was compelling and the fear was already there.
The Friction We Were Already Trying to Remove
For those of us working in digital experience, one section of the Citrini piece stood out: "When Friction Went to Zero." Trillions of dollars of enterprise value, built on top of human limitations, get dismantled by AI Agents.
Removing friction is, of course, a key goal of digital teams. Be easy to find. Put things in the right context. Enable evaluation. Reduce the number of clicks. Onboard smoothly. Renew efficiently. Done well, these things result in happier, retained customers. But 'friction' isn't always the enemy - many of the most interesting points on a buying journey are precisely the moments of evaluation and deliberation that good digital experiences are designed to support, not eliminate. In B2C that might involve family or friends. In B2B it involves procurement teams and project boards.
AI will accelerate all of this. The question for digital teams is how - and how fast.
The New Spreadsheets
AI is already reducing friction and helping us work through our ever-longer to-do lists. But the governance challenges ahead are obvious.
Spreadsheets, as useful as they are, often become the solution to one problem and the start of another. They break when multiple people interact with them. One error can cascade across linked files. Yet they can be deeply embedded in critical processes. There is an obvious parallel.
Without proper governance, AI and its Agents will follow a similar pattern, one that becomes increasingly problematic as organisations scale from tens of Agents to hundreds and beyond. Too little oversight and you end up with the AI equivalent of spreadsheet hell: Agents making decisions in the background that nobody reviews - and unlike spreadsheets, Agents will simply keep executing, potentially incorrectly, without anyone noticing until something breaks.
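The oversight pattern described above can be sketched in a few lines. Everything here (the class names, the risk threshold, the review queue) is invented for illustration, not drawn from any particular platform: the point is simply that an Agent can be allowed to act autonomously on low-risk tasks while anything above a risk threshold is held for human review and everything is logged, rather than executing silently in the background.

```python
# Illustrative sketch only: a minimal human-in-the-loop governance gate.
# All names and thresholds are hypothetical, not from any specific product.

from dataclasses import dataclass, field

@dataclass
class AgentAction:
    description: str
    risk: float  # 0.0 (trivial) to 1.0 (critical), assigned by policy

@dataclass
class GovernanceGate:
    risk_threshold: float = 0.5
    review_queue: list = field(default_factory=list)
    audit_log: list = field(default_factory=list)

    def submit(self, action: AgentAction) -> str:
        """Execute low-risk actions; hold high-risk ones for a human reviewer."""
        if action.risk >= self.risk_threshold:
            self.review_queue.append(action)
            status = "held for review"
        else:
            status = "executed"
        # Every decision is recorded, so nothing happens invisibly.
        self.audit_log.append((action.description, status))
        return status

gate = GovernanceGate(risk_threshold=0.5)
print(gate.submit(AgentAction("Draft weekly status email", risk=0.1)))
print(gate.submit(AgentAction("Bulk-update customer records", risk=0.9)))
```

The design choice is the one the spreadsheet analogy argues for: the audit log, not the Agent's own output, is what a reviewer inspects when something breaks.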
IT policies will need to evolve rapidly. Everyone will want their AI plugins installed despite the security warnings. Privacy needs a rethink, to say the least. Regulation will not keep pace. We will need to govern ourselves, and wait for the eventual evolution of frameworks like GDPR.
Too little oversight and you end up with the AI equivalent of spreadsheet hell: Agents making decisions in the background that nobody reviews.
The governance challenge is real, but so is the potential upside.
A Different Kind of Technology Shift
Amongst the noise around AI and job displacement, could this wave be different? Previous technological revolutions have tended to leave those nearing the end of their careers behind, but could AI empower those with the deepest experience? If AI multiplies capability, then the human variable in that equation - accumulated expertise, judgement, context - could make experienced operators significantly more powerful, not less.
In reality this scenario probably won’t last; it will simply become a question of how much better any given person, old or young, can become with AI. At school, AI will cease to be treated as cheating and will elevate learning in much the same way calculators elevated mathematics. The young will use AI as symbiotically as they use their smartphones today.
What started for us as a simple equation - AI × human experience = impact - eventually became the foundation of our IMPACT framework, explained in a subsequent article. For now, the key principle: before you build or buy anything, work out where human involvement genuinely changes quality, and where it doesn't.
The AI Adoption Gap
Meanwhile, the disconnect between what organisations say about AI and what’s actually happening inside them is widening. In its 2026 AI Proficiency Report, Section finds that while 55% of knowledge workers use AI at least weekly, 85% still lack a single AI use case that clearly drives business value, and about a quarter do not use AI for work at all. At the same time, senior leaders are far more likely than individual contributors to say their organisation has a clear AI strategy, that adoption is widespread, and that employees are encouraged to experiment with AI. The rest of the workforce disagrees.
That disconnect isn’t surprising, but it matters. Anthropic’s Labour Market Impact report, published in March 2026, finds little evidence that AI has materially reshaped overall employment so far, though there are early signs that hiring of younger workers is slowing in the most exposed occupations - maybe a different type of shift after all. These findings underline that actual AI adoption remains a fraction of what current tools are theoretically capable of delivering. The gap between potential and practice is enormous.
Digital teams are experiencing the exact same dynamic that Citrini described in the markets: surrounded by AI narratives and, in the absence of clear internal guidance, prone to leaping to conclusions. Some are paralysed. Others are overinvesting in tools without a framework to govern them.
85% still lack a single AI use case that clearly drives business value, and about a quarter do not use AI for work at all.
So Who Brought the Monster?
Citrini's piece was a thought experiment. Its authors said as much. But the reason it resonated - the reason it moved markets - is that the fear it named was already real. Not the fear of a 2028 economic crisis necessarily, but the fear of being caught unprepared. Of being the organisation that didn't move fast enough, or that moved without thinking.
The Monster isn't AI itself. It's the gap between the AI conversation happening in the market and the AI conversation happening inside digital teams. It's the distance between what the C-suite believes and what is actually being deployed. It's the Agents that will be spun up without governance, the strategies built on hype rather than evidence.
For now, at least, digital experience teams are still dealing in the currency of MQLs and SQLs, curating and delivering campaigns, and trying to align design, technical and content workflows. AI looks like an unlocking mechanism that can help us deliver on many of the promises that so far we have been unable to keep - personalisation being an obvious one.
The AI DXPlaybook gives digital teams a practical structure for bridging the gap - moving from conversation to strategy, and from strategy to a roadmap for transformation. More on this in "How to Train the Monster".
If this resonates with what you're seeing in your own organisation, please get in touch.