1 Aug 2025 · Technology · 3 min read
James Mayhew, Commercial Director
Artificial‑intelligence assistants now draft proposals, surface research and write code. Most organisations have already introduced AI from the bottom up, an approach that wins early buy‑in but often leaves teams debating when to involve it and how to keep human talent in the loop. Some teams over‑delegate; others cling to manual ways of working. The result is inconsistent quality and no shared playbook. IMPACT turns that grass‑roots momentum into an organisation‑wide AI‑governance playbook: crucial groundwork for any digital‑transformation roadmap.
The IMPACT framework turns that fuzzy choice into a structured conversation. It is not a scoring model. Instead it walks a team through six lenses so they can agree where to lean on AI and where human acumen is irreplaceable.
Where software product teams might use a framework like RICE (Reach, Impact, Confidence, Effort) to guide prioritisation decisions, we're starting to see teams use a framework like IMPACT to evaluate collaboratively where and how to introduce AI into their organisation.
The IMPACT lenses
Here’s the cheat‑sheet your team will use when it meets.
| Lens | Guiding question | Prompts to test it |
|---|---|---|
| I – Impact | How significant is one successful outcome? | Could a single positive outcome materially affect revenue, risk or reputation? |
| M – Multiples | How often can this output be reused or repurposed? | Is this a one‑off artefact or a template we'll ship 1,000 times? |
| P – Polishing threshold | Where is the good‑enough line, beyond which extra tweaks add little value? | Which objective trigger tells us we're done?<br>• Content: readability ≥ 70 and spell‑check clean<br>• Code: all tests pass and no critical bugs<br>• Design: brand checker ≥ 90 % and accessibility AA |
| A – Acumen | What uniquely human insight or judgement is required? | Where does lived experience, nuance or ethics matter? |
| C – Collaborative intelligence | How much will today's feedback train the model? | Can prompts, examples or fine‑tuning compound future gains? |
| T – Time saved | How many minutes can an AI genuinely remove? | What does pilot data or a vendor benchmark tell us?<br>• Example: GitHub Copilot reports a 55 % coding‑speed uplift in enterprise pilots |
Plain‑English rule of thumb
Start with the minutes the job normally takes.
Subtract the minutes an assistant could save (T).
Look at what remains through the other five lenses.
The discussion—not a final score—shows where to automate and where to stay hands‑on.
Setting the polishing threshold
Agree a Definition of Done for every workflow. Pick 1–3 measurable signals (e.g. readability score, bug count, stakeholder sign‑off) that, once hit, mean further tweaks are cosmetic. Time‑box perfectionism and record when you reach the threshold to build reference data for next time.
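As an illustration, a Definition of Done can be encoded as explicit signals with agreed cut‑offs. This is a minimal sketch; the signal names and thresholds below are hypothetical examples, not part of the framework, and each team would substitute its own 1–3 signals.

```python
# Sketch: a polishing threshold expressed as measurable signals.
# Signal names and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Signal:
    name: str
    value: float
    threshold: float

    def met(self) -> bool:
        return self.value >= self.threshold


def polishing_done(signals: list[Signal]) -> bool:
    """Further tweaks are cosmetic once every agreed signal is hit."""
    return all(s.met() for s in signals)


# Example: a content workflow with two agreed signals.
content_signals = [
    Signal("readability", value=74, threshold=70),
    Signal("spell_check_pass_rate", value=1.0, threshold=1.0),
]

print(polishing_done(content_signals))  # True: stop polishing, publish
```

Recording each run's signal values when the threshold is reached builds the reference data the workshop recommends for next time.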
| Scenario | Multiples | Impact | Recommended stance |
|---|---|---|---|
| Weekly thought‑leadership blog | ≈ 52 per year | Moderate SEO lift | Let AI draft and format to slash T. Stop polishing once the P threshold is reached; publish and move on. |
| Single bespoke email to a CFO | 1 | High (£250k pipeline) | Use AI for research and a first draft (T high). Invest human Acumen to refine tone and context because I is huge despite M being tiny. |
The comparison shows why Impact deserves its own lens: frequency alone never tells the whole story.
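The rule‑of‑thumb arithmetic behind this comparison is simply minutes saved per output (T) multiplied by outputs per year (M). The figures below are made‑up examples to show the shape of the calculation, not benchmark data.

```python
# Illustrative rule-of-thumb arithmetic: annual minutes an assistant could
# remove = minutes saved per output (T) x outputs per year (M).
# All figures are hypothetical examples, not measured data.

def annual_minutes_saved(minutes_saved_per_output: float, outputs_per_year: int) -> float:
    return minutes_saved_per_output * outputs_per_year


weekly_blog = annual_minutes_saved(minutes_saved_per_output=90, outputs_per_year=52)
cfo_email = annual_minutes_saved(minutes_saved_per_output=30, outputs_per_year=1)

print(weekly_blog)  # 4680.0 minutes/year: high M makes small savings compound
print(cfo_email)    # 30.0 minutes/year: low M, but high Impact (I) still justifies care
```

The numbers make the point concrete: Multiples drives where automation compounds, while Impact alone can justify human attention on a one‑off.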
Before you dive into the mechanics, gather a cross‑functional group (this could be just marketing, or span sales, product and operations) and map your organisation's major processes on a single board. This high‑level view surfaces where repetitive toil, specialist judgement and customer impact cluster. Pick one promising area—often the step with both visible pain and clear value upside—and then run the deeper IMPACT conversation.
45‑minute workshop: from map to action
Purpose: Pick one workflow, inspect it through IMPACT, and agree the tooling, guard‑rails and owners for a pilot.
| Time | Activity |
|---|---|
| 0–5 mins | Set the stage — share the IMPACT lenses and confirm today's objective: choose one workflow to examine. |
| 5–15 mins | Map domains, pick a focus — list major processes (sales outreach, content ops, analytics…), for example the CFO outreach email or the weekly blog pipeline, and vote on one for the deep dive. |
| 15–30 mins | Walk the IMPACT lenses — estimate baseline effort (T) and discuss each lens in turn, capturing key insights on a shared board. |
| 30–40 mins | Define tooling & guard‑rails — decide which steps to automate, select AI tools or models, and agree human checkpoints. |
| 40–45 mins | Confirm next actions — assign a pilot owner, draft success metrics (e.g. cycle time ↓ 40 %, error rate ≤ 2 %), and schedule a follow‑up review. |
Materials
Digital whiteboard or Post‑its
Stopwatch
Printed IMPACT cheat‑sheet
Access to the pilot AI tool (optional)
Tips for the facilitator
Keep numbers rough. Half‑points or ranges avoid analysis paralysis.
Invite dissenters. Divergent views surface hidden risks.
Follow Impact. A single high‑value win can outweigh hundreds of low‑stakes outputs.
Applying IMPACT across the digital‑experience lifecycle
Digital‑experience work rarely happens in a vacuum. At Codehouse we group activities into six DX Plays—our shorthand for the end‑to‑end digital‑experience supply chain. IMPACT helps each play find its human‑versus‑AI sweet spot:
| DX Play | High‑value human focus | Where AI excels |
|---|---|---|
| Measurement & research | Deciding what to measure and why; interpreting weak signals | Crunching large datasets, spotting patterns, drafting research summaries |
| Design | Establishing principles, brand expression, emotional resonance | Generating multiple layout options, stress‑testing accessibility at scale |
| Development | Solution architecture, complex interaction logic | Boilerplate code, component testing, documentation drafts |
| Content | Crafting narrative, setting voice & tone, fact‑checking nuance | Transforming copy for channels, bulk localisation, metadata tagging |
| Customer acquisition | Value‑prop definition, offer strategy, creative direction | Audience targeting, bid optimisation, subject‑line experimentation |
| Omnichannel orchestration | Journey mapping, exception handling, governance | Real‑time personalisation rules, multi‑touch rollout, performance tuning |
Take‑away: In every play, humans create the why and the north star; AI scales the how.
From assisted execution to strategic partnership
IMPACT also maps neatly to the familiar five‑level maturity journey:
1. Assisted execution – AI formats, transcribes, or tests what humans specify.
2. Collaborative creation – AI offers alternatives; humans curate.
3. Augmented decision‑making – AI highlights patterns and recommends; humans choose.
4. Continuous optimisation – AI tweaks experiences in real time within guard‑rails.
5. Strategic partnership – AI runs an operational domain; humans focus on innovation and ethics.
Use the IMPACT lenses at each stage to identify which workflows can graduate to the next level — and what acumen is critical to keep.
Embedding IMPACT in day‑to‑day work
Run the workshop quarterly. Model capabilities evolve; so should your playbook.
Capture mini case studies. Two slides per pilot turn abstract debate into relatable evidence.
Grow a prompt library. Every reuse boosts C and speeds onboarding.
Log cost and risk. Track licence spend, oversight hours and compliance exposure so enthusiasm remains grounded.
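A prompt library need not be elaborate to start compounding reuse. This is a minimal sketch, assuming a simple name‑to‑template mapping; the field names and example prompt are illustrative, not a prescribed schema.

```python
# Minimal sketch of a shared prompt library: named, reusable prompt templates
# with ownership and usage notes. Structure and contents are illustrative.
prompt_library = {
    "blog_first_draft": {
        "prompt": "Draft a 600-word blog post on {topic} in our house tone.",
        "owner": "content",
        "notes": "Stop editing once the agreed readability threshold is hit.",
    },
}


def render(name: str, **kwargs) -> str:
    """Fill a named template with workflow-specific values."""
    return prompt_library[name]["prompt"].format(**kwargs)


print(render("blog_first_draft", topic="AI governance"))
```

Even a flat structure like this captures the feedback that boosts C: each reuse refines the template, and new joiners inherit the team's best prompts on day one.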
Teams that treat IMPACT as a conversation prompt—not a league table—find the sweet spot between human ingenuity and machine efficiency, unlocking scalable generative‑AI automation without sacrificing brand integrity. One perfect email can matter more than a thousand adequate blogs. The lenses help you see that before committing precious time.
Interested in trying IMPACT? If you’d like the Miro template we use with clients, let us know and we’ll share it — or we can facilitate your team’s first 45‑minute IMPACT session for you.