
DXPlaybook
Play 2: Experience Design & Development
DXPlaybook is a practical blueprint that connects business goals, customer needs, and the platform and process foundations that make great digital work repeatable. Use the plays in sequence or pick the one that unblocks you; each produces tangible outputs you can reuse in the next.
This page is Play 2, which focuses on the engine that creates digital experiences. It looks at how your organisation moves from a concept or product requirements document to something live on the internet — the steps, decisions and handoffs that turn intent into a release. When this engine runs smoothly, teams tend to deliver a wider range of experiences with fewer surprises and a tighter link to the outcomes set in Play 1.
Use Play 2 when lead times feel long, handoffs are unclear, forms and routing behave inconsistently, or work ships without a clear connection to results. Working through this play gives you a shared view of the idea-to-live path, highlights where friction typically appears, and helps you prioritise a small improvement to trial next.
Why this play matters
If Play 1 sets direction, this play tunes the engine that takes any concept or product requirements document into something live on the internet. Most organisations don’t lack ideas; they lack a visible, shared way of moving an idea through discovery, concise definition, design, build, test, release and learning. Without that, teams work hard yet still see long lead times, inconsistent quality and unclear impact.
This play focuses on the path, not a particular page or feature. It shows how work actually moves across people and partners, and how to remove friction where it really lives: unclear decisions, slow approvals, mismatched environments, fragile critical conversion steps (for example, enquiry submit, sign-up and first-run activation, checkout, or a top self-service task), and untrusted data. We use a one-page definition before any build; its roots are in widely adopted product practices that value brevity, clarity and outcomes over paperwork.
A tuned engine has a few defining characteristics. Ideas are anchored to a clear business intent before teams invest. Discovery reduces uncertainty rather than generating deliverables for their own sake. A short shared definition aligns the audience, the problem, the five-second promise, the minimum proof, the primary action, the success measure, and the first increment worth shipping. Design expresses journeys as reusable components mapped to templates, so teams assemble rather than reinvent. Build and test run against a definition of done that covers accessibility, performance and measurement, not just visuals. Release is a controlled step with a lightweight checklist and a simple rollback. Learning is routine, so what happens after go-live is as important as what happens before.
This matters whether delivery is in-house, agency-led or blended. Agencies move faster when they can see the same path, artefacts and gates as internal teams; internal teams make better decisions when the last mile—the critical conversion step—is explicit and consistent. If this step is brittle, everything upstream under-performs.
What this play covers
from intent to definition — capture the audience, problem, five-second promise, minimum proof, primary action, success measure, and the first increment on one page before design or build
from definition to design and build — design journeys before pages, assemble from reusable components, and treat the critical conversion step as a product in its own right with clear guidance, respectful error handling, explicit consent where relevant, dependable routing, and consistent events
from build to release — ship on a predictable cadence with a short checklist: accessibility to Web Content Accessibility Guidelines (WCAG) 2.2 AA, Core Web Vitals budgets met, analytics events firing as designed, conversion routing verified end-to-end, rollback ready and rehearsed
from release to learning — review the agreed key performance indicator (KPI) quickly and decide what changes next; keep loops short so evidence guides the next increment
What good looks like
clarity up front — a short shared definition aligns intent, message, proof and measurement
a repeatable path — the same stages and artefacts across in-house and agency work; journeys precede pages; conversion behaviour is consistent
objective quality — accessibility and performance are non-negotiable gates, not opinions
short loops — first increments ship frequently; learning arrives quickly; scope grows only when the previous step proves its worth
How deep to run this play
Small update on existing templates — run the lightest version: quick evidence check, one-page definition, reuse components, apply the standard conversion pattern, release with the checklist, review the KPI.
Whole-site rebuild or major section — run the full version: several concise definitions (one per first increment), extend components and templates, make the conversion pattern universal, automate release gates in continuous integration and continuous delivery (CI/CD), and still ship one small end-to-end change live before scaling.
If your organisation is cautious about small releases: you can still work incrementally without reputational risk. Use feature flags, invite-only previews, time-boxed soft launches, non-indexed routes, or low-traffic roll-outs (for example, geography or percentage splits). Each approach proves the engine in production conditions while keeping exposure controlled.
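For teams using feature flags, a percentage split can be as simple as a stable hash. The TypeScript sketch below is illustrative only (the flag name, threshold and hashing choice are assumptions, not a prescribed implementation): it buckets each visitor deterministically, so the same person always sees the same variant while exposure stays capped.

```ts
// Minimal percentage roll-out sketch. The flag name, roll-out figure and
// hashing choice are illustrative assumptions.
function hashToBucket(userId: string): number {
  // Deterministic string hash mapped to a 0-99 bucket, so the same
  // visitor always lands in the same cohort across visits.
  let hash = 0;
  for (const char of userId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function isEnabled(flag: { name: string; rolloutPercent: number }, userId: string): boolean {
  return hashToBucket(`${flag.name}:${userId}`) < flag.rolloutPercent;
}

// Example: show the new conversion step to 10% of visitors.
const newCheckout = { name: "new-checkout", rolloutPercent: 10 };
console.log(isEnabled(newCheckout, "visitor-42"));
```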
Questions to ask
Play 2 is about the engine that takes ideas to live. The fastest way to see how that engine actually runs is to ask a small set of pointed questions and capture real answers, not assumptions. Questions create a shared truth about who decides, what happens next, where work waits, and how we will judge success. They cut through “our process on paper” and reveal the process in practice across in-house teams and agency partners.
The list below covers the end-to-end path once, stage by stage, without gaps or overlap. It is not a deep technical audit; it is the right granularity to make decisions and unblock delivery.
Tip for quality: record each answer in a single sentence with four tags — Owner, Evidence link, Status (Known or Unknown), Next step.
Process stages
Intake and alignment
Where did this idea originate — campaign, product, service, agency or executive?
Owner: … Evidence link: … Status: … Next step: …
Which Play 1 outcome will this support and how will we know if it helped?
Who is the decision owner and what is their go or no-go criterion?
What is the first increment worth shipping to learn something useful?
Discovery
What uncertainty must we reduce before design or build — value, usability, feasibility and viability?
What quick evidence will we gather — analytics slice, top-task analysis, five stakeholder conversations, a light user touchpoint?
When does discovery end and what decision will it enable — proceed, pause or rethink?
Design
Have we sketched the journey before pages? — for example, search → landing → proof strip → primary action → conversion step
Are we assembling from reusable components and templates rather than inventing new patterns?
Is the critical conversion step specified? — guidance, validation, respectful errors, consent if relevant, events, data routing
Is the copy specific and accessible? — labels, hints, alt text and error messages
Build
Are design, content and engineering working from the same source of truth — component library or design system?
Do we share a definition of done that includes accessibility to WCAG 2.2 AA, Core Web Vitals budgets, analytics events and data routing checks?
Are environments stable and are preview links available for review by stakeholders and editors?
Is the work behind a feature flag so we can stage roll-outs safely?
Test
Have we tested behaviour, not just visuals — do people understand the promise, find proof and take the primary action?
Does the conversion step behave as designed — inline guidance, clear recovery from errors, no dead ends?
Do events fire end to end — view → start → error(type) → success — and do test records route correctly through marketing automation to customer relationship management?
Release
Does the release checklist pass — accessibility checks, Core Web Vitals budgets on key templates, analytics verified, routing verified, rollback ready and rehearsed?
Who is on point for the release window and for a potential rollback?
What is the immediate monitoring plan after go-live?
Learn
When will we review the key performance indicator and decide what changes next?
What is the time from release to decision? We aim to keep this short and repeatable.
Will we retire, simplify or scale based on what we learn, and who owns that call?
Scale of work guidance
Depending on the scale of the work, you can flex the process up or down.
Small update on existing templates
Light discovery, one-page definition, reuse components, apply the standard critical conversion pattern, release with the checklist, review the KPI
Whole-site rebuild or major section
Several concise definitions (one per first increment), extend components and templates, make the conversion pattern universal, automate release gates in CI/CD, and still ship one small end-to-end change live before scaling
Red flags to watch
Multiple briefs for the same idea with no single decision owner
Definition cannot fit on one page or lacks a primary action and a measure
Conversion steps behave differently across sections and cannot be audited
Releases feel like one-off events and rollback is unclear
No agreed review window after go-live, so learning never turns into change
Exit criteria for this section
A completed checklist with Owner, Evidence link, Status and Next step for each question
A shortlist of bottlenecks with one named owner each
A clearly described first increment to ship next, tied to a Play 1 outcome
Good patterns
The purpose of this section is to turn the abstract idea of “a better delivery engine” into a small set of behaviours you can actually adopt. Patterns are useful because they codify what good looks like without forcing heavy process. They make handoffs clearer, reduce rework, and protect the quality of what reaches customers — whether you’re shipping a small update on existing templates or a full site rebuild.
Think of these patterns as guardrails. Each one nudges work in the right direction with minimal overhead. They also reinforce each other: a one-page definition makes journeys easier to design; journeys make component reuse obvious; component reuse makes releases simpler; and releases on a predictable cadence make learning routine. When the critical conversion step (enquiry submit, sign-up and first-run activation, checkout, or a top self-service task) follows the same behavioural standard everywhere, leaders can trust the numbers and teams can move faster.
How to use this section:
Pick two patterns to pilot in the next fortnight. Don’t try to adopt everything at once.
Tune depth to scope. For a small update, keep artefacts light but real. For a bigger programme, formalise them and automate the checks.
Measure the effect on the signals you care about: lead time to live, conversion reliability, quality gate pass rate, and the time from release to decision.
What follows is the minimum set that consistently improves outcomes in large organisations while staying practical for in-house teams and agencies alike.

One-page definition before any build
A short, shared definition stops ambiguity at the door and anchors work to outcomes from Play 1.
What to capture on one page
Audience and problem — who this is for and what they are trying to do.
Five-second promise — the line a visitor must understand immediately.
Minimum proof — one or two credible signals near the top (for example, customer logos, quantified results, certifications).
Primary action — the single action you want taken on this journey.
Success measure — the key performance indicator with a baseline and review window.
First increment — the smallest end-to-end change that is worth shipping to learn something useful.
Risks and assumptions — the two or three things to watch.
How to use it
Read it aloud with the decision owner. If it will not fit on one page, it is not ready. If you cannot name the first increment, you are still shaping.
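Some teams also keep the definition as a structured record so a missing field is obvious at a glance. A minimal sketch in TypeScript, where the field names simply mirror the list above and are not a prescribed schema:

```ts
// Illustrative shape for the one-page definition; the field names mirror
// the checklist above and are assumptions, not a required schema.
interface OnePageDefinition {
  audienceAndProblem: string;    // who this is for and what they are trying to do
  fiveSecondPromise: string;     // the line a visitor must understand immediately
  minimumProof: string[];        // one or two credible signals near the top
  primaryAction: string;         // the single action you want taken
  successMeasure: {
    kpi: string;
    baseline: string;
    reviewWindow: string;        // e.g. "two weeks after go-live"
  };
  firstIncrement: string;        // smallest end-to-end change worth shipping
  risksAndAssumptions: string[]; // the two or three things to watch
}
```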
Origins of the one-page definition
This practice blends ideas leaders already trust: the decisive brevity of a Shape Up–style pitch (problem, appetite, solution), the risk-first mindset of product opportunity assessments, the clarity-first narrative of Working Backwards press release and FAQs, and the GOV.UK principle that discovery decides whether and what to build rather than producing artefacts for their own sake. The point is decisiveness, not paperwork.
Journeys before pages
Design the journey that moves the measure, then assemble the pages.
What to do
Name the outcome and the shortest journey that could credibly move it (for example, search → landing → proof strip → primary action → conversion step).
Place proof high on the page so visitors do not have to hunt for credibility.
Choose templates and components only after the journey is clear.
Write specific, accessible copy now — labels, hints and error messages — so it does not get deferred.
Signals of success
Heatmaps and scroll depth on your first increment show that visitors encounter the promise, proof and primary action without excessive scrolling or competing distractions.
One conversion pattern, everywhere
The critical conversion step is where intent becomes value. Treat it as a product of its own.
Behavioural standard
Inline guidance and validation as people type or move through steps.
Clear, respectful error messages placed at the point of failure.
Explicit consent where relevant, with links to policy pages.
Events captured consistently: view → start → error(type) → success.
Dependable routing to the next system (for example, marketing automation to customer relationship management) with deduplication and service-level agreements for assignment.
A success state that confirms what happened and offers a sensible next step.
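To make the standard concrete, here is a minimal TypeScript sketch of the event stream and inline validation. The event names, the trackConversion helper and the validation rules are assumptions for illustration; in a real build the emitter would push to your data layer or analytics SDK rather than the console.

```ts
// Illustrative event schema and emitter for the conversion step.
// Names (trackConversion, "enquiry-form", etc.) are assumptions for
// the sketch, not a prescribed analytics contract.
type ConversionEvent =
  | { kind: "view" }
  | { kind: "start" }
  | { kind: "error"; errorType: string; field: string }
  | { kind: "success" };

function trackConversion(step: string, event: ConversionEvent): void {
  // A real build would push to the data layer or analytics SDK;
  // logging stands in so the sketch stays self-contained.
  console.log({ step, ...event, at: new Date().toISOString() });
}

// Inline validation that reports errors at the point of failure and
// emits a typed error event, so friction shows up in the numbers.
function validateEmail(value: string): string | null {
  if (value.trim() === "") return "Enter your email address";
  if (!value.includes("@")) return "Check the email address format";
  return null;
}

const step = "enquiry-form";
trackConversion(step, { kind: "view" });
trackConversion(step, { kind: "start" });

const error = validateEmail("not-an-email");
if (error) {
  trackConversion(step, { kind: "error", errorType: "validation", field: "email" });
} else {
  trackConversion(step, { kind: "success" });
}
```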
Why it matters
If this step is inconsistent or brittle, everything upstream under-performs and leaders stop trusting the numbers.
Reusable components and a design system
Assemble experiences, don’t reinvent them.
Make components shippable
Definition — purpose, content rules, accessibility notes, and performance budget.
Examples — a real content sample that editors can copy.
Ownership — who maintains it (design, engineering, content).
Checks — what must pass before a change to this component ships.
Starter inventory
Hero value block; proof strip; call-to-action bar; comparison table; pricing card; testimonial; case study tile; resource teaser; conversion block (form, sign-up, checkout, task); consent banner.
Regular releases with objective gates
Make releases predictable and dull — in the best sense.
The release checklist
Accessibility — passes to Web Content Accessibility Guidelines 2.2 AA.
Performance — key templates meet Core Web Vitals budgets (Largest Contentful Paint, Interaction to Next Paint, Cumulative Layout Shift).
Measurement — events fire as designed; dashboards see the change.
Routing — test records traverse marketing automation to customer relationship management correctly.
Rollback — plan rehearsed and owner on call; feature flag or equivalent ready.
Monitoring — immediate checks after go-live; error tracking and alerting in place.
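Checks like these are most dependable when a script evaluates them rather than a person. A sketch of an automated gate in TypeScript, assuming your existing tooling supplies the measured values (the thresholds shown are the commonly cited "good" Core Web Vitals boundaries; treat all names as illustrative):

```ts
// Illustrative release-gate check. The metric names and budget values
// are examples; wire in numbers from your own accessibility, performance
// and analytics tooling.
interface ReleaseMetrics {
  wcagViolations: number;            // from an accessibility audit
  largestContentfulPaintMs: number;  // Core Web Vitals
  interactionToNextPaintMs: number;
  cumulativeLayoutShift: number;
  analyticsEventsVerified: boolean;
  routingVerified: boolean;
  rollbackRehearsed: boolean;
}

function releaseGate(m: ReleaseMetrics): string[] {
  const failures: string[] = [];
  if (m.wcagViolations > 0) failures.push("accessibility: WCAG 2.2 AA violations found");
  if (m.largestContentfulPaintMs > 2500) failures.push("performance: LCP over budget");
  if (m.interactionToNextPaintMs > 200) failures.push("performance: INP over budget");
  if (m.cumulativeLayoutShift > 0.1) failures.push("performance: CLS over budget");
  if (!m.analyticsEventsVerified) failures.push("measurement: events not verified");
  if (!m.routingVerified) failures.push("routing: MA to CRM path not verified");
  if (!m.rollbackRehearsed) failures.push("rollback: not rehearsed");
  return failures; // an empty list means the release may proceed
}
```

Wired into CI/CD, a non-empty failure list blocks the release, which is what makes the gate objective rather than a matter of opinion.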
Cadence
Ship on a rhythm that teams and agency partners can plan around. Predictability is a quality attribute.
Build small, learn often
Avoid “big reveal” launches that hide problems until late.
How to choose the first increment
End-to-end to production (not a prototype).
Measurable against a Play 1 outcome.
Safe to roll back.
Minimal dependencies.
Exercises the critical conversion step.
Close the loop
Book a review window when you define the work. On that date, look at the measure and decide what changes next — improve, scale, or retire.
Apply now
Adopt the one-page definition for all new requests.
Standardise the conversion pattern in one high-traffic journey first.
Publish a release checklist and use it for every change.
Measure your lead time to live for a month; fix the slowest handoff.
These patterns keep the engine light, observable and repeatable — so whatever you decide to build next, the path to live is clear and the quality bar is consistent.
Case study
Context
The AA needed to modernise a complex sales journey while integrating multiple back-office systems. The challenge was less about “new pages” and more about creating an engine that could compose, release and learn from changes without reinventing patterns each time.
What the programme centred on
A flexible library of reusable templates and components so pages could be assembled quickly and consistently.
Clear integration points into analytics, tag management and back-office systems, so the last mile (events and data routing) behaved predictably.
Editor enablement — training and guidance that let non-developers ship content changes on the same kit of parts.
A steady release cadence with repeatable sign-off and previews, reducing drama at go-live.
Targeted personalisation rules where evidence showed they mattered, avoiding complexity elsewhere.
What you can copy tomorrow
Start a template and component inventory and retire ad-hoc one-offs.
Standardise the critical conversion step (enquiry, sign-up, checkout or task), including inline guidance, respectful errors and end-to-end event naming.
Move to a rhythmic release window with a short, objective checklist (accessibility, performance, analytics, routing, rollback).
Signals and maturity
This last section of Play 2 is here to make progress observable without drowning anyone in dashboards. A small set of signals tells you whether the engine described in this play is working; a simple maturity view shows where you are today and what “better” looks like next. Keep it light. Review it on a regular rhythm (weekly or fortnightly is typical) and use the trends to steer the next decision.
The signals that matter
Lead time to live
The time from a one-page definition being approved to the change appearing in production. It reveals where work waits — approvals, environments, content readiness, tagging or routing. Most of the improvement you’ll see in Play 2 shows up here first.
Release cadence and size
How often you release and how much you bundle each time. Smaller, regular releases tend to be calmer, easier to review, and less risky. A steady rhythm also makes it easier for leaders to see cause and effect.
Conversion reliability
How consistently your critical conversion step works in the real world — the moment intent becomes value (for example, enquiry submit, sign-up and first-run activation, checkout, or a top self-service task). Look at completion rate, the common error types, and whether records route to the right queue within the service-level agreement (SLA). If this signal is weak, everything upstream will under-perform.
Quality gate pass rate
The proportion of releases that pass the non-negotiables first time: accessibility to Web Content Accessibility Guidelines (WCAG) 2.2 AA, Core Web Vitals budgets (Largest Contentful Paint, Interaction to Next Paint, Cumulative Layout Shift), analytics events firing as designed, conversion routing verified, rollback prepared. When this signal is healthy, releases feel predictable rather than dramatic.
Learning loop time
The time from release to the decision on what changes next. It keeps the engine honest. If decisions arrive quickly, your teams are using evidence rather than waiting for the next big reveal.
You don’t need perfect instrumentation to start. Use the tools you already have — work tracker timestamps for lead time; your repository or release notes for cadence; analytics for conversion; CI/CD checks for quality gates; a short note in the ticket to record decisions. Refine as you go.
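As one example of starting simple, lead time to live needs only two timestamps you already record. A TypeScript sketch with placeholder data (the record names and dates are invented for illustration):

```ts
// Illustrative lead-time calculation from two timestamps you already
// have: definition approved and change live in production.
interface ChangeRecord {
  id: string;
  definitionApproved: Date;
  liveInProduction: Date;
}

function leadTimeDays(change: ChangeRecord): number {
  const ms = change.liveInProduction.getTime() - change.definitionApproved.getTime();
  return ms / (1000 * 60 * 60 * 24);
}

// Placeholder records; in practice these come from your work tracker.
const recent: ChangeRecord[] = [
  { id: "PROJ-101", definitionApproved: new Date("2024-05-01"), liveInProduction: new Date("2024-05-09") },
  { id: "PROJ-104", definitionApproved: new Date("2024-05-06"), liveInProduction: new Date("2024-05-20") },
];

const average = recent.map(leadTimeDays).reduce((a, b) => a + b, 0) / recent.length;
console.log(`average lead time to live: ${average.toFixed(1)} days`);
```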
How to read the signals together
Lead time flat, cadence steady, conversion improving — the engine is running; scale the pattern to the next journey.
Lead time volatile, quality gates failing — fix the release step before adding scope; your risks are operational, not strategic.
Conversion weak, quality gates green — the experience is easy to ship but not yet persuasive; revisit the promise, proof and primary action in the one-page definition.
Learning loop slow — decisions are not being made; schedule the review when you define the work so the loop closes by design.
A simple maturity view
This isn’t a judgement or a certification. It’s a shared language to describe where you are and what “a bit better” looks like next quarter.
Ad hoc
Work arrives as page requests; definitions vary; releases are episodic; conversion steps behave differently by section; results are anecdotal.
What changes next: introduce the one-page definition and sketch the current path with owners and handoffs.
Defined
The path is visible; owners are known; a standard conversion pattern exists; there is a release checklist, though it isn’t always used.
What changes next: make the checklist your default and practise a rollback so confidence grows.
Managed
Journeys precede pages; components are reused; releases follow a rhythm; most changes pass the quality gates; conversion reliability is measured; the review window after go-live is booked and kept.
What changes next: shorten lead time to live by removing the slowest approval or environment dependency.
Optimised
Small end-to-end increments ship frequently; lead time is measured in days; conversion is dependable; teams close the learning loop quickly and retire low-performing experiences as a matter of course.
What changes next: sustain the rhythm and extend the shared standards to adjacent teams and partners.
Keeping it lightweight
Put the five signals and the maturity level on a single page alongside one or two sentences of commentary (what improved, what you’ll try next). That is enough for leaders to steer and for teams to act. The goal here isn’t more reporting; it’s clearer choices about where to focus effort in the next cycle.
Workshop template
Get access to the Miro template to use with your whole team to work through the DXPlaybook.
Glossary
Critical conversion step
The specific moment where intent becomes value (for example, enquiry submit, sign-up and first-run activation, checkout, or a top self-service task). It’s treated as a product in its own right.
First increment
The smallest end-to-end change that reaches production and can teach you something useful about the outcome you want (also called a thin vertical increment).
First-run activation
In product sign-ups, the early actions a new user must complete to get value (for example, verify email, create first project). Often a more reliable success signal than sign-up alone.
Feature flag
A switch that lets you turn a change on or off (for a segment, region or percentage) without redeploying — useful for soft launches and quick reversals.
Rollback
A planned way to restore the previous working version if a release misbehaves. It should be rehearsed, not theoretical.
Preview environment
A temporary, production-like environment generated from a branch or pull request so stakeholders review the real thing, not screenshots.
Event stream for conversion
A consistent analytics schema for the conversion step, typically: view → start → error(type) → success. It exposes real friction and success rates.
Data layer
A structured object that holds page and interaction data for analytics and tag managers. Designed to avoid personal data by default.
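A minimal sketch of what such an object might look like in TypeScript; the fields are assumptions, and the point is that nothing personal is present by default:

```ts
// Illustrative data layer shape: page and interaction context only,
// no personal data by default. Field names are an assumption.
interface DataLayer {
  page: {
    template: string;   // e.g. "landing", "pricing"
    section: string;    // site area for reporting
    language: string;
  };
  interaction?: {
    component: string;  // which reusable component fired the event
    action: string;     // e.g. "view", "start", "error", "success"
  };
}
```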
Consent management platform (CMP)
The tool that records user consent and enforces it, so non-essential tags and cookies only fire when permitted.
Marketing automation (MA)
Systems that capture, score and nurture enquiries (for example, HubSpot, Marketo). In this play they must receive clean, deduped data and hand off reliably to CRM.
Customer relationship management (CRM)
Systems of record for contacts, accounts and opportunities (for example, Salesforce, Dynamics). In this play they must receive correctly routed records from MA.
Lead deduplication
Logic that prevents duplicate leads or contacts when the same person submits more than once (for example, match on email + domain before creating a new record).
Routing (MA → CRM)
Rules that assign a new lead or case to the right queue or owner, with an agreed service-level agreement (SLA) for first response.
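Taken together with the deduplication entry above, the last mile can be sketched in a few lines. The matching rule, queue names and territories below are assumptions for illustration, not a recommended scheme:

```ts
// Illustrative deduplication and routing pass for a new enquiry:
// match on normalised email before creating a record, then assign
// a queue by simple rules. Names and rules are assumptions.
interface Lead { email: string; country: string; product: string }

const existingEmails = new Set(["anna@example.com"]);

function normalise(email: string): string {
  return email.trim().toLowerCase();
}

function routeLead(lead: Lead): { queue: string; duplicate: boolean } {
  const duplicate = existingEmails.has(normalise(lead.email));
  // Route by territory first, then product line; in the real system an
  // agreed SLA for first response would be attached to each queue.
  const queue =
    lead.country === "GB" ? `uk-${lead.product}` : `intl-${lead.product}`;
  return { queue, duplicate };
}

console.log(routeLead({ email: "Anna@Example.com", country: "GB", product: "sales" }));
```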
Performance budget
A target for key speed and stability metrics (for example, Core Web Vitals) that new templates or features must meet before release.
Quality gate
An objective, pass/fail check that runs on every release (for example, WCAG 2.2 AA, performance budgets, analytics events firing, routing verified, rollback ready).
Definition of done (DoD)
The shared checklist a change must satisfy before release, covering not just visuals but accessibility, performance, measurement and routing.
Top-task analysis
A quick way to prioritise by identifying the small set of tasks users most want to complete, then designing journeys around those tasks.
Proof strip
A reusable component that surfaces credibility early (for example, client logos, quantified results, certifications or awards).