Building a design capability, not just a product
Individual designers produce work. Design teams produce capability. This is a story about the latter.
The scale
Four UCD teams. Ten to twelve practitioners each. Five live digital services running concurrently. Four years.
At this scale, the work stops being primarily about individual design decisions and starts being about the systems within which decisions get made. Who makes them, how they get validated, how they get documented, how they get communicated across teams that may never be in the same room.
The five services were distinct in purpose but connected in context, all serving different parts of the same policy landscape, all under the same governance framework, all needing to meet the same standards. They ranged from external-facing applications used by schools and trusts, to internal tools used by staff managing complex casework, to hybrid services spanning both.
Leading this meant not just designing things, but building the conditions in which good design could happen at volume and with consistency.
The gap
When I joined the programme, teams were doing good design work. But they were doing it independently. Decisions made on one service weren’t visible to the team working on an adjacent problem. Research findings weren’t being shared. Patterns being developed in one place were being re-invented elsewhere. And when team members left (as they do on long-running programmes), the knowledge they held left with them.
This isn’t a failure of the designers. It’s a failure of infrastructure. And it’s a design problem.
Building the infrastructure
Design histories
The most important thing we built was a system for documenting design work: not just outcomes, but decisions. Why we tried something. What the research said. What we rejected and why. What the constraints were.
This practice, design histories, had been developed by another team elsewhere in the organisation. We adopted it, adapted it for our context, and embedded it across all five services.
But adoption required more than advocacy. Many designers on the teams couldn’t efficiently use the tooling that existed. So we built a CMS integration: a content management interface layered on top of the publishing system that let anyone on the team write and publish a design history without needing to know how the underlying technology worked.
The result was a practice, not a product. Over time, the histories became the institutional memory of the programme. New team members could read the history of a service and understand why it looked the way it did, which patterns had been tried and rejected, and what the open questions were. Stakeholders could see design thinking, not just design outputs.
Figma component library
Alongside the documentation practice, we built a shared Figma library: a set of components and patterns that could be used across all five services as a starting point for iteration.
The goal wasn’t consistency for its own sake. It was speed and quality: so that a designer starting a new feature didn’t spend the first week recreating components that already existed, and so that when components did need to evolve, the change could propagate across the programme rather than diverging.
The library also created a shared vocabulary. When designers from different teams discussed a problem, they could reference a component by name and know they were talking about the same thing.
The practice: research-led decisions at the detail level
The infrastructure created the conditions for good work. The work itself still had to be done.
Two examples of the kind of research-led decision-making the programme produced.
Language alignment. One service asked delivery officers to record the potential risks and negative factors affecting an academy transfer. The section was labelled “Benefits and other factors.” In user interviews, officers consistently described the content as risks. The label didn’t match their mental model. We renamed the section “Benefits and risks.” A two-word change. But the confidence with which users could complete the section improved noticeably in the next research round.
Decisions like this, small, grounded in observation, quickly testable and quickly verifiable, are what distinguish a design culture that pays attention from one that doesn’t.
Meaningful identifiers. The same service required a system for identifying individual transfer projects across the team, so officers could find their cases, managers could track progress, and the service could maintain clear records over time. We designed a reference number system based on region, transfer type, and sequence. We tested it. Users found it functional but slightly opaque.
In a subsequent round, we tested an alternative: using the outgoing trust’s name as the primary identifier, with the reference number as a secondary label. Users were clear which they preferred: the name was already how they thought about their cases. We updated the design accordingly.
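The identifier scheme described above can be sketched roughly as follows. The exact region codes, transfer-type codes, and number format are assumptions for illustration; only the three ingredients (region, transfer type, sequence) and the name-first display come from the source.

```python
# Hypothetical sketch of the identifier scheme: region code, transfer type,
# and a zero-padded sequence number. The real codes and format are assumed.

def make_reference(region: str, transfer_type: str, sequence: int) -> str:
    """Build a reference like 'SE-SAT-0042' (format assumed)."""
    return f"{region.upper()}-{transfer_type.upper()}-{sequence:04d}"

def display_label(trust_name: str, reference: str) -> str:
    """After testing, the outgoing trust's name leads and the
    reference number becomes a secondary label."""
    return f"{trust_name} ({reference})"

ref = make_reference("se", "sat", 42)
print(display_label("Example Academy Trust", ref))
# → Example Academy Trust (SE-SAT-0042)
```

The design change tested with users is captured in `display_label`: the generated reference still exists for record-keeping, but the name users already associate with the case comes first.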
Neither of these decisions was dramatic. Both required observation, testing, and willingness to change based on what we found. Across five services and four years, this is how design quality compounds.
Accessibility as systematic practice
One of the services, an external application used by schools, underwent a structured accessibility remediation programme. Not an audit conducted once at the end, but a sustained process of identifying, prioritising, and resolving accessibility issues across the live service.
The issues ranged from structural problems (missing page titles, inadequate error notifications) to interaction-level failures (upload components that didn’t communicate state to assistive technology, links that duplicated adjacent text without meaningful distinction).
Each issue was documented, assessed for impact, designed against, implemented, and verified. The practice was systematic, not reactive.
This is what it looks like to treat accessibility as a first-class design concern rather than a compliance checkpoint.
The legacy
By the end of four years, the design histories system was embedded across the programme and being actively maintained. The Figma library was in use across all five services. The ways of working we had developed, including a cycle-based approach combining elements of Shape Up and Scrum, had outlasted several team membership changes.
The strongest signal that the infrastructure worked: when new designers joined the programme, they could become productive quickly, not because they were exceptional, but because the knowledge was accessible. The design was in the history, not in someone’s head.
That’s what building a design capability looks like.