Measuring design system “adoption”
Measuring design system adoption is like measuring how much you love your children. It’s hand-wavy and adjective-laced, and head-scratching ensues once the numbers come out. However, developing an adoption metric is attainable, productive and won’t land you in family therapy. As long as your “adoption” metric isn’t measuring adoption.
Measuring true adoption is impossible
Design system adoption isn’t quantifiably measurable because adoption isn’t purely quantitative. It’s also one of those things I suspect we all think we agree on in definition, but there’d be some wild disagreements once we got down to brass tacks. My take: an adoption measure that only counts how much the system is used misses whether it’s used correctly. And it’s that last part that’s the monkey wrench. Once intended use is in the picture, well, now you’re crossing quantitative and qualitative streams.
So, if you buy what I’m selling, a true adoption measure isn’t attainable. Congratulations! You now have the freedom to define something less loaded than adoption while still providing useful signal.
Measure aspiration instead of adoption
Now is a good time to stop reading and think about what wild, unmitigated success looks like for your design system. What part of that wild, unmitigated success is:
- Quantifiable
- Simple to understand
- The most critical marker of success
That’s your “adoption” measure.
As an example, if success looks like enabling teams to quickly develop new components on top of your system, perhaps you measure the use of your tokens and building-block components in non-system components. I’m spit-balling here; you’ll come up with something better.
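To make that spit-balled example a little more concrete, here’s a minimal sketch, assuming you can already tell which non-system component files import your system’s tokens or primitives. The component names, import names and data below are all invented for illustration; the only point is the arithmetic.

```typescript
// Hypothetical sketch: what share of non-system components are built on
// system tokens/building blocks? All names and counts here are made up.
interface CustomComponent {
  name: string;
  importsFromSystem: string[]; // system tokens/primitives the file imports
}

// Invented example data; in practice you'd generate this by scanning source.
const customComponents: CustomComponent[] = [
  { name: "ProfileCard", importsFromSystem: ["Box", "colorTokens"] },
  { name: "LegacyBanner", importsFromSystem: [] },
  { name: "SearchFilter", importsFromSystem: ["spacingTokens"] },
];

// A non-system component "counts" if it uses at least one system primitive.
const builtOnSystem = customComponents.filter(
  (c) => c.importsFromSystem.length > 0
).length;

const share = builtOnSystem / customComponents.length;
console.log(
  `Non-system components built on system primitives: ${(share * 100).toFixed(1)}%`
);
```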
An approachable measure helps reinforce your vision of success
Simple is hard. We all know that. But an approachable metric is critical. I’d argue the way you measure success can be just as powerful a narrative as the measure itself. So how do you make your measure of success approachable? I can tell you it’s not with more math. And that’s the beauty of not measuring adoption: you get to choose your own adventure.
For my work on Gestalt, success has always been moving usage from the small, atomic components to the bigger, more opinionated ones. Thus, the measure I’m proposing is to divide usage of our bigger, opinionated components by usage of everything else. Meaning our own smaller components count against us. The rationale goes back to the fact that while our smaller components are critical, their usage does not equate to wild, unmitigated success. There’s even signal that those smaller components impede the usage of bigger ones. Is that an accurate measure of adoption? Nope. Might it even be a little controversial? Sure. But it was the clearest measure I could think of that communicates success.
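Here’s a minimal sketch of that ratio, assuming you already have usage counts per component from some kind of codebase scan. The counts are invented and the component split is illustrative; only the arithmetic (opinionated usages divided by everything else) reflects the measure described above.

```typescript
// Minimal sketch of the proposed measure: usages of bigger, opinionated
// components divided by usages of everything else, including the system's
// own smaller components. Counts are hypothetical.
const usageCounts: Record<string, number> = {
  // bigger, opinionated components
  Modal: 120,
  Callout: 45,
  // smaller, atomic components (these count against the measure)
  Box: 900,
  Text: 750,
  // non-system components
  CustomCard: 300,
};

const opinionated = new Set(["Modal", "Callout"]);

let opinionatedUsages = 0;
let everythingElse = 0;
for (const [component, count] of Object.entries(usageCounts)) {
  if (opinionated.has(component)) {
    opinionatedUsages += count;
  } else {
    everythingElse += count;
  }
}

console.log(
  `"Adoption" measure: ${(opinionatedUsages / everythingElse).toFixed(2)}`
);
```

One design choice worth noting: putting the system’s own atomic components in the denominator is what makes the number controversial, and also what makes it a statement of intent rather than a neutral usage count.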
Metrics are a hell of a drug
I love numbers. In moderation.
In my experience, metrics can be a gateway drug to more metrics. Without discipline, you end up with metrics for the sake of metrics. Then one day you wake up in a pool of spreadsheets and aren’t quite sure how you got there.
Metrics can be fudged, manipulated and stomped on. Worst of all, metrics can abstract away what you’re actually trying to accomplish. Without a healthy skepticism of metrics, you can fall into the trap of believing that moving the metric is the job. It’s not. And despite your best efforts, once this measure becomes a KPI or shows up in OKRs, some people will be focused solely on it.
Let’s be clear: you’ve hit wild, unmitigated success when you’ve hit wild, unmitigated success. Metrics can help validate and contextualize, but chances are, you’ll already know when you’ve done it.