Design System Discussion: Measurement - Takeaways and the video

Luke Murphy

On December 8th we held the second session in our Design System Discussion event series on Zoom. I was joined by three amazing panelists, Nathan Curtis, Maya Hampton, and Gabby Hon, to answer questions about one of the more difficult topics surrounding design systems: measurement. We recorded the session for anyone who missed it or wants a recap, and you can keep reading for key takeaways and quotes from our panelists. Here's the video in case you weren't able to join:

Key Takeaways

This is a summary of key takeaways from the discussion. The main points were shared between Gabby, Maya, Nathan, and Katrina. The takeaways are a mix of quotes and summarized conclusions based on the panelists' answers. We hope you enjoy it and get a lot from it!

What does measurement mean to you?


  • From a DesignOps perspective, measurement is essential to know best how to support a team. Measurement is about looking at and defining outcomes — it doesn't have a lot of value in a vacuum, so unless you've identified the results you're looking for, you can't track it.
  • What I care most about is, are people talking about it? Are people contributing to it?


  • Outcomes are a great way to start. You can get abstract, but the real value comes from digging into the things a design system makes and how they're used, which is where it gets tangible.
  • Outcomes can be things like solving a problem or progressing a specific item from A to B, but if you want to get more specific, start with the design system itself.
  • Design systems are a process of creation, so it's good to ask how much did we create? How much did other people use? Are they happy with what they are using? Are we doing it efficiently?
  • Are we minimizing costs to maximize the benefit of what we're trying to produce?
  • Ultimately, how it impacts the users is hard to get at, and that's what you want to find out.


  • Treat the design system as a product. Looking at it that way makes it easier to find ways to measure it. Focus on the value proposition. Efficiency metrics are valuable to companies for managing what we deliver, and specific metrics can help show whether teams are getting more work done consistently.

What are some indicators of success within a design system?

Awareness of the design system is critical when starting to build it out. A few indicators can point to success. Adoption is an essential metric, and you can watch its evolution over time: look at the ongoing adoption of updates and new releases, and consider the interest and engagement of viewers and contributors around the usage of current components. Coverage is another indicator that can be measured; in Figma, analytics can show how much of your designs your toolkit is covering.

Qualitative aspects also indicate success, especially if the team can understand the system with little confusion. The design system should be organized in a way the team can understand and use. If there is confusion, spend the time to revise it and ensure the team understands the design system better with each revision. Done correctly, this ultimately leads to more adoption and success.

Metrics you can also review are:

  • How many people have we onboarded to the system?
  • Who's participating in our office hours or joining the Slack channels?
  • Is there knowledge about what we're doing if we have a branded name?
  • Do people understand what that is?
  • What is the percentage of usage?
  • How much of the code is being used by the team?
  • Do they understand what a design system is? If so, I think that's an exciting initial metric.
  • How much is the system covering our product portfolio and to what degree have we reached sufficient coverage to feel that we're successful in what we've done?
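Several of the metrics above, such as usage percentage and coverage, boil down to simple counts. As a minimal sketch (not tied to any specific tool), assuming you can export a list of component instances tagged by whether they came from the system or were built custom, with all names here hypothetical:

```python
from collections import Counter

def adoption_stats(instances):
    """Compute usage percentages from (component, source) pairs, where
    source is "system" for design-system components and "custom" for
    one-off components built outside the system."""
    by_source = Counter(source for _, source in instances)
    total = sum(by_source.values())
    system_pct = 100 * by_source["system"] / total if total else 0.0
    return {
        "total_instances": total,
        "system_pct": round(system_pct, 1),
        "custom_pct": round(100 - system_pct, 1) if total else 0.0,
    }

# Hypothetical export: component name and where it came from.
instances = [
    ("Button", "system"), ("Button", "system"),
    ("Dropdown", "system"), ("DatePicker", "custom"),
]
print(adoption_stats(instances))
# {'total_instances': 4, 'system_pct': 75.0, 'custom_pct': 25.0}
```

Tracking these numbers per team or per release gives you the trend lines the panelists describe, rather than a single snapshot.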

What is more important: quantitative or qualitative metrics?

Both provide value; one is not more important than the other. Qualitative metrics help tell the story and give meaning to the quantitative metrics. If you want to better understand how people use the design system, track downloads, track the feedback things are getting in GitHub, and look at code rework. Then take those numbers and talk to people about what they mean to find out where the value is.

How do you provide the value of qualitative versus quantitative metrics to senior leadership?

Qualitative data informs the storytelling you need to express the core value proposition, and it helps you keep reinforcing that proposition through your communications to your customers, sponsors, and leaders. Although senior leadership often seeks quantitative results, qualitative conversations can sway them enough to move the needle and reveal the value in the design system. If you can get numbers, it's advantageous to establish that baseline and watch trends over time, but the real impact comes from combining the numbers with users' stories.

Are there any bad metrics to measure?

All metrics provide some value, but how much varies depending on the objectives that have been set. Efficiency, for example, is the goal for most companies, but it is complicated to measure. It confounds people: they say the primary reason the system exists is efficiency, yet they have no idea how to measure it. Measuring efficiency would require inspecting people's work habits along with a varying array of other effects, including their skill level, constraints, priorities, and the existing code base they're working with. You cannot tease out the impact of the system in a concrete, statistically significant way.

When do you think you should start implementing metrics for design systems? When is the best time to do that?

It's best to start as soon as possible. You do not need to have everything perfectly mapped out. Start small: tell the team you will hold office hours to explain the concept of the design system to the company or your partners in IT. Office hours or lunch-and-learns will surface new things for you to look at and measure. It will feel strange at first; take the journey and trust your team. It's good to start small, with a specific target you want to measure.

Tips on how to start:

  • Create a sign-up form so you know who's coming and can prepare topics. Have something ready for the first meeting so you have an idea of who will be there.
  • Knowing who's participating will help your team get in the habit of tracking who cares and who is involved.

Do you use any tools to visualize all your data and metrics?

Spreadsheets are your best friend. Many fancy applications exist, but a spreadsheet is where you get the most value at the end of the day, and it will probably outlast any application long-term. For most design systems, you can create a spreadsheet listing every potential adopter of your system, as teams or groups within your company, one per row. Each row should include a developer lead point of contact, a design lead point of contact, and other relevant roles of people managing, viewing, or contributing to the design system.

It will be a manual conversation with the person in each of those rows, once every quarter, to check in on how they're doing. Through this process, you can find the pockets of adoption across your portfolio and identify which groups are the strongest or weakest adopters, and why (if you ask the right questions). Not everyone will adopt the design system. It'll be one person doing a lot of manual work in a spreadsheet, but you don't need an engineer to build all of this out to move the needle of influence on where your scope and investments go.
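The adopter sheet described above can also be modeled as plain rows in code when you want to automate the boring parts, such as spotting teams that are overdue for a check-in. A minimal sketch using Python's standard csv module, with hypothetical team names and fields:

```python
import csv
import io

# One row per potential adopter team: points of contact plus whatever
# adoption signal you collect in the quarterly check-in conversation.
FIELDS = ["team", "dev_lead", "design_lead", "adoption", "last_check_in"]

rows = [
    {"team": "Checkout", "dev_lead": "A. Dev", "design_lead": "B. Designer",
     "adoption": "high", "last_check_in": "2021-Q4"},
    {"team": "Search", "dev_lead": "C. Dev", "design_lead": "D. Designer",
     "adoption": "low", "last_check_in": "2021-Q3"},
]

# Write the sheet as CSV so it can live alongside (or replace) a spreadsheet.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=FIELDS)
writer.writeheader()
writer.writerows(rows)

# Find teams overdue for this quarter's manual check-in conversation.
current_quarter = "2021-Q4"
overdue = [r["team"] for r in rows if r["last_check_in"] != current_quarter]
print(overdue)  # ['Search']
```

The point is not the tooling: the rows and the quarterly conversation are the system, and the code just keeps the list honest.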

Where does the design system have the most impact to push forward? With whom should you share these metrics?


It would be best to push it to anyone who will listen. Often you're tailoring your message to your audience, so pull forward the specific metrics that audience cares about. It's best to have quarterly planning cycles and consistently report the measurements and OKRs. Keep in touch with your stakeholders about adoption, usage, gaps in coverage, and whether people are staying up to date, and watch for those opportunities. We use the measurements to observe internal teams: they give a sense of which components people are and aren't using, and they signal when to follow up with our users. A lot of the time, they validate what we hear anecdotally, and it's nice to have the numbers to back that up. We use that internally, both with our team and with our stakeholders.

One thing that surprises most people is how political design systems are. People who don't even think they have an opinion, be it about a button or a dropdown menu, end up having all kinds of views. It's about working with the team to decide what you want to track initially, then sharing that with your leadership, design leadership, and IT leadership. Get a sense of who your stakeholders are and what they care about. Does what those stakeholders care about align with the list of things we think we want to accomplish and measure?

Measurement is ultimately about building alliances and finding people to champion your team's work and the design system as a whole. It is a prolonged and agonizing process. Come at it from a UX perspective: find out how to understand the users adopting the system, communicate that you want to achieve something together, and ask how you can make that happen. Going back to the first principles of user experience is the best way to understand how to provide value to our users.

Are measurements that keep the system funded the same as what keeps the system team motivated?

The motivations behind business outcomes are different from your adopters' motivations and those of the people making the system. There are various incentives and opinions based on what different people want from the system, and they can be very divergent. A system that succeeds often addresses all of those different dimensions to varying degrees. Factors outside measuring and funding also affect motivation, like the team's happiness and well-being within the company.

Robust metrics can be challenging without engineering resources. What ways of measuring or estimating value have you found most useful in getting initial buy-in for resourcing a design system before engineering resources are available?

It gets tricky to get numbers on coverage and usage of specific components, but if you're looking for the initial buy-in, that's the opportunity to use qualitative data. Talk to the people you know, build those relationships, and figure out where the opportunities are. Make a spreadsheet, keep track of which teams can use what, start figuring out where the technical constraints are, and get a sense of the potential blockers for whatever you're trying to measure, or even for the measurement itself. If you can start to create that story around the outcomes you want delivered but don't have a great way to track them, that becomes your case for engineering help and more funding for your team.


Routine health-check conversations with adopters of a design system are something I sometimes do in my consulting, bringing in the designers and developers from the same team. Recently, I had a conversation with a developer who mentioned their team had adopted the DS about 80% and was participating by commenting or contributing in some way. Within another month or two, they expect to be at 95% for anything the DS can be used for. At that point, they will be virtually fully adopted. That is a very healthy adoption rate, and there wasn't a problem for us to solve: there isn't much incremental value we could serve them with to move the needle forward.

Is there any value in staffing a research role early in the foundation of a design system team, specifically to spearhead the measurement end of all things research? Or is that more the job of the PO or PM until the team matures to a point where it requires additional headcount?

Treat your design system as a product. Think about all the distinct skill sets you need to ensure product success: accessibility specialists, researchers, content strategists, QA, testers. These are not typically part of the core team that establishes a design system in most places, but they will help it grow in the right direction. Many design system teams start small and scrappy, so it's almost everyone's job to do this work. The Product Manager can often help fill the gaps before getting to an ideal state with fully dedicated resources, like a researcher, to help plug in measurement.

It should grow to be a partnership between product and UX research. Research provides more value to help move the needle forward; sharing verbatim quotes from internal and external customers can help sway the minds of senior leadership, helping to get the necessary funding or support that the team needs.

Have you encountered any design systems where the teammates do not adopt the design system despite communicating its benefits multiple times, and if yes, how do you deal with it?

The goal most times is for everyone to adopt the design system, but that can be an unrealistic goal. Researching and asking the team whether there was a specific reason they needed to break a component or build something custom will help you understand the pitfalls in the adoption process. Reaching out to the people not adopting and finding out whether there's a way to support them, such as through one-to-one conversations, can honestly go a long way. Interviewing a team member can uncover unanswered questions, like why they did not realize something was available or how they were supposed to be using it, and help locate the root of the resistance.

Favor community over control when you're trying to make design systems successful. In this process, you don't want to become an enforcer because it puts the users in a very awkward position. It makes it difficult to build alliances with others and encourage usage or articulate the value of having this design system in place for all the customers.

Consider the following:

  • How is the team working with the design system?
  • How does it help with their workflow?
  • Are there difficulties?
  • Have you had to invent things on your own?

We're migrating from x products using a library like MUI to a proper organization-wide design system. I was thinking of measurements like "how much time would it take for you to create these 10 components," but since each product is already using MUI, that answer would be skewed: they would say "pretty fast," but they don't take consistency across the organization into scope. Does that make it a bad metric? Or how would you tackle this?

It may take 5 seconds for a designer to create a button in Figma, but that does not mean you have a robust, reusable, scalable, and durable button component. There's a distinction between making something fast and making something with minimally sufficient quality. A design system has to check a lot more boxes for different types of customers, versus an individual product team that only has its own boxes to check. You have to think about those dimensions when discussing the scalable value of the design system.

Do you have any examples of Key Performance Indicators (KPIs) that you've used in the past that worked well for you? The challenge is to include numbers and timeframe to be an actual KPI.

It's a tough sell because everyone wants quantitative data. If you want more direction on which KPIs and OKRs to set, there's a series of books by long-time information architect Christina Wodtke. She frames them very intelligently, and they'll help you write your OKRs and KPIs differently. It takes a lot of time and effort to get to sophisticated numbers. If you need to provide metrics, keep it simple: can you track numbers around lunch-and-learns, office hours, and evangelizing sessions with stakeholders?

What percentage needs to be used and to what extent for adoption to be providing value to your team internally and externally?

It's easier to break it down with a rule of thumb that may be a tad inappropriate for all this quantitative talk: the rule of threes.

How satisfied will our team be when we get ⅔ of our potential adopters using the system? ⅔ adoption might be enough for your team to feel sufficient energy and to set realistic goals.
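As a small worked example of this rule of thumb, with hypothetical adopter counts:

```python
from fractions import Fraction

def meets_rule_of_threes(adopters, potential_adopters):
    """True when at least two-thirds of potential adopters use the system."""
    return Fraction(adopters, potential_adopters) >= Fraction(2, 3)

print(meets_rule_of_threes(8, 12))  # True  (8/12 equals 2/3)
print(meets_rule_of_threes(7, 12))  # False (7/12 is below 2/3)
```

Using exact fractions avoids the floating-point edge cases you'd hit comparing ratios right at the threshold.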

A lot of people are asking for your favorite metrics or measures. Do you have a go-to metric for senior leadership that you use for spinning up a design system vs for a mature design system?

Adoption continues to be the go-to metric; there's new adoption and ongoing adoption. Once you have more robust measurements, you can also get a better sense of coverage across the website. With those numbers, you can infer a lot of great stories about the impact on the end-user experience. Over time, as adoption continues, it gets easier to track and makes a stronger argument for senior leadership to buy into the system. Do anything you can to track senior leadership's interests, combine them with your design system's goals at a given moment, and tie both to your design system roadmap. It may not be evident initially, but start having those conversations with senior leadership early to establish their goals.

When you're trying to build support and alliances through the organization, it's essential to understand:

  • What is the company roadmap of the year?
  • What are the company OKRs for the year?
  • What is the senior leadership interested in?

On the flip side: how comfortable should you be when stakeholders say they don't 'need' metrics or measures for the design system, they just want it? Is this a big red flag?

That would be an ideal situation because you wouldn't need to spend the time to make a case. It's still important to dig into the incentives of senior leadership to make sure you can continually get funding by getting a good sense of what they are aiming to accomplish.

At some point, any senior leader will be asking for numbers, for measurement tracking because that's just part of their role. It's great that senior leadership is behind the team, but it's vital to be proactive and discuss with them to align your goals and theirs. It's recommended to share the data you collect with senior leadership along the way to keep them in the loop, which also helps build alliances with them.

Do you have any advice or recommendations you would like to share regarding measurement or work in general with the audience?


  • Start something even if you feel like you do not have the resources. Have conversations with people; don't be afraid to reach out and get that qualitative data. It can serve you in the long run as your baseline while you test different things and figure out what's interesting.
  • See what sticks with the different people you talk to. It doesn't hurt to get a range of things out there to understand how people use the system and what they get out of it.


  • The key is to over-communicate. If you feel like you're talking everyone's ear off about the design system, its status, its goals, and its roadmap, keep going. People pay attention at different levels and at different times, and there are a lot of nuances to a design system, as we talked about earlier.
  • It can be political, depending on your organization. Keep talking to people even if you feel uncomfortable doing it; that's probably a good sign. It will save you a lot of time in the future. Over-communicate, be clear, track how people react to what you're saying, and make tweaks as you go.


  • Focus on some basic questions: who on your team, whether you're a leader or a teammate, is interested in doing this and has the space and time to do it well? What should they measure? The problem I see is that people tend to ask "is this making us more efficient?" rather than "how much time did this person save creating this compared to before?"
  • Start with spreadsheets. Model the data you're collecting, or model the form of the outcome you want to present, and build out what you're starting to measure, even manually at first, to prove whether or not you're measuring something useful.

