
Peter finds himself in a state of uncertainty when it comes to testing products for his company. On one hand, he knows his tests are effective because bugs are being identified and fixed. However, when it's time to quantify the value of his efforts, he gets stuck. How can he showcase his efforts? How can he benchmark his success when his processes aren't standardized, and his only indication of success is the identification of bugs?
This dilemma keeps him up at night because, as a product manager, understanding metrics and building reliable processes are crucial to his role. He wants to demonstrate the usefulness of his user testing efforts, but he doesn't know:
- How to measure test performance accurately and consistently,
- How to calculate the impact of tester feedback on the overall product, or
- Which values truly reflect the impact of his user testing program.
How can he move forward?
Common Mistakes Companies Make
Peter's story isn't unique; it reflects experiences we've heard from customers over 20 years of developing user testing best practices and software. Seventy percent of test managers find it challenging to prove the ROI of their efforts, whether due to a lack of time to gather data, uncertainty about how to calculate it accurately, or both.
This is understandable because it is indeed challenging. Which metrics should you measure? Bugs collected? Bugs fixed? Surveys completed? Number of active testers? Tester engagement levels? Without knowing which metrics are most impactful, you can waste a lot of time (we'll delve into this shortly).
Then there's the challenge of collecting these results and data signals from various sources. If you're not using an all-in-one platform, you're burdened with the time-consuming task of pulling data from multiple spreadsheets and tester emails. You might also need to rely on stakeholders in Engineering, Sales, and Product Success, which means adhering to their timelines and bandwidth before you can even start.
All this work, and you haven't yet calculated the impact of your efforts. If you're a part-time test manager (i.e., it's only a small part of your role and you have other responsibilities), you likely don't have spare hours each week to locate all these data points, clean them up, and manually perform the calculations.
The Pitfall of Vanity Metrics
As mentioned, most test managers lack either the time or the knowledge to measure the impact of their efforts. But some have OKRs and need to show something. This is usually where vanity metrics come into play.
User testing requires considerable time and focus (if you're not using automation to assist). You want something to show for your efforts. However, metrics that don't delve deep enough aren't truly useful — that's what makes them "vanity metrics."
To clarify, these metrics are real numbers that serve a purpose. Naturally, you'll want to measure the number of bugs submitted by user testers. But that alone won't tell you the impact fixing those bugs had on product success. While knowing the number of bugs doesn't create value on its own, it can be used with other signals to tell a complete story.
Here are some examples:
- Vanity metric: total bugs submitted. More actionable: how many of those bugs were fixed, and how satisfaction changed as a result.
- Vanity metric: number of testers recruited. More actionable: how many of those testers are actively engaged and submitting feedback.
Centercode Product Director Chris Rader put it best:
"Ask yourself: What can I actually do with this data point? If it's something you can't take action on, it's a vanity metric — and you need to dig deeper."
Impressing your boss and stakeholders with large numbers like how many bugs you've collected or how many testers you've recruited might dazzle them initially. But showing them the actual impact of those efforts, like how fixed bugs have increased positive feedback or your product's NPS score since the last iteration, is much more valuable.
The Only 3 Metrics You Need to Track
If this resonates with you and you want more effective and impactful measurements, here are three metrics you can use to show the value of your user tests and take action:
- Project Health: This metric shows how well your project is performing from a feedback and engagement perspective. More than just showing the number of bugs or activities completed, it's calculated by comparing the engagement you already have with the engagement your project needs overall. This signal shows you how confident you can be in your results and where you might need to take action to increase engagement.
- Project Impact: This metric shows you, your boss, and your stakeholders how much better your product has become since resolving the feedback submitted by your testers. You can calculate this score by comparing your product's initial satisfaction score to its new satisfaction score after implementing feedback.
- Product Success: This metric is incredibly useful because it predicts how successful your product or feature will be at the time of launch or release. It's calculated by weighing all the positive and negative experiences testers submit during your user test, giving you a single score tied directly to actionable improvements.
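To make these definitions concrete, here is a minimal Python sketch of how the three signals could be computed. The formulas are illustrative assumptions based on the descriptions above (an engagement ratio, a satisfaction delta, and a weighted positive-vs-negative score), not Centercode's actual proprietary calculations:

```python
def project_health(actual_engagement: float, needed_engagement: float) -> float:
    """Engagement achieved as a percentage of the engagement the project needs,
    capped at 100 (assumed formula)."""
    return min(actual_engagement / needed_engagement, 1.0) * 100

def project_impact(initial_satisfaction: float, current_satisfaction: float) -> float:
    """Point change in satisfaction after resolving tester feedback (assumed formula)."""
    return current_satisfaction - initial_satisfaction

def product_success(positive_weighted: float, negative_weighted: float) -> float:
    """Share of weighted tester experiences that were positive, as a 0-100 score
    (assumed formula)."""
    total = positive_weighted + negative_weighted
    return 0.0 if total == 0 else positive_weighted / total * 100

# Hypothetical example: 120 engagement units achieved of 150 needed,
# satisfaction moved from 62 to 78, and testers logged 340 weighted
# positive vs. 85 weighted negative experiences.
print(project_health(120, 150))   # 80.0
print(project_impact(62, 78))     # 16
print(product_success(340, 85))   # 80.0
```

The point of the sketch is the shape of each metric, not the exact weights: each one turns raw counts into a comparison (achieved vs. needed, before vs. after, positive vs. negative) that you can actually act on.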
Learn more about these three metrics — and how the Centercode Platform auto-generates them to save you time — in this on-demand webinar, Learn How Centercode Delivers Delta Metrics That Matter.


