Peter is in a state of limbo when it comes to testing products for his company. On one hand, he knows his tests are somewhat effective: bugs are coming in, and they're getting fixed. But when it's time to actually quantify the value of his efforts, he gets stuck. What's the best way to demonstrate the value of his work? How can he benchmark his success when his processes aren't standardized and his only clue that they're working is a stream of incoming bugs?
This keeps him up at night because, as a product manager, understanding metrics and building reliable processes are critical to his role. He'd love to be able to show the usefulness of his user testing efforts, but he doesn't know:
- How to measure test performance accurately and consistently,
- How to definitively calculate the impact of tester feedback on the overall product, or
- Which values will actually show the impact of his user testing program.
How can he move forward?
What Most Companies Are Doing Wrong
Peter's story isn't unique — in fact, it's an amalgamation of the stories we've heard from our customers over 20 years of developing user testing best practices and software. 70% of test managers find it difficult to prove the ROI of their efforts, because they don't have the time to pull in the data, don't feel confident they can calculate it accurately, or both.
It's understandable because, well, it's really difficult. First, which metrics do you measure? Bugs collected? Bugs fixed? Surveys completed? How many active testers you have? How engaged those testers are? Without knowing which metrics best demonstrate your impact, you can waste a lot of time (we'll get into this in a sec).
Then there's the matter of actually collecting these results and data signals from all over the place. If you aren't using an all-in-one platform, you're saddled with the time-consuming task of pulling data from multiple spreadsheets and tester emails. You might also need to rely on your stakeholders in Engineering, Sales, and Product Success, which means you're also beholden to their timelines and bandwidth before you can even start.
All this work and you haven't actually gotten to the part where you're calculating the impact of your efforts. If you're a part-time test manager (i.e., it's only one small part of your role and you've got other responsibilities to handle), you probably don't have a spare few hours each week to locate all these data points, clean them up, and manually perform the calculations.
Vanity, Vanity, Vanity
As we said above, the majority of test managers either don't have the time or knowledge to measure the impact of their efforts. But some of you out there have OKRs and need to show something. This is usually where vanity metrics pop up.
User testing takes a good bit of time and focus (if you're not using automation to help you out). You want something to show for your efforts. But at the end of the day, metrics that don't go deep enough aren't actually useful — that's what makes them "vanity metrics."
To be clear, these metrics are real numbers that serve a purpose. Of course you'll want to measure the number of bugs being submitted by user testers. But that won't tell you the impact fixing those bugs had on product success. While knowing how many bugs you've got doesn't create value on its own, you can use it in combination with other signals to tell a complete story.
Centercode Product Director Chris Rader said it best:
"Ask yourself: What can I actually do with this data point? If it's something you can't take action on, you've got a vanity metric — and you need to dig deeper."
Showing your boss and stakeholders a big number like how many bugs you've collected or how many testers you've recruited might dazzle them at first. But showing them the actual impact of those efforts, like how fixed bugs have increased positive feedback or your product's NPS score since the last iteration, is much more useful.
The Only 3 Metrics You Need to Track
If all of this feels very real to you and you want more effective and impactful measurements, here are three metrics you can use to show the value of your user tests and take action!
- Project Health
This metric shows you how well your project is performing from a feedback and engagement perspective. More than just showing the number of bugs or activities completed, it's calculated by comparing the engagement you already have against the engagement your project needs overall. This signal shows you how confident you can be in your results and where you might need to take action to increase engagement.
- Project Impact
This metric shows you, your boss, and your stakeholders how much better your product has gotten since resolving feedback submitted by your testers. You can calculate this score by comparing your product's initial satisfaction score to its new satisfaction score after the feedback has been implemented.
- Product Success
This metric is incredibly useful because it predicts how successful your product or feature will be at the time of launch or release. It's calculated by weighing all the positive and negative experiences testers submit during your user test. This gives you a single score tied directly to actionable improvements you can make.
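To make the three descriptions above concrete, here's a minimal sketch of what calculations like these could look like. These formulas are illustrative assumptions based only on the plain-language descriptions in this post — the variable names, scales, and weights are ours, not Centercode's actual implementations.

```python
def project_health(actual_engagement: float, target_engagement: float) -> float:
    """Engagement collected as a percentage of the engagement the project
    needs (capped at 100). Inputs are hypothetical aggregate counts, e.g.
    feedback submissions plus completed activities."""
    if target_engagement <= 0:
        raise ValueError("target_engagement must be positive")
    return min(actual_engagement / target_engagement, 1.0) * 100


def project_impact(initial_score: float, post_fix_score: float) -> float:
    """Percentage improvement in satisfaction after resolving tester
    feedback. Scores can be on any consistent scale (e.g., star averages)."""
    if initial_score == 0:
        raise ValueError("initial_score must be nonzero")
    return (post_fix_score - initial_score) / abs(initial_score) * 100


def product_success(positives: int, negatives: int,
                    neg_weight: float = 2.0) -> float:
    """Single 0-100 score weighing positive against negative tester
    experiences. Weighting negatives more heavily is an illustrative
    assumption: unresolved problems hurt a launch more than praise helps."""
    total = positives + negatives * neg_weight
    if total == 0:
        return 0.0
    return positives / total * 100


print(project_health(140, 200))            # 70.0 — room to boost engagement
print(round(project_impact(3.6, 4.2), 1))  # 16.7 — satisfaction up ~17%
print(round(product_success(80, 20), 1))   # 66.7 — positives outweigh issues
```

Even rough versions like these turn raw counts into numbers you can act on: a low health score tells you to re-engage testers, while a rising impact score is the kind of before-and-after evidence stakeholders respond to.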
Learn more about these three metrics — and how the Centercode Platform auto-generates them to save you time — in this on-demand webinar, Learn How Centercode Delivers Delta Metrics That Matter.