Prioritizing feedback during a beta test is one of the most important and difficult parts of being a beta manager. It doesn’t take much for an engaged team of testers to generate an overwhelming amount of feedback, and the more feedback you collect, the more labor-intensive it becomes to review and organize.
With all of this data available to you, how do you determine which changes could have the most impact on your product? That’s where Feedback Scoring comes in.
Feedback Scoring allows you to assign scores to different aspects of your feedback, which then combine to determine the potential impact of each piece of feedback. To understand the true value of Feedback Scoring, you need to understand the three elements that go into it: Feedback Weight, Popularity Score, and Impact Score.
The first element of Feedback Scoring is Feedback Weight. These are relative weights given to different inherent aspects of your feedback, such as severity and category. The second element is the popularity of the feedback. How many testers are experiencing or discussing the feedback? This is the Popularity Score. These two elements combine to become the feedback’s Impact Score. The higher the Impact Score, the more important the feedback could be to the overall success of the product. Let’s look at each element in more depth.
To determine the Feedback Weight of a bug report, feature request, or other type of ongoing feedback, you need to look at the different fields on your feedback forms and assign weights to different options. You can assign weights for things like severity, feedback type, associated feature, or internal priority. Then, starting with a baseline of 1.0, you assign a weight to each option based on its relative importance.
For example, feedback about a critical bug is more valuable than feedback about a cosmetic one, so you might give a bug with a critical severity a weight of 2.5 and a cosmetic bug a lower weight of 0.5. By combining different weights, the most significant feedback becomes easy to pick out.
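As a rough sketch, this multiplier approach can be modeled by starting from the 1.0 baseline and multiplying in the weight of each selected field option. The weight tables and field names below are illustrative examples, not your product's actual configuration:

```python
# Illustrative weight tables for two feedback form fields.
# The specific values are examples; you would tune them for your program.
SEVERITY_WEIGHTS = {"critical": 2.5, "major": 1.5, "minor": 1.0, "cosmetic": 0.5}
TYPE_WEIGHTS = {"bug": 1.5, "feature_request": 1.0, "praise": 0.5}

def feedback_weight(severity: str, feedback_type: str) -> float:
    """Combine field option weights, starting from a baseline of 1.0."""
    weight = 1.0
    weight *= SEVERITY_WEIGHTS[severity]
    weight *= TYPE_WEIGHTS[feedback_type]
    return weight

# A critical bug stands out clearly against a cosmetic one:
print(feedback_weight("critical", "bug"))   # 3.75
print(feedback_weight("cosmetic", "bug"))   # 0.75
```

Multiplying (rather than adding) the weights keeps the baseline at 1.0 and makes a piece of feedback that scores high on several fields stand out sharply.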
Feedback Weight alone isn’t enough to measure impact. A major bug that only one tester ran into might seem important compared to a minor bug, but if 50 of your testers ran into that minor bug, it suddenly starts to sound like a higher priority.
So, Feedback Scoring must take into consideration the popularity of your feedback. Our system combines the following factors to calculate the Popularity Score for each piece of feedback.
- Duplicates: How many times was the same issue submitted by different testers?
- Predictive Matches: Did a tester select a previously submitted piece of feedback as a match to their issue?
- Votes: How many testers indicated that they had the same issue or opinion as the submitter?
- Comments: How many testers contributed to the discussion?
- Viewers: How many testers looked at the feedback?
Each type is given a different weight in the system. These then come together to produce the overall Popularity Score of the bug report, feature request, or discussion thread.
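One simple way to picture this is as a weighted sum over the engagement signals. The per-signal weights below are hypothetical (the system's actual weights aren't specified here); they just capture the idea that a duplicate report says more than a view:

```python
# Hypothetical per-signal weights; a duplicate submission is a stronger
# signal than a passive view, so it counts for more.
POPULARITY_WEIGHTS = {
    "duplicates": 3.0,
    "predictive_matches": 3.0,
    "votes": 2.0,
    "comments": 1.0,
    "viewers": 0.25,
}

def popularity_score(counts: dict[str, int]) -> float:
    """Weighted sum of the engagement signals on one piece of feedback."""
    return sum(POPULARITY_WEIGHTS[signal] * n for signal, n in counts.items())

# 4 duplicates, 10 votes, and 40 viewers:
print(popularity_score({"duplicates": 4, "votes": 10, "viewers": 40}))  # 42.0
```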
When you combine Feedback Weight and Popularity Score, you have the feedback’s Impact Score. The feedback with the highest Impact Score is likely to have the biggest effect on your product.
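A minimal sketch of the combination, assuming the two scores are simply multiplied (the exact combining formula isn't specified here), shows why popularity matters: a widely reported minor bug can outrank a critical bug seen by a single tester.

```python
def impact_score(feedback_weight: float, popularity_score: float) -> float:
    """Combine inherent weight with tester engagement (multiplication is
    an assumption for illustration, not the system's documented formula)."""
    return feedback_weight * popularity_score

# A minor bug (weight 1.0) with lots of engagement (popularity 42.0)
# outranks a critical bug (weight 3.75) seen by one tester (popularity 5.0):
print(impact_score(1.0, 42.0))   # 42.0
print(impact_score(3.75, 5.0))   # 18.75
```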
You can use this score to organize and prioritize your feedback, and identify trending discussions or problem areas of your product. You can also pull lists of the top bugs and feature requests to send to your stakeholders throughout your test.
Feedback Scoring helps you prioritize the flood of information coming in during your beta test. It also helps determine where to focus your team’s limited resources in order to have the largest positive impact on your product as you prepare for launch.
To learn more about these scoring methods, check out our complete whitepaper on Implementing Feedback Scoring in Your Beta Program.