The Definition of Useful Customer Feedback

Customer Validation relies on customer feedback to give you a better understanding of your target market and their experiences with your products. But not all feedback is created equal. Irrelevant, confusing, or incomplete feedback diverts valuable resources away from the feedback that can actually help you make meaningful product improvements and deliver value to the stakeholders of your CV projects. So what is the definition of useful customer feedback? It is relevant, descriptive, and contextual.

Venn diagram of useful feedback

1. It is Relevant

In order for feedback to be considered relevant, it must come from the right source and it must contribute to the goals of your test. Here’s what that looks like for each phase of Customer Validation:

  • Alpha Testing evaluates the stability of your product, so relevant feedback will come from enthusiastic strangers who can test your product in technology environments that can’t be recreated in your QA labs. These testers must endure feature gaps, critical bugs, and even crashes without getting discouraged, so co-workers — who typically lack the time and motivation for this kind of testing — are often unable to provide the relevant feedback you need.
  • Since Beta Tests evaluate your customers’ satisfaction with your product, relevant feedback will come from strangers who are a part of your product’s target market. Feedback from friends, family, and employees is typically not considered relevant because it is biased and may not be an authentic reflection of the attitudes and behaviors of your true customers.
  • The goal of your Field Tests is to evaluate the adoption of your product’s features through natural use. Relevant feedback in the Field phase will also come from target customers who are strangers, but keep in mind that your Beta participants now have a biased view of your product and can’t provide relevant Field feedback. Only new strangers can deliver truly unbiased feedback and data that reflects natural use.

In addition to measuring the relevancy of your feedback against the central goals of each CV testing phase, you should also measure it against the specific objectives that you defined at the beginning of your project. For example, you might be focused on testing newly updated features or assessing the clarity of product documentation. If you can tie tester feedback to those specific goals, it is relevant. Keep in mind that feedback that doesn’t help you achieve your goals might be interesting, but ultimately it’s a distraction from what you’re trying to achieve.

2. It is Descriptive

Descriptive feedback is clear and includes detailed information that paints a complete picture of the tester’s experience with the product. It includes specifics about the event or issue and an exact account of how it occurred, including the events leading up to it. In some cases, it also includes an explanation of why the event, feature, or issue stood out to the tester.

For example, if a tester submitted a private journal saying, “I enjoyed setting up the device,” you would have no way of knowing what part of the setup process they liked or why. If a bug was submitted stating, “The signup process didn’t work,” you wouldn’t be able to identify which part of the signup process failed. On the other hand, descriptive feedback will enable your team to accurately assess the topic at hand, recreate the issue if necessary, and form actionable product recommendations based on the feedback.

3. It is Contextual

Only contextual feedback can help you gain a true understanding of your customers and how they actually use and experience your products in their everyday lives. Feedback is contextual if it is tied to both the submitter’s technical and demographic profiles, and also to related feedback from other customer testers.

A tester’s technical profile contains information about their testing environment, including specifics like device type, operating system, and browser. Their demographic profile includes data points like age, location, and education, or any other information that helps you understand the experiences and motivations of each tester or tester group.

If you’re using a tool that’s built to handle customer feedback projects, your feedback is automatically linked to these contextual details. After that, it’s easy to categorize feedback on an individual basis, or group it with related feedback to surface meaningful trends and insights about your customers and products. This contextual feedback is also easy to summarize and analyze in a way that gives you a clear picture of which actions to prioritize in the period leading up to your product launch.

By collecting feedback that fits these three criteria, you’ll have the information you need to make insight-based recommendations that will benefit the development of your product, the success of your CV program, and eventually, your company’s bottom line. If you’re not sure how to go about gathering useful feedback, these tactics will help you get started. You can also take a look at our webinar series on Customer Validation for more information.

Watch the recordings from our Customer Validation webinar series!
