Test Strategy

How to Collect Useful Feedback

September 17, 2018

Three out of four companies that deliver best-in-class customer experiences rely on customer feedback. If that isn’t commendation enough for the power of feedback, compare that high level of customer involvement to companies that routinely miss the mark, where only one in four use feedback. It’s easy to see the value of letting your customers drive the direction of your product. But with that value top of mind, you have to ask: how will you know what your customers really want without getting their direct input? More specifically, how do you collect useful feedback from customers so you can actually build it into your product development plan?

Whenever you look for ways to improve your product, people will say, “Talk to your customers!” They’re not wrong, but it’s easier said than done. Fortunately, there are many places and ways to leverage customer input throughout the product life cycle, and Customer Validation is your first opportunity to get real experiential feedback. With a functioning product that potential customers can use in their live environments, a strong plan of attack, and the right batch of testers, you have all the key components you need to start collecting useful feedback.

Identifying Useful Feedback

Not all feedback is inherently equal. To secure meaningful product recommendations, the feedback you collect from testers should meet these qualifications:

  • It should be relevant. Relevant feedback comes from the right sources and serves your test goals. It comes from a targeted user (a member of your target market) with both the demographic and technical qualifications to contribute meaningfully to your test.
  • It should be descriptive. Descriptive feedback paints a clear picture of your testers’ experience. It includes specific details about a feature or event that stood out to them, and why.
  • It should include the right context. Understanding the nuance surrounding a specific issue, suggestion, rating, or other piece of feedback brings the information into living color. Context can include anything beyond the issue itself and the target market data in a tester’s profile, such as the other devices they own, their household size, their climate, etc.

Feedback that doesn’t meet all three of these criteria eats up time and energy while offering little in return. There are many factors that contribute to your testers’ ability to produce useful feedback, but the first step is creating a thorough, detailed plan.

Setting the Right Objectives

Knowing your test’s key objectives and laying out a focused, step-by-step plan to meet them is critical to directing your testers’ energy where it counts most. Trying to cram too many objectives into a single test will scatter their concentration and degrade the quality of their feedback. It’s best to keep testers focused on about 2-5 experiences per week, taking no more than 1-2 hours away from their normal day-to-day lives.

The type of test you’ll run, and the type of testers you’ll consequently need, depend on the goals you’re trying to meet. Before jumping into a test, ask yourself (and your stakeholders) where your priorities lie and tailor your test strategy to meet them.

  • Evaluating stability. If your product is mostly feature complete and you need to evaluate its stability and performance in real technical environments, an Alpha Test surfaces bugs and other critical issues with the help of a small group of technical users and/or employees. To prioritize those issues, weigh the number of occurrences in the field, the user impact of each issue, and the area of the product where it occurred (a scoring sketch follows this list).
  • Evaluating satisfaction. A Beta Test, on the other hand, is ideal for evaluating product satisfaction and gauging customers’ responses to key features. In this methodology, you’ll guide a large group of testers through a product tour, methodically collecting feedback on specific product areas. After testers experience a section of your product, have them rate that section with a survey. Then identify the key drivers behind the survey response score (a sketch of this analysis also follows the list).
  • Evaluating adoption. Field Tests look at feedback from long-term, natural use scenarios to assess product adoption. They can begin just before release and continue after launch, gathering feedback from a large group of testers to see how their attitudes and behavior change over extended periods of time. Here, you would employ a more hands-off approach, collecting product responses through “diary” entries on either a cadenced (once a week, once a month, etc.) or event-based (after a specific use or product “event”) frequency.
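
For the Alpha scenario above, one simple way to rank incoming issues is to score each issue on those three dimensions and sort. Below is a minimal sketch in Python; the field names, the 1-5 impact scale, the area weights, and the multiplicative scoring formula are all assumptions for illustration, not a prescribed method.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    occurrences: int    # how many testers hit the issue in the field
    impact: int         # assumed user-impact scale: 1 (cosmetic) to 5 (blocking)
    area_weight: float  # assumed weight for the product area (core flow = 1.0)

def priority_score(issue: Issue) -> float:
    """Combine the three dimensions into one sortable score.

    The multiplicative form is itself an assumption; many teams
    tune the weights or use a rubric instead.
    """
    return issue.occurrences * issue.impact * issue.area_weight

issues = [
    Issue("Crash on login", occurrences=12, impact=5, area_weight=1.0),
    Issue("Typo on settings page", occurrences=30, impact=1, area_weight=0.4),
    Issue("Sync fails on slow Wi-Fi", occurrences=7, impact=4, area_weight=0.8),
]

# Highest-priority issues first.
for issue in sorted(issues, key=priority_score, reverse=True):
    print(f"{priority_score(issue):6.1f}  {issue.title}")
```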

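For the Beta scenario, a common way to identify key drivers is to correlate each feature’s survey rating with the overall satisfaction score: features whose ratings move most closely with the overall score are candidate drivers. A minimal sketch, assuming survey results land in a pandas DataFrame with hypothetical column names and a 1-5 rating scale:

```python
import pandas as pd

# Hypothetical survey export: one row per tester, 1-5 ratings.
ratings = pd.DataFrame({
    "setup":         [5, 4, 3, 5, 2, 4, 3, 5],
    "battery_life":  [2, 3, 2, 4, 1, 3, 2, 4],
    "companion_app": [4, 4, 3, 5, 2, 4, 3, 4],
    "overall":       [4, 4, 2, 5, 1, 4, 2, 5],
})

# Pearson correlation of each feature rating with overall satisfaction.
# Features that track the overall score most closely are candidate key drivers.
drivers = (
    ratings.drop(columns="overall")
           .corrwith(ratings["overall"])
           .sort_values(ascending=False)
)
print(drivers)
```
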
Getting the Right Testers

Whether you’re running an Alpha, Beta, or Field Test, your ideal participants will be strangers who reflect your target demographic (relevant) and meet the technical requirements to fully and accurately test your product (contextual). They should be eager to help and able to provide detailed information about their product experiences (descriptive) when you need it.

To encourage their efforts and make it easy for them to keep providing feedback, put these processes in place:

  • Set expectations early. Keeping your testers in the loop from the get-go is essential to keeping them engaged. In a survey of over 5,000 testers, 43% said their primary reason for testing is to “help improve products I use.” Most of your testers will be genuinely invested in doing a good job. Clear and consistent communication gives them the direction they need to do so, and keeps both you and them happy.

Note: While you should let testers know about your participation requirements in the Beta Test agreement, preparing gentle reminders to use throughout the course of the test will allow you to focus on analyzing incoming feedback later.

  • Don’t overwhelm them. Overloading testers with too many topics each week diminishes the quality of their feedback. In our experience, two to three topics per week is ideal, and except in rare situations, you should never include more than five.
  • Establish a base camp. Your testers need a reliable way to communicate with you, both to submit feedback and to receive updates about the test’s progress. For example, the Centercode Platform offers a hub for testers and test managers, where recruitment, feedback submission, and data analysis all happen in a central location. While having all of these processes in one tool is incredibly convenient, there are also point solutions you can leverage to handle each aspect of your test. The important thing is that your systems are reliable and organized, and that you have processes in place to keep important data from falling through the cracks (a sketch of one such record format follows).
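
If you assemble your base camp from point solutions rather than one platform, it helps to normalize every piece of feedback into a single record format so nothing gets lost between tools. A minimal sketch of such a record; the fields are entirely hypothetical and simply mirror the relevant, descriptive, and contextual criteria from earlier:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class FeedbackRecord:
    tester_id: str     # relevant: ties feedback to a qualified tester profile
    source: str        # e.g., "survey", "diary", "bug report", "email"
    summary: str       # descriptive: what happened and why it stood out
    product_area: str  # where in the product the feedback applies
    context: dict = field(default_factory=dict)  # contextual: environment details
    submitted_at: datetime = field(default_factory=datetime.now)

record = FeedbackRecord(
    tester_id="T-1042",
    source="diary",
    summary="Pairing failed twice before succeeding on the third attempt",
    product_area="onboarding",
    context={"other_devices": ["smart scale"], "household_size": 4},
)
```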

Maintaining an Inflow of Feedback

Here’s where you begin to reap the benefits of your carefully laid plan. With encouragement and direction, testers will engage with your product, revealing thoughts and attitudes that can be spun into meaningful product recommendations.

To keep the flow of useful feedback rolling, there are a few things to remember:

  • Communicate regularly with your testers. Responses will naturally start to wane after the first week, but you can keep participation rates steady by staying in contact with your testers. If you don’t have an organic reason to contact them, send polite reminders via email or phone just so they know you still care. And if any unexpected changes happen to the test schedule, let your testers know quickly. Everyone likes to feel that they’re in the loop and that their participation matters. Even small engagement efforts reap big benefits in increased participation.
  • Get others involved. As I mentioned above, testers are primarily involved because they want to help you make a better product. When they get comments or questions from your developers, marketers, and project managers, they feel like part of the team. Give your internal teams a few good guidelines, but let them ask your testers questions directly. Seeing responses from other team members increases tester participation, because testers know you’re actively reviewing their feedback. There’s also the added benefit that issues often get resolved much faster this way.
  • Actively respond to feedback. In some ways, a simple “Thank You” or “Could you please clarify what you mean?” goes further than an incentive when it comes to tester engagement. Actively responding to questions and feedback does more than deepen your understanding of the product experience; it shows your testers you value their time and opinions. When testers feel heard, they put more energy into providing quality insights.

Collecting useful feedback from real users is the core of what makes Customer Validation so valuable to your company. For a step-by-step guide to cultivating, collecting, and managing high quality feedback, download the Beta Feedback Playbook from our resource library!

Download the Beta Feedback Playbook for Free
