Survey data is a crucial tool for understanding your target market’s attitudes. In a Customer Validation (CV) project, it gives much-needed context to the issues, ideas, and praise submitted by your testers. It also gives you measurable data about how your customers feel about their experiences, so you can make better-informed decisions. That’s why so many teams use surveys during pre-release testing.
Here are five tips you can use during survey design and deployment to keep your data clean. By eliminating bias and securing more accurate data, you’ll maximize the value of every survey you publish.
Designing Your Survey
Data cleanliness starts all the way back at the survey design phase. The best way to prevent biased survey data is by protecting against bias from the get-go. As you might have read in our blog post about psychological biases, people have a tendency to gravitate towards certain choices based on the context they’re presented in. Thankfully, there are a few proactive steps you can take to avoid them.
Survey Tip 1: Randomize choices where you can
The order of options presented to a respondent in a multiple-choice question can have a big impact on how testers respond to that question. There are two forces at work here: Primacy bias and recency bias. Primacy bias is the tendency for a respondent to choose one of the first answer choices presented to them. Recency bias is the tendency for a respondent to choose the option they saw most recently. This usually means an answer choice near the end of your list.
The best way to counteract these biases is to randomize the order of your answer choices for every respondent. This doesn’t remove the biases from each individual tester experience, but it protects your data set as a whole.
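Most survey platforms can randomize answer order for you, but the idea is easy to see in a few lines of Python. This is a minimal sketch (the function name and answer choices are illustrative, not from any particular tool); note that randomization suits nominal choices like features or brands, not ordered scales like "Very satisfied" through "Very dissatisfied":

```python
import random

def randomized_choices(choices, seed=None):
    """Return a per-respondent shuffled copy of the answer choices.

    Shuffling a copy keeps the canonical order intact for reporting,
    while each respondent sees the options in a random order.
    """
    rng = random.Random(seed)
    shuffled = list(choices)
    rng.shuffle(shuffled)
    return shuffled

canonical = ["Camera", "Battery life", "Screen", "Price"]
# Each call simulates rendering the question for a new respondent.
print(randomized_choices(canonical, seed=1))
print(randomized_choices(canonical, seed=2))
```

Because each respondent gets an independent shuffle, no single position is favored across the whole data set, even though any one respondent still sees a fixed order.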
Survey Tip 2: Don’t lead your testers
Collecting survey data without introducing bias is critical when you’re using that Customer Validation data as the basis of important product decisions. Even experienced survey writers need to keep human nature in check to ensure they’re not tipping their hand.
There are subtle cues we use in everyday language that point to what we expect to happen in the future. This is great when you’re foreshadowing a plot point in a novel to build drama, but including these cues in survey writing can lead respondents to give biased answers. One of your jobs as a survey writer is to comb through your own writing to make sure you’re not accidentally swaying your results.
For example, consider these two survey questions:
Option 1: How did you enjoy your phone experience with our rockstar call center team today?
Option 2: How would you describe your phone experience with the call center team today?
The second option is much more likely to elicit an unbiased response because it doesn’t include implied value judgments (“enjoy” and “rockstar” are both positive; “our” implies a personal response) of the call center team. The only way to address issues accurately is with honest feedback — even if that feedback could be painful in the short term.
Deploying Your Survey
Even if you think you’ve set things up perfectly, there’s no reason to throw caution to the wind. When you’re ready to launch, it’s best to expect potential errors and act accordingly.
Survey Tip 3: Check your conditional logic (again)
Most surveys involve some level of conditional logic, which is just a fancy way of saying that certain questions can be included or omitted in your survey based on a respondent’s answers to previous questions. Conditional logic can be very simple or very complex.
Simple: Question one asks respondents to indicate what type of smartphone they own. If they select “iPhone”, a conditional question pops up with a list of iPhone models to choose from.
Complex: Question one asks respondents to indicate what type of smartphone they own. If they select “iPhone”, they enter a unique survey flow. These users answer a group of followup questions about their device, their relationship with Apple, and their purchase history. They skip questions that only apply to Android users.
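The simple case above can be sketched as a small question graph, where each answer points to the next question a respondent sees. This is just an illustration of the concept (the question IDs and flow model are hypothetical, not the data structure of any real survey tool):

```python
# A minimal model of conditional (skip) logic: each question maps
# answers to the id of the next question, or None to end the survey.
SURVEY = {
    "q1": {
        "text": "What type of smartphone do you own?",
        "next": {"iPhone": "q2_iphone", "Android": "q2_android", "Other": None},
    },
    "q2_iphone": {"text": "Which iPhone model do you have?", "next": {}},
    "q2_android": {"text": "Which Android brand do you have?", "next": {}},
}

def walk(survey, answers, start="q1"):
    """Return the ordered list of question ids one respondent would see."""
    path, qid = [], start
    while qid is not None:
        path.append(qid)
        answer = answers.get(qid)
        qid = survey[qid]["next"].get(answer)  # None ends the survey
    return path

# An iPhone owner never sees the Android-only follow-up:
print(walk(SURVEY, {"q1": "iPhone", "q2_iphone": "iPhone 15"}))
# → ['q1', 'q2_iphone']
```

Modeling the flow this way also makes it easy to test: you can walk the graph once per possible answer to question one and confirm each path only contains the questions it should.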
Even experienced survey writers can accidentally send out a survey with flawed logic if they’re not careful. Unfortunately, when this happens, it negatively impacts both response rates and data integrity. When you reach out to your respondents a second time to clear things up and ask them to re-take your survey, you will inevitably receive fewer responses. On top of that, the responses you do get will be biased because the respondents have seen most of the questions already.
These errors are understandable, but the effects are serious enough that they’re worth avoiding at all costs. That’s why I recommend dedicating some time to checking the logic flow of each survey from every response angle. A little time saves a lot of heartache.
One of the best ways to avoid mistakes like these is to ask a friend or co-worker to go through your survey before you publish it. A fresh set of eyes is much more likely to spot a flaw in your logic process than you are, especially after you’ve been drafting and editing the same survey for hours on end.
Survey Tip 4: Allow for user error
People mess up — it’s bound to happen. We constantly get people asking to retake surveys because they made a mistake or their circumstances changed. Why not include some room in your processes for respondents to recover if this happens while taking your survey? As a first line of defense, a platform that allows you to reset surveys for individual users is super useful. Enabling users to edit their answers if they make a mistake increases the accuracy of your survey data overall.
If you don’t have the time or resources to reset a flawed response, it’s better to delete data you know is inaccurate than to include it in your results and hope the bulk of your clean data outweighs the few bad entries. As a last resort, you can scrub incorrect survey answers from your dataset as part of your normal data cleaning process.
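That last-resort scrub can be as simple as filtering known-bad responses out before analysis. Here's a minimal sketch, assuming you keep a list of respondents who flagged their own answers as mistaken (the field names and IDs are hypothetical):

```python
# Scrub known-bad responses from a result set during data cleaning.
responses = [
    {"respondent": "r1", "satisfaction": 5},
    {"respondent": "r2", "satisfaction": 1},  # asked for a retake; answer is invalid
    {"respondent": "r3", "satisfaction": 4},
]

# Respondents who reported a mistake or a changed circumstance.
flagged_ids = {"r2"}

clean = [r for r in responses if r["respondent"] not in flagged_ids]
print(len(clean))  # → 2 responses remain after scrubbing
```

Dropping flagged rows outright keeps the remaining data trustworthy; the one thing to avoid is silently averaging bad entries in with the good ones.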
The Big Picture
Great job! You’re well on your way to collecting clean and relevant survey data. There’s one more survey tip that will help your CV project results as a whole.
Survey Tip 5: Use templates
In the world of Customer Validation, templates of any kind (project templates, email templates, survey templates, etc.) are huge time savers. But two of the biggest benefits of using templates are that they ensure consistency and enable you to compare performance and satisfaction over time. For example:
- If you ask the same questions in the closing survey of every project you run, you automatically gain the ability to compare the scores of different products against each other.
- If you ask the same questions at the end of each phase of a multi-phase test, you can track a product’s improvement in real time.
- If you ask the same questions about different topics within the same project, you can compare sentiments about different features of the same product against each other. That makes it easier to do things like find out which aspects of your product should be promoted in advertising.
If you’re using multiple tools or templates for surveys across your program, it’s very easy for your survey data to look completely different each time. I’d highly recommend cloning survey templates whenever you can and pulling any additional individual questions you might need from a question bank of existing vetted questions. Not only does this save time when you’re building your survey, it allows you to see how product acceptance is evolving with every build and new release.
You’re on a roll — why stop now? Find out more survey-building tips and techniques from the Centercode Research Team in our ebook, Survey Building for Customer Validation.