Test Management

5 Mistakes People Make When Writing Surveys

October 18, 2016

One of the best methods for collecting feedback as part of your customer validation process is using surveys. Surveys can very quickly deliver a snapshot of what people are feeling or thinking about your product or certain features, but a badly written survey can leave you with more questions than answers.

Writing a survey is not easy — even trained professionals have a hard time. You could easily end up with useless, inaccurate survey data from which you’ll be asked to make important business decisions. Luckily, there are ways to ensure you get the most out of your surveys. Here are the most common mistakes people make when writing surveys and the best practices we use to avoid them.

Setting Unclear Survey Goals

Perhaps the most important step to building any survey is asking yourself what information you want to gain from it. If you don’t establish objectives for your survey, you’re more likely to overload it with aimless questions, hoping at least one will get you a useful answer.

Establishing goals will help you understand what kinds of questions you need to ask to get the information you're after. Building your survey around those goals keeps your questions clear and direct, so you don't waste your respondents' time with pointless or repetitive questions. It also means you won't be left with a mountain of aimless data to sort through, hoping it becomes useful at some point.

Using Poorly Written Questions

People generally scan or skim text, so it's important to write survey questions in a way that lets your respondents quickly understand what you want to know. They want to answer your questions to the best of their ability, but if they can't figure out what you're asking, you're going to collect inaccurate information.

Avoid paragraph-length questions. Instead, use simple headings with single-sentence questions under them. For example, you could use a “Satisfaction” heading with simple rating scale questions asking what respondents thought about the main features of your product. This makes it very clear that you're asking how satisfied they felt with each feature.

You also don’t want to fall into the trap of asking double-barreled questions, where you ask two separate (though related) things in one question. For example, asking someone to rate both the audio and video quality of a video streaming service in a single question. If they give you a rating of just 2, you won’t know which is bad: the video, the audio, or both. It’s better to either split the features into two questions or ask them to rate the overall experience and provide a text box where they can explain why they gave that rating.

Choosing the Wrong Question Type

While you’re writing your survey questions, it’s also important to make sure you’re using the right type of question, whether that’s a drop-down menu, text box, or multiple-choice selection. Keep the role of each type in mind as you formulate your questions, as well as the kind of information you want each one to gather. For example, if you’re trying to determine which color users like best, you can simply ask which color they prefer with a single drop-down menu rather than using a rating scale for each color.

The question type can also help inform the survey respondent what kind of information you’re looking for. Using a small or large text box can indicate to the user just how much or how little information you want them to provide.

Adding Bias to the Survey

You add bias when you influence how your respondents view or feel about the survey, or when you phrase questions and answer options in a way that limits their ability to answer truthfully. For example, it’s not a good idea to start a survey with something like, “We’ve been working really hard on the latest version of the app, and we’re excited to hear what you think!” Respondents generally don’t want to disappoint you, so an opener like this can make them uncomfortable about giving you honest negative feedback, even if that’s how they really feel.

Similarly, you shouldn’t ask a question like “How easy was it to use Feature X?” because it assumes the feature is easy to use. You’d be better off asking, “Rate the ease of use of Feature X” with a scale that ranges from “Very Difficult” to “Very Easy.” Also, make sure the scales themselves aren’t biased: a scale ranging from “Okay” to “Extremely Easy” obviously doesn’t give your respondents many options.

Using Time-Dependent Questions

Keep in mind that time-dependent questions can also bias a survey. It’s nearly impossible for anyone to accurately predict, for example, how often they will use an app six months from now. It’s just as difficult to recall things that happened six months ago, unless they’re tied to an important date, like a birthday or wedding anniversary. Essentially, the further you ask someone to look into the past or the future, the less accurate the data you collect is going to be.

At the end of the day, the best way to ensure you have a good survey is to make sure you know why you want to conduct the survey in the first place. Ask yourself why you are asking each question, what kind of answers you hope to gain from them, and how you will use the data afterward. This will not only help you build the survey, but it will also help you determine the best method for organizing and using the data you collect.

Learn more about creating surveys that collect meaningful and actionable tester insights by checking out The Feedback Playbook whitepaper.

