User Research


What is user research?

User research is the discipline of learning from users, customers, or potential customers by studying their interactions with and experiences of a product. By getting feedback on your concepts, designs, and prototypes from customers and users, you amplify the power of their voice and insights in your product design and development processes.

Why is user research important?

Companies rely on user research to ensure that the products they offer are thoughtfully designed and resonate well with their intended audience. Companies invest time and money into user research in order to: 

Make better data-driven decisions

Teams collect user research data to inform their decisions on concepts, designs, positioning, content, and much more.

Reduce bias

Ever been in a meeting where there is that one person who comes in guns blazing and seems to know everything about the product, design, and features — but their ideas seem bonkers? Leveraging feedback from your target market or intended audience helps you ward off the bad bias juju and eliminate (or support!) wild ideas.

Have peace of mind

Verifying your ideas and designs with user research does wonders to put your mind at ease. While it can be stressful to learn about design or product flaws at any point in development, addressing these issues early will save a lot of time and money.

Create a customer-centric culture

Involving customers in your design and development process nurtures a culture where customers become the center of influence and thought. When teams can empathize and interact with customers, products become more relevant, impactful, and usable for the intended audience.

Build better designs

When designers and product teams have access to user research as they build their interactions or concepts, they tend to create better, more intuitive designs. Reinforcing designs with customer feedback also makes products measurably better over time, as each round of customer input builds on the last.

Write impactful messaging and content 

Messaging is the way into the minds of your target market. User research teams can test how messaging lands with your intended audience, producing content that resonates and has a real impact on how users learn about and buy your product.

User research categories

There are many methods and types of user research studies that support product development and design. Designer and researcher Christian Rohrer describes 20 popular user research methods, mapping each one across several dimensions according to where it helps in the product development process.

Attitudinal vs. Behavioral

Attitudinal and behavioral dimensions contextualize users' stated beliefs versus their actual behaviors. This can be summarized as "What people say" (attitudinal) vs. "What people do" (behavioral). Attitudes typically require self-reporting, while behavioral tends to require observation. Together, these two different types of data help teams understand how people use (behavior) and perceive (attitude) a product. In the image below, the y-axis shows that specific studies are able to collect more or less of one type of data than another.

Source: NN Group

For example, interviews and focus groups primarily collect attitudinal data, since they ask users about their beliefs, motivations, or opinions rather than observing their interactions with the product. Compare this with clickstream analysis or A/B testing, which directly collect behavioral data based on how the user interacts with the product or design.

Qualitative vs. Quantitative

The qualitative and quantitative dimensions are used to understand "why and/or how to fix" (qualitative) versus "how many and/or how much" (quantitative). These two different types of data help teams understand why users are responding to an aspect or feature of a product in a certain way and what the impact will be on customers as a whole.

For example, interviewing customers is a qualitative method for identifying pain points. A survey is a quantitative method for measuring those pain points across a larger sample, showing how many people are affected by the issues you identified.

Who's responsible for user research?

There are some companies that have full-time user researchers or user experience (UX) researchers: professionals with backgrounds in human factors, psychology, or human-centered design who focus explicitly on performing user research. But many teams and individuals perform the duties of user researchers as part of their job. For example, product managers and UX designers often have to plan, conduct, and analyze their own research. 

While investment in user research has matured over the past decade, the rapid pace of production has pushed many organizations to fold user research skills into non-research roles. For product managers, quick feedback helps shape the roadmap, groom the product backlog, or evaluate a design, which is immensely valuable for keeping up with the demands of product development.

How to do user research

You may not be a trained professional with a bunch of experience in running user research projects. But if you are planning on doing a user research study or test, it's useful to understand the four stages of conducting research: Planning, Recruiting, Study, and Analysis.

  • Planning: This is the initial build-out of the project plan where you identify your goals, propose research methods, set your schedule, outline your target audience, and rope in stakeholders.
  • Recruiting: This is the process of finding, selecting, and scheduling (if applicable) the target audience for your project.
  • Study: This is the process of conducting your study or test. It includes tasks like introducing yourself and the purpose of the study to the participant, providing tasks throughout the study, taking notes, and making observations about what you find.
  • Analysis and Presentation: Finally, this is the process of combining your results, reviewing the data, and summarizing your findings into a presentation or report.

Popular User Research Methods

  • A/B Testing: Compares two or more variations of a design with a group of users to see which one performs better statistically (a minimal comparison is sketched after this list).
  • Card Sorting: Users or testers use notecards to label or categorize items in groupings that make sense to them.
  • Clickstream Analysis: Records users' screens as they complete tasks in order to measure clicks, time on page, and other telemetry data.
  • Concept Testing: Users or testers evaluate an idea or product before release to determine whether or not it meets their needs.
  • Desirability Studies: A method that measures how appealing a product's visual design is and what emotional impact it has on users.
  • Diary Studies (longitudinal study): Users or testers keep diaries or make audio and video recordings to describe tasks or experiences in their lives that are relevant to a product, feature, or service.
  • Email Surveys: A survey distributed to potential respondents via email.
  • Ethnographic Studies: A qualitative research method where researchers and product teams observe participants in their natural environments.
  • Eyetracking: A research method that uses special equipment to track the eye movements of testers while they complete tasks or scenarios.
  • Focus Groups: A group of participants discusses topics about a product or service with the guidance of a moderator while being observed by researchers and/or the product team.
  • Intercept Surveys: While using a product or website, users are presented with a survey initiated by a trigger based on usage or completion of a task.
  • Interviews: A method where users or testers are asked specific questions about a topic, with follow-up questions to understand their point of view in depth.
  • Moderated Remote User Test: Users complete tasks given by a remote moderator while their screen is recorded or observed by a researcher.
  • Participatory Design: A method where users are given design elements or materials to construct their expected design.
  • True Intent Studies: A method (usually a survey) where site visitors or product users are asked why they're visiting, followed by verification of whether or not they achieved their goals once they attempt to leave the site.
  • Unmoderated Remote User Test: A research method where panel participants record their own screens and talk through their tasks and interactions without interacting with a moderator or researcher.
  • Unmoderated UX Studies: A method where participants are presented tasks to complete and a system or tool records their interactions with each task.
  • Usability Benchmark Test: A method where specific metrics are measured as users or testers complete tasks. Common metrics for this test include clicks, time on task, and error rate.
  • Usability Lab Study: A method where users are brought into a usability lab and asked to complete various tasks using a design, prototype, or product.
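
To make the A/B Testing entry above concrete, here is a minimal sketch (in Python) of one common way to judge whether two variations differ statistically: a pooled two-proportion z-test on conversion counts. The function name and the sample numbers are illustrative assumptions, not part of any particular tool.

```python
# Illustrative sketch: comparing two design variations from an A/B test with a
# pooled two-proportion z-test. The conversion counts below are made-up numbers.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Return the z statistic and two-sided p-value for the difference
    between two conversion rates."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    standard_error = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / standard_error
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Variation A converted 120 of 2,400 visitors; variation B converted 156 of 2,350.
z, p = two_proportion_z_test(120, 2400, 156, 2350)
print(f"z = {z:.2f}, p = {p:.4f}")  # a p-value below 0.05 suggests the gap is unlikely to be chance
```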

When does user research happen?

User research typically happens during design phases and iterates throughout development. Often, user research falls into multiple phases of product development, including: Concept, Design, Prototyping, and Test.

[Figure: area chart showing product readiness increasing across the Concept, Design, Prototype, Test, and Launch stages, as an example of the stages where user research is conducted]

User research tools

Tools help us coordinate, communicate, and make business tasks easier. User research is no different. There are a wide variety of tools to help you plan, conduct, analyze, and share user research. Here is a short list of some tools that may be helpful:

  • Centercode: The Centercode Platform supports every aspect of your user testing program with powerful automation to maximize your resources and consistently release amazing products.
  • UserZoom: UserZoom can help you run remote user studies by planning, recruiting, administering tests, collecting recordings and surveys, and measuring the performance of user research.
  • UserTesting: UserTesting helps teams quickly get feedback through recordings and answering simple questions while interacting with a product or design.
  • Maze: Maze allows teams to rapidly collect feedback and recordings on designs, prototypes, or live digital products.
  • SurveyMonkey: SurveyMonkey can help teams collect surveys and form data with an easy builder to develop and distribute surveys.
  • Zoom: Most companies adopt some sort of video conferencing tool. These tools are great for hosting and recording interviews and tests without breaking your budget.
  • User Interviews: Easily plan and conduct your interviews with User Interviews.

User research metrics

There are different types or categories of user research metrics. One way to break them down is test, study, and project metrics vs. post-task or activity metrics.

Test, Study, and Project Metrics

The metrics and models below are used to evaluate the user experience, including elements like usability, usefulness, and effectiveness. These metrics are typically collected at the conclusion of a study or project to evaluate the experience of the system or product holistically.

  • SUS (System Usability Scale): A ten-item questionnaire used to evaluate the usability of a system or product. Each question uses a 5-point rating scale from strongly disagree to strongly agree, with the statements alternating between positive and negative wording (scoring is sketched after this list).
  • UMUX (Usability Metric for User Experience): Similar to the SUS and aligned with the ISO 9241 definition of usability, it contains four questions with two negative and two positive statements. UMUX uses a 7-point scale, asking respondents to rate the statements from strongly disagree to strongly agree.
  • UMUX-LITE: A proposed shorter version of the UMUX that includes only the two positively framed questions.
  • NPS (Net Promoter Score): A critical product KPI that measures the likelihood a user will recommend a product, service, or brand to a friend, colleague, or family member. It's a single-question survey with a scoring formula that ranges from -100 to +100 (also sketched after this list).
  • TAM (Technology Acceptance Model): A model designed to measure the adoption of a product based on customer or user motivations toward perceived usefulness and ease of use.
  • SUPR-Q (Standardized User Experience Percentile Rank Questionnaire): An eight-item questionnaire used to measure the quality of a website's user experience, including the subfactors usability, credibility, loyalty, and appearance.
  • Satisfaction: A simple question used to evaluate the perceived satisfaction with a product or experience within a project.
  • Brand (Attitude) Lift: The measurement of attitudes before and after an experience has been modified or introduced.
  • CSUQ (Computer System Usability Questionnaire): Also known as the PSSUQ (Post-Study System Usability Questionnaire), this is a 16-item questionnaire used to evaluate the usefulness, information quality, and interface quality of a system or product. It uses a 7-point scale from strongly agree to strongly disagree.
  • SUMI (Software Usability Measurement Inventory): A 50-item questionnaire used to evaluate the user experience of software. It's broken into six subscales that measure global, efficiency, affect, helpfulness, control, and learnability.
  • WAMMI (Website Analysis and Measurement Inventory): Similar to SUMI, but focused specifically on evaluating the user experience of a website.
  • Star Rating: A simple 1-5 star rating, similar to Amazon's star rating and product reviews.
  • Delta Success Score: The Delta Success Score balances the impact of positive feedback like praise, negative feedback like issues, and improvement ideas to show the overall success of your product in real time.
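
The arithmetic behind two of the metrics above is worth spelling out. The sketch below applies the standard SUS scoring rule (odd items contribute their rating minus 1, even items contribute 5 minus their rating, and the sum is multiplied by 2.5) and the NPS rule (percentage of promoters minus percentage of detractors). The sample responses are invented for illustration.

```python
# Illustrative sketch of two common scoring formulas. Sample responses are invented.

def sus_score(responses):
    """Score one SUS questionnaire (ten items rated 1-5). Odd-numbered items
    contribute (rating - 1), even-numbered items contribute (5 - rating),
    and the sum is multiplied by 2.5 to give a 0-100 score."""
    assert len(responses) == 10
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

def net_promoter_score(ratings):
    """NPS from 0-10 likelihood-to-recommend ratings: the percentage of
    promoters (9-10) minus the percentage of detractors (0-6), so the
    result ranges from -100 to +100."""
    promoters = sum(1 for r in ratings if r >= 9)
    detractors = sum(1 for r in ratings if r <= 6)
    return 100 * (promoters - detractors) / len(ratings)

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))             # 85.0 for this participant
print(net_promoter_score([10, 9, 8, 7, 6, 10, 9, 3, 8, 9]))  # 30.0 across ten respondents
```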

Post-Task or Activity Metrics

The metrics and models below are used to measure smaller experiences and can be used collectively to provide a summary of a product. In contrast to the metrics above, these are typically collected during a study or project to evaluate the experience of the system or product incrementally.

  • SEQ (Single Ease Question): A single question used to evaluate how difficult or easy it was to complete a given task. The 7-point scale ranges from very difficult to very easy.
  • ASQ (After-Scenario Questionnaire): A three-item questionnaire used to evaluate how easy or difficult it was to complete a given task or scenario. Each question uses a 7-point scale from strongly agree to strongly disagree.
  • SMEQ (Subjective Mental Effort Questionnaire): A single question used to measure how difficult a given task was to complete. The 9-point scale ranges from "not at all hard to do" to "tremendously hard to do."
  • UME (Usability Magnitude Estimation): An assessment method that asks participants to assign usability values to tasks or scenarios, with the goal of measuring the usability of the product.
  • NASA-TLX: An assessment tool used to rate perceived workload in order to assess a task or system. The questionnaire is broken into six categories: mental demand, physical demand, temporal demand, performance, effort, and frustration. Each category is rated on a 100-point scale with 5-point steps from very low to very high.
  • Task/Activity Completion Rate: The percentage of users or testers who are able to complete a task. Teams often use partial completions to add granularity, such as: completed successfully, completed with a minor issue, completed with a major issue, or failed to complete (a simple calculation is sketched after this list).
  • Task Confidence: A simple single question that asks users how confident they are that they completed the activity or task.
  • Time on Task: The recorded time it takes to complete a task.
  • Lostness: A metric used to measure how lost a user or tester is while using a feature or completing a task. The metric ranges from 0 to 1, with decimals providing granularity in between. A high score indicates the user had difficulty finding what they needed, while a low score means it was easy to find (also sketched after this list).
  • Error Rate: A metric used to identify how many errors a user encountered during a specific task or scenario.
  • Clicks: The number of clicks a user needs to complete a task or scenario.
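
As noted in the list above, two of these metrics reduce to simple formulas. Completion rate is the share of participants who finished the task, and one common formulation of lostness compares the pages a participant visited against the shortest possible path. The outcomes and page counts below are illustrative assumptions.

```python
# Illustrative sketch of two post-task metrics. Outcomes and page counts are invented.
from math import sqrt

def completion_rate(outcomes):
    """Percentage of participants whose outcome was recorded as 'pass'."""
    return 100 * sum(1 for outcome in outcomes if outcome == "pass") / len(outcomes)

def lostness(unique_pages, total_pages, optimal_pages):
    """Lostness = sqrt((N/S - 1)^2 + (R/N - 1)^2), where N is the number of
    unique pages visited, S is the total pages visited, and R is the minimum
    number of pages needed. 0 means a perfectly direct path; values closer
    to 1 mean the participant wandered."""
    n, s, r = unique_pages, total_pages, optimal_pages
    return sqrt((n / s - 1) ** 2 + (r / n - 1) ** 2)

print(completion_rate(["pass", "pass", "fail", "pass", "pass"]))             # 80.0
print(round(lostness(unique_pages=7, total_pages=10, optimal_pages=4), 2))   # ~0.52
```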
