Improving Product Experience Through Customer Research

At Under Armour Connected Fitness® we start with the customer: we believe the best solutions arise from a thorough understanding of customer wants, needs, and behaviors. In addition to the raw data we can analyze via our app platforms, we inform and inspire our work through qualitative research methods. Because customer research is core to our process, we create solutions that are informed and validated, and that in turn resonate with our customers. We neither define nor design from a distance.

Through our design, product, and engineering teams, we employ a mix of new and tried-and-true methods, and we are happy to do so in a ‘guerrilla’ style—scrappy, impromptu, quick, and self-sufficient. We get results, we iterate, and we move quickly. Our dedicated Customer Happiness teams are also an excellent source of structured insights from customers.

Customer Research is employed at multiple stages during the product development life cycle:

  • at the outset, as the product is being defined (helps to define user needs)
  • during the design phase (e.g., via user testing)
  • once the product is launched (customer feedback), resulting in iterative product improvements.

Three Phases of Customer Research

There are three main forms of Customer Research:

  1. Foundational Research – Inspire
    Foundational Research helps to answer the question, “What is the correct product/feature to design?” It highlights behaviors that are a good indicator of customer needs, and it can provide insight and inspiration for the direction of future product opportunities.
    Typically in the Define phase.
  2. Generative Research – Validate
    Generative Research is employed to generate new ideas, refine existing ones, and zero in on a set of core functionalities for a product. Methods can include user testing and co-design exercises with target customers.
    Typically in the Design phase.
  3. Evaluative Research – Inform
    Evaluative Research focuses on understanding the usefulness of a product. It typically involves user and usability testing. Once the product is launched, instrumentation and customer feedback let Evaluative Research close a loop that allows us to continually improve the product.
    Typically in the Deploy phase.

Ways to conduct Customer Research

There are many methodologies we can employ throughout the three phases of Customer Research. Let’s discuss how we employ a few of them:

  • Customer Surveys
  • Contextual Inquiry
  • Customer Happiness
  • User Panels
  • Hallway Tests

Customer Surveys

To learn about our customers’ wants, needs, and behaviors, we employ several techniques, one of which is customer surveys. Used in tandem with in-person contextual interviews, app usage data, and other techniques built on conversation and observation, customer surveys are an excellent tool for generating insights from real customers.

Customer surveys sit on a continuum somewhere between app usage data and in-person interviews.

When to use customer surveys:

  • You have specific questions you want answered (i.e., surveys are not ideal for open-ended exploration)
  • You want to simulate a digital environment (such as testing an app’s onboarding questions)
  • You need a large sample size to challenge your assumptions or see if what a few people are saying is true for the general population
  • You’re targeting a specific demographic

Limitations:

  • Recipients may interpret survey questions in different ways
  • Recipients don’t always take survey questions seriously
  • Response rates can be quite low (often between 1% and 5%)
  • We survey people who use our apps, so we don’t always hear from those who are dissatisfied, who don’t use our apps, or who simply don’t like answering surveys

Best Practices

  • Target the right population. Screen for certain behaviors or activity levels such as:
    • People who have joined in the last year
    • People who have been active in the last 10 weeks
    • People who have logged activities with a specific device (e.g., UA Record™ Footwear)
  • Work with the e-mail team to ensure survey recipients have not been surveyed recently
  • Determine the size of the survey pool by calculating the number of responses needed for statistical significance and dividing by your expected response rate (e.g., 1-5%); see the sketch after this list
    • If you target a very large survey pool, you shrink the pool available for future surveys
    • A large number of responses does not add to the value of the results once you have reached statistical significance
  • Avoid marking every question as “required”. This lets respondents skip questions they do not want to or cannot answer.
  • The vast majority of responses arrive within the first 36 hours, so there is no need to run a survey for an entire week
  • Keep surveys short: 10-15 minutes, which usually equates to 10-15 questions
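
As a rough illustration of that pool-sizing math, here is a minimal Python sketch (not a tool we use internally; the margin of error, confidence level, and 3% response rate are assumptions you would replace with your own):

    import math

    def required_responses(margin_of_error=0.05, confidence_z=1.96, p=0.5):
        # Sample size for estimating a proportion at the given margin of error;
        # p = 0.5 is the most conservative assumption.
        return math.ceil((confidence_z ** 2) * p * (1 - p) / margin_of_error ** 2)

    def survey_pool_size(margin_of_error=0.05, response_rate=0.03):
        # How many recipients to target, given an expected response rate
        # (somewhere in the 1-5% range noted above).
        return math.ceil(required_responses(margin_of_error) / response_rate)

    print(required_responses())  # 385 responses for +/-5% at 95% confidence
    print(survey_pool_size())    # 12834 recipients at an assumed 3% response rate

The conservative p = 0.5 assumption maximizes the required sample, so the pool size it produces errs on the safe side; targeting many more recipients than this only shrinks the pool available for future surveys.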

Summary
Surveys are good for certain situations. When it comes to understanding user behavior, needs, wants, frustrations, and preferences, it’s best to use surveys in tandem with other research techniques. Each research tool helps you see a portion of the whole picture.

Contextual Inquiry

The MapMyRun® team recently conducted guerrilla testing on the Town Lake trail in Austin. The goal was to test the readability of new charts and graphs on the workout details page of MapMyRun. Our process was:

  • Prototype
    Generate a prototype using high-fidelity designs (InVision).
  • Questions
    Determine the exact questions to be answered as a result of testing (e.g., will users easily understand how to read a chart? Will they know to scroll here? Will they see this button or miss it? Will they understand what this word means?)
  • Time/Place
    Outline the best time and place for testing (runners on a popular Austin running trail, in bright outdoor conditions, in an area where people are naturally “paused” from running, like by the water fountain or the parking lot)
  • We’re legit
    Proper introduction and preliminary questions (“We’re conducting a test for Under Armour®. We want your help informing new designs. Have you ever heard of MapMyRun? How often do you run?” etc)
  • Context setting
    “Pretend you have just finished a run that you tracked with your phone, and now you’re looking at the screen showing you the results of that run…”
  • Un-aided observation
    Let the user hold the phone as they normally would. Take notes on organic reactions and comments. Avoid aiding or helping (i.e., don’t say “there’s more down below if you scroll”), and if they ask questions, instead of immediately answering, try to understand why they are asking and what they would do if there wasn’t anyone around to ask.
  • Document & Synthesize
    Take pictures, transcribe notes, spot trends, and draw conclusions and takeaways. Generate a report.
  • Design
    Iterate on designs based on feedback. Test again as needed.

Note-taking, photography, and video are musts and can be massive aids when observing participants in their environments. They capture many subtleties that can provide deep insight into customer behaviors.

Customer Happiness

Our Customer Happiness team is on the front lines of listening and responding to customer feedback emails. Much of the feedback comes in the form of feature requests. The Customer Happiness team has created a system to track and rank common feature requests using a JIRA ticketing system, which allows product teams to address the issues most commonly reported by customers using our apps.
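
To make the ranking idea concrete, here is a purely hypothetical Python sketch; the ticket IDs and feature names below are invented for illustration, and the actual tracking happens in JIRA as described above:

    from collections import Counter

    # Invented example tickets; in practice this data lives in the JIRA system.
    tickets = [
        {"id": "CH-101", "feature_request": "export workouts"},
        {"id": "CH-102", "feature_request": "dark mode"},
        {"id": "CH-103", "feature_request": "export workouts"},
    ]

    # Tally the requests and list the most common first, so product teams
    # can see which issues customers report most often.
    request_counts = Counter(t["feature_request"] for t in tickets)
    for feature, count in request_counts.most_common():
        print(f"{feature}: {count} requests")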

User Panels

User Panels give our employees the chance to engage directly with our customers – and vice versa. Much like surveys, they let us dig into specific uses and areas of our products, but they also allow for a live dialogue and a deeper dive.

The Panels are conducted as a free-flowing conversation, with a moderator guiding the discussion from topic to topic. Interacting with customers and hearing how they articulate their experiences with the products in real time is invaluable.

Hallway Tests

Spending an afternoon putting your designs in front of coworkers can help identify potentially larger design and functional issues before the feature goes into production.

Pros: They are quick – typically a hallway test can be set up and completed within a day.
Cons: Contextual authenticity – the test lacks real-world insight, even though the participants are valid users of the product.

Setting Up Tests Can Reveal A Lot
Even before the sessions begin, you can uncover issues simply by organizing the test. As you lay out screens and prepare questions and tasks, you might find obvious oversights, such as a transition from one screen to the next being too abrupt, or a flow feeling disjointed. For example, while working on a release for UA Record, we encountered a challenging scrolling experience that would not have been uncovered by static comps alone. We discovered this while building the prototype to test, and immediately took note of the issue as something that needed to be addressed.

Recruiting Participants
Recruit coworkers who are as far removed from the feature being developed as possible, yet who represent your target audience as closely as possible. It can be tricky to find valid candidates, but approaching the session with participants on the fringes can still yield specific insights. For example, one of our Customer Happiness agents loves volunteering for Hallway Tests. Our approach with the agent is to identify areas that might lead a user to contact support for help, and to focus on reducing those pain points as much as possible, or providing a way for the user to solve the issue themselves. This can include anything from the terminology used to the ability to complete a given task.

Given the fast-paced set-up and execution of hallway tests, it might be hard to recruit a large number of participants. This is OK! If you only get 3 people to look at your designs, you’ll be able to identify trends – even if that “trend” shows each participant has a completely different reaction to something. Better than that is when they all have the exact same reaction to something – which is a strong indicator that it needs to be addressed.

Conducting The Test
Always schedule the first and second test with a large enough break between them to fix any issues with the test itself. Consider the first session the “test of the test”. Aim for a task-based structure, but be prepared to go off-script if necessary. And allow time for some open-ended discussion at the end.

Summary

Customer Research yields fascinating and insightful conversations that lead to product improvements and even innovation. Understanding customer context, needs, and behaviors is critical to our success as our company transforms from product-led to consumer-led. Data and insights derived from research help inform improvements to the product experience across the product development life cycle. Engaging with consumers in meaningful ways as we design our products leads to more meaningful customer engagement with the finished product. And continuing to listen to customers post-launch allows us to iterate and improve based on real behaviors.

Tags: customer happiness, customer research, design research, product design, UX design