Testing content can give you more confidence in your content design decisions and support your design rationale. When we lead research, we demonstrate that we’re more than “words people”—we’re designers, and we’re integral to the design process.
Testing doesn’t have to be a big ordeal. Any test you do, however small, will teach you something if done the right way.
You do have time to test content. You just need to do a little prep work. Here are some tips on getting started.
Know what you want to learn
There are different reasons to want to test something. Maybe you’re trying to:
- figure out a path to take with an interaction flow
- improve the metrics of something just launched
- navigate office politics and find a clear winner with data
Whatever the reason, you can test anything at any stage. However, if you can’t articulate how the outcome of the test will benefit the customer, you probably shouldn’t be doing it. Before you start, ask yourself:
- What do I want to learn?
- How will the outcome benefit the customer?
- How will I measure success?
Go broad to learn big
Don’t just tweak a few words or rephrase a sentence or two. A successful test will frame two or more different offerings. For example, one approach may focus primarily on benefits, while the other may be purely functional.
Remember, testing is your chance to learn something new, so don't be afraid of being bold and comparing two or more different approaches or concepts. Some questions to consider:
- What is the underlying customer problem we’re trying to solve here?
- What different approaches might I use to help me learn?
- How might this change customer behavior?
Qualitative or quantitative?
Quantitative research is when you collect statistical or numerical data to draw generalized conclusions about users’ attitudes or behaviors. In quantitative studies, our customers help us choose the right words to use.
Use this method to learn users’ intent and see what a user will do—in other words, “how” your users act. This is great when you need to quantify a problem or validate a solution at a larger scale with more customers.
Qualitative research is when you collect and analyze non-numerical data, like interviews, text, audio, or video, to draw conclusions about users’ attitudes or behaviors. Listening is key to understanding customer problems, building empathy, and designing experiences they can relate to. Use this method to learn what people like and dislike—in other words, the “why” behind their actions. This is great for usability or comprehension tests.
- You can't test everything, so what should you test? (Nielsen Norman Group)
- Testing content with users (Nielsen Norman Group)
Write up a research plan
Don’t ever walk into a study or test without spending some time beforehand to think through what you hope to achieve. An important tool in making sure your testing goals are clear is to write up a testing plan. Sharing it with partners in advance can help raise awareness of your study.
If you’re sharing testing duties with partners, such as product/visual designers and product managers, ask to see their testing plans and add your own questions.
What to include
- Goals of the study (what do you want to learn or validate?)
- Target customer and testing plan (who are you testing and what test are you doing?)
- Script (what questions are you asking?)
- Testing content (A List Apart)
- When should we turn to content testing? (Medium)
- Writing an effective interview guide (Nielsen Norman Group)
Find the right test
Before choosing which test to use, make sure you're clear on what you're testing for. Consider your goals—do you want to learn why or how your content works? Do you have any time constraints? Here are some ideas to get you started.
I want to know if a customer understands something
- Comprehension test
- Cloze test
I don't know what word to use
- Customer interviews
- Card sorting (if you’re naming menu items or labels)
- A/B test (once you have some options from qualitative insights that you want to test at scale)
- Money test (if you want to identify which benefits are more appealing to customers)
I don't understand why a design is performing badly in the product
- Customer interviews (make sure you identify the correct customers to interview)
- Highlighting (useful follow-up if it becomes clear during customer interviews that the tone is off in your content)
I don't know why my webpage is performing poorly
- Customer interviews
- Highlighting (useful follow-up if it becomes clear during customer interviews that the tone is off in your content)
I want to know how I should group items together and what to call those groups
- Card sorting
I want to get more confidence in my design solution
- Start with qualitative testing with a small number of customers, such as customer interviews, comprehension tests, or the money test
- When you’re confident your design has been validated, move on to quantitative testing with larger numbers of customers, like an A/B test
Types of tests
Money test
Good for: Understanding what resonates with customers (like benefits or features)
This can be done in Mural. Have customers drag and drop fake dollar bills onto features that interest them the most (for example). They have $10 to spend and can put as little or as much money as they want on each feature. Here's a template (for Intuit employees only).
Card sorting
Card sorting is used to help understand or evaluate information architecture (IA) or any kind of hierarchically structured content. Customers group topics into categories that make the most sense to them. You can do this in person or online.
Good for: Testing IA and understanding your customers’ mental models when it comes to organizing and naming things
Not good for: Specific in-product interactions
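If you run a card sort digitally and can export the raw sorts, a quick way to see which items customers group together is a similarity matrix: the share of participants who placed each pair of cards in the same group. Here's a minimal sketch in Python (the `similarity_matrix` helper and the data shape are illustrative, not any particular tool's export format):

```python
from collections import Counter
from itertools import combinations

def similarity_matrix(sorts):
    """Share of participants who placed each pair of cards in the same group.

    sorts: one card sort per participant; each sort is a list of
    groups, and each group is a list of card labels.
    """
    pair_counts = Counter()
    for groups in sorts:
        for group in groups:
            # Sort each pair so (a, b) and (b, a) count as the same key.
            for pair in combinations(sorted(group), 2):
                pair_counts[pair] += 1
    return {pair: count / len(sorts) for pair, count in pair_counts.items()}
```

Pairs with scores near 1.0 are safe to group together; pairs that split participants down the middle are worth a follow-up interview question.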
Customer interviews
Open-ended questions require the participant to give a reasoned response and can’t be answered with a yes or no. This helps when you’re talking to customers who may be super literal.
Some questions you could ask:
- If you had to name this, what would you call it?
- How would you describe this in your own words?
- When it comes to X, what’s important to you?
- If we weren’t here, would you continue?
- If you selected that button, what would you expect to see on the next screen?
Listen carefully to how customers describe, name and label things. Make a note of these insights and use them to inform your content decisions.
You could also pull words out of the interaction and have customers explain in their own words what’s happening. This is a good way to find gaps in the interaction itself before you ever write content. Demonstrating research findings like this shows we’re also design strategists.
Button test
Present the customer with the screen you want to test and a selection of content options on the side. Ask them questions based on actions they would usually take when using that screen.
For example, “You want to settle an invoice. Which button makes most sense to you to complete that action?” Give them the option to create content, too, if what’s available doesn’t meet their expectations.
Good for: Learning which microcopy to use, like button text
Not good for: Testing usability (validate that in a subsequent study)
Comprehension test
Show customers a prototype and ask them questions about the content. Here are some examples:
- Now that you’ve read X, explain it back to me in your own words.
- What does this [piece of content] mean to you?
- What would happen if you tapped here?
- Did you notice this smaller text? What does it mean? What if you tapped here?
This type of test is good for learning if your customer understands your content.
Cloze test
A Cloze test is basically Mad Libs: it tests comprehension and context. Remove words from your copy and let customers fill them in.
It's good for determining if your target audience understands the meaning of your content.
- Cloze test for reading comprehension (Nielsen Norman Group)
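Preparing the cloze copy itself is easy to script. Here's a sketch that blanks roughly every fifth word (the `make_cloze` name and the every-fifth-word default are common conventions, not a fixed rule):

```python
import random

def make_cloze(text, every_nth=5, blank="_____", seed=None):
    """Blank out roughly every Nth word of a piece of copy.

    Returns the cloze version plus the answer key, so you can score
    how many blanks each participant fills in correctly.
    """
    rng = random.Random(seed)
    words = text.split()
    # Random starting offset so the same words aren't always removed.
    start = rng.randrange(every_nth)
    cloze_words, answers = [], []
    for i, word in enumerate(words):
        # Only blank plain words, so punctuation stays as a context clue.
        if i % every_nth == start and word.isalpha():
            answers.append(word)
            cloze_words.append(blank)
        else:
            cloze_words.append(word)
    return " ".join(cloze_words), answers
```

A common scoring rule of thumb (cited by Nielsen Norman Group) is that if participants correctly fill in around 60% or more of the blanks, the text is reasonably comprehensible to that audience.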
Highlighting
Get some customers in a room with your copy (you could also do this virtually with a Google Doc or other collaborative tool). Have them highlight words or phrases based on the criteria you set. For example, if the goal of the content is to make your customers feel confident, have them highlight words and phrases they feel inspire confidence. Then ask what words don’t make them feel confident.
If possible, have them talk out loud about why they made that choice. This could also be unmoderated.
This exercise is good for understanding which words and phrases are communicating the emotions you’re aiming for.
- 5 fun ways to test words (John Saito, Medium)
Reaction cards
Write up some emotions (~25) on notecards (if in person) or on sticky notes in Mural or some other collaboration tool. Have customers read your copy and then choose which emotions/cards they’d use to describe it.
Similar to the highlighting exercise, reaction cards are good for understanding which words and phrases are communicating the emotions you’re aiming for.
A/B test
There are tons of resources out there about A/B testing. Prioritize an A/B test for important decision moments in the customer journey, like connecting a bank account. These tests take time to bake, so make sure they’re worth the time and effort. Work with your PM, marketing partner, and research partner if you have one.
Test for maximum impact:
- Focus on headlines and CTAs.
- Make the variants notably different (different benefits, tone, information)
- It’s OK to change the visuals slightly to support the content
- Test in higher traffic areas where it really matters that you nail the content. You can then extrapolate the results to lower traffic areas.
Good for: Comparing how many customers respond at important decision moments (first-time use, purchase decision, opt-in, App Store, marketing pages)
Also good for validating content solutions previously tested during a qualitative study with a smaller number of customers.
Not good for: Understanding the "why" behind the results
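When the A/B results come back, significance for conversion-style metrics is often checked with a two-proportion z-test. Here's a self-contained sketch; your analytics platform or research partner will usually run this for you, so treat it as a sanity check rather than the official analysis:

```python
from math import sqrt, erf

def two_proportion_z(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test for conversion rates.

    Returns (z, p_value); a p-value below 0.05 is the conventional bar
    for calling the difference between variants significant.
    """
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF, built from erf.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value
```

For example, 100 conversions out of 1,000 visitors versus 150 out of 1,000 gives a p-value well under 0.05, so a gap that size would be unlikely to be noise.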
More content testing resources
For Intuit employees
- Content testing (presentation by Mirabel Bradley)
- Research methods for strategizing and evaluating content
- Sample test plan (all-in experience content)
- Visual metaphor sample test plan
Narrow who to test with
We build products for a broad and diverse population of customers, but when it comes to testing, go narrow on who your customer is. Work with your PM and XD partners to help identify the customer segment you want to be testing with.
Questions to consider
- What are your research goals? What are you trying to learn?
- What do you know about this customer (time in business, industry, age, location)?
- Are you capturing all the diversity within that customer segment? Is there a segment you’re missing? Double-check during the interview that the customer fits the target.
- Are you balancing power users with newbies? Power users and existing customers are great for assessing trajectory. Amateur or new customers are helpful when looking for pain points and initial impressions.
- During interviews, watch for similarities that may not have been evident in the screening process. Do all your subjects have graduate degrees? Are they all tech savvy? Is there diversity in ethnicity and gender expression? Have you included customers whose first language is not English?
How many people?
The number of people to interview depends on the test you want to run.
For usability tests, the suggested number of people to interview is 3 to 5. Five people is all you need for patterns and potential problem spots to emerge. Steve Krug (author of Don't Make Me Think) says it well: “It’s much more important to do more rounds of testing than to wring everything you can out of each round.”
It takes at least 20 users for a quantitative test to reach statistical significance. You may need even more users if you need tighter confidence intervals in your data.
A card sorting test requires at least 15 users. You need more people than a typical usability test to account for people’s different mental models and unique vocabulary to describe the same concept. More data from more users is needed to get an accurate picture of their preferred structure and determine how to accommodate differences among users.
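These minimums are floors, and they scale with how small a difference you need to detect. Here's a back-of-the-envelope power calculation sketch; the 95% confidence / 80% power defaults are conventions, and a proper power calculator or your research partner should confirm real numbers:

```python
from math import ceil

def sample_size_per_variant(p_base, p_target, z_alpha=1.96, z_beta=0.84):
    """Rough participants-per-variant to detect a lift from p_base to
    p_target at ~95% confidence and ~80% power.

    Back-of-the-envelope only; confirm with a power calculator before
    committing to a test.
    """
    variance = p_base * (1 - p_base) + p_target * (1 - p_target)
    effect = p_target - p_base
    return ceil((z_alpha + z_beta) ** 2 * variance / effect ** 2)
```

For example, detecting a lift from a 10% to a 15% conversion rate takes roughly 700 customers per variant, while a 10% to 12% lift takes several thousand. Smaller effects need many more users.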
- How many test users in a usability study? (Nielsen Norman Group)
- Usability testing: How many users do you need? (UX Design Institute)
- Don't Make Me Think by Steve Krug
Analyze your findings
So you’ve run your test(s). Great work! Now it’s time to assess what you found. Share your research findings with a broad group of partners and stakeholders to give insight into the content design process and rationale behind your word choices.
Open up Mural, a spreadsheet, Google Doc, or collaborative tool of your choice and point out the top 3 findings per customer.
You can also go through transcripts and color-code things that bubble up.
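If your transcripts are plain text, a quick word tally can surface recurring terms before you commit to a full thematic analysis. A minimal sketch (the stopword list and the `recurring_terms` name are ad hoc, and real thematic analysis is still a human judgment call):

```python
import re
from collections import Counter

# Throwaway stopword list for the sketch; expand it for real transcripts.
STOPWORDS = {"the", "a", "an", "i", "it", "to", "and", "of", "is", "was", "that"}

def recurring_terms(transcripts, top_n=10):
    """Count which words come up across interview transcripts.

    transcripts: one string per participant. Counting each word once
    per participant keeps a single chatty interviewee from dominating.
    """
    counts = Counter()
    for transcript in transcripts:
        words = set(re.findall(r"[a-z']+", transcript.lower()))
        counts.update(words - STOPWORDS)
    return counts.most_common(top_n)
```

Words that many participants used independently are good candidates for the labels and phrasing in your content.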
Don't jump to conclusions.
Partner with research to define what makes up a trend. Are you hearing the same feedback from many participants, or is it all over the place?
Beware of splitting the analysis work across too many people, because lots of biases come into play.
Show your work
Document everything. Use customer verbatims or video clips from interviews in your shareouts. This helps you bring people along.
Remember the big picture
Factor in long-term considerations. A click isn’t the end-all be-all. Our bigger goal as a company isn’t to get people to click on something in a moment. It’s to build brand trust and equity.
Prioritize the why
What customers say they like is tricky and can’t be taken at face value. Testing and interviewing customers lets you dig below the surface to uncover the underlying reasons for attitudes and preferences. If 8 people say they like something, that's not necessarily statistically significant. You need to ask why. Take feedback in the context of the broader design goal.
Let’s use receipt capture vs receipt snap as an example. This is a feature in the QuickBooks mobile app. In a user test, more people liked receipt capture because it sounded formal and professional. But the people who liked receipt snap liked it because it felt easy, understandable, and friendly. This was the goal of the broader design and more closely matched Intuit’s voice and tone.
In this example, the deciding factor wasn't the number of people who liked something, but why they said they liked it. This also offsets the data-driven culture idea that numbers tell the truth all the time. They don’t—at least where content is concerned.
- How to analyze qualitative data: thematic analysis (Nielsen Norman Group)
Tips for success
Be aware of biases
Cognitive biases are little shortcuts your mind makes to reduce the mental burden of decision-making. Some biases are harmless—and often unavoidable—but others are worth watching out for during the testing process. Try to keep an open mind and recognize biases when they show up in your customers, your partners, and yourself.
Confirmation bias
The tendency to look for or emphasize information that confirms our beliefs and to ignore information that doesn’t. Watch out for self-fulfilling data from yourself and others. Be careful about pushing your own ideas and agenda.
If you find yourself or others slipping into this bias, come back to the reason why you’re testing. It’s easy to feel vindicated and see data that you want to see, but that won’t benefit the customer. Take your feelings out of it.
Priming
Priming is when exposure to something affects our reaction to something that follows. When writing tasks, avoid terms that appear in the interface. (It just primes people to look for those words in your task.) For example, asking “What do you think this button does?” primes users that the UI element is a button, though they may not have realized it on their own.
You can also inadvertently prime users with your behavior. Being overly friendly, for example, can make test participants feel they need to be extra positive about the design. (Source: Nielsen Norman Group)
Framing
A frame is the context used to describe an idea or question. Sometimes we frame things in a positive or negative way without meaning to. That can heavily influence the responses you get, whether you’re interviewing a customer or presenting your findings to stakeholders.
Be conscious of how you frame your test results and double-check this bias to make sure you’re not overlooking or leaving out important information.
Make sure the right people are involved
Bring folks along at every stage and show your work. Get buy-in early—ask colleagues to review scripts, hypotheses, and the goals of the test. The more buy-in you get, the more people feel a sense of ownership and feel included in the process. That goes a long way.
Have those tough conversations
The main reason a PM or partner might be hesitant to test content is that they don’t know how easy it is. Link back to the why and remind them of the customer problem you’re trying to solve.
Bring in a researcher if you’re struggling
Your friendly UX researcher can act as a consultant: they can help you brainstorm and bounce around ideas, think through tests, and review scripts. If you know you want to test something but you’re not sure how to go about it, ask them.