The terminology problem nobody talks about
Research and testing have a language problem. The words get used interchangeably, the meanings blur together, and before long nobody is quite sure whether they've commissioned research or testing, or whether the insights they're looking at are benchmarks or results.
This matters. Using the wrong method for the question at hand wastes time and produces findings you cannot act on. So before you can choose what to do, you need to know what the words mean.
Here's the plain-English version.
Quant and qual: the two categories everything falls into
Almost every research and testing method is either quantitative or qualitative. You'll hear these shortened to quant and qual constantly, so let's deal with them first.
Quantitative research and testing
Quant is a numbers game. It relies on large volumes of data to tell you something meaningful, and its natural home is analytics, search data, and any platform that produces statistical output.
Quant tells you that something is happening. It tells you how often, at what rate, and where. What it rarely tells you, on its own, is why.
Qualitative research and testing
Qual is where you collect detailed insight from people. Think conversations, observations, and open-ended responses rather than percentages and counts.
A simple way to hold the distinction: quant gives you numbers, qual gives you words.
Some methods mix both. Tree testing, for example, produces statistical pass and fail rates alongside the reasoning people give for their choices, which is why it sits neatly across both categories.
Research: understanding the problem before you touch the design
Research is what you do before you have anything to put in front of people. Its job is to help you understand the problem, not to validate a solution you've already built.
When starting work on a navigation or search problem, the first questions are always the same:
Is there a problem?
What is the problem?
Who is affected by it?
Without clear answers to those three questions, any design work is guesswork.
Research for navigation and findability problems tends to take two forms. Desk research, sometimes called secondary research, is where you read extensively to understand the context, the product, and the existing evidence. Qualitative research is where you talk directly to the people who are experiencing the problem you need to solve.
Both are necessary. Neither replaces the other.
Benchmarking: understanding performance over time
Benchmarking is a comparative tool. It tells you how something is performing now, relative to how it performed before, or relative to how you want it to perform in the future.
If you are comparing present to past, you are using benchmarking to identify whether performance has changed and in which direction.
If you are comparing present to future, your benchmark becomes a key performance indicator or success metric. It sets the baseline against which you will measure whether your work has made things better or worse.
Benchmarking almost always produces quant output. Being able to express performance as a percentage or a rate makes it significantly easier for stakeholders outside the design process to understand what they are looking at.
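The arithmetic behind this is simple. As a minimal sketch, assuming invented task-success figures (none of these numbers come from a real study), a benchmark comparison is just rates and deltas:

```python
# Hypothetical tree-test results: successes out of attempts per round.
# All figures are invented for illustration.
baseline = {"successes": 62, "attempts": 100}   # before the redesign (past)
current = {"successes": 78, "attempts": 120}    # after the redesign (present)
target_rate = 0.80                              # KPI: the rate we want to hit (future)

def success_rate(round_results):
    """Task success expressed as a proportion of attempts."""
    return round_results["successes"] / round_results["attempts"]

baseline_rate = success_rate(baseline)   # present vs past comparison
current_rate = success_rate(current)
change = current_rate - baseline_rate

print(f"Baseline: {baseline_rate:.0%}, now: {current_rate:.0%} "
      f"({change:+.0%} change; target {target_rate:.0%})")
# → Baseline: 62%, now: 65% (+3% change; target 80%)
```

Expressing the comparison as percentages, as the print statement does, is exactly what makes benchmark output easy for stakeholders to read at a glance.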
Testing: finding out if something works
Testing is what happens once you have something tangible to put in front of people, whether that is a prototype, a live product, or a proposed change to an existing one.
Its purpose is to find out whether something works, and if it does not work, to understand what is wrong with it.
One important clarification: testing is not user testing. You are not testing the people who use the product. You are testing the product. The people using it are your research participants, not your subjects.
Testing is predominantly qualitative, because the valuable output is usually the behaviour and reasoning of the people doing it. The exceptions are platforms that run card sorting, tree testing, or first-click testing at scale, where you need enough participants to produce statistically meaningful results.
Insights: what research and testing produce
Research and testing both generate insights. Insights are the output you use to inform design decisions, both during the design process and after something has launched.
Insights are primarily quant data. Analytics platforms and onsite search data are the most common sources, and they give you a continuous view of how people are actually using your product.
Their particular value in navigation and search work is that they are largely unbiased. You are watching behaviour rather than asking people to self-report it.
One important caveat: insights tell you how something is performing, but they can only tell you whether performance is good or poor if you have already defined what good looks like. Setting that definition is part of the work, and it is worth doing before you start collecting data rather than after.
Need help making sense of what your search and navigation data is actually telling you? Get in touch.