The invisible problem
Navigation is supposed to disappear. When it's working, nobody notices it. Every single person who uses your digital product will touch your navigation at some point, but the experience they came for is never the navigation itself. If people are thinking about your menu, your labels, your structure, you've already lost them.
This creates a genuine methodological headache. How do you test something that only succeeds when nobody notices it?
The answer is: you don't test it directly. You make people find something instead.
Make them find the thing, not test the thing
This sounds obvious once you hear it, but it changes the entire shape of a research session.
If you ask someone to evaluate your navigation, you've handed them a lens they wouldn't otherwise use. They're now a critic, not a user. Every decision gets scrutinised. You'll get opinions, not behaviour.
If instead you ask someone to find a specific thing within your product or service, you get something far more valuable. You see where they look first, where they hesitate, where they give up and reach for search. You see the navigation performing under real conditions, under real pressure, without the artificial spotlight.
When you set a task rather than ask a question, you're actually measuring three things at once.
The three things you should measure in every test
Navigation is whether they got to the right place. Did the labels make sense? Did the structure support the journey? Did they take a logical path or a chaotic one?
Orientation is what happened when they arrived. Finding a page is not the same as understanding where you are. Users who arrive but feel lost will bounce.
Wayfinding is the journey itself. Once they were in motion, could they move through the experience without friction? Could they go back, go deeper, recover from a wrong turn?
Testing these three things consistently, across every research session, is what turns individual findings into a pattern. One test tells you almost nothing. Consistent measurement across multiple rounds tells you whether you're improving or just shuffling problems around.
Remote versus in-person testing
Remote testing is brilliant for recruitment. More people will take part, from a wider range of backgrounds, and you get data faster. For most kinds of research, it's the obvious choice.
Navigation is the exception.
Mobile behaviour is fundamentally different from desktop behaviour, and you cannot assume a remote participant is testing on the device that matters most to you. If your navigation collapses behind a hamburger menu on mobile, you need to watch a real person, on a real device they actually use, try to find the menu without being told where it is. That moment, watching someone scan the screen looking for a way in, is worth a dozen survey responses.
If you can get people in a room, or at least on a video call sharing their screen on their phone, do it.
When people go straight to search
They will. A lot of them.
Don't interpret this as a problem with your research design. It's information.
Some users simply prefer to search. If that's the pattern you're seeing, ask them why. Their answer will tell you something useful about how your search needs to work, what it needs to return, and what vocabulary it needs to understand.
Other users go to search because your navigation has already failed them. They looked at the labels, didn't recognise anything relevant, and bailed. These are the sessions you need to dig into. Ask what they expected to see. Ask what word they'd use for the thing they were looking for. That's your taxonomy problem right there, surfaced in the wild.
The signs your testing missed something
You've done your research. Everything looked fine. You launched.
And then things start behaving oddly.
Users are bouncing between Google and your site, as if they can't find their footing. Search is being used for content that sits in the main navigation. Engagement is lower on mobile than it should be. Customer service is fielding questions that should be answered on the site.
These are not launch problems. They are testing problems. Something was missed, a corner was cut, a round of research was treated as a box to tick rather than a question to answer.
Navigation testing only works if you do it consistently, across multiple rounds, with real tasks and real observation. A single round of testing to say you've tested it is not testing. It's insurance documentation.
What good navigation testing looks like
Test with tasks, not questions. Make people find things. Measure navigation, orientation, and wayfinding every time. Get people on their actual devices. Listen when they reach for search. Run more than one round.
Navigation that works is navigation nobody mentions. The only way to get there is to test it properly, and keep testing it, until silence becomes the norm.
Want to know if your navigation is actually working? Get in touch.