Card sorting has a good reputation. It feels participatory, it generates data, and it gives stakeholders something to point at when navigation decisions get challenged. The problem is that the method has serious limitations that tend to get glossed over, and those limitations have real consequences for real users.

This is not an argument against card sorting entirely. It is an argument for understanding what it can and cannot do, and for being honest about both.

The accessibility problem nobody talks about

Card sorting is designed around sight. The entire premise depends on a participant being able to scan a large number of cards simultaneously, hold a mental model of the whole set, and make grouping decisions based on what they can see at once.

That is an ableist design by default.

For users who are blind or have significant visual impairment, card sorting becomes close to impossible. The cognitive load of having every card read back by a screen reader, repeatedly, while trying to maintain a mental model of the full set, exceeds what working memory can reasonably handle. Miller's Law suggests the average person can hold about seven items, plus or minus two, in working memory at any one time. Card sorting is specifically designed to reduce large sets down to manageable clusters. Remove the visual scanning and the whole mechanism collapses.

In-person card sorting introduces a different problem: the facilitator, trying to make the task manageable, becomes a source of bias. The moment you start helping, you are influencing the outcome.

ADHD and high-stimulus environments

Online card sorting tools, with their large canvas interfaces and moving elements, are also known to be problematic for participants with ADHD. A screen full of cards is not a neutral testing environment. If your participant recruitment does not account for this, your data does not reflect your actual user base.

If your user base includes disabled people (and it almost certainly does), a method that systematically excludes or disadvantages them is not a valid primary research tool.

The recruitment and scale problem

Getting useful data from card sorting requires either quality or volume, and achieving both at the same time is harder than it looks.

In-person card sorting gives you the opportunity to explain the task clearly and observe how participants approach it. What it cannot give you is scale. You need a reasonably large sample to draw meaningful conclusions about how people expect information to be grouped, and in-person sessions do not get you there quickly or cheaply.

Online tools like Optimal Workshop solve the scale problem, but introduce new ones. Recruiting your own participants for remote card sorting is notoriously difficult. People struggle with the task without facilitation. Completion rates are often low, and the quality of responses from participants who find the interface confusing is questionable.

Neither approach is wrong, but neither is straightforward either. The recruitment challenge alone is enough to make card sorting an unreliable foundation for major navigation decisions.

Card sorting is not information architecture

This is the point worth underlining, reading again, and then writing on a sticky note somewhere visible.

Card sorting does not replace information architecture or taxonomy design. It is a research input, not a design output.

The confusion arises because card sorting can be used to test labelling and grouping early in a process, and that is genuinely useful. The problem comes when it is used as the only method, or as a substitute for the structural thinking that has to happen afterwards. Testing whether a label resonates with users is different from designing a navigation system that works under real conditions, at scale, across device types, with metadata, filters, and search in play.

Card sorting tells you something about how users mentally categorise information. It does not tell you how to build a system that surfaces that information reliably.
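To make that concrete: the standard way card-sort data is analysed is a co-occurrence (similarity) matrix, counting how often participants put two cards in the same group. The sketch below, with invented card names and participant data, shows the mechanics; it is an illustration of the analysis, not a substitute for a proper tool.

```python
from itertools import combinations

# Hypothetical card-sort results: each participant's groupings.
# Card names and groupings are invented for illustration.
results = [
    [{"Coats", "Jackets"}, {"Hats", "Scarves"}],  # participant 1
    [{"Coats", "Jackets", "Scarves"}, {"Hats"}],  # participant 2
    [{"Coats", "Hats"}, {"Jackets", "Scarves"}],  # participant 3
]

def cooccurrence(results):
    """Count how often each pair of cards was sorted into the same group."""
    counts = {}
    for groups in results:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

matrix = cooccurrence(results)
print(matrix[("Coats", "Jackets")])  # 2 of 3 participants grouped these together
```

Even this toy example shows the limit: the matrix tells you that "Coats" and "Jackets" belong together in most people's heads. It says nothing about what the navigation built on top of that should actually look like.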

The polyhierarchy problem

This one matters particularly for ecommerce, publishing, and any site that uses metadata to put content in more than one place.

A polyhierarchical structure allows a single product, article, or page to live in multiple locations simultaneously. A winter coat might sit under Outerwear, under Gifts, and under New In at the same time. Card sorting cannot model this. By definition, a participant sorts each card into one group. The method assumes a single, fixed hierarchy.
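The distinction can be sketched in a few lines of Python, using invented product data. A card sort forces a one-to-one mapping; a metadata-driven catalogue allows one-to-many.

```python
# What a card sort can express: each item in exactly one group.
card_sort_result = {
    "winter coat": "Outerwear",
}

# What a polyhierarchical, metadata-driven structure expresses:
# the same item surfaces in several places at once.
catalogue = {
    "winter coat": {"categories": {"Outerwear", "Gifts", "New In"}},
}

def locations(item):
    """Every place this item can appear in navigation."""
    return catalogue[item]["categories"]

print(sorted(locations("winter coat")))  # ['Gifts', 'New In', 'Outerwear']
```

The card-sort result is not wrong, it is just one projection of a richer structure, and designing navigation from it alone throws the rest away.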

If you are using card sorting to design navigation for a site that relies on metadata and faceted structures, you are using the wrong tool for the job. You will get data that reflects a simplified, single-hierarchy mental model, and then you will spend a significant amount of time wondering why your navigation still does not work once it is built.

What this means in practice

Card sorting has a place. Used as a checkpoint, with appropriate participants, as one input into a larger process, it can tell you useful things about label comprehension and broad structural expectations.

Used as the answer, it will let you down.

Before you commission a card sort, ask three questions. Who is excluded by this method, and does that matter for this project? What will you do with the data once you have it, and is card sorting actually the right way to generate that data? And is there an information architecture or taxonomy problem here that needs to be solved before testing makes any sense at all?

If the answer to that last question is yes, start there.

Murmuration helps retailers and digital teams understand and fix onsite search. Get in touch if you'd like to talk through what a diagnostic might look like for your site.

Know someone who’d love this? Forward it their way.
