7 Cognitive biases affecting UX designers and how to avoid them

Dec 29, 2022


When it comes to work-related decision making, being objective and data-driven is imperative. UX designers, in particular, have to make countless decisions during the initial user research and prototyping phases.

You could argue that good design is subjective, and yes, UX designers need to rely on their strong right brains to come up with creative and appealing solutions. But design decisions need to be objective and backed by facts rather than purely instinctive.

Sadly, even the most experienced, rational, and objective designers regularly fall prey to cognitive bias. After all, UX designers are human too.

But what exactly is cognitive bias?

It is a systematic error in human thinking that affects the decisions and judgments that people make. A cognitive bias is a psychological deviation from rationality, a blind spot in our understanding of the world.

What happens is that we formulate our own subjective reality in which reason takes a hike in favor of our unconscious biases, leading to poor decisions even when we believe they're the best ones.

In other words, a cognitive bias may result in warped perception, inaccurate judgments, and even illogical decisions. It negatively impacts your design thinking and ultimately, spoils the end user’s experience. What’s more, such biases are often extremely difficult to discern from the inside.

Being cognizant of these biases will enable you to make more intelligent and objective decisions when designing products. Thus, below is a list of seven common cognitive biases that may affect you as a UX designer and how you can avoid them.



1. Confirmation Bias

This is one of the most common types of cognitive bias, and one of the most difficult to rectify.

Basically, confirmation bias is our tendency to seek out results or information that accord with our worldview. We like to believe what we want to believe, even when firm evidence points the other way, and any information that challenges our hypothesis may be discarded.

For instance, we all look up our symptoms online (even if it is advisable not to). When we browse the search results, one of two things happens:

  • We try to convince ourselves that what we have isn’t any of the serious diseases/disorders listed, even if the symptoms match.
  • Or we freak out and are convinced that what we have is terminal and we need an immediate appointment with a qualified specialist, even if only a couple of symptoms match.

Either way, we’ve already made up our minds about what illness we have and we browse the internet just to find information that conforms with our thinking. Similarly, let’s say you design a cool call-to-action that you think is bound to bring in more conversions. But you soon realize that the clicks aren’t coming and so, you decide to do some user testing to fix the problem.

Now, you’re impressed with your work. And because you love the button you designed, you might dismiss user data or feedback that goes against your initial assumption.

The best way to dodge this bias is to remember that the purpose of user research and testing is to learn more about users, not to prove yourself right about them. Treat all data equally instead of favoring some results over others.

2. Observer-Expectancy Effect

A slightly less common and subtle bias, the observer-expectancy effect is when you subconsciously act out your preconceptions when conducting user research, which influences how your participants behave.

As humans, we all have our personal beliefs, prior knowledge, and ingrained subjectivity. This can affect how we act around people we observe, and influence their behaviors such that they are more in tune with our desires.

A famous example of this effect is from the 1900s. Wilhelm von Osten believed his horse (Clever Hans) was capable of solving mathematical problems. So, unknowingly, he gave Hans hints that enabled him to answer such puzzles correctly.

Likewise, when conducting a user interview, your gestures and overall body language may unintentionally pressure the user to answer your questions in a certain manner. But you obviously want unswayed answers from your users if you truly want to build a great user experience.

To overcome this, practice your interviews before conducting them. Ask a coworker to flag any gestures or non-verbal cues of yours that don't come across as neutral.

3. Wording Bias

Along the lines of the observer-expectancy effect, this bias occurs when you frame a question in a way that prompts the user to answer in a certain manner.

For example, imagine taking a survey in which a question is worded as “How difficult was it for you to use our mobile app?” The question itself suggests that using the app is tricky, priming you to respond negatively.

While this can be useful if you want to find bugs in your product, it does not inspire objective answers. So, thoroughly proofread your questions to verify that they are neutral.

4. Sampling Bias

When you’re deciding on a sampling pool for your user research, it may happen that some types of users are unintentionally left out.

Suppose you are devising a marathon running app and need to conduct research on marathon running enthusiasts. You decide to interview and observe runners in your city, but you fail to recognize that their running behavior may markedly vary from those who live in suburban and rural areas. You are running the risk (pardon the pun) that your research insights may not apply evenly to all your target audiences.

A solid way to tackle this bias is to clearly define your audiences and list the key differentiators in terms of their background, behaviors, and attitudes. Then include people with diverse characteristics in your sample.
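The segment-by-segment approach above can be sketched in code. Here is a minimal Python illustration of stratified sampling, assuming a hypothetical participant pool tagged by living area (all names and segments below are invented for illustration, not from this article):

```python
import random

# Hypothetical participant pool; names and "area" segments are illustrative.
participants = [
    {"name": "P1", "area": "urban"}, {"name": "P2", "area": "urban"},
    {"name": "P3", "area": "suburban"}, {"name": "P4", "area": "suburban"},
    {"name": "P5", "area": "rural"}, {"name": "P6", "area": "rural"},
    {"name": "P7", "area": "urban"}, {"name": "P8", "area": "rural"},
]

def stratified_sample(pool, key, per_segment, seed=42):
    """Draw an equal number of participants from each segment
    so no group is accidentally left out of the study."""
    rng = random.Random(seed)
    segments = {}
    for p in pool:
        segments.setdefault(p[key], []).append(p)
    sample = []
    for members in segments.values():
        sample.extend(rng.sample(members, min(per_segment, len(members))))
    return sample

sample = stratified_sample(participants, "area", per_segment=2)
areas = {p["area"] for p in sample}
print(sorted(areas))  # ['rural', 'suburban', 'urban']
```

Grouping first and sampling second guarantees that every defined segment is represented, which a single random draw from the whole pool cannot promise.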


5. Clustering Bias

Humans tend to see “trends” and “patterns” in what are completely random outcomes. We are naturally inclined to bring order to chaos. While it may help make some sense of randomness, clustering bias disrupts objective insights. A “hot streak” in a luck-based casino game? That’s clustering bias.

UX designers frequently rely on qualitative analysis. But with a small sample size, it is often hard not to see patterns in what are merely small slices of randomness that appear to share common features.
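To see how easily pure chance produces apparent patterns, consider a small, hypothetical simulation (not from this article): even a fair coin flipped 100 times almost always contains a "streak" of several identical outcomes in a row.

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical consecutive outcomes."""
    best = current = 1
    for prev, cur in zip(flips, flips[1:]):
        current = current + 1 if cur == prev else 1
        best = max(best, current)
    return best

# 100 fair coin flips with a fixed seed for reproducibility.
rng = random.Random(0)
flips = [rng.choice("HT") for _ in range(100)]
print(longest_streak(flips))  # fair coins still produce long runs
```

The "hot streak" a player perceives at the casino is exactly this: a run that randomness produces all by itself, with no underlying trend.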

One way to counter clustering bias is to conduct user research and prototyping with distinct, diverse sets of users. Another is to hold a quiet, individual brainstorming session before discussing the findings among stakeholders. You can also include more diverse stakeholders in the analysis process to improve the odds of the bias being caught and eliminated.



6. Anchoring Bias

Put simply, anchoring bias means jumping to conclusions. It happens when you rely too heavily on a preliminary piece of information (the “anchor”) when making decisions.

In other words, you anchor to an idea or detail so strongly that you become reliant on it (most likely because it’s the first bit of information you’ve understood perfectly), disregarding other information which could be just as helpful (or more) when making a decision.

The initial prototype you design for an app may look one way, and as it evolves through the design process, it can end up looking quite different. When you present the result to your team lead, they might say, “I thought it would look more like this” or “This is different from the original prototype, which was great.” Sound familiar?

This is anchoring bias and it’s extremely common in product development. A good way to beat this is to show multiple designs right from the start so that no one design has a higher value, and thus, no anchor.

7. Sunk Cost Fallacy

This one’s a biggie, not just in UX design but in major life decisions such as career and relationships. Essentially, the more you invest in something, the harder it is for you to let it go.

So, the next time you end up having a blackout because of “just one more drink”, or keep investing in a toxic relationship that’s bound to go sour, you know which bias to blame.

UX designers invest a lot of time in conducting user research and garnering reliable data. Over time, this data can become a burden rather than a boon. Obsessing over our findings, we can easily get lost and miss the bigger picture of what we truly want to achieve.

To circumvent this, it is crucial to weigh our efforts against the rewards. This means breaking down the analysis into little chunks and making go/no-go decisions on a smaller scale.

Summing Up

In this post, we highlighted some of these common biases and described how you can mitigate their effects on your work. To sum up, here are all seven biases in a nutshell:

  • Confirmation Bias: We tend to look only for evidence that conforms with our hypothesis.
  • Observer-Expectancy Effect: Your gestures and body language during user interviews can sway subjects' answers.
  • Wording Bias: How you frame your questions goes a long way in influencing user responses.
  • Sampling Bias: Selecting a sample such that all sets of users are included is critical.
  • Clustering Bias: We are predisposed to find patterns amidst randomness when there are none.
  • Anchoring Bias: Jumping to conclusions is easy if you cling to your initial understanding of something.
  • Sunk Cost Fallacy: We find it hard to abandon things we've invested a lot in.

As already mentioned, even the most seasoned UX designers can fall victim to cognitive bias. These biases only become a menace when we don't recognize their impact on our design decisions.


Chintan Bhatt

Founder and Design Director