Heuristics – Finding usability problems in designs

As part of my online Human-Computer Interaction (‘HCI’) course by Scott Klemmer, I learned a lot about “heuristic evaluation”. Even though I had heard the term ‘heuristics’ before, I had no clue what it entailed. In his online lecture, Scott explained that heuristic evaluation is a method one can use to review user interface (‘UI’) designs. He was quick to stress that there are multiple ways to evaluate designs and that heuristic evaluation is just one of them (I’ve outlined some of the different review methods in Fig. 1 below).

In essence, heuristic evaluation means that you get experts or peers to critique your UI design. Typically, the evaluators use a set of principles (“heuristics”) to assess the design. In 1995, UX expert Jakob Nielsen introduced his “10 Usability Heuristics for User Interface Design”, as outlined in Fig. 2 below. These criteria can serve as good guidance for one’s design evaluation process. However, none of these heuristics are set in stone; the actual criteria that one ends up using are very likely to depend on one’s specific design goals.

Let’s zoom in on the following, more ‘practical’ aspects of heuristic evaluation:

  1. When to get design critique? Scott suggested a number of stages at which it can be valuable to get design critique. The main thing is to have a clear design goal and an understanding of what you’d like to get out of peer design reviews. In Fig. 3 below I’ve outlined some stages at which you can ask peers or experts to do a design review.
  2. The different phases of heuristic evaluation – In Fig. 4 below I have included some common phases of the heuristic evaluation process. Ideally, you have 3-5 evaluators reviewing your designs; it’s preferable to have multiple evaluators, as different people are likely to find different problems. You can then have them compare their findings afterwards. You don’t need a fully working UI to get peer input – you can just as well have your evaluators review some paper sketches!
  3. How to rate designs? It was interesting to learn more about how to review designs. Scott suggested using a severity rating which combines the following factors: frequency, impact and persistence. Scoring a problem helps establish how important it is to fix and how to allocate resources accordingly. I’ve added a range for such scores – as suggested by Scott – in Fig. 5 below.
  4. Heuristic evaluation vs user testing – Scott emphasised that heuristic evaluation and usability testing with real users aren’t mutually exclusive. He talked through some of the pros and cons of both feedback methods and stressed the value of alternating the two approaches. By mixing methods you’re likely to find different problems, and you’re less likely to waste user participants’ time.

Main learning point: heuristic evaluation is something that I hadn’t applied previously, but it was really great to learn more about this review technique and how best to apply it. By having both ‘experts’ and real users test your user interface designs, you’re more likely to get well-rounded product feedback.

Fig. 1 – Multiple ways to evaluate UI designs 

  • Real-time feedback from users (which I wrote about earlier – insert link to rapid prototyping post)
  • Formal models to predict user behaviour
  • Heuristic design evaluation – having peers review and critique your UI
  • Automated user acceptance tests (think tools like Cucumber and SpecFlow – see the sketch below)
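
On that last bullet: below is a minimal sketch of what a Cucumber-style acceptance test could look like, using Python’s behave library. The Gherkin scenario and the FakeLoginPage stand-in are purely illustrative assumptions on my part – a real test would drive the actual UI (e.g. via a browser automation tool) rather than a fake page object.

```python
# features/steps/login_steps.py – a minimal sketch of Cucumber-style step
# definitions using Python's behave library. The scenario and the
# FakeLoginPage stand-in are hypothetical, purely for illustration.
from behave import given, when, then

# The Gherkin scenario this implements (features/login.feature):
#   Scenario: User sees a helpful error for a wrong password
#     Given I am on the login page
#     When I submit an incorrect password
#     Then I see a plain-language error message

class FakeLoginPage:
    """Stand-in for a real UI driver (e.g. a Selenium page object)."""
    def submit(self, user, password):
        self.error = "" if password == "secret" else (
            "That password doesn't match. Try again or reset it.")

@given("I am on the login page")
def step_open_login(context):
    context.page = FakeLoginPage()

@when("I submit an incorrect password")
def step_submit_bad_password(context):
    context.page.submit("alice", "wrong-password")

@then("I see a plain-language error message")
def step_check_error(context):
    assert context.page.error, "expected an error message"
    assert "doesn't match" in context.page.error
```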

Fig. 2 – Jakob Nielsen’s 10 Usability Heuristics for User Interface Design (taken from: http://www.nngroup.com/articles/ten-usability-heuristics/)

  1. Visibility of system status – The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
  2. Match between system and the real world – The system should speak the users’ language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
  3. User control and freedom – Users often choose system functions by mistake and will need a clearly marked “emergency exit” to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
  4. Consistency and standards – Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
  5. Error prevention – Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
  6. Recognition rather than recall – Minimise the user’s memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.
  7. Flexibility and efficiency of use – Accelerators — unseen by the novice user — may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
  8. Aesthetic and minimalist design – Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
  9. Help users recognise, diagnose, and recover from errors – Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
  10. Help and documentation – Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user’s task, list concrete steps to be carried out, and not be too large.
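
To make these heuristics actionable during a review session, it helps to record each problem against the heuristic it violates. Here’s a minimal sketch of how that could look in Python – the Finding structure and its field names are my own illustration, not part of Nielsen’s method:

```python
# A minimal sketch of recording heuristic-evaluation findings in Python.
# The Finding structure and its field names are my own illustration.
from dataclasses import dataclass

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognise, diagnose, and recover from errors",
    "Help and documentation",
]

@dataclass
class Finding:
    heuristic: str    # which of the ten heuristics the problem violates
    location: str     # where in the UI the problem occurs
    description: str  # what the evaluator observed

# Example: a checkout step without a progress indicator violates heuristic 1.
finding = Finding(
    heuristic=NIELSEN_HEURISTICS[0],
    location="Checkout, step 2 of 3",
    description="No progress indicator while the payment is processed.",
)
print(f"[{finding.heuristic}] {finding.location}: {finding.description}")
```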

Fig. 3 – Stages at which you can ask peers or experts for a design review (by Scott Klemmer)

  • Before user testing -> Objective: to identify and resolve minor issues before inviting real users for testing.
  • Before redesigning -> Objective: to get a better understanding of what works and what needs changing.
  • Articulate problems -> Objective: to gather concrete evidence for usability problems (or to make a case for a redesign).
  • Before release to LIVE -> Objective: to smooth out any remaining rough edges prior to going live with a new product or feature.

Fig. 4 – Phases of heuristic evaluation (by Scott Klemmer)

  1. Pre-evaluation training: provide the evaluators with the required domain knowledge and relevant scenarios to review against
  2. Heuristic evaluation: evaluators review the design independently and then aggregate their results (see the sketch after this list)
  3. Apply severity ratings: at this stage, you determine how severe each usability problem is (and prioritise accordingly). This can be done individually first and then discussed as a group.
  4. Debriefing: review the findings of the heuristic evaluation with the design/development team.
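
Here’s a rough sketch of what the aggregation step in phase 2 could look like in Python. Matching findings by their exact description is a simplification I’ve made for illustration – in practice, deciding whether two evaluators found the ‘same’ problem is a judgement call made in discussion:

```python
# A rough sketch of the aggregation step: each evaluator reviews the
# design independently, then the individual lists are merged so you can
# see how many evaluators hit each problem. Matching findings by exact
# description is a simplification for illustration.
from collections import defaultdict

def aggregate(evaluations: dict[str, list[str]]) -> dict[str, list[str]]:
    """Map each unique finding to the evaluators who reported it."""
    merged: dict[str, list[str]] = defaultdict(list)
    for evaluator, findings in evaluations.items():
        for finding in findings:
            merged[finding].append(evaluator)
    return dict(merged)

# Hypothetical results from three independent evaluators.
evaluations = {
    "Evaluator A": ["No undo after deleting a draft", "Jargon on signup form"],
    "Evaluator B": ["No undo after deleting a draft"],
    "Evaluator C": ["Jargon on signup form", "Error message shows a raw code"],
}

for finding, found_by in aggregate(evaluations).items():
    print(f"{len(found_by)}/{len(evaluations)} evaluators: {finding}")
```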

Fig. 5 – Suggested range for severity scores (by Scott Klemmer)

  • 0 = we don’t think this is a usability problem
  • 1 = this is a cosmetic problem
  • 2 = we feel this is a minor usability problem
  • 3 = we feel this is a major usability problem, important to fix
  • 4 = this is a usability catastrophe, imperative to fix
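
To connect this scale with the frequency/impact/persistence factors from point 3 above: below is a hedged sketch of how one could combine the three factors into a single 0-4 score. Scott didn’t prescribe an exact formula, so the equal-weighted, rounded average here is purely my own assumption:

```python
# A hedged sketch of combining the three factors (frequency, impact,
# persistence) into a single 0-4 severity score. The equal-weighted,
# rounded average and the 0-4 input range are my own assumptions; the
# course doesn't prescribe an exact formula.
SEVERITY_LABELS = {
    0: "not a usability problem",
    1: "cosmetic problem",
    2: "minor usability problem",
    3: "major usability problem, important to fix",
    4: "usability catastrophe, imperative to fix",
}

def severity(frequency: int, impact: int, persistence: int) -> int:
    """Rate each factor 0-4; return the rounded average as the score."""
    score = round((frequency + impact + persistence) / 3)
    return max(0, min(4, score))

s = severity(frequency=4, impact=3, persistence=3)
print(f"Severity {s}: {SEVERITY_LABELS[s]}")
# -> Severity 3: major usability problem, important to fix
```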

Related links for further learning:

  1. http://www.nngroup.com/articles/ten-usability-heuristics/
  2. http://www.nngroup.com/articles/how-to-conduct-a-heuristic-evaluation/
  3. http://www.nngroup.com/articles/usability-problems-found-by-heuristic-evaluation/
  4. http://www.nngroup.com/articles/how-to-rate-the-severity-of-usability-problems/
