It’s easy to assume that your online customers instinctively understand what they are expected to do on your website. And it’s convenient to believe that they will happily put in a bit of effort if they have a goal they want to achieve. They’ll work out which buttons to click and how to complete online forms because they’ve found something they want to buy or do.
Easy, convenient... and deluded. Deep down you know that customers avoid and abandon sites that make it difficult for them to achieve their goals.
But how do you get to the truth? How can you understand the obstacles your site really puts in your customers' way? And how can you identify a simpler pathway for them to achieve their goals?
You could ask them, of course. But site development and optimisation plans based on surveyed opinions are always risky. People are likely to say that you have a great website even if they found it painful to use.
Observe behaviour, on the other hand, and you’ll know, without doubt, whether user goals and on-screen actions mesh like finely machined gears. Or whether they crunch and clash like a gearbox being abused by a first-time learner driver.
If you need to update or improve an existing site, you should think about a structured usability audit as a first step. An audit based around the usability characteristics (or heuristics) developed by Jakob Nielsen could identify obvious and significant issues that should never have got through the design stage.
The characteristics are these:
Matching the real world. Are you using terms your customers recognise and can they make sense of product categories and descriptions?
Control and freedom. Would visitors feel they are in control of the route they take through your site?
Consistency and standards. Are common actions and navigation choices executed using consistent, industry standard icons, controls and positioning?
Error prevention. If user errors or omissions could prevent goal completion, how does the site avoid them or notify users what is expected?
System status reporting. For example, do users know how far through the checkout process they are?
Recognition not recall. Are you making unrealistic demands on your users to remember information between one screen and another? Could drop-down lists help, for example?
Flexibility and efficiency. Can expert visitors get to their goal quickly without compromising the experience for new users?
Aesthetic and minimalist design. Is your fancy design distracting attention from what users need to do or see?
Help and documentation. If users get stuck, how easily can they find help? Would in-line help improve the user experience?
Error recognition and recovery. If somebody does something ‘wrong’, how do you help them recognise what it is and correct it?
I’ll give you an example for the last category that helps to illustrate why good usability matters so much.
I recently purchased tickets for a sporting event online. The ticketing site is one that requires you to print your own tickets.
To print the tickets I needed to enter my booking reference and my surname. Unfortunately the booking process never directed me to the form where I had to enter my name. The process allowed me to complete the ticket purchase without capturing this essential piece of information.
When I tried to print my tickets I just got an on-screen message telling me that my booking couldn’t be found. Absolutely nothing else! No suggestion of why this might be or what I needed to do about it.
I can’t imagine that the usability of this functionality has ever been tested (except by unfortunate customers). I also can’t imagine that anyone looked at the requirements for goal conversion (buying and printing a ticket) and asked whether it was possible to buy a ticket without providing all the information needed to print it.
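The missing check is easy to express in code. Here is a minimal sketch (with hypothetical field and function names, not the ticketing site's actual system) of the kind of error-prevention rule that would have caught the problem: before confirming a purchase, verify that every piece of information needed for later steps in the journey, such as printing the ticket, has actually been captured.

```python
# Hypothetical sketch: block goal completion until every field required
# by a downstream step (here, printing the ticket) has been captured.

REQUIRED_FOR_PRINTING = {"booking_reference", "surname"}

def missing_fields(booking: dict) -> set:
    """Return the fields still needed before the goal can be completed."""
    return {f for f in REQUIRED_FOR_PRINTING if not booking.get(f)}

def confirm_purchase(booking: dict) -> str:
    gaps = missing_fields(booking)
    if gaps:
        # Error prevention: refuse to complete and say what is expected.
        return "Please provide: " + ", ".join(sorted(gaps))
    return "Purchase confirmed. Your tickets are ready to print."

# The failure case from the article: the surname was never captured.
print(confirm_purchase({"booking_reference": "ABC123"}))
# A complete booking passes the check.
print(confirm_purchase({"booking_reference": "ABC123", "surname": "Smith"}))
```

The design point is that the validation belongs at purchase time, not at printing time: by the time the customer hits the "booking not found" message, the chance to collect the missing information has already gone.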
Usability audits will help you sift out some of the more obvious issues. But to fully optimise any site you need usability tests to observe how people interact with it in realistic situations.
How to conduct usability tests
Whether you want to improve an existing site or you have a prototype that you want to investigate, unfocused testing is pointless. First document the following:
- Typical users and their goals.
- Your conversion goals and critical milestones.
- Significant drop-off points recorded by Google Analytics.
- Potential concerns identified by your usability audit.
You can then design realistic and meaningful tasks for your test.
Lab or remote?
Tests can be moderated and conducted in laboratory conditions, or unmoderated and remote. Unmoderated tests are significantly cheaper, and participants use their own machines in their natural environment, which is arguably more realistic.
Moderated tests allow more interaction with participants. You can question why they carried out certain actions or believed that they were the appropriate ones. You can also follow up the tests with open questions to explore ways that your site could be improved. And you can observe facial expressions and body language during the test. You can even use eye-tracking technology to understand exactly where attention is focused.
Unmoderated tests are the most financially viable if you want large numbers of participants or regular tests. They still reveal plenty of valuable insights but don’t offer the full scope of moderated tests.
Services like WhatUsersDo add an extra dimension to unmoderated tests by recording a commentary from users of what they are thinking as they move through each task. The resulting video combines the commentary with their mouse movements.
Participants have to be representative of your target customers and user groups. You might need to design different tasks for different groups to reflect the goals most relevant to each.
For moderated sessions, 60-90 minutes is generally as much as people can manage without losing focus. Unmoderated sessions shouldn't last more than 30 minutes.
How many tasks?
Normally we recommend giving people 5 to 10 specific tasks to accomplish. Having gone to the trouble and expense of putting the test together, you want insights into all of the critical functions people might need to perform.
What type of tasks?
The critical word here is ‘specific’. On an eCommerce site, tasks might include selecting a specific product (including size/colour), creating an account, completing a purchase, adding or removing an item from the shopping basket, or recovering a password.
The test should specify the outcome as well as the starting point (e.g. home page, product category). Design the tasks around the critical elements of the user journeys.
Setting the task
In moderated tests it’s all too easy for the person conducting the test to unwittingly influence or help the participants in their instructions. Use a script that contains only the essential information and stick to it.
Read one task at a time and get participants to complete the task without any help or guidance before giving them the next one. Get somebody independent to conduct the test rather than anyone who has a stake in the outcome.
Mix up the task order
People get more familiar with the website with each task they perform. Setting tasks in the same order for all participants could give the false indication that later tasks are easier than earlier ones. People may simply find them easier because they’ve had more experience of using the site. Randomise or rotate the task order for each participant.
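Randomising the order is trivial to automate. A minimal sketch (the task list and participant IDs are placeholders): seeding the shuffle with the participant's ID keeps each person's order stable across sessions while still varying it between participants.

```python
import random

# Placeholder task list; substitute your own specific tasks.
TASKS = [
    "Find product X in size M",
    "Create an account",
    "Add an item to the basket and remove it",
    "Complete a purchase",
    "Recover a forgotten password",
]

def task_order(participant_id: str, tasks=TASKS) -> list:
    """Return a shuffled copy of the task list for one participant.

    Seeding with the participant ID makes the order reproducible:
    the same participant always gets the same sequence.
    """
    rng = random.Random(participant_id)
    order = list(tasks)
    rng.shuffle(order)
    return order
```

For small participant pools you might prefer a rotated (Latin square) ordering so each task appears in each position equally often, but a seeded shuffle is usually good enough.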
Gather feedback immediately
If people fail to complete tasks or it takes them a long time, find out what specific difficulties they had while it is still fresh in their minds. Moderated tests offer much more scope for this.
If you are using moderated tests you can observe people while they are working. Facial expressions and body language will tell you when people are stressed or confused. Outside of the test environment these are strong indicators of when somebody is likely to abandon their purchase or goal.
Tools such as Morae enable you to simultaneously monitor on-screen actions and facial expressions. If you use unmoderated tests, the commentary provided with WhatUsersDo offers some insights into the mental state of participants.
What to do with the information
Once you have the results of a usability audit, usability tests and data collected from Google Analytics, you should have most of the insights you need to direct purposeful site developments.
You will have identified specific areas where your site creates confusion or difficulties for your customers. You’ll also have some very strong indicators for improvements that will boost conversions.
Tempting as it will be, what you shouldn’t do at this point is roll out sweeping changes to your website or eCommerce store. Use the information gathered to develop hypotheses for A/B split tests and multivariate tests.
Test any changes so that you are certain they will have a positive impact on conversions before you roll them out for all users. If possible, test each proposed change individually so that you improve your detailed understanding of what makes a difference to customer behaviour and goal conversion.
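"Certain" here means statistically confident, not just a higher number. As a rough sketch of what that involves (the figures below are invented for illustration), a two-proportion z-test compares conversion rates between the original page (A) and the variant (B) and tells you how likely a difference that size is to occur by chance:

```python
from math import sqrt, erf

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Two-proportion z-test for an A/B split test.

    conv_a/conv_b: number of conversions in each variant.
    n_a/n_b: number of visitors in each variant.
    Returns (z, two_sided_p_value).
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference).
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf: Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented example: 100/2000 conversions (5%) vs 130/2000 (6.5%).
z, p = conversion_z_test(100, 2000, 130, 2000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your chosen threshold (conventionally 0.05) suggests the variant's lift is unlikely to be noise; most split-testing tools run an equivalent calculation for you.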
Never assume that simply changing something people didn’t like will automatically increase conversion rates.