How to Devise and Focus Your A/B Tests to Improve Your Conversion Rates

By Aran Reeks
30 Jul 2014

As you may know, I'm a big fan of A/B split tests. There's nothing like them for taking the guesswork out of the process of improving eCommerce conversions.  But how should you decide what to test? A poorly targeted test is a waste of time and money. You need evidence rather than intuition to design a profitable testing hypothesis. 

In short, useful tests always have a focus. And be prepared for them to reveal the unexpected. Take this example from the Online Power Tools eCommerce site that we tested recently.

[Screenshot: the Online Power Tools category page]

Originally, at the top of all the category pages, there was an upsell region where the best-selling products within the selected category were displayed to help customers. This is a normal feature on eCommerce sites and one that works pretty effectively - most of the time.

In this case A/B split testing revealed it was having the opposite effect.  It turned out we were pushing attention right to the top of the page in the main column. Consequently, visitors who didn't immediately find what they were after there dropped out.

The category pages that used these upsells were converting at 35.8%; once we removed the upsell feature, this increased to 40.5%. The conversion goal here was for a customer to find a product of interest and click through to its product details page.
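
Whenever you see a lift like this, it's worth checking that the difference couldn't plausibly be down to chance. The visitor counts below are hypothetical (the real totals aren't given here), so treat this as a minimal sketch of a two-proportion z-test rather than the analysis behind the result:

```typescript
// Two-proportion z-test to sanity-check an A/B result.
// Visitor counts are hypothetical; plug in your own totals.

// Standard normal CDF via the Abramowitz & Stegun approximation.
function normalCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const q =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - q : q;
}

interface Variant { visitors: number; conversions: number; }

// Two-sided p-value for H0: both variants convert at the same rate.
function twoProportionZTest(a: Variant, b: Variant): number {
  const pooled = (a.conversions + b.conversions) / (a.visitors + b.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / a.visitors + 1 / b.visitors));
  const z = (b.conversions / b.visitors - a.conversions / a.visitors) / se;
  return 2 * (1 - normalCdf(Math.abs(z)));
}

// 35.8% vs 40.5%, assuming 5,000 visitors in each variant.
const control = { visitors: 5000, conversions: Math.round(5000 * 0.358) };
const variant = { visitors: 5000, conversions: Math.round(5000 * 0.405) };
console.log(twoProportionZTest(control, variant)); // far below 0.05 at this volume
```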

Even when you think you're doing something positive to improve user experience (such as helping people find the most popular items related to what they're looking for), it can backfire if not implemented correctly. 

Focusing your test

You sometimes encounter strange views about split testing - including one expert in the field who claims that ‘the objective of a test is not to get a lift, but to get a learning.’ Sorry, I don't really get that. Sure, learning is definitely positive, but the two objectives should be aligned, as you're hopefully testing to increase your bottom line.

Certainly you will get tests that don't give you the result you expected or hoped for (otherwise, what would be the point of testing?). But I'm not sure that telling a client you didn't manage to increase conversion rates but learnt a lot in the process is ever going to be good business. We certainly don't take that approach with our clients' websites, don't worry!

So how should you identify where to focus attention for this type of experiment, and decide what, specifically, you are going to test? You need a realistic hypothesis that is worth the time, effort and cost of setting up an A/B split test.

Coming up with an intelligent and rational hypothesis behind the split test is always the first step. There has to be a better reason for setting up a test than ‘we thought it might make a difference.’ Conversion rate optimisation (CRO) is a scientific process and there's not much scope for following hunches - that's why we talk about hypotheses and experiments.

Insights that lead to the design of fruitful A/B split tests come from two main sources: data that help you understand more about your visitors, and observations that reveal how people interact with your site and your content.

Data - goals and events

Google Analytics is a natural starting place. If you've done a thorough job of setting up goals, then the process of designing tests becomes a lot easier.  In the case of Online Power Tools we found that a higher percentage of visitors than we would have expected were navigating to product pages and leaving the site without making a selection.

Knowing exactly where people drop out of the process gives a strong indication of where to focus. Perhaps they don't understand what to do next, don't find a piece of information they need, have their attention directed to the wrong place, or just find the whole process too confusing.
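
One way to turn that knowledge into a shortlist is to pull page-level data out of Analytics (page path, pageviews, exits) and flag the pages where an unusual share of visitors leave. A minimal sketch, with hypothetical rows and a hypothetical 60% exit-rate threshold:

```typescript
// Flag pages with unusually high exit rates from exported analytics data.
// The rows and the 0.6 threshold are hypothetical; tune them to your site.
interface PageStats { path: string; pageviews: number; exits: number; }

function highExitPages(rows: PageStats[], threshold = 0.6): PageStats[] {
  return rows
    .filter(r => r.pageviews > 100)                 // ignore low-traffic noise
    .filter(r => r.exits / r.pageviews > threshold)
    .sort((a, b) => b.exits / b.pageviews - a.exits / a.pageviews);
}

const suspects = highExitPages([
  { path: '/drills/cordless-drill-x', pageviews: 1800, exits: 1240 },
  { path: '/saws/circular-saw-pro', pageviews: 950, exits: 310 },
]);
console.log(suspects.map(r => r.path)); // pages worth a closer look
```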

Segmenting traffic

It can also help to segment data between new and returning visitors; separating the hardy souls who have cracked your complex site navigation and returned to buy something else, from the new and bewildered.

Statistics such as time on page and bounce rate will tell you a lot. People bouncing off a page quickly might indicate that the choices offered were misleading or unclear, or that they ended up on a page that didn't seem to be what they wanted.

Tracking specific events on your eCommerce site will also give you useful data that you can use to design tests. Here are a few examples of events you could track; they often throw up issues you'd never considered (see the tracking sketch after this list):

  • Add to Cart
  • Add to Wishlist
  • Calls to action clicks
  • Unsuccessful add to cart or wishlist
    Are customers often trying to purchase out-of-stock items?
    Should stock levels be made clearer?
    Should out-of-stock products even be shown, or indexed by search engines?
    Should you allow customers to sign up to receive a notification when the item comes back in stock?
  • Form submission errors
    Invalid email addresses, badly formatted phone numbers and unexpected required fields are all things we commonly see, and they can be very quick fixes.
  • Failed payments
    Does one payment method result in an above normal declined payment rate?
    Are you paying monthly for a payment gateway that doesn’t convert/isn’t used?
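
How you record these depends on your analytics setup; as one illustration, here's a minimal sketch using Universal Analytics' analytics.js event call, assuming the standard ga() tracking snippet is already on the page (the category, action and label names are our own invention):

```typescript
// Sketch: fire events for the failure cases above via analytics.js.
// Assumes the standard Google Analytics snippet has defined ga() globally.
declare function ga(...args: any[]): void;

// Fired when an add-to-cart fails because the item is out of stock.
function trackFailedAddToCart(sku: string): void {
  ga('send', 'event', 'Cart', 'add-failed-out-of-stock', sku);
}

// Fired when validation rejects a form submission.
function trackFormError(formName: string, fieldName: string): void {
  ga('send', 'event', 'Form Errors', formName, fieldName);
}

// Fired when the payment gateway declines a payment.
function trackFailedPayment(gateway: string): void {
  ga('send', 'event', 'Checkout', 'payment-declined', gateway);
}
```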

You can also track general progress through your site using events to reveal the stages where most people drop out. Relevant events here would include (see the drop-off sketch after this list):

  • Login/Logout
  • Registration
  • Checkout (step by step)
  • Changes to item quantities within the cart
  • eCommerce order placing
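
Once those events are in place, a few lines of code can show you where the funnel leaks. A minimal sketch, with hypothetical step names and counts:

```typescript
// Report step-to-step drop-off from event counts pulled out of your
// analytics reports. All names and numbers below are hypothetical.
interface FunnelStep { name: string; users: number; }

function reportDropOff(steps: FunnelStep[]): void {
  for (let i = 1; i < steps.length; i++) {
    const lost = steps[i - 1].users - steps[i].users;
    const rate = ((100 * lost) / steps[i - 1].users).toFixed(1);
    console.log(`${steps[i - 1].name} -> ${steps[i].name}: lost ${lost} users (${rate}%)`);
  }
}

reportDropOff([
  { name: 'View cart', users: 1200 },
  { name: 'Login/registration', users: 810 },
  { name: 'Delivery details', users: 640 },
  { name: 'Payment', users: 590 },
  { name: 'Order placed', users: 430 },
]);
```
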
Once you've worked out where to test, you then need to consider what to test.

If you'd like to learn how easy it is to set up tracking within Google Analytics for events like these, you can read more about setting up event tracking in Google Analytics here.

Observation

Sometimes you can look at a problem page or process and identify obvious candidates for testing. But the ways that people interact with your site can be subtle and unpredictable. Often there's no substitute for direct observation. There are three main ways to do this: usability testing, heatmaps/eye tracking and session recording.

Usability Tests

Usability tests put people in a controlled environment so you can get direct feedback on how they use your site and what they find difficult or confusing. They work best when people are set realistic tasks to perform on your site, described in language that doesn't lead them towards the pathway you expect them to take.

They have the advantage of letting you observe how real people navigate and carry out tasks on your site. You can also question people directly and potentially get a deeper level of insight into how they use your site and the difficulties they encounter. 

On the flip side, getting a representative cross-section of users can be difficult. Testers can also be more patient and determined to complete a task than real users who might be browsing more casually and giving it less than their full attention.

The trick is to design your tests so you get highly specific feedback, without participants trying to predict the answers you are looking for and simply giving you those. With specific feedback you can design some meaningful split tests.

Heatmaps

Heatmaps and predictive eye tracking reveal a lot about where visitors' attention is focused. Ideally you want this to be on your call to action or navigation, but sometimes you find they're concentrating elsewhere.

If, for example, you find that people are being distracted or just completely missing your Calls to Action (CTAs), you have the opportunity to design some useful tests. You can make your CTA more prominent, remove the distractions or move elements around so that attention focuses more naturally on the CTA. You can experiment with colours, contrast, type size and style, and positioning.

If eye tracking reveals that people are seeing your CTA but not clicking on it, then the CTA wording is probably the thing to focus your A/B split test on. If the CTA is a ‘Buy Now’ button, it might also indicate that they are not seeing important trust-building elements elsewhere on your page.
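
When you run that kind of test, each visitor needs to be assigned to a variant consistently across visits. A minimal sketch of deterministic bucketing (the test name, visitor ID and button wording are all hypothetical):

```typescript
// Deterministically bucket a visitor into a split test so they see the
// same variant on every visit. All names below are hypothetical.
function hashCode(s: string): number {
  let h = 0;
  for (let i = 0; i < s.length; i++) {
    h = (h * 31 + s.charCodeAt(i)) | 0; // keep within 32-bit range
  }
  return Math.abs(h);
}

function assignVariant(visitorId: string, testName: string): 'control' | 'variant' {
  return hashCode(`${testName}:${visitorId}`) % 2 === 0 ? 'control' : 'variant';
}

// e.g. test alternative CTA wording for the variant group only.
const bucket = assignVariant('visitor-123', 'cta-wording-test');
const ctaText = bucket === 'variant' ? 'Add to Basket' : 'Buy Now';
```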

Session Recording

When you're really struggling to understand how people use your site and where they encounter difficulties, there's nothing quite like session recording. This allows you to replay every page view and click for individual users.

If you identify a common step where people are giving up you have a very clear focus for an A/B split test. Examine the page and try to identify some logical reasons why you think people are giving up - and then set up tests to prove or disprove each hypothesis.

As an eCommerce web agency you get to understand an important fact: user experience best practice will only get you part of the way to maximising conversion rates. Each business has different customers, and the ways they interact with eCommerce sites are never 100% predictable. To maximise conversions and revenues you have to be relentless in identifying potential issues and testing possible solutions.