Conversion Rate Optimisation decoded - the terms you need to understand

By Aran Reeks
2 Dec 2015

Before we start on individual terms, here’s my definition of conversion rate optimisation: a step-by-step, data-driven cycle of observation, reasoning and testing, aimed at increasing sales or other precisely defined goals on your website.

Hopefully once you’ve read through the rest of the terms you’ll see what I mean. Whether your KPIs are increased sales volumes, cost per acquisition, leads, or average order value, you need a scientific approach if you want to invest wisely and achieve a meaningful return. 

Optimisation 

When something is fully optimised it performs a function as well or as efficiently as possible. In theory, an optimised eCommerce store or website will deliver the maximum possible conversion rate. The job is finished - your site is perfectly in tune with your customers at that point in time, and no further improvements are possible.

In reality this is never the case. ‘Improvement’ would be a more straightforward and accurate use of language - but optimisation sounds more technical and is the term that has stuck.

Google Analytics

Most websites these days have Google Analytics tracking code embedded within their pages, collecting data about every visitor to your site.

Useful as this is, it doesn’t go far enough. To make the data really useful, you need to start implementing Goal and Event tracking. This is where specific data is sent back to Google Analytics when people perform specific actions (such as clicking a button, completing a form or adding an item to a cart). This enhanced data helps to form a fuller picture of the user’s experience and will deliver richer, more valuable information.
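
As a rough illustration (not the only way to do it), here is how an event might be pushed with the classic analytics.js library whenever a visitor clicks a key button. The `.add-to-basket` selector and the category, action and label values are placeholders you would replace with your own reporting plan.

```typescript
// A minimal sketch, assuming the standard analytics.js snippet is
// already on the page and exposes the global `ga` command queue.
declare const ga: (...args: unknown[]) => void;

// Fire an event when a visitor clicks the 'add to basket' button.
// The selector, category, action and label are illustrative names.
const addToBasketButton = document.querySelector('.add-to-basket');

addToBasketButton?.addEventListener('click', () => {
  ga('send', 'event', 'Ecommerce', 'add-to-basket', 'product-listing-page');
});
```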

Advanced eCommerce Analytics takes things further, allowing you more granular tracking down to the level of capturing individual product impressions as well as the use of vouchers and promotions.
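
To sketch how granular this can get, the analytics.js Enhanced Ecommerce (ec) plugin can record an impression each time a product appears in a listing. The product details below are illustrative and your own setup may differ.

```typescript
// A sketch using the analytics.js Enhanced Ecommerce plugin; the
// product details are placeholders.
declare const ga: (...args: unknown[]) => void;

ga('require', 'ec');                 // load the Enhanced Ecommerce plugin
ga('ec:addImpression', {
  id: 'SKU-1234',                    // product ID or SKU
  name: 'Example Product',           // product name
  list: 'Search Results',            // which listing it appeared in
  position: 1,                       // its position in that listing
});
ga('send', 'pageview');              // impressions are sent with the next hit
```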

Enhanced Analytics delivers data that helps pin-point potential issues and lost conversions. You can then build hypotheses about improvements to your site and test them to see if they work.

Read more about enhanced eCommerce analytics.

Bounce Rate

This is one of the most basic measurements in Google Analytics. It gives you the percentage of visitors who landed on a particular page and then left your site before visiting any other page.

Usually a high bounce rate is undesirable. It indicates that people didn’t find what they were looking for, or possibly didn’t understand what to do next.

A raw bounce rate score isn’t always that helpful. If it’s a blog page, a ‘bounce’ might be normal behaviour; what you really want to know is how far through your content visitors get. A visitor could subscribe to the blog or complete an online form and still register as a bounce.

A page on an eCommerce site with a high bounce rate might tell you where some customers lose interest or hit an obstruction. But it won’t tell you how or why. For a more useful picture of where the problem might be, you need to set up Events in Google Analytics so that detailed on-page actions are captured.
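
One approach, sketched below with an arbitrary 75% threshold, is to fire an event once a reader has scrolled most of the way through the page. Because interaction events count towards engagement in Google Analytics, these sessions will no longer register as bounces, and you learn how far people actually get.

```typescript
// A sketch: report a 'content-read' event once the visitor has
// scrolled 75% of the way down the page (the threshold is arbitrary).
declare const ga: (...args: unknown[]) => void;

let contentReadSent = false;

window.addEventListener('scroll', () => {
  const scrolled = window.scrollY + window.innerHeight;
  const threshold = document.documentElement.scrollHeight * 0.75;

  if (!contentReadSent && scrolled >= threshold) {
    contentReadSent = true;
    ga('send', 'event', 'Engagement', 'content-read', window.location.pathname);
  }
});
```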

You also need to segment your traffic: do bounce rates vary by device, which could point to device-specific issues, and do paid visitors complete the same journeys as organic traffic?

A/B Split testing

This is a foundation of Conversion Rate Optimisation (CRO). Essentially you create two versions of a page, divide the traffic between them and see which one performs best. You can then base site developments on observation and data rather than guesswork.
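
Tools like Optimizely handle the traffic split for you, but the underlying idea is simple: each visitor is assigned to one version and keeps seeing it for the duration of the test. A minimal sketch of deterministic bucketing follows; it is illustrative only, not any particular tool’s implementation.

```typescript
// Illustrative only: deterministically assign a visitor to version
// 'A' or 'B' from a stable visitor ID, so they always see the same page.
function assignVariant(visitorId: string): 'A' | 'B' {
  let hash = 0;
  for (const char of visitorId) {
    hash = (hash * 31 + char.charCodeAt(0)) >>> 0;  // simple string hash
  }
  return hash % 2 === 0 ? 'A' : 'B';
}

// Roughly half of visitors land in each bucket, and a given visitor
// always gets the same answer.
console.log(assignVariant('visitor-8f3a2c'));
```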

Naturally, if you want to be informed rather than misled, there’s much more to this than just investing in a tool like Optimizely (excellent as it is) and testing whether a different button colour improves your conversion rate or not.

Things you need to understand and plan include: a sound hypothesis for your test, a sampling plan, statistical significance and confidence levels. We cover these below.

Multivariate testing

With multivariate testing you can test combinations of changes to a page simultaneously, for example adjusting the colour, size, positioning and/or wording of a Call to Action (CTA). If you plan this carefully with well-reasoned hypotheses you can eliminate multiple A/B split tests. This is helpful if it would take a long time to gather sufficient observations for each test.

You can also test multiple versions of the page simultaneously. Just be careful that you don’t end up with tiny volumes of traffic to each version.

The big risk with poorly planned multivariate tests is that factors that could improve conversion rates get missed by being combined with ones that have negative or neutral effects.

Hypotheses

Hypotheses for A/B split tests and multivariate tests should always be data driven. Goal and event tracking will pinpoint where people are potentially having difficulty. You can get further insights from session recording, heatmaps and usability testing, all of which are discussed later.

The data might indicate a possible location of confusion or difficulty. You then need careful analysis to identify likely causes and potential improvements. Like any other hypothesis you need a scientific experiment to tell you whether your assumptions or deductions were accurate.

Sampling

If you want results you can rely on, you need to understand the principles of sampling. How many observations do you need for your test results to be reliable? This partly depends on the size of the effect you are observing. If there is a large difference then you might not need as many observations. 

A conversion rate that moves from 2.0% to 2.1% could be highly significant financially (it’s a 5% relative increase in sales transactions). But statistically the difference is small: just one extra conversion per thousand visitors. So you may have to run a test for several days or weeks to generate the data you need, depending on your volume of traffic.
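
To make that concrete, here is a rough sample size estimate using the standard normal-approximation formula for comparing two proportions, assuming 95% confidence and 80% power. It is a ballpark sketch, not a substitute for your testing tool’s own calculator.

```typescript
// Approximate visitors needed per variant to detect a change from
// conversion rate p1 to p2 (95% confidence, 80% power).
function sampleSizePerVariant(p1: number, p2: number): number {
  const zAlpha = 1.96;                  // 95% confidence, two-sided
  const zBeta = 0.84;                   // 80% power
  const pBar = (p1 + p2) / 2;
  const effect = Math.abs(p2 - p1);
  return Math.ceil((2 * (zAlpha + zBeta) ** 2 * pBar * (1 - pBar)) / effect ** 2);
}

// Detecting a move from 2.0% to 2.1% needs just over 300,000 visitors
// per variant - which is why small uplifts take weeks to confirm.
console.log(sampleSizePerVariant(0.020, 0.021));
```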

The day of the week or time of the day can also affect how people behave. So you need to set up tests that observe how results vary with time, rather than look at one overall number recorded during normal business hours.

Confidence levels

If you use Optimizely to run split tests it will display a conversion rate for each version being tested. It will also show the confidence level as a +/- figure. If a page is converting at 2% (+/- 0.5%) the real value, if you ran the test for long enough, could turn out to be anywhere between 1.5% and 2.5%. 

Understanding confidence levels is crucial as they reflect the statistical significance of the sample. 

The longer you run your test the narrower the confidence level range becomes, simply because you have more data. Using a Forest plot you might have two pages where the conversion rates and confidence levels look like this:

A winning interval

A losing interval

Ideally you want a gap between the lower range of the better page and the upper range of the poorer performing page. You can then safely stop the test and stop diverting traffic to the page that doesn’t work as well.
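
As a sketch of the arithmetic behind those intervals, the 95% confidence interval for a conversion rate is approximately p ± 1.96 × √(p(1−p)/n). With an interval for each version you can check whether the ranges still overlap; the figures below are illustrative.

```typescript
// 95% confidence interval for a conversion rate, using the normal
// approximation p ± 1.96 * sqrt(p * (1 - p) / n).
function confidenceInterval(conversions: number, visitors: number) {
  const p = conversions / visitors;
  const margin = 1.96 * Math.sqrt((p * (1 - p)) / visitors);
  return { rate: p, lower: p - margin, upper: p + margin };
}

// Illustrative figures: the variation wins only if its lower bound
// clears the original's upper bound.
const original = confidenceInterval(400, 20000);   // 2.0% conversion
const variation = confidenceInterval(520, 20000);  // 2.6% conversion
console.log(variation.lower > original.upper ? 'Clear winner' : 'Keep the test running');
```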

Statistical significance

This is the likelihood that a result is caused by something other than mere chance, and is the reason why tests need to run for days or weeks to gather enough data. Statistical hypothesis testing is traditionally employed to determine whether a result is statistically significant. A 95% confidence level is the widely accepted threshold.
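
In practice, significance for a two-version split test is often checked with a two-proportion z-test: if the test statistic exceeds roughly 1.96, the difference is significant at the 95% level. A sketch with made-up figures:

```typescript
// Two-proportion z-test: is the gap between two conversion rates
// bigger than chance alone would plausibly produce?
function isSignificantAt95(
  conversionsA: number, visitorsA: number,
  conversionsB: number, visitorsB: number,
): boolean {
  const pA = conversionsA / visitorsA;
  const pB = conversionsB / visitorsB;
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB));
  const z = Math.abs(pA - pB) / standardError;
  return z > 1.96;  // 95% confidence, two-sided
}

// Illustrative figures only.
console.log(isSignificantAt95(400, 20000, 520, 20000));  // true
```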

Usability studies

Asking people how easy it is to use your site will probably tell you nothing useful. They are just as likely to tell you what they think you want to hear. And they probably won’t want to look stupid by saying they found it difficult.

But if you give people a task to perform (such as finding and ordering a specific item), and then observe what they do, you can definitely get usable insights. You can use these to design smarter test hypotheses.

Session recording

Another example of observing behaviour to reveal what’s really happening on your site. Session recording logs every mouse movement and click so you can see when users abandon their journey. You can also see whether the route they take matches the user journey you imagined and designed.

Heatmaps

Heatmaps can be generated by eye-tracking (which needs sophisticated and expensive technology) or by tracking the mouse cursor. Cursor movements have been shown to have around an 80% correlation with eye movements. You end up with a good indication of where people’s attention is focused.
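
Most cursor-based heatmap tools work by sampling mouse positions and sending them off for aggregation. Here is a very rough sketch of the collection side; the sampling rate, the `/heatmap-collect` endpoint and the batching interval are placeholders rather than any particular tool’s API.

```typescript
// Sample the cursor position a few times per second and batch the
// points for later aggregation into a heatmap.
const points: Array<{ x: number; y: number; t: number }> = [];
let lastSample = 0;

document.addEventListener('mousemove', (event) => {
  const now = Date.now();
  if (now - lastSample < 200) return;           // at most 5 samples per second
  lastSample = now;
  points.push({ x: event.pageX, y: event.pageY, t: now });
});

// Periodically send the batch to a collection endpoint (placeholder URL).
setInterval(() => {
  if (points.length === 0) return;
  navigator.sendBeacon('/heatmap-collect', JSON.stringify(points.splice(0)));
}, 5000);
```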

Among other things, a heatmap can reveal when people simply don’t notice your CTAs, possibly because they were distracted by more dominant visual elements. You can also see where they might be confused by having too many options. Again, all useful input for designing test hypotheses and scenarios.

Shopping Cart Abandonment

All of your insights into site visitor behaviour become particularly valuable when you look at shopping cart abandonment. Baymard Institute estimates the average cart abandonment rate at 68.53%, based on 31 different studies. So over two-thirds of the people who start a cart don’t go on to complete a purchase.

Yes, some will just be using a cart to compare features and prices. But even these are people who were interested in a product yet didn’t take the final step. Often people abandon carts because the process is too long or confusing, or because they don’t trust the site enough to submit their card and personal information.

Using enhanced analytics will tell you exactly where people leave the process so you can experiment with improvements. 
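
With step-by-step counts from enhanced analytics you can see at a glance where the biggest drop-off happens. A small sketch with illustrative figures:

```typescript
// Step-by-step checkout counts (illustrative figures) and the
// percentage of visitors lost at each step.
const funnel = [
  { step: 'Viewed basket', visitors: 10000 },
  { step: 'Started checkout', visitors: 6200 },
  { step: 'Entered payment details', visitors: 3900 },
  { step: 'Completed purchase', visitors: 3150 },
];

funnel.slice(1).forEach((stage, i) => {
  const previous = funnel[i].visitors;
  const dropOff = ((previous - stage.visitors) / previous) * 100;
  console.log(`${stage.step}: ${dropOff.toFixed(1)}% dropped off`);
});
```

In this illustration only 3,150 of the 10,000 visitors who viewed their basket go on to buy, an abandonment rate of 68.5%, close to the average quoted above.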

Generating cart abandonment data also opens the scope for using abandoned cart emails. These can be very effective in recovering lost sales.

User Experience

This is the sum total of how easy, enjoyable and satisfying people find it to use your site. The user experience must make it easy for customers to achieve their goals; it must guide them to their destination efficiently, and answer all of their questions and potential objections along the way.

You can’t hope to deliver the user experience your customers need unless you have mapped out their customer journey very carefully.

Every time your user experience lapses from being exceptional into being mediocre, your conversion rates are not optimised.

Customer Experience

This is broader than user experience. It encompasses every interaction, online and offline, you have with your customers. This is a whole article in itself, but an outstanding customer experience, including aftersales support, helps ensure that people arrive on your site in a receptive frame of mind - more ready to be converted into a sale.

Processes such as Net Promoter Score can help you get a better understanding of the quality of your customer experience and identify how it needs to be improved. Some of what you discover will relate directly to the user experience and your conversion rates.
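
The NPS calculation itself is straightforward: the percentage of promoters (scores of 9 or 10) minus the percentage of detractors (scores of 0 to 6). A small sketch with illustrative responses:

```typescript
// Net Promoter Score: % promoters (scores 9-10) minus % detractors
// (scores 0-6), from a list of 0-10 survey responses.
function netPromoterScore(scores: number[]): number {
  const promoters = scores.filter((s) => s >= 9).length;
  const detractors = scores.filter((s) => s <= 6).length;
  return Math.round(((promoters - detractors) / scores.length) * 100);
}

// Illustrative responses only.
console.log(netPromoterScore([10, 9, 8, 7, 9, 6, 10, 4, 8, 9]));  // 30
```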

Summary

Hopefully by exploring these terms you’ve seen why a methodical and scientific approach is needed if you want to maximise conversion rates. An audit is often a good way to identify where some of your main issues might be before you dive into the detail.

If you are interested in finding out more, talk to our optimisation experts today about gathering the information you need to make improvements to your site.