
Incorporating A/B testing into your conversion optimization strategy

Preface
A/B testing was formerly the preserve of organizations with substantial
technical resources, but the arrival on the market of new tools simplifying the
implementation of these tests is democratizing the practice. Such tools permit
anyone at all to create several versions of their web pages and measure the
effectiveness of each one against their objectives, one of the most important of
which is the conversion rate.
Though these tools greatly simplify the process of implementing these tests,
this apparent simplicity should not lead one to forget that obtaining significant
results from A/B testing depends primarily on establishing an appropriate
testing methodology.
Numerous organizations have embarked upon A/B testing in the hope of seeing
their conversion rates increase dramatically, only to end up achieving limited
results. What they very often have in common is that they have rushed into it
without taking time to carefully consider the relevance of their tests and the
contribution of the elements tested to the conversion process.
The purpose of this white paper, aimed primarily at marketing and e-business
teams, is to act as an aid to mastering A/B testing methodologies. Within it,
teams will find practical advice about integrating A/B testing into their conversion
optimization strategy. What can we expect from it? How can we benefit from
it? What are the pitfalls to avoid, and what is best practice for achieving good
results?
Rémi Aubert

Co-founder of AB Tasty

The advice presented here is the fruit of over three years' experience as an A/B
testing software solution provider whose clients have carried out thousands of tests.

Contents

1. Conversion optimization: the new Holy Grail of e-commerce
2. A/B testing: a practice rapidly gaining in popularity
3. The place of A/B testing in conversion optimization
4. Implementing an A/B testing methodology
5. Effective A/B testing: tips and tricks
6. A/B testing in practice: which elements to test?
7. Beyond A/B testing: how can conversion rates be continuously improved?
8. Conclusion

Glossary
Notes

1. Conversion optimization: the new Holy Grail of e-commerce
Conversion optimization can play a major role in increasing a business's
profitability, yet it remains little used. Though the average conversion rate for
e-businesses is somewhere between one and three percent [1], Forrester Research
estimates that for each $100 spent on traffic acquisition only $1 is dedicated
to conversion optimization [2]. Many e-businesses are therefore focusing on
acquiring traffic yet failing to convert that traffic in more than 97% of cases.
Investing $1 more to increase conversion rates, even by just a few percentage
points, can prove very profitable and can improve the return on investment
provided by traffic acquisition channels. At a time when acquisition costs are
on the increase and the quest for new sources of traffic is becoming more
complex, why not begin by maximizing the potential of your existing traffic?
What conversion optimization promises is essentially simple: generate more
revenue from a consistent level of traffic.
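To put deliberately round, hypothetical numbers on that promise: a site receiving
100,000 visits a month, converting at 2% with a $50 average order, generates
$100,000 a month. Raising the conversion rate to just 2.3% yields $115,000,
a 15% revenue increase with no additional acquisition spend.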
Though the concept is simple to describe, it has to be recognized that for
many businesses the difficulty lies in investing that additional dollar. Conversion
optimization is a practice that businesses find difficult to grasp, because the
conversion process is itself a complex mechanism. It brings various factors into
play, such as:

- the quality of the traffic generated,
- the website's ergonomics,
- the quality of the product or service
  (e.g. the information provided by the product information sheets),
- complementary services (e.g. free returns, payment methods accepted),
- advantages offered (e.g. competitive prices and delivery charges),
- the reputation/credibility of the online business,
- technical performance (e.g. page load times),
- competitors' actions.

All these elements play a part in determining an e-business's ability to convince
the web visitor to purchase from its website rather than from a competitor's.
They are all elements that can create friction and bring about losses, which the
retailer will seek to minimize.
Numerous tools and methodologies are available to assist with this task. A/B
testing is one of these, and one enjoying increasing success, as demonstrated
by an Econsultancy study [3] which found that A/B testing has been the method
most used amongst conversion optimization experts for two years.

Current methods used for improving conversion
(Bar chart comparing the methods surveyed: A/B testing, cart abandonment
analysis, competitor benchmarking, segmentation, event-triggered behavioural
email, abandonment email, copy optimisation, multivariate testing, customer
journey analysis, expert usability reviews, usability testing, and online surveys/
customer feedback. For two years running, A/B testing has been the most used
method. Source: Econsultancy, 2012)

E-tailers are not the only ones with an interest in A/B testing. Media sites or
internet service providers can use this method to optimize their conversions,
whether that means filling out a form, registering for a newsletter or increasing
pageviews if their business model is based on advertising. All players in the web
industry can therefore benefit from A/B testing.

1. Barometer Google - Kantar Media Compete France (2013)
2. The State of Online Testing, Forrester Research (2011)
3. Conversion Rate Optimization Report, Econsultancy (2012)

2. A/B testing: a practice rapidly gaining in popularity
A/B testing, a practice that has long been used in direct marketing, involves
submitting several versions of a message, differentiated by a single criterion, to
a sample of consumers, then measuring which of the versions has achieved the
best results.
The development of digital marketing has brought new perspectives to the
practice by multiplying the range of tests and performance measurements
possible. When applied to a website, A/B testing effectively permits a practically
unlimited number of versions of a page to be tested and the performance of
each version to be measured using indicators such as visitor engagement or
buying behavior. Advances in technology have also led to the development of
dedicated tools that make implementing these tests and analyzing their results
easier. These tools also make it possible to create multivariate tests, in which
multiple elements within a page are modified simultaneously in order to identify
the best combination.

Illustration: version B (85 registrations) proves more effective than version A
(50 registrations)
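To illustrate how such tools can keep each visitor on the same variation across
visits (a behavior also described in the glossary), here is a minimal sketch of
deterministic traffic allocation; the hashing scheme is our own illustrative
assumption, not a description of any particular tool's internals.

```python
# Minimal sketch: deterministic, "sticky" assignment of visitors to variations.
# Hashing (test id, visitor id) means the same visitor always sees the same
# variation, while traffic is split roughly evenly across variations.
import hashlib

def assign_variation(test_id: str, visitor_id: str, variations: list) -> str:
    digest = hashlib.sha256(f"{test_id}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variations)
    return variations[bucket]

# The same visitor gets the same answer on every visit:
print(assign_variation("signup-page", "visitor-123", ["original", "B"]))
```

Because the assignment is a pure function of the identifiers, no server-side
state is needed to keep the experience consistent between visits.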

A/B testing offers numerous advantages
It is a rapid and inexpensive method of collecting information. Furthermore,
the data collected relates to a large number of individuals and presents little bias,
because the web visitors are unaware of the test's existence.
It is a scientific method that places the data at the heart of the decision making
process, relegating personal opinion and assumption to a position of secondary
importance and speeding up decision making.
A large amount of data is collected, permitting precise measurement of those
indicators most relevant to the decision making process. These indicators,
the familiar KPIs (Key Performance Indicators), are essentially specific to each
business.

Nevertheless, obstacles
to adopting A/B testing exist
The supposed complexity of implementing the tests. Fortunately, new tools
designed for use by marketing teams, such as AB Tasty, have appeared on the
market. Their purpose is to give users the independence to implement their own
tests, without requiring the intervention of technical teams.
The lack of expertise within businesses. Though certain tools make A/B
testing accessible to all, their simplicity must not conceal the fact that a strict
methodological approach must be adopted if a testing program is to be effective.
This white paper is intended to overcome these obstacles by providing users with
both a methodological framework and tips and advice to enable them to get the
most from their A/B testing tool.

3. The place of A/B testing in conversion optimization
A/B testing is a tool for use as part of a conversion optimization strategy, but
that strategy cannot be reduced to the use of a single tool. A/B testing permits
hypotheses to be statistically validated, but it does not on its own provide all the
keys to understanding web visitors' behavior. Yet it is precisely by understanding
this behavior that any impeding factors and conversion problems can be
identified.
Other methods and tools that can provide additional information about web
visitors and indicate the hypotheses to be tested must therefore be used to feed
into the A/B testing strategy. Though a good testing tool is necessary, that alone
will not always be sufficient where conversion difficulties are complex.

The place of A/B testing within a conversion optimization strategy:
diagnosis → possible solutions → testing

The key to success with an A/B testing strategy is therefore the formulation of
powerful hypotheses that can have a positive impact on conversion. Though
random testing, with no genuine justification provided for the hypotheses tested,
can be justified when learning to use a testing tool, the practice must rapidly be
replaced with a strategy based on solid foundations.
There are numerous sources of information available to help increase your
understanding of web visitors:
Web analytics data. Though this kind of data does not explain
web visitors' behavior, it does permit conversion problems
to be identified (e.g. shopping cart abandonment). This kind
of data also helps when prioritizing the pages to test.
Heuristic evaluation and ergonomic audit. These
methods of analysis are an inexpensive way of discovering
what the website experience is like for the user.
User testing. This kind of qualitative data, though limited by the
sample size, can prove to be a source of very rich information not
otherwise revealed through the use of quantitative methods.
Eye tracking and click tracking. These methods shed light on
the way in which web visitors interact with the elements within
an individual page, not just between the different pages.
Client feedback. Businesses already collect a large amount of
feedback from their clients (e.g. comments and reviews left on the
site; questions asked of customer services). The analysis of this
type of feedback can be supplemented by the use of tools such
as client surveys or live chats to collect additional information.
(Illustration: examples of tools used as part of a conversion optimization
strategy)

4. Implementing an A/B testing methodology
Equipping yourself with a rigorous methodological framework is the best
method of obtaining reliable results from a program of A/B testing. In this
chapter we detail the steps to take in implementing such a program.
The procedure is first outlined below then described in detail, one step at a time.

Defining the aims and objectives

A sentiment often expressed by businesses that have implemented a testing
program is that their tests do not produce results. This can often be explained by
the way they define success.
Some, particularly those new to A/B testing, consider a
test to be conclusive only once it has produced a significant
increase in the conversion rate. The monetary gain directly
associated with their test is the principal measure of success.
In practice, few tests produce results of this kind.
For others, ourselves amongst them, success can be measured
once a test produces a positive effect on visitor engagement levels,
even if the visitors in question are not immediately converted. The
success of a testing program is here considered to be made up of
a succession of gains, sometimes small in size. This does not mean
that it is not worth trying to achieve significant gains, but it must
be borne in mind that such gains will not always occur.
There are others, still, who consider a test to be conclusive
once it has permitted their site's aging design to be
modified without negatively impacting their KPIs.


Expectations with respect to A/B testing are therefore largely dependent on
the experience businesses have in the domain of conversion. The objectives
measured will also vary in accordance with these experience levels. Many tests
will fail to provide useful data if the only item measured is the global conversion
rate (the macro conversion). A single modification may have no impact on the
global conversion rate but still have a positive impact on micro conversions,
such as shopping cart additions or user account creations, which are themselves
steps towards a macro conversion. An increase in the average shopping cart
value is something else to take into account when evaluating the results of
a test.
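As a sketch of this idea, the snippet below counts a micro conversion (add to
cart) alongside the macro conversion (purchase) for each variation; the event
names and figures are illustrative, not a prescribed schema.

```python
# Minimal sketch: measuring micro conversions alongside the macro conversion.
from collections import Counter

visitors = {"original": 1000, "B": 1000}  # visitors exposed to each variation
events = [                                # illustrative event log
    {"variation": "original", "event": "add_to_cart"},
    {"variation": "original", "event": "purchase"},
    {"variation": "B", "event": "add_to_cart"},
    {"variation": "B", "event": "add_to_cart"},
    {"variation": "B", "event": "purchase"},
]

counts = Counter((e["variation"], e["event"]) for e in events)
for variation, n in visitors.items():
    micro = counts[(variation, "add_to_cart")] / n
    macro = counts[(variation, "purchase")] / n
    print(f"{variation}: add-to-cart {micro:.1%}, purchase {macro:.1%}")
```

A variation that leaves the purchase rate flat but lifts the add-to-cart rate is
still teaching you something about the funnel.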

Required steps for a successful A/B testing strategy

1. Defining aims and objectives
2. Establishing a project team
3. Developing test hypotheses
4. Prioritizing the tests to conduct
5. Implementing the tests
6. Analyzing the results
7. Documenting the tests carried out
8. Implementing the winning versions
9. Communicating the results
10. Making testing continuous


Establishing a project team or nominating a testing supervisor
The success of a testing program does not only depend on the A/B testing
tool used but also on the experience of the individuals tasked with conversion
optimization. Acting as the sole lead for such a program is a challenge because
the number of people involved can be large when it comes to the sensitive
subject of conversion. The person instigating a modification must first seek
senior management approval, then mobilize graphical and technical resources
to implement the test, before finally calling on the services of a web analyst to
evaluate the results.
This is why it is advisable to establish a multidisciplinary project team capable
of carrying out data analysis and identifying conversion problems, and which
is able to arrive at suitable solutions by considering the website experience
from the final user's point of view. Two specific roles are also
useful: project leader and sponsor. The project leader will coordinate the teams
and take responsibility for the testing roadmap. The sponsor will endorse
optimization initiatives with senior management and be responsible for the
return on investment resulting from testing activities.
If the structure of the business does not justify such resources, it is still advisable
to have a testing supervisor who will centralize test execution and results
analysis.

Developing powerful test hypotheses


As already mentioned in the preceding chapter, a program of A/B testing must
be supplemented with other sources of information. Conversion problems need
to be identified and the behavior of web users understood. This is a critical
stage of the analysis process and has to lead to the formulation of powerful
hypotheses to test.
A correctly formulated hypothesis is the first step in a successful program of A/B
testing and must conform to the following principles:
- it must relate to a clearly identified problem with suspected causes,
- it must offer a possible solution to the problem in question,
- it must specify the expected result, itself directly linked to the KPI
  measured.


For example, if the identified problem is a high rate of abandonment of a
registration form, which is suspected to be too long, a valid hypothesis might
be: "Shortening the form by removing optional fields, such as telephone number
and postal address, will increase the number of completed registrations."

Prioritizing the tests to conduct


By this stage, analysis of the information sources will have brought several
conversion problems to light, and various test hypotheses will have been
formulated. It is now time to prioritize them in order to establish a roadmap
which will formalize the A/B testing program and provide a structured testing
schedule. Various elements must be taken into account when prioritizing the
hypotheses:
The potential gains from the test. Heavy traffic pages experiencing
major conversion problems (e.g. a high exit rate) are good candidates
for a place at the top of the list of pages to test. These pages must be
identified through a preliminary analysis of the web analytics data.
The ease of implementation of the test. Depending on
the resources available, the complexity of solutions
proposed can influence prioritization of the tests.
At the end of the prioritization process, the roadmap's outline must be drawn
up. To formalize the process, it is advisable to put everything down in black and
white, with as much information as possible included, such as:

- the name of the test,
- the type and URL of the page tested,
- the planned launch date,
- the hypothesis to be confirmed,
- the KPIs to be measured,
- the potential impact (a score between 1 and 3, for example),
- the implementation effort (a score between 1 and 3, for example).

Sharing the roadmap will allow the efforts of the participating parties to be
mobilized, aligned and coordinated towards achieving the defined objectives.
Finally, the roadmap will act as a guide for the A/B testing process.
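As a purely illustrative way of formalizing this, the sketch below records each
planned test with the fields listed above and ranks the roadmap by impact
relative to effort; the field names and example entries are our own assumptions.

```python
# Minimal sketch: a test roadmap ranked by potential impact versus effort.
# Field names mirror the list above; the example entries are invented.
from dataclasses import dataclass

@dataclass
class TestPlan:
    name: str
    page_url: str
    launch_date: str
    hypothesis: str
    kpis: list
    impact: int  # potential impact, scored 1 (low) to 3 (high)
    effort: int  # implementation effort, scored 1 (low) to 3 (high)

roadmap = [
    TestPlan("Shorter registration form", "/register", "2014-03-01",
             "Removing optional fields will increase completed registrations",
             ["registration rate"], impact=3, effort=1),
    TestPlan("Product page layout", "/product", "2014-04-15",
             "A simplified layout will increase the add-to-cart rate",
             ["add-to-cart rate", "conversion rate"], impact=2, effort=3),
]

# Highest impact for the least effort first:
for plan in sorted(roadmap, key=lambda t: t.impact / t.effort, reverse=True):
    print(f"{plan.name}: impact {plan.impact}, effort {plan.effort}")
```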


Implementing the tests


Once the tests have been prioritized, the process of implementation will vary
depending on the technical solution adopted and the operating methods
chosen by the business.
Some A/B testing tools require complex implementation necessitating the
intervention of technical teams to modify the source code of the pages to be
tested, whilst other tools do not require any technical knowledge and permit
anyone to launch a test. In the second scenario, the user modifies their own
website pages using a WYSIWYG (What You See is What You Get) type editor.
These tools take less time to learn to use and, following training, the user quickly
becomes autonomous.
Where the operating method is concerned, two trends have emerged: complete
integration of the test creation process into the organization, or delegation to an
external service provider who, in addition to providing conversion optimization
advice, will take responsibility for designing page variations, developing
graphical and editorial elements where necessary, and then implementing tests
using one of the tools available on the market. Certain tools, such as AB Tasty,
offer a certification program aimed at such service providers, which validates
their knowledge of the tool in question and their expertise.
The choice of an A/B testing tool and operating method will of course depend
on the business's level of experience with conversion optimization and on the
resources available, be they financial or human. Each scenario is therefore
different, and all we can recommend is to choose a solution that meets these
needs and is adapted to these constraints. Having a complex tool at their
disposal serves no purpose if the user, wanting to be autonomous, depends
on a service provider to make use of it. On the other hand, a tool that is overly
simple may prove of limited use as needs evolve.


Live editing of a website's pages using the AB Tasty graphical editor


Analyzing the test results


The analysis phase of the testing process is the most demanding. An A/B testing
software solution must, as a minimum, provide a reporting interface that shows
the conversions recorded per variation, the conversion rate, the percentage
improvement compared to the original, and the statistical reliability index
recorded for each variation.
The most advanced tools allow raw data to be segmented using various
selection criteria (e.g. traffic source, the web visitor's geographic origin,
customer typology, etc.). This makes it possible to identify groups of web visitors
for whom one of the variations statistically outperforms the original, and this can
be the case even where a test appears inconclusive at the global level (all web
visitors combined). This information is of strategic value because it can be used
to define the direction of future actions (e.g. content personalization for a specific
customer segment). The principles of statistical reliability must not be ignored
when taking advantage of the ability to view results per segment. Though a test
may prove reliable for web visitors as a whole, it will not necessarily do so with
a restricted sample. The test's reliability must therefore be verified again for the
sample concerned.

AB Tasty's integrated reporting interface. Here, results are filtered to show only
those web visitors arriving via sponsored link campaigns


It is also advisable to integrate the tests into a web analytics tool in order to
benefit from complementary metrics and to be able to analyze other dimensions
of your tests' impact.

Integrating the test data into a web analytics tool (Google Analytics in this
example)

Results analysis also depends on the objectives defined beforehand and the
KPIs involved. Though there is nothing to prevent the measurement of several
indicators during a test (e.g. add to cart, visitor engagement levels, etc.), it is
important to identify a primary KPI to differentiate between the variations. It is
not rare, in fact, to observe a test affecting two KPIs in opposing ways (e.g. an
increase in the number of purchases but a decrease in average cart value).
Result interpretation thus differs depending on the business's objectives.


The primary constraint to overcome before the results of a test can be
analyzed is obtaining a sufficiently high statistical confidence level.
Professionals generally work with a threshold of 95%. This indicates that
the probability that the differences in results between variations are merely
down to chance is very low. The time required to reach this threshold varies and
is largely dependent on the traffic of the website and the tested pages, the initial
conversion rate of the measured objective, and the impact of the modifications
made. It can range from several days to several weeks. In the case of low traffic
sites, it is therefore advisable to test a high traffic page, to allocate 100% of the
traffic to the test, and to test modifications likely to have a marked impact. Until
the threshold is reached, any conclusions drawn serve no purpose.
Furthermore, the statistical tests used to calculate this confidence threshold
(such as the chi-squared test) are based on a sample size approaching infinity.
In cases where the sample size is small, caution is required when analyzing
results, even where the test indicates a reliability rate of 95% or higher. Consider
the example of a test which after several hours produces the following results:
                   Visitors   Conversions recorded   Conversion rate
Original version   100        5                      5%
Version 1          100        15                     15%

The statistical tests will indicate a gain of 200% with a confidence index of 98%.
However, with the sample size so small, it is quite possible that the results will be
substantially altered if the test is left running for a few additional days. This is why
it is advisable to have a sample of a sufficiently large size. There are scientific
methods available to calculate the required sample size. For practical
purposes, it is advisable to have a sample of at least 5,000 visitors and 100
recorded conversions per variation.
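To make these figures concrete, here is a minimal sketch (in Python with SciPy,
which this white paper does not prescribe; the function names are ours) that
reproduces the confidence index for the table above and estimates the sample
size needed to detect a given lift.

```python
# Minimal sketch: reliability of an A/B result plus a classical two-proportion
# sample-size estimate. Function names are illustrative.
from math import ceil, sqrt
from scipy.stats import chi2_contingency, norm

def reliability(visitors_a, conv_a, visitors_b, conv_b):
    """Confidence that the difference between two variations is not chance."""
    table = [[conv_a, visitors_a - conv_a],
             [conv_b, visitors_b - conv_b]]
    _, p_value, _, _ = chi2_contingency(table, correction=False)
    return 1 - p_value

def sample_size_per_variation(p1, p2, alpha=0.05, power=0.8):
    """Visitors needed per variation to detect a move from rate p1 to p2."""
    z_a, z_b = norm.ppf(1 - alpha / 2), norm.ppf(power)
    p_bar = (p1 + p2) / 2
    n = ((z_a * sqrt(2 * p_bar * (1 - p_bar))
          + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p1 - p2) ** 2)
    return ceil(n)

# The table above: 100 visitors each, 5 vs. 15 conversions -> roughly 98%...
print(f"{reliability(100, 5, 100, 15):.0%}")
# ...yet detecting a realistic lift (2% -> 2.5%) reliably needs ~14,000
# visitors per variation, far more than the 100 in the example:
print(sample_size_per_variation(0.02, 0.025))
```

Dividing the required sample per variation by the page's share of daily traffic
gives a rough idea of how many days the test will need to run.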
Finally, even if the sites traffic allows a sufficiently large sample to be quickly
obtained, it is advisable to leave the test running for several days to account
for differences in behavior observed on different days of the week, or even
from hour to hour during a single day. A minimum duration of one week is
therefore preferable, two weeks ideally. In some cases, this period can be even
longer, especially where the conversion process concerns products whose
purchase cycles require time to complete (complex or B2B products/services).
There is therefore no standard test duration.


Documenting the tests carried out


It is essential to correctly document and archive the tests carried out. Where
there are several people in charge of optimizing conversions, this will enable
information to be shared efficiently. The same will apply if someone new
becomes involved and needs to look at tests carried out several months
previously. Documenting a test involves keeping a written, post-test record of
information such as:

- the name of the test,
- the test period,
- the hypothesis tested and the data leading to its formulation,
- a description of the variations used,
  including supporting screen captures,
- the test results,
- what was learned from the test,
- the potential monetary gain over the course of a year
  following implementation of the best performing variation.
Because it demands comprehensive analysis of the results, this documentation
work also permits the team in charge of the testing program to identify new
hypotheses to test, and to evaluate the ROI for the work it carries out.

Implementing the winning versions and validating the gains observed
Once one of the variations is clearly outperforming the original, it is time to put the
winning version into production. Depending on how the business is organized,
the interval between each release of the site (the production phase) may be
substantial. To avoid missing out on any profit, especially where it is significant,
most A/B testing tools offer the possibility of displaying the winning version to
every web visitor whilst the changes are going into production.
Once the optimization has been definitively implemented, it is a good idea to
verify that the levels of gain observed during the test are confirmed over the
long term. Continuing to monitor KPIs can prove judicious, because there are
numerous external factors that can cause an optimization to produce better
results during testing than after implementation. For example, as end of year
celebrations approach and a growing sense of urgency takes hold, the conversion
rate may naturally improve. Though a test may indicate that a variation
outperforms the original by 10%, the gain may be smaller outside holiday periods.
Traffic origin can also affect the gains indicated by a test. A buzz event or an
acquisition campaign can cause a peak in conversions involving web visitors
who behave differently from the norm.

Communicating the test results


It is important to communicate the lessons learned from the tests as widely as
possible. Senior management is the primary target of this communication. They
should be presented with an overview of the results emphasizing the tests'
impact on the KPIs defined beforehand. Broader lessons that potentially
impact other aspects of the business's activities must also be highlighted. For
example, if it has been proven that a particular audience segment reacts better
to a particular message, this information could be useful to the teams in charge
of traffic acquisition channels. Information sharing must therefore occur at all
levels within the organization so that a culture of testing is progressively instilled.
Finally, where the A/B testing tool permits evaluation of a test's monetary gains
(the difference in revenue generated by the original page and the variations),
mentioning these gains will enable the testing program's ROI to be calculated
and investment in it to be justified, both in terms of tools and human resources.

Reporting of transaction data in AB Tasty to permit evaluation of a test's
monetary gains (average cart value, per-visit value and absolute gain per
variation)


Making testing continuous


A/B testing is a process of continuous optimization. At the end of each test,
lessons will have been learned, and that information will be used to fuel new test
hypotheses and further develop the roadmap. It is over the long term, moreover,
that the efforts made will bear fruit: the first tests may well not produce the
desired results, because it takes time to carry out in-depth analysis.

Continuous cycle of optimization: analyze → design → test → measure → act

5. Effective A/B testing: tips and tricks
Our intention here is to describe certain good practices which, we hope, will
enable businesses to avoid some of the pitfalls encountered when implementing
A/B testing. They are born out of the experiences, both positive and negative,
that our clients have had when carrying out their testing.

1. Ensure the data provided by the A/B testing software solution is reliable

It is advisable to conduct at least one A/A test to ensure that traffic is randomly
allocated to the different versions. This also provides an opportunity to compare
the indicators reported by the A/B testing software with those from web analytics.
Compare the figures for rough agreement rather than expecting exact matches;
exact matches are in any case impossible, because the methods of calculation
are not identical, just as they are not when different web analytics tools are
compared. Large discrepancies, however, warrant further investigation in order
to ascertain whether or not the two tools have been correctly implemented.
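The simulation below, a self-contained sketch under illustrative parameters,
shows why patience and a reliability threshold matter even in an A/A test: both
"variations" share the same true conversion rate, yet roughly one run in twenty
still looks like a winner by chance.

```python
# Minimal sketch: simulating repeated A/A tests. At a 95% threshold,
# about 5% of runs are false positives even though nothing was changed.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(42)
runs, visitors, true_rate = 1000, 5000, 0.02
false_positives = 0

for _ in range(runs):
    conv_a = rng.binomial(visitors, true_rate)  # "original"
    conv_b = rng.binomial(visitors, true_rate)  # identical "variation"
    table = [[conv_a, visitors - conv_a], [conv_b, visitors - conv_b]]
    _, p_value, _, _ = chi2_contingency(table, correction=False)
    if p_value < 0.05:  # the tool would report >= 95% "reliability"
        false_positives += 1

print(f"False positives: {false_positives / runs:.1%}")  # around 5%
```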

2. Carefully calibrate the test before launch

Do some of the results appear to be counter-intuitive? Have the test parameters
and objectives been correctly defined? The time dedicated to calibrating the
test often saves valuable time that would otherwise be spent interpreting false
test results.

3. Test one variable at a time

This allows the impact of the variable to be isolated. If an action button's
placement and caption are both modified simultaneously, it will be impossible to
identify which change produced the effect observed.

4. Run one test at a time

The temptation to run several tests simultaneously is considerable where
a website's traffic is low or where tests take time to achieve a sufficient level
of reliability. However, and for these same reasons, it is advisable to run only
one test at a time. Results will be difficult to interpret if two tests are running in
parallel, even more so when they run on the same page.
Nevertheless, some tools allow several tests to be launched simultaneously
whilst guaranteeing that each web visitor will only be subjected to a single test.
In the case of high traffic websites, this facility can be useful if the roadmap
contains numerous tests and rapid results are required.

5. Adapt the number of variations to the traffic volume

If there are many variations for a small amount of traffic, the test will take a long
time to produce conclusive results. Where a low amount of traffic is allocated to
a test, a low number of versions must be used, and vice versa.

6. Wait until statistical reliability is attained before acting

It is not advisable to make any decision at all until the test has achieved a level of
statistical reliability of at least 95%. Otherwise, the probability that the differences
observed in the results are due to chance rather than the modifications introduced
is too high. Furthermore, a trend in the results can reverse itself if a test is left
running for a longer period of time.

7. Allow a test to run sufficiently

Even if a test rapidly achieves statistical reliability, the sample size and the
behavioral differences observed on different days of the week need to be taken
into account. It is advisable to allow a test to run for at least one week, ideally
two, and to have registered a minimum of 5,000 visitors and 100 conversions
per version.

8. Know when to terminate a test

Where a test takes too long to achieve a reliability rate of 95%, it is likely that
the element tested does not impact the indicator measured, or that the
modification is not significant enough. There is no point in prolonging the test: it
wastes time and unnecessarily monopolizes a portion of the traffic that could be
used for another test.

9. Measure multiple indicators

It is advisable to measure multiple objectives during the test: a primary objective
(allowing the different versions to be differentiated) and secondary objectives (to
deepen the analysis of the results). Amongst the indicators often measured are
the click rate, the add-to-cart rate, the conversion rate, the average cart value,
the number of leads, etc.

10. Take into account any marketing activities that coincide with the test

Variables external to a test can distort, or at least affect, its
results. In many cases, these will be traffic acquisition campaigns that attract
a group of web visitors displaying behavior which differs from the norm. It is
preferable to limit these collateral effects by ensuring tests and campaigns do
not coincide, though this is not always possible. Nevertheless, it is something
to be aware of, even if only to explain any unexpected results.

11. Segment the tests

In some cases, running a test on all the visitors to a website does not make
sense and may even lead to false results. Where a test is designed to measure
the impact of different customer benefit packages on the site's user registration
rate, testing the existing registered user base serves no purpose and may even
create dissatisfaction amongst existing users, who do not receive said
benefits. It therefore makes sense to subject only new visitors to the test.

A common approach to conversion optimization involves keeping the route the
web visitor follows open (the "scent trail" concept). In practice, this consists of
reassuring the visitor, throughout their online journey, that they will find what
they are looking for. If they carry out a Google search by typing the expression
"mountain walking boots for men", the AdWords advertisement presented
to them must mention these terms, and the landing page they arrive at must
correspond as closely as possible to their search. This could involve presenting
them with a customized title, a photo of the product in use, or a list of
matching products.
If a test is going to be carried out on this page, it must be possible to target
the test at the segment of visitors arriving via sponsored links and searching
for mountain boots. Fortunately, advanced testing tools allow tests to be
segmented according to numerous criteria (visitor origin, behaviors, etc.).

Segmentation options offered by the AB Tasty tool


6. A/B testing in practice: which elements to test?
This is a recurring question, and one that relates directly to the fact that, in many
cases, businesses are unable to explain their conversion rate, be it good or
bad. If a business were aware that its web visitors did not understand its product,
it would not prioritize testing the placement or color of its "add to cart"
button. It would instead test different formulations of its customer benefits
package. Each case is therefore different, and the aim of this chapter is not
to provide an exhaustive list of elements to test, but rather some of the aspects
to consider.

Showcasing the value proposition

The value proposition is the site's reason for existing and the reason why web
visitors use its products or services. It is made up of a subtle combination of the
benefits and risks perceived by the web visitor. The objective is to increase the
former whilst minimizing the latter. In this respect, tests can provide answers to
questions such as:
- which benefits should be promoted?
- what will the web visitors be most receptive to?
- are the web visitors more concerned with the intrinsic qualities
  of the product or the intangible benefits?
- how many aspects should be mentioned?
- is it better to be succinct or to detail the benefits to the maximum?
- what kinds of prices or incentives work most
  effectively with the target audience?
- what price formats should be displayed
  (e.g. markdowns, rounded or odd prices)?
- which are the most important services to highlight
  (e.g. free returns, free delivery)?

At Cdiscount.com, the value proposition, based on low prices, is communicated
through the way the prices are displayed (markdowns, charm prices, amount
saved and percentage reduction)


Message clarity and ease of understanding
Once the value proposition has been identified, the web visitors must then be
able to rapidly understand it. In this respect, the paths to optimization consist
of limiting the mental effort needed to understand the proposition. There are
numerous elements to test and they relate as much to style as to content:
- the site hierarchy (simplicity of navigation, product grouping, etc.),
- labeling of navigation categories,
- organization of information at the page level,
- content presentation style (table, bulleted list, paragraphs, etc.),
- legibility of the text (font sizes, color contrasts, etc.),
- relevance and quality of the images,
- highlighting of calls to action (placement above
  the fold, contrast and colors, etc.).

The same presentational style (a table) but with different approaches to
information density


Relevance of the product or service to web visitors' expectations

The relevance of a page can be summarized by the following question: will the
web visitor find what they are expecting, or what they have been promised, on
the page? A number of factors can affect a page's relevance. Primary amongst
these are traffic sources. Care must be taken to maintain coherence between
the context in which the web visitor clicked on the advertising message (the typed
search request, the referrer page consulted, the email segment, etc.), the content
of the message (text of sponsored link ads, promotional email content, etc.) and
the destination page (is the product or service mentioned in the message clearly
visible and described in the same terms?).
Another means of maintaining a high degree of relevancy is to segment your
audience and address a specific message to each segment. The visitors to
a website are in principle all individually different and all expect products or
services relevant to their particular needs. This segmentation can be based on
various criteria: returning/new visitor, prospect/customer, traffic source, customer
segments already in use within the business, etc. Testing the customized
messages for each segment is therefore recommended.

Illustration: a search request submitted to a search engine; the related AdWords
advertisement, mentioning a price starting at €67; and the landing page, with
pre-filled fields corresponding to the destination, but where the loss leader price
is no longer the same :-(


Reducing distractions and noise

Distractions are all those elements that tend to turn the web visitor's attention
away from the main message and from the task to complete. These distractions
can occur from the instant the web visitor arrives at a page and forms their
first impression. The opening screen, above the fold, is therefore of vital
importance. The ambient noise must be reduced so that the visitor can focus on
the main message.
This type of simplification work usually proves to be effective, including for
subsequent stages in the users journey. For example, many e-commerce
websites suppress the main navigation menu once the user has entered the
conversion funnel in order to limit the navigation options. There are numerous
testing opportunities:

- removal of ineffective taglines,
- addition of a clear, concise title,
- removal of images that do not contribute to the message,
- repositioning of supplementary content,
- removal of a cluttered background,
- limiting the number of navigation options,
- removal of rotating banners or slideshows,
- page layout simplification,
- removal of superfluous calls to action.

Simplified conversion funnel at ldlc.com


Removing uncertainty and adding reassurance

This involves all those elements on a website which can give rise to confusion
or raise questions. Very often, it concerns elements missing from the site: the
user needs more information to convince them that the product or service meets
their needs, but they cannot find it on the site. A qualitative study carried out on
a sample of prospects can be used to highlight these elements so that they can
be integrated into the site.
To dispel any other doubts about the features and the advantages of the product
or service offered, various marketing tactics can be tested:

- offer of free samples,
- inclusion of a demonstration video,
- organization of a webinar,
- a free, time-limited evaluation version,
- promotion of a purchase guide,
- addition of professional and customer reviews,
- addition of customer testimonials,
- addition of case studies,
- addition of awards and distinctions,
- addition of social mentions.

For this product, Amazon offers several reassuring elements: a title that provides
answers to buyers' main questions, a demonstration video, a purchase guide,
and customer reviews


Creating a sense of urgency

Urgency is defined as the need for an individual to take immediate action. There
are different degrees of urgency, and it depends on elements internal to the user
as well as external ones. It is a very important factor, because the stronger the
sense of urgency, the less the user tends to weigh the different options
presented to them. When the sense of urgency is more intense, conversion rates
very often increase. This is the case, for example, when end of year celebrations
are approaching.
Several kinds of marketing tactics can be adopted to intensify this sense of
urgency, particularly those taking advantage of the phenomenon of scarcity.
There are numerous ideas for tests to carry out in this respect:
- display of remaining stock for products available in limited quantities,
- display of the level of demand for the viewed product
  (the number of other people also interested in it),
- display of the time of the most recent sale of the product
  (implying it is a successful, fast-selling product),
- addition of a countdown timer for time-limited offers,
- creation of exclusive, time-limited offers,
- use of temporary incentives
  (e.g. free delivery on orders placed before noon).

Elements contributing to the creation of a sense of urgency on Booking.com


7. Beyond A/B testing: how can conversion rates be continuously improved?
A/B testing, because of the methodology it imposes and its iterative nature, is
an excellent way of identifying what does and does not work with respect to
different audience segments. Advanced testing tools provide comprehensive
reporting interfaces offering filtering and data recalculation features that allow
you to accurately identify the messages which have been most effective with
respect to each different type of web visitor.
The next stage thus involves taking advantage of what has been learned from
the tests in order to personalize the experience for each user segment. This
means using the right message, with the right visitor, at the right time. By using
an optimized customer journey, the results from A/B testing, and messages
customized for each visitor segment profile, e-businesses maximize their
chances of achieving conversions.

Complementarity of A/B testing and content personalization for optimizing
conversion rates (cycle diagram):
- CREATE: your pages, your content, your personalized experiences
- TARGET: sources, behaviors, characteristics
- TEST: define your objectives and test on a sample
- ANALYZE: visualize and explore the results in real time
- EXTEND: the right message, at the right time, for the right user


Some A/B testing tools, such as AB Tasty, allow you to move from a testing
approach to a personalization approach very easily. They are designed
primarily to provide agility, and this approach extends to content personalization
campaigns. In contrast to other software solutions, based on opaque algorithms
and relying entirely on automation, these types of solutions leave the user with
complete control over the personalization scenarios they envisage.
The user remains in a familiar software environment where they find the
same workflow as for test creation: same interface, same manner of
operation, same indicators. They can create or modify personalizable elements
with ease using interactive tools they are familiar with. They can then define
the types of user to whom the messages are to be addressed. The user has all
the targeting criteria needed to achieve this at their disposal, enabling them to
create personalized content of varying degrees of complexity:
- traffic source (e.g. CPC, affiliation, etc.),
- web visitor behavior (e.g. visit history, specific actions, etc.),
- data generated by the back office (e.g. existing segmentation, etc.),
- type of device (e.g. mobile phone, tablet, etc.) and browser,
- geographical location, and many others.
Messages can be linked to web user segments in just a few clicks.
The e-business therefore benefits from unprecedented flexibility and
speed of execution in personalizing their users' experience, without
ever having to call on the services of technical teams. The possibilities are
infinite, limited only by the business's creativity and capacity for analysis.
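Conceptually, this kind of rule-based targeting can be pictured as in the sketch
below; the criteria names and messages are hypothetical and do not represent
AB Tasty's actual configuration format.

```python
# Minimal sketch: rule-based content personalization. Each rule pairs a
# predicate over visitor attributes with a message; the first match wins.
def personalize(visitor, rules, default):
    for predicate, message in rules:
        if predicate(visitor):
            return message
    return default

rules = [
    (lambda v: v["source"] == "cpc" and v["device"] == "mobile",
     "Special offer for mobile search visitors"),
    (lambda v: v["returning"] and v["segment"] == "customer",
     "Welcome back: free delivery on your next order"),
]

visitor = {"source": "cpc", "device": "mobile",
           "returning": False, "segment": None}
print(personalize(visitor, rules, "Generic homepage message"))
```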


8. Conclusion
We hope that the reader, in the course of reading this white paper, will have
gained an awareness of the prerequisites for implementing an effective A/B
testing program. Such a program demands much more than just a high quality
testing tool. As with many disciplines, success depends on a carefully weighted
blend of human resources, process, and technology, and A/B testing is no
exception to this rule.
We also encourage the reader who is implementing A/B testing for the first time
to adopt the methodology introduced over the course of these pages as early
as possible. The advice distilled here will be of invaluable help in establishing
the right foundations from the outset. From experience, we know that the first
tests are decisive in terms of generating and sustaining interest in testing within
the business.
The effort really is worth it, because though tests may prove profitable in the
short term, A/B testing, as a process of continuous improvement, shows its full
potential over the long term. Beyond improvements in conversion rates or other
KPIs, A/B testing ultimately leads to a better understanding of web visitors and
greatly increases what is known about the customer: information of inestimable
value that can inform everything the business does and give it a competitive
advantage.


Glossary

A

A/B test
An A/B test involves comparing the performances of several versions of the
same page in terms of the objectives specific to each business. This could
involve the user registration rate for a service, the number of sales, or the
average sale value. The different versions are set in competition in a real
environment: each web visitor, unaware of the test, is randomly assigned to a
single variation on arriving at the site. At subsequent visits, the visitor remains
assigned to the variation first viewed. With large-scale testing, trends emerge
to reveal which version is the best.

B

Bounce rate
The bounce rate is the percentage of web visitors who arrive at a web page
then leave the website without consulting other pages and who, therefore, have
only viewed a single page on the site. An elevated bounce rate can indicate
visitor dissatisfaction. It may also indicate, however, that they immediately found
what they were searching for.

C

Call to action (CTA)
A link, button or other visual element leading the web visitor to carry out an
action on the site, such as an add to cart. The effectiveness of a call to action
depends primarily on the visual and editorial quality of the marketing hook used
to improve the response rate. Initiating action is a key element because it serves
to lead the web visitor towards the conversion funnel.

Chi-squared test
A chi-squared test is a statistical test that permits the independence of two
random variables to be tested. The method consists of comparing the actual
values obtained by crossing the modalities of the two variables with the
theoretical values that one would obtain if the two variables were independent.
To achieve this, an index is constructed that measures the deviation between
the actual values and the theoretical values.

Conversion funnel
A conversion funnel is a series of stages leading to a desired action (online
order, contact request, etc.). On a merchant website, the conversion funnel
normally begins at the page corresponding to the adding of the product to the
shopping cart and finishes at the order confirmation page. In web analytics, it
can also refer to a graphical analysis feature which allows the phenomenon of
abandonment to be illustrated at each stage of the funnel.

Conversion rate
The conversion rate, sometimes called the transformation rate, corresponds to
the percentage of visitors that have completed the desired conversion (purchase
of a product, sign-up to a newsletter, etc.). If, for example, a website attracts 100
visitors per month and two of them make a purchase on the site, the rate of
conversion of visitors into purchasers will be 2% (number of purchasers / total
number of visitors x 100).

H

Heat map
A heat map is a cartography of those elements of a web page most frequently
scanned (via eye tracking) or clicked (via click tracking) by users. It provides a
graphical representation in which warm colors are used to represent the most
attractive elements and cold colors the least attractive.

K

Key Performance Indicator (KPI)
KPIs, or key performance indicators, are statistical measures which act as an
aid to management and decision-making processes. They can be financial,
technical, social or logistical in nature, or relate to marketing or other aspects. In
the case of e-commerce related business activity, these KPIs can include the
add-to-cart rate, the conversion rate, the number of transactions, the average
value of a visit, or the number of new customers.

M

Macro conversion
This is the primary objective as well as the reason for the site's existence. In the
case of an e-commerce website, it normally involves generating transactions
and, by consequence, revenue. The conversion rate, also known as the global
conversion rate, is directly associated with the act of making a purchase. In the
case of non-transactional websites, the macro conversion may consist of
generating qualified prospects, or page views if the economic model is based
on advertising revenue.

Micro conversion
Micro conversions are secondary conversions that may contribute to the macro
conversion. Essentially, the web visitor is often not ready to effect a macro
conversion immediately after they arrive at the site. It is therefore a good idea
to offer them alternatives involving less engagement (e.g. sign up to a
newsletter, request a free demonstration, etc.) in order to be able to contact
them again. Measuring these intermediate stages is therefore important when
evaluating the site's capacity to maintain the relationship with the web visitor
throughout their purchase cycle.

Multivariate test
A multivariate test, or MVT, is a test which allows multiple versions and multiple
variables to be tested simultaneously. The principle consists of modifying
multiple elements simultaneously on the same page then identifying, amongst
all the combinations possible, the one which has had the greatest impact on the
indicators tracked. This kind of test permits, in particular, the role of associations
between variables to be tested, which is not the case when successive A/B (or
A/B/C, etc.) tests are implemented.

O

Original version/control version
This is the original page, the one actually in use, which one hopes to improve.
Indicators measured for the web visitors who are presented with this page are
compared to those for web visitors presented with alternative versions of the
page. The percentage improvement measured is always relative to this page,
which serves as the reference, hence the term control version.

R

Reliability rate
The reliability rate is a statistical indicator that identifies the point at which
conclusions can be drawn from the results provided by the A/B testing tool. It is
calculated using different statistical tests, such as the chi-squared test, and once
it reaches a certain threshold (by convention 95%), it indicates that the
differences in results between two samples can justifiably be attributed not to
chance but to the element modified. Below this threshold, it is hazardous to base
decisions on the figures generated.

S

Split test
This is the generic term used to designate A/B type tests, which are not
necessarily limited to a comparison between two versions. It can also refer to
A/B/C or A/B/C/D tests.

V

Variation
In the context of an A/B or multivariate test, this is a version of the original page
on which one or more elements have been modified in order to evaluate their
impact on the conversion rate. The performance indicators measured for that
variation are subsequently compared to those of the original version, and
statistical analyses make it possible to confirm whether the differences observed
are significant and not simply down to chance.

About the authors

Rémi Aubert
Co-founder, AB Tasty
Rémi Aubert is the co-founder of AB Tasty. He began his career at Twenga,
where he managed search engine optimization for a price comparison website.
He then joined the search agency Keyade, where he was responsible for
managing problems associated with affiliation and traffic acquisition. In 2009 he
co-founded, together with Alix de Sagazan, the web analytics consultancy Liwio,
to help e-businesses with their conversion optimization strategies. Faced with
the lack of tools available to validate recommended optimizations, he created
the AB Tasty software solution. Today he manages the development of the
business and the evolution of the software's features.

Anthony Brebion
Head of marketing, AB Tasty
Anthony Brebion is head of marketing at AB Tasty. After working for several
years within the advertising departments of organizations such as Orange and
AOL, he began to focus on search engine optimization, becoming an SEO
consultant for the agency Aposition (Aegis Media Group). Observing how few
resources the majority of online businesses dedicate to conversion optimization,
he decided to become part of the AB Tasty venture in order to participate in the
evangelization of A/B testing and the organization's development.

About AB Tasty
AB Tasty is the essential SaaS (Software as a Service) A/B testing software
solution. Developed for marketing and e-commerce teams, it simplifies test
creation to the maximum whilst at the same time providing advanced features.
Its graphical editor, in particular, makes it possible to modify a website's pages
without specialist technical knowledge, and to track business indicators specific
to each website (add-to-cart rate, global conversion rate, average cart value,
etc.).
AB Tasty users are therefore rapidly able to turn their optimization ideas into
reality, gaining in the speed with which tests that improve the user journey and
the business's profitability can be created and launched. Many organizations of
all types and sizes have already placed their confidence in AB Tasty: Bouygues
Telecom, Photobox, Boulanger, Etam, Microsoft, Axa, France Télévisions,
Ouest-France, Prisma Presse.

Do you have any questions?


Please feel free to contact us
for more in-depth information.

contact@abtasty.com

Interested in seeing
examples of A/B tests?
Consult our library of case studies.

blog.abtasty.com/en/

Looking for
A/B testing software?
Test the AB Tasty solution for free at:

www.abtasty.com


+44 20 3445 0902


contact@abtasty.com

www.abtasty.com

