A/B Tests
What Are A/B Tests?
An A/B test measures the effect that different versions of a promotion or content asset have on customer activities in the storefront, such as order generation. Using A/B tests, you can test and optimize promotions and content assets until they yield the expected results.
For example, suppose that you plan to run a promotion on selected articles to boost online sales, but you are not sure whether to use a discount value of 10% or 20% to reach your revenue goals. The A/B test feature makes it possible to set up a simple test in which both discounts are randomly presented to customers. While the test runs, the system keeps a detailed record of all customer activities. Using these data, you can thoroughly evaluate both discount versions, providing a solid basis for your decision.
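To make the mechanism concrete, the following minimal Java sketch shows how such a random 50/50 assignment might look. All class and method names are hypothetical and not part of the Intershop Commerce Management API.

```java
import java.util.concurrent.ThreadLocalRandom;

// Minimal sketch of the idea behind an A/B split: each new participant is
// randomly assigned one of two discount variants, and activity is recorded
// per variant. Hypothetical names, not Intershop API.
public class DiscountSplitSketch {

    enum Variant { DISCOUNT_10_PERCENT, DISCOUNT_20_PERCENT }

    // Assign a new participant to a variant with 50/50 probability.
    static Variant assignVariant() {
        return ThreadLocalRandom.current().nextBoolean()
                ? Variant.DISCOUNT_10_PERCENT
                : Variant.DISCOUNT_20_PERCENT;
    }

    public static void main(String[] args) {
        int[] orders = new int[Variant.values().length];
        // Simulate ten visits; in a real test the engine would record
        // sessions, basket events, and orders per variant instead.
        for (int i = 0; i < 10; i++) {
            Variant v = assignVariant();
            orders[v.ordinal()]++; // stand-in for "activity recorded"
            System.out.println("Visitor " + i + " sees " + v);
        }
    }
}
```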
Possibilities offered by the A/B testing engine in Intershop Commerce Management include:
- Promotion and content asset testing
You can test promotions, components, and page variants. Having evaluated the test results, you can add tested discounts to your campaign list with the click of a mouse.
- Multiple test groups
You can define any number of test groups, making it possible to implement simple and complex test scenarios.
- Session and cookie stability
The test engine makes sure that users within the same session are presented the same discount and content versions. Moreover, for non-registered users, the system tries to set a cookie in order to identify multiple sessions from the same client. If the client permits cookies, the system recognizes new sessions but does not increment the number of participants in the test summary. If the client does not permit cookies, the system assumes that each new session originates from a new participant.
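The following sketch illustrates these rules under simplifying assumptions; the class, fields, and method names are hypothetical and the state is in-memory, so this is an illustration of the behavior described above, not the actual test engine.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of the session/cookie rules: a session always keeps its variant,
// and a cookie marks a client whose participation has already been counted,
// so new sessions from the same browser do not inflate the participant count.
public class StickyAssignmentSketch {

    private final Map<String, String> variantBySession = new HashMap<>();
    private int participantCount = 0;

    /**
     * @param sessionId    current session
     * @param abTestCookie cookie value sent by the client, or null
     * @return the cookie value the server should try to set (new or echoed back)
     */
    String trackSession(String sessionId, String abTestCookie) {
        // Same session -> same variant, always.
        variantBySession.computeIfAbsent(sessionId, id -> pickVariant());

        if (abTestCookie == null) {
            // No cookie: either a genuinely new participant or a client
            // that rejects cookies; in both cases it is counted.
            participantCount++;
            return UUID.randomUUID().toString();
        }
        // Cookie present: known client, so a new session is not a new participant.
        return abTestCookie;
    }

    private String pickVariant() {
        return Math.random() < 0.5 ? "A" : "B";
    }

    int participants() { return participantCount; }
}
```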
Working with A/B Tests
Setting up an A/B test involves the following main tasks:
- Creating the Test
Besides a name and description, you have to set the start and end date/time during which you want the test to run.
- Defining Target Groups and Test Groups
The target group defines which customers participate in the test. To define the target group, you can select one or more customer segments, affiliate partners, or partner programs. The members of the target group are then distributed proportionally over two or more test groups. For example, in the simplest case you might set up two test groups, with 50% of the target group assigned to each. The test group determines which promotion, content asset, or page variant a test participant is presented. Depending on the proportional split, the system randomly assigns customers that belong to the selected target groups to one of the test groups.
One of the test groups serves as the control group. The proportional size of the control group can be set manually; the proportional assignments of the other test groups are then automatically adjusted to total 100%. For example, when setting up two test groups besides the control group, with the control group being assigned 50% of the target group, the proportional assignment for each test group is automatically adjusted to 25%. When changing the assignment for the control group from 50% to 40%, the assignments for the test groups automatically increase to 30% each. (A minimal sketch of this arithmetic follows after this list.)
It is also possible to assign to test and control groups the URLs that are required for tracking click events.
Note: To be able to assign customer segments as a target group, at least one segmentation service must be configured for the sales channel; see Customer Segments.
- Assigning Promotions and Content Assets
For each test group, you can assign one or more discounts or content assets. Note that you do not necessarily have to define a promotion for all test groups; for example, you may want to compare a test group that is offered a certain discount (via a promotion) against a control group that is offered no discount at all.
- Enabling the Test
The last step in setting up an A/B test is to enable the test.
- Viewing Test Results
After the session lifetime has expired and the "UpdateTrackEvents" job has run, you can view the first test results. For details about these settings, contact your system administrator. The results are updated on a regular basis as the test progresses.
Test results can be viewed via a summary table showing the most important results at a glance, or via detailed breakdowns of all different customer activities that the system tracks.
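The proportional-split adjustment described under Defining Target Groups and Test Groups boils down to simple arithmetic: the control group's share is set manually, and the remainder is spread evenly over the other test groups. The following hypothetical sketch (not the Intershop implementation) reproduces the numbers from the example above.

```java
// Sketch of the proportional-split adjustment: the control group's share is
// set manually, and the remaining share is divided evenly among the other
// test groups. Hypothetical code, not the Intershop implementation.
public class SplitAdjustmentSketch {

    // Returns the percentage assigned to each non-control test group.
    static double adjustedTestGroupShare(double controlPercent, int testGroups) {
        if (controlPercent < 0 || controlPercent > 100 || testGroups < 1) {
            throw new IllegalArgumentException("invalid split");
        }
        return (100.0 - controlPercent) / testGroups;
    }

    public static void main(String[] args) {
        // Two test groups besides the control group:
        System.out.println(adjustedTestGroupShare(50, 2)); // 25.0 each
        System.out.println(adjustedTestGroupShare(40, 2)); // 30.0 each
    }
}
```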
For details on tasks related to A/B tests, refer to A/B Tests Management.
Planning and Executing A/B Tests
The success of A/B tests largely depends on careful test planning and execution. As A/B testing is a well-established marketing technique, it is strongly recommended to consult the existing literature on this topic. In particular, the following issues are of importance:
- Define what to test
Before setting up a test, you should clearly define the overall questions that you want the test to answer. Keep in mind that a simple A/B test is appropriate for testing the effect of a single factor, such as the discount value. An A/B test is not appropriate for testing the effect of multiple factors at once, or the interaction between factors. If you need to test the effect of multiple factors, consider running multiple A/B tests.
- Use a simple test design
The test design pertains to the number of different discount and content versions necessary for the test and, accordingly, the number of test groups. Make sure the different discount or content versions vary only with respect to the factor whose effect the test is to measure. For example, if you want to know whether a 10% or a 20% discount is more effective in boosting sales, vary the discount value only and keep everything else identical. The more differences you introduce, the more difficult the test results are to interpret.
- Define criteria for success
During the test, Intershop Commerce Management keeps track of a number of different variables, ranging from the number of sessions to the number of generated orders. Before running a test, you should clearly define which variables you need to focus on in order to measure success; a minimal conversion-rate example follows after this list.
- Use an appropriate sample size
The sample size refers to the number of participants that you need in order to draw firm conclusions from the test results. If the sample size is too small, you may miss effects or misinterpret accidental differences. If the sample size is too large, basically every difference may seem significant. A simple way to estimate the sample size is to run a preliminary test in which all test groups are exposed to the same discount and content versions. As soon as the results for the variables of interest match up with each other, the sample size is likely to be sufficient.
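As an illustration of a success criterion, the following sketch derives an order conversion rate per group from tracked counts. The names and numbers are hypothetical; the actual tracked variables appear in the Commerce Management test results views.

```java
// Sketch of a success criterion: comparing order conversion rates per test
// group from the tracked counts. Hypothetical names and example numbers.
public class SuccessCriterionSketch {

    static double conversionRate(long orders, long sessions) {
        return sessions == 0 ? 0.0 : (double) orders / sessions;
    }

    public static void main(String[] args) {
        // Example counts for a control group and one test group:
        System.out.printf("control: %.2f%%%n", 100 * conversionRate(180, 9000)); // 2.00%
        System.out.printf("test:    %.2f%%%n", 100 * conversionRate(230, 9200)); // 2.50%
    }
}
```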
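As a rough analytical complement to the preliminary-test approach described under "Use an appropriate sample size", the standard two-proportion sample size formula gives an order-of-magnitude estimate of the required participants per group. This is textbook statistics, not a feature of Intershop Commerce Management.

```java
// Standard two-proportion sample size formula:
// n per group = (z_alpha/2 + z_beta)^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2.
// Textbook statistics, offered only as a rough orientation.
public class SampleSizeSketch {

    // z-values for the common 5% significance level and 80% power.
    static final double Z_ALPHA_HALF = 1.96;
    static final double Z_BETA = 0.84;

    // Participants needed per test group to detect a change in conversion
    // rate from p1 to p2.
    static long perGroup(double p1, double p2) {
        double z = Z_ALPHA_HALF + Z_BETA;
        double variance = p1 * (1 - p1) + p2 * (1 - p2);
        return (long) Math.ceil(z * z * variance / ((p1 - p2) * (p1 - p2)));
    }

    public static void main(String[] args) {
        // E.g., to detect a lift in order conversion from 2% to 2.5%,
        // each group needs roughly 13,791 participants:
        System.out.println(perGroup(0.02, 0.025));
    }
}
```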