12 Tips To Take Your A/B & Multivariate Testing To The Next Level

235,000 — that’s the number of results for A/B testing guides in Google. Clearly, A/B testing is an important tool for digital marketers today. But if you already understand its role and are conducting your own experiments, what’s next? Where do you go from here?

The best way to take your testing efforts to the next level is to focus on the various elements of your experiments. This helps ensure that your tests are not flawed and will produce meaningful results that can be used to inform your marketing efforts. Only then will your testing efforts truly begin to pay off for your organization.

Sounds like a no-brainer, right? Well, you’d be surprised at the mistakes that seasoned marketers can make with their A/B and multivariate tests – even marketers just like you.

The tips below will help you avoid some of the biggest errors I’ve seen marketers make with A/B and multivariate testing.

1. Use A Sufficient Sample Size

Marketers often stop a test the moment it shows a winner or a loser, without considering the sample size behind that result. But this is a mistake — sample size is important.

For instance, when this article gets published, three of my co-workers and my wife will be the first ones to look at it, and all of them will like and/or share it on their networks. That means that out of the first 10 people who see it, 4 will share it, which translates into a share rate of 40%. Now, if this gets 10,000 views over the next week, does that mean that I can expect 4,000 shares? No, of course not. That assumption is based on a small and irregular sample.
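To put numbers behind this, here is a minimal sketch of how many visitors each variant needs before a result is worth trusting. It uses the standard normal-approximation formula for comparing two proportions; the baseline rate and lift in the example are invented for illustration:

```python
from math import ceil
from scipy.stats import norm

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Visitors needed in each variant to detect an absolute `lift`
    over baseline rate `p_base` (two-sided z-test approximation)."""
    p_alt = p_base + lift
    z_a = norm.ppf(1 - alpha / 2)   # critical value for the significance level
    z_b = norm.ppf(power)           # critical value for the desired power
    p_bar = (p_base + p_alt) / 2
    n = ((z_a * (2 * p_bar * (1 - p_bar)) ** 0.5 +
          z_b * (p_base * (1 - p_base) + p_alt * (1 - p_alt)) ** 0.5) ** 2
         / lift ** 2)
    return ceil(n)

# Detecting a 1-point lift over a 3% baseline takes roughly 5,301
# visitors per variant -- far more than the handful of friendly
# readers behind my 40% "share rate."
print(sample_size_per_variant(0.03, 0.01))
```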

2. Test Duration

Every campaign is different, as is each industry or field, and each has its own list of variables: sales cycles, consumer buying habits, growing seasons, election cycles and so on. Therefore, make sure you run your test for a sufficient duration to get a true average, which will depend on your campaign goals and industry-specific criteria.

For instance, if you are testing pricing or product attributes, your duration should run through an industry-standard sales cycle. The goal is to get numbers that are meaningful to the brand’s goal. VisualWebsiteOptimizer has a helpful A/B duration calculator for planning an experiment’s duration.
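If you’d rather run the arithmetic yourself, the duration follows directly from the required sample size and your typical daily traffic. A sketch (the traffic figures are invented; rounding up to whole weeks keeps every weekday equally represented):

```python
import math

def test_duration_days(per_variant, n_variants, daily_visitors):
    """Days needed to reach the sample size, padded to whole weeks so
    each weekday appears the same number of times."""
    days = math.ceil(per_variant * n_variants / daily_visitors)
    return math.ceil(days / 7) * 7

# 5,301 visitors per variant, two variants, ~1,500 visitors a day:
print(test_duration_days(5301, 2, 1500))  # 8 raw days, rounded up to 14
```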

3. Exclude Irregular Days

Unless you are trying to assess the impact of certain days — holidays, weekends, Black Friday, etc. — on your business, omit them from your tests, as they will skew your data.

For instance, testing a button for “buy now” vs. “order today” on Easter Sunday will deliver skewed results. That’s because many people will be with their family or in church; whereas on a typical Sunday, they might be browsing the Internet and shopping online.

This also applies to other irregular days that could influence consumer behavior, such as paydays and tax refund season. At these times, consumers have an influx of spending power that can lower barriers to purchase.

In addition, monitor the news for happenings related to your industry to make sure there is no activity around your vertical that could influence sales (recalls, competitive product launches, etc.).
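One way to enforce this is to drop those dates from the data before analysis. Here is a sketch using pandas; the file name and column names are hypothetical, and the built-in calendar covers only US federal holidays, so paydays and industry events need a hand-maintained list:

```python
import pandas as pd
from pandas.tseries.holiday import USFederalHolidayCalendar

# Hypothetical visit log: one row per visit, with date, page, device,
# channel and a 0/1 'converted' flag.
df = pd.read_csv("ab_results.csv", parse_dates=["date"])

cal = USFederalHolidayCalendar()
skip = cal.holidays(start=df["date"].min(), end=df["date"].max())

clean = df[~df["date"].isin(skip)]            # drop federal holidays
# Paydays, tax season, recalls, etc. need a hand-maintained list:
irregular = pd.to_datetime(["2015-04-15"])    # example: US tax day
clean = clean[~clean["date"].isin(irregular)]
```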

4. Limit The Segments & Channels

In general, try to limit the variety of traffic in a test in order to keep the data clean and free of extremes or measurements outside the assumptions of the experiment.

For example, avoid mixing display and paid search traffic, as a consumer’s intent for each is dramatically different (push vs. pull) and will not give you clean data. Instead, testing several variables within a display ad campaign or testing two different aspects of a paid search campaign will deliver data that is in sync with each consumer group’s intent for each type of marketing.
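In practice, keeping channels separate can be as simple as analyzing each one on its own rather than pooling them. A sketch over the same kind of hypothetical visit log used above (the `channel` and `converted` columns are assumptions):

```python
import pandas as pd

# Same hypothetical visit log: one row per visit, with a 'channel'
# column and a 0/1 'converted' flag.
clean = pd.read_csv("ab_results.csv", parse_dates=["date"])

# Report each acquisition channel separately instead of pooling them,
# since display and paid search visitors arrive with different intent.
for channel, grp in clean.groupby("channel"):
    print(f"{channel}: {grp['converted'].mean():.2%} conversion "
          f"rate over {len(grp):,} visits")
```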

5. Inspect What You Expect

Testing is not a “set it and forget it” endeavor, but many marketers take that approach. Unfortunately, this can derail the whole effort. I have seen many cases where testing failed because the code got changed or broken during the testing process.

Be smart — make sure you regularly check your data for consistency during the experiment. You need to ensure that the control code is placed correctly and that large deviations from past performance are not occurring unexpectedly.
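A crude tripwire catches most broken tags: compare each day’s rate with a trailing window and flag large deviations. A sketch in plain Python (the three-standard-deviation threshold is just a starting point, not a rule):

```python
def looks_broken(daily_rates, window=14, threshold=3.0):
    """Flag the latest day if its conversion rate sits more than
    `threshold` standard deviations from the trailing mean -- a crude
    check for tracking code that got changed or broken mid-test."""
    recent = daily_rates[-(window + 1):-1]
    mean = sum(recent) / len(recent)
    std = (sum((r - mean) ** 2 for r in recent) / len(recent)) ** 0.5
    return abs(daily_rates[-1] - mean) > threshold * max(std, 1e-9)

# A day of zeros after two steady weeks almost always means broken code:
rates = [0.031, 0.029, 0.030, 0.032, 0.028, 0.031, 0.030,
         0.029, 0.033, 0.030, 0.031, 0.029, 0.030, 0.032, 0.0]
print(looks_broken(rates))  # True
```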

6. Find Your Tribe

Make sure your test includes the right audience. Strive to use audiences and traffic sources that closely match the target audience of the pages being tested; sending the wrong consumers to the test pages will skew your data.

Before starting your test, create a campaign and drive traffic for a sufficient period of time to establish a historical baseline.

7. Go Deeper — Save Details

Make sure you are collecting enough data to tell a deeper story beyond the face-value metrics. For instance, in a recent campaign, we compared three landing pages for approximately four weeks. The results showed that the page with the shortest form and no image had the overall highest conversion rate.

However, when we dug into the data more, we saw that the longer page worked better on desktop devices, and that the shorter one performed best on mobile devices. This finding led us to create two campaigns: one for the desktop that served the longer page and one for mobile devices that served the short one.
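With a tool like pandas, that second look is a single pivot over the hypothetical visit log sketched earlier (the `page`, `device` and `converted` columns are assumptions):

```python
import pandas as pd

# Same hypothetical visit log: page, device and a 0/1 'converted' flag.
log = pd.read_csv("ab_results.csv")

by_device = (log.groupby(["page", "device"])["converted"]
                .agg(conv_rate="mean", visits="count"))
print(by_device)
# A page can win on aggregate while losing on mobile -- the pooled
# number hides exactly the split described above.
```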

8. Go Even Deeper — Get Granular

Even after you uncover the story your data is telling, don’t stop there. Revisit it and go deeper. Doing so may help you uncover some more granular findings that could help you refine your efforts.

This is exactly what happened in the case mentioned directly above. Another look at the data showed that most of our mobile traffic occurred during the week, while desktop visits peaked on the weekends. This deeper finding allowed us to break our campaigns down further: we now run four campaigns, each with a landing page tailored to its device and day of the week.
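In code, the deeper cut just adds one more grouping key (again over the hypothetical visit log; pandas counts Monday as day 0):

```python
import pandas as pd

# Same hypothetical visit log, now cut by device and day type as well.
log = pd.read_csv("ab_results.csv", parse_dates=["date"])
log["day_type"] = log["date"].dt.dayofweek.map(
    lambda d: "weekend" if d >= 5 else "weekday")   # Monday is 0

granular = (log.groupby(["page", "device", "day_type"])["converted"]
               .mean()
               .unstack("day_type"))
print(granular)  # the mobile/weekday vs. desktop/weekend split falls out here
```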

9. Test By Funnel Stage

When creating your test, make sure you consistently focus its elements on the same stage of the funnel, whether it is at the awareness level or the purchase level, etc.

For instance, don’t send people with the category intent of “digital camera” to the same page as people with the product intent of “Nikon d7000.” Doing so will skew your data.
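A few keyword rules are often enough to keep the stages apart before traffic ever reaches a test page. A toy classifier (the marker list is invented; a real one would come from your own search-term reports):

```python
def funnel_stage(query):
    """Very rough intent split: model-level queries signal purchase
    intent, generic category terms signal awareness (invented rules)."""
    product_markers = ("nikon d7000", "canon eos 70d", "sony a6000")
    q = query.lower()
    return "purchase" if any(m in q for m in product_markers) else "awareness"

# Route each stage to its own experiment rather than pooling them:
assert funnel_stage("digital camera reviews") == "awareness"
assert funnel_stage("Nikon D7000 price") == "purchase"
```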

10. Pop The Question

Nothing is better than combining binary testing data (yes or no) with real-world consumer feedback. Adding an exit survey can help you do exactly that. Be sure to use it both for consumers who completed a conversion and for those who abandoned.

11. Focus On Tests That Matter

I have seen many test plans that have left me feeling like I should either laugh or cry. Why? Because some organizations use testing to resolve creative disputes — Helvetica vs. Tahoma — instead of leveraging it for more important efforts. Focus on tests that matter – the ones that deliver findings that will help you sell more stuff!

Start by testing elements that give you big wins, such as calls to action, pricing and offers, headlines, images, forms, and external links.

12. Create A Testing Process & Framework

While the best practices above are key, nothing is more important than having a good testing process and framework. This holds true whether you are an enterprise manufacturer with 200 brands, or a small agency with two.

It starts with translating business needs into digital KPIs. From there, determine what influences those KPIs and define the metrics that move them.

Your process should also cover the testing pipeline: what is currently being tested, what has been tested, and what will be tested next. And it should all come together in a shared test matrix that documents all previous tests, their findings, and your conclusions. This is important because, while you should never stop testing, proven results do not need to be tested over and over again.
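The matrix itself needs no special tooling; even a typed record per test, kept somewhere shared, does the job. A minimal sketch (the fields and the example entries are illustrative):

```python
from dataclasses import dataclass

@dataclass
class TestRecord:
    """One row of the shared test matrix."""
    name: str
    hypothesis: str
    status: str           # "planned" | "running" | "done"
    finding: str = ""
    conclusion: str = ""

matrix = [
    TestRecord("CTA wording", "'Order today' beats 'Buy now'", "done",
               finding="+8% clicks on mobile", conclusion="ship 'Order today'"),
    TestRecord("Short form", "Fewer fields lift mobile conversion", "running"),
]

# Already-proven results stay in the matrix so they aren't re-tested:
print([t.name for t in matrix if t.status == "done"])
```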

Worth A Closer Look

Remember, the results your tests produce are only as good as the experiments you have designed. If they are flawed, your results will be too. Take a closer look at your tests, and apply the above best practices. Doing so will help you design better tests and produce meaningful results that can be used to inform your marketing efforts.

What are you doing to take your A/B and multivariate testing efforts to the next level? Share your tips here!




About the author

Benjamin Spiegel
Contributor
Benjamin Spiegel, Chief Digital Officer, Global P&G Beauty, has nearly 20 years of experience in the technology, advertising and marketing industries. He is known as an innovator, leading the development of strategic solutions that combine data, media, insights and creativity to transform brands and businesses. Prior to joining P&G, he led the search practice across the GroupM agencies and the P&G business for Catalyst, and most recently served as CEO of MMI Agency. In his current role, Benjamin brings to P&G Beauty his digital expertise, leadership and passion for building leading digital capabilities. An industry thought leader, he is a frequent contributor to and speaker at conferences around the world.
