Measuring The Impact Of Conversion Lift

John Quarto-vonTivadar on June 12, 2012 at 9:00 am | Reading time: 5 minutes


How do you measure the impact of conversion improvement on your site? Most marketers know the basics of how to calculate this, but might get confused when it comes to the details — or do they?

Astute readers will recall I began a series on calculating the impact of your continuous improvement efforts a few months ago, called “Making Millions From Losing Tests”.

Several readers wrote in privately asking for an even simpler example. It turns out that it’s the basic lift calculation that seems to be causing doubt, and therefore the detail work “feels shaky”, as one fellow put it. This is not uncommon.

Many of my clients seem to get “stuck” when measuring the impact on their bottom (and top) lines from their conversion improvement and continuous optimization efforts. They’re actually doing the work and getting results — but they manage to confuse themselves (and their bosses) when it comes to reporting those results.

Let me take this chance to clarify a simple way to measure lift. Often, once you have the simple technique in mind, the details fall into place by themselves.

A Hypothetical Example

Let’s assume your company started the year with some core numbers: 250,000 visitors and $1m in revenue. (I’m picking easy numbers on purpose.) That was for January. Then, during February, you started optimizing: visitors went up to 300,000 and revenue jumped to $1.5m. So, the simple lift question: how much of the increase can rationally be argued to be the result of your optimization efforts?

Surely it’s not the full half mil. Nor the extra 50k visitors. Where do you start in guesstimating the impact of the improvement?

Here’s the simplest way to do it. First and most importantly, some amount of your traffic should be funneled into the control (or, as some call it, the “baseline”) part of your site — that is, your site as it was in January. This is what your visitors would have seen if you hadn’t done any improvement testing at all.

Allocate Some Traffic For A Control

As in all good testing programs, some percentage of traffic is allocated to the control. I’ll assume — again, an “easy” number to keep the math simple — that you dedicated 10% of your traffic to the control version of your site. Ten percent of 300,000 visitors is 30,000 visitors.

Now, you also know from your analytics what total revenue came from this group (or, alternatively, you know average order size, total carts completed, conversion rate, etc., from which you can impute this total revenue number). Let’s say revenue from the control group was $125,000.

We know we pushed 10% of our traffic toward the control group and the control group brought in $125,000. So that means if the entire 100% of the traffic were sent to control, it would have generated $1.25m for February.

So the lift, in dollar terms, you could rationally argue, is the difference between that number and the actual total revenue for February: $1.5m (total) minus the imputed revenue of $1.25m from the control group nets out to $250,000. This is what can be attributed to the non-control group — which is to say, your optimization efforts. So your efforts brought in an extra quarter-mil of lift in February, or about a 20% increase in revenue over that baseline.
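To make the arithmetic concrete, here’s a minimal sketch in Python using the hypothetical numbers above; the function names are mine and purely illustrative:

```python
def imputed_baseline_revenue(control_revenue, control_share):
    """Scale the control group's revenue up to what 100% of traffic
    would have generated had it all seen the January site."""
    return control_revenue / control_share

def conversion_lift(total_revenue, control_revenue, control_share):
    """Dollar and percentage lift: actual total revenue minus the
    imputed control baseline, attributable to optimization efforts."""
    baseline = imputed_baseline_revenue(control_revenue, control_share)
    dollar_lift = total_revenue - baseline
    return dollar_lift, dollar_lift / baseline

# February's hypothetical numbers from the article
dollars, pct = conversion_lift(total_revenue=1_500_000,
                               control_revenue=125_000,
                               control_share=0.10)
print(f"Lift: ${dollars:,.0f} ({pct:.0%})")  # Lift: $250,000 (20%)
```

Note that the percentage lift here is measured against the imputed baseline, not against January’s revenue.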

That’s the simplest way to get a first estimate. Of course, you’ll have more detailed numbers for each of your individual tests and campaigns, but their summed total will come out to about this number.

And that’s it.

How To Deal With Less-Than-Ideal Testing Situations

As an aside, you usually aren’t given 100% of the traffic to work with, and you often don’t get to determine how much of the traffic is in the control group. You may need to dial up the control’s share of the traffic you do have access to, so you can be sure the critical control group is getting enough traffic to give you confidence in the results. That share can range from a third (on the high side) down to as low as 5%. Again, it all depends on your traffic.

If you are given less than 100% of the traffic on which to work your continuous optimization efforts, don’t forget an important caveat: don’t let the traffic you do not have access to be counted as your control — you must run a control group of your own. This is because you have to ensure that the conditions of the sample traffic exposed to your control are approximately the same as for the visitors in your ongoing tests. Otherwise, a skew in the sample traffic could invalidate your measurement efforts.

For example, if you’re allowed to test against everything except affiliate traffic, then you can’t use the results from the affiliate traffic as your control — you need a control group drawn from the non-affiliate traffic you do have access to. Otherwise you can’t be sure that your lift is due to your work, and you’re as likely to over-report your efforts’ results as to under-report them.
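To illustrate, here’s a minimal sketch in Python of that bucketing rule; the visitor fields and the “affiliate” label are hypothetical, and in practice your testing tool’s targeting rules would do this assignment:

```python
import random

CONTROL_SHARE = 0.10  # share of *accessible* traffic reserved for control

def assign_group(visitor):
    """Bucket a visitor for testing. Traffic we can't test against
    (here, affiliates) is excluded outright; it must not double as
    the control, or the two groups won't face the same conditions."""
    if visitor["source"] == "affiliate":  # hypothetical field and value
        return "excluded"
    # Control is drawn from the same accessible traffic as the tests.
    return "control" if random.random() < CONTROL_SHARE else "test"

# Hypothetical visitors
for v in [{"id": 1, "source": "organic"},
          {"id": 2, "source": "affiliate"},
          {"id": 3, "source": "paid"}]:
    print(v["id"], assign_group(v))
```

The key design point is that “excluded” and “control” are distinct outcomes: the control competes for the same visitors as your test variants, while inaccessible traffic never enters the experiment at all.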


Opinions expressed in this article are those of the guest author and not necessarily MarTech.


About The Author

John Quarto-vonTivadar
John Quarto-vonTivadar is one of the inventors of Persuasion Architecture and regularly combats innumeracy among marketers in his popular "Math for Marketers" series. John's 2008 best-seller, "Always Be Testing", written with business partner Bryan Eisenberg, has been the standard reference for conversion optimization through testing since its release and has been used as the basis for both academic coursework and corporate training.
