Marketers must get geeky about campaign lift and control groups. Here’s why

When you ask your vendors about lift, ask them about the composition of the control group. Contributor Gladys Kong explains why sophisticated targeting methods make this question more important than ever.

Marketers, do you use mobile advertising to target specific audiences? Sure you do. And when you target specific audiences with mobile advertising, do you care about measuring campaign lift? I’m sure a lot of marketers are probably thinking, “Of course I do!”

So, why do I ask these questions if the answers seem so obvious? Because, unfortunately, most marketers who target specific audiences with mobile advertising and use lift to gauge campaign ROI don’t realize how much goes into measuring it.

Here’s the thing: Lift results are greatly affected by the way your measurement provider determines control groups. Don’t know what a control group is? I’ll explain it in simple terms, and why marketers and their agencies should demand that their mobile location attribution measurement providers be more transparent about how control groups are built.

Remind me, what is lift?

First, let’s refresh our memories about the lift metric. In mobile location attribution measurement, lift represents the difference in visit rate among people exposed to advertising compared to the visit rate of those not exposed. If a larger percentage of those exposed to ads visit a measured location, the result is positive lift. The larger the difference between the exposed and unexposed rates, the higher the lift value.
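
To put that in concrete terms, here’s a minimal sketch of how a lift calculation might look, assuming lift is reported as the relative difference between the exposed and unexposed visit rates. The visit counts are made-up numbers, not data from any real campaign:

```python
# Minimal sketch of a lift calculation, assuming lift is reported as the
# relative difference between exposed and control (unexposed) visit rates.
# All counts below are hypothetical.

exposed_visitors, exposed_total = 4_200, 100_000   # exposed devices that visited / all exposed devices
control_visitors, control_total = 3_800, 100_000   # unexposed devices that visited / all control devices

exposed_rate = exposed_visitors / exposed_total    # 4.2% visit rate
control_rate = control_visitors / control_total    # 3.8% visit rate

lift = (exposed_rate - control_rate) / control_rate
print(f"Exposed: {exposed_rate:.1%}  Control: {control_rate:.1%}  Lift: {lift:.1%}")
```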

Sounds simple. But we don’t want to compare against all unexposed people. If you run a campaign using store locations in Chicago, you don’t want to compare against unexposed people in Miami — you want to use unexposed people from Chicago. What is needed is called a control group: a group that looks like the targeted audience, but was never exposed to the campaign.

When selecting a control group, the devil is in the details. How the control group is built can dramatically affect the lift you measure. If we inappropriately used those unexposed Miami residents to set the baseline visit rate for that Chicago campaign, we wouldn’t have obtained a meaningful result. Clearly, how the control group is selected is critical to proper measurement.

The lack of any industry standard for building control groups and measuring lift can lead to confusion. Brand marketers and their agencies might use multiple vendors for one campaign, and each might use a different method for building the control groups used to measure lift.

Ever see a campaign showing great lift numbers from one vendor — as high as 10 percent or more — while another shows minimal lift of 1 to 3 percent? Yep, you guessed it. Control groups can have a lot to do with those reporting disparities, making it tough for agencies to get a true measure of campaign ROI or vendor value.

Apples to apples (or in-market millennials to in-market millennials)

The key to creating a valid control group is ensuring an apples-to-apples comparison. In other words, rather than just any group of people who didn’t see an ad, we want our control group to be a mirror image of the group that was actually targeted by the campaign’s ads.

To truly understand whether advertising had an impact or drove lift, the goal should be to compare the exposed group to an unexposed group with the same demographic, geographic and even psychographic characteristics.

For mobile measurement, device activity should be considered as well. With some location-based measurement systems, highly active devices are both more likely to be served an ad impression and more likely to be seen in a location. The same mix of device activity levels should be present in the control and exposed groups.
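
As a rough illustration of that matching idea (not any particular provider’s methodology), a control group can be assembled by sampling unexposed devices so their mix across segments such as metro area, age band and device activity tier mirrors the exposed group. The segment keys and data layout below are assumptions made for the sketch:

```python
# Illustrative sketch of building an apples-to-apples control group by stratified
# sampling. The segment keys (metro, age_band, activity_tier) and the dict-based
# device records are assumptions for this example, not a specific vendor's method.
import random
from collections import Counter, defaultdict

def build_control_group(exposed, unexposed, seed=42):
    """Sample unexposed devices so the control group mirrors the exposed group's
    mix of (metro, age_band, activity_tier) segments."""
    rng = random.Random(seed)
    segment = lambda device: (device["metro"], device["age_band"], device["activity_tier"])

    exposed_mix = Counter(segment(d) for d in exposed)   # target counts per segment

    pools = defaultdict(list)                            # unexposed devices grouped by segment
    for d in unexposed:
        pools[segment(d)].append(d)

    control = []
    for seg, count in exposed_mix.items():
        pool = pools.get(seg, [])
        control.extend(rng.sample(pool, min(count, len(pool))))
    return control
```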

Not all measurement providers build control groups using this apples-to-apples approach, however. But here’s why they should: Let’s say Cadillac wants to use a mobile ad campaign to build brand awareness among young people who are in the market for their first car.

If Cadillac and its measurement provider measured foot-traffic lift simply by looking at the exposed group of millennial in-market auto buyers compared to a more general in-market auto-buying population, or even to just Cadillac’s typical, possibly older, auto-buying population, they would not get a valid comparison or accurate reporting results.

The control group should look as close to that targeted group of millennial in-market car buyers as possible.

Why marketers should demand transparency

Marketers and their agencies should ask measurement providers whether their control groups match the demographic and geographic makeup, and the device location activity, of the target audience.

Here’s another example to drive the point home. Say a retailer is targeting a special offer only to people who frequent their store on a regular basis. Let’s remember that lift is calculated by comparing the rate of visits among the exposed audience to the visit rate of those not exposed.

So, if the retailer compared the visit rate among that narrowly targeted exposed group to the visit rate among the much larger, more general customer base, it might appear as though the campaign resulted in a super-high lift. This is only because the targeted group of frequent shoppers has a higher likelihood of visiting the store compared to the general customer base. The real apples-to-apples comparison is a control group also made up of frequent shoppers who were not exposed to the media campaign.
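
To put rough, hypothetical numbers on that (the rates below are invented purely to illustrate the point):

```python
# Hypothetical numbers showing how the wrong control group inflates lift.
# Frequent shoppers naturally visit far more often than the general customer base.

exposed_rate = 0.30            # exposed frequent shoppers who visited
general_control_rate = 0.10    # general customer base (wrong comparison)
frequent_control_rate = 0.27   # unexposed frequent shoppers (apples to apples)

inflated_lift = (exposed_rate - general_control_rate) / general_control_rate   # 200%
fair_lift = (exposed_rate - frequent_control_rate) / frequent_control_rate     # ~11%
print(f"vs. general base: {inflated_lift:.0%} lift | vs. matched frequent shoppers: {fair_lift:.0%} lift")
```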

It may seem a bit geeky, but by taking a little extra time to understand control groups and how metrics like lift are derived, marketers can be that much more confident in the brand and product decisions they base on those metrics.

While industry standardization around details like control groups may be a long way off, hopefully we’ll soon start to see more marketers and their agency partners demand transparency from mobile location attribution measurement providers. The industry will be better for it.


Opinions expressed in this article are those of the guest author and not necessarily those of MarTech.


About the author

Gladys Kong
Contributor
Gladys Kong, CEO of UberMedia, is an expert in mobile technology and data solutions. Gladys is dedicated to innovating and developing new ideas within technology startups. Since joining UberMedia as Chief Technology Officer (CTO) in 2012, Gladys has been responsible for taking UberMedia from a social media app development company to a leading mobile advertising technology company, and for recruiting one of the best data science teams dedicated to consistently producing data solutions that anticipate and respond to today’s diverse marketplace. Gladys’s tenure in technology is extensive: She was CEO and co-founder of GO Interactive, a social gaming firm. Prior to that, she was VP of Engineering at Snap.com and VP of R&D at Idealab, where she helped create numerous companies, including Evolution Robotics, Picasa, X1 Technologies and many more.
