Evaluating location data measurement providers? Here are 5 questions you should ask

Contributor Gladys Kong lays out what marketers need to know about location data to plan and gauge the success of their campaigns.



In the world of mobile location data, there’s an acute need for more robust education and understanding of what’s behind the information that marketers receive in measurement and attribution reports. Only by arming marketers with this knowledge will the location data measurement space achieve the high data quality standards and full transparency we as an industry need today.

Truth is, when it comes to measuring the performance of marketing and advertising efforts using mobile location data, most marketers don’t know what type of data or methodology lies beneath their reports.

For those who are ready to carefully vet their location data measurement providers (to open the hood, so to speak), there are five key areas they should address with vendors or include in RFPs, along with the specific questions they should ask.

Question #1: How is location defined?

How "location" itself is defined greatly affects whether users are counted as present in actual campaign locations after seeing an ad, as opposed to merely being in the vicinity. Analyzing GPS data gathered within precise retail boundary polygons is the most accurate way of confirming whether individuals were actually inside advertisers' store locations. Less precise methods include point-in-radius measurement, contour maps and user check-in data.
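To make that distinction concrete, here is a minimal sketch contrasting a point-in-radius check with a point-in-polygon check. The coordinates, function names and store polygon are illustrative assumptions, not any provider's actual implementation.

```python
import math

def in_radius(lat, lon, center_lat, center_lon, radius_m):
    """Point-in-radius: is a GPS ping within radius_m meters of a center
    point? Uses the haversine great-circle distance."""
    p1, p2 = math.radians(lat), math.radians(center_lat)
    dp = math.radians(center_lat - lat)
    dl = math.radians(center_lon - lon)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371000 * math.asin(math.sqrt(a)) <= radius_m

def in_polygon(lat, lon, polygon):
    """Point-in-polygon via ray casting; polygon is a list of (lat, lon)
    vertices tracing the store's actual boundary."""
    x, y, inside = lon, lat, False
    j = len(polygon) - 1
    for i in range(len(polygon)):
        yi, xi = polygon[i]
        yj, xj = polygon[j]
        if (yi > y) != (yj > y) and x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# A hypothetical rectangular storefront, roughly 110m x 90m:
store_polygon = [(34.000, -118.000), (34.000, -118.001),
                 (34.001, -118.001), (34.001, -118.000)]

# A ping from across the parking lot: about 90m from the store's center.
ping = (34.0005, -118.0015)
```

A 150-meter radius around the store's centroid counts that parking-lot ping as a visit; the polygon test does not, which is exactly the precision gap this question probes.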

In addition to the method for pinpointing location, the actual behavior exhibited by users in campaign locations is important to consider. Location behavior can vary widely — and affect the amount of data generated and available for measurement purposes.

People may not interact with their phones or open ad-supported apps while visiting places like art galleries or theaters, for example. Sports arenas, on the other hand, not only attract far more people, but patrons may be more likely to interact with mobile apps in these places.

Question #2: What systems are in place to account for fraudulent or misleading data?

There are bound to be inaccuracies in mobile location data, no matter the source your measurement provider gets it from. A few of the most common problems arise from misleading device signals and hotspots. Plus, publisher data can be cluttered with unreliable GPS coordinates. Let me explain.

Sometimes publishers send measurement services location data that includes bad GPS information, like unreliable coordinates representing geographic centers of countries, states or cities.

In addition, fraudulent patterns can emerge when many devices produce relatively few requests per day — or when a small number of devices generate a high volume of requests per day. And then there are hotspots, which can also be misleading.

When many unique devices appear in specific lat/long pairs that are typically associated with cellphone tower locations, they can have undue influence on the significance of a location.
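A vendor's cleaning pipeline for these two problems can be sketched in a few lines. Everything below — the ping format, the centroid list, the hotspot threshold — is an illustrative assumption, not any provider's actual system.

```python
# Hypothetical ping records: (device_id, lat, lon).
COUNTRY_CENTROIDS = {(39.8283, -98.5795)}  # e.g., centroid of the contiguous US

def filter_pings(pings, hotspot_threshold=100):
    """Drop pings carrying known centroid coordinates, then drop lat/long
    pairs shared by an implausible number of unique devices (likely cell
    towers or other hotspots rather than real visits)."""
    key = lambda lat, lon: (round(lat, 4), round(lon, 4))
    cleaned = [p for p in pings if key(p[1], p[2]) not in COUNTRY_CENTROIDS]
    devices_at = {}
    for dev, lat, lon in cleaned:
        devices_at.setdefault(key(lat, lon), set()).add(dev)
    hotspots = {k for k, devs in devices_at.items()
                if len(devs) >= hotspot_threshold}
    return [p for p in cleaned if key(p[1], p[2]) not in hotspots]
```

Real systems layer on more signals (request-frequency patterns per device, dwell-time plausibility), but the principle is the same: discard coordinates that cannot represent a genuine visit.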

So, in order to ensure data accuracy, it’s wise to ask vendors what systems they have in place to account for fraudulent data like faulty GPS coordinates or unreliable device patterns.

Question #3: How are exposed visits calculated and reported?

An exposed visits metric gauges the number of users whose devices were spotted at a specific location after being served an ad. Seems pretty simple, right?

Well, location data measurement providers take varying approaches to arriving at that exposed visits number that marketers are so interested in when gauging the ROI of mobile and digital ad campaigns.

There are big distinctions between reported numbers that reflect the “raw value” of exposed visits and those reporting “extrapolated exposed visits.” Let’s start with raw value. When measurement providers report exposed visits using “raw value,” they are measuring the actual count of devices exposed to an ad that have been tracked in campaign locations.

In contrast, when they report exposed visits by gauging extrapolated exposed visits, they are estimating the physical foot traffic using a mathematical model factoring in things like GPS usage, the frequency of ad requests, dwell time and other parameters. The idea behind extrapolating as opposed to reporting raw numbers is to reflect a more accurate estimate of the total number of exposed visits seen in a real-world location.
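The extrapolation step can be summarized as dividing the raw count by the fraction of real visits the provider expects to observe. The parameter names and values below are hypothetical model assumptions, not any vendor's actual formula.

```python
def extrapolated_visits(raw_exposed_visits, gps_share, ping_observability):
    """Scale the raw device count by assumed model parameters: gps_share is
    the share of the audience whose devices emit usable GPS data, and
    ping_observability is the chance a real visit produces an observable
    ping (a function of dwell time, app usage and similar factors)."""
    return round(raw_exposed_visits / (gps_share * ping_observability))
```

With 420 raw exposed devices, an assumed 70% GPS coverage and 60% observability, the model reports 420 / 0.42 = 1,000 extrapolated visits.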

Understanding the difference between these two methods of measuring exposed visits matters because it directly affects how campaign media costs are evaluated. Because the raw value is likely to be smaller than an extrapolated visits number, dividing media spend by the raw value of visits results in a higher perceived media cost.
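The cost effect is simple arithmetic; here is an illustration with invented numbers.

```python
def cost_per_visit(media_spend, visits):
    """Perceived media cost: spend divided by counted visits."""
    return media_spend / visits

spend = 50_000  # hypothetical campaign spend in dollars
raw_cpv = cost_per_visit(spend, 420)             # raw exposed-visit count
extrapolated_cpv = cost_per_visit(spend, 1_000)  # extrapolated estimate
# raw_cpv is about 119.05; extrapolated_cpv is 50.0. Same campaign, very
# different perceived cost depending on which denominator is reported.
```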

Question #4: How are control groups built?

Now for control groups. Marketers may hear this term a lot, but not everyone knows quite what control groups are or why they can have a big impact on how ad exposure and lift are measured.

First, the what: A control group represents the people who were not exposed to advertising. The key to creating a valid control group is ensuring apples-to-apples comparisons between the demographic and geographic audience targets of the campaign and the characteristics of the control, or non-exposed, group. Unfortunately, not all location data measurement services take care to do this.

Consider this scenario: An auto brand targets women in the Los Angeles DMA in the hopes of driving people to dealerships. However, the control group doesn't just incorporate women from the LA area; it encompasses men and women throughout the US who were not exposed to ads. This is a problem because the control group is not similar enough to legitimately compare to the exposed group. The result is an apples-to-oranges comparison, though the marketer may never know if they don't ask how the control group is defined.

What’s the problem? Well, for one thing, this approach can skew lift results, because in this case, the lift percentage is derived from a far broader unexposed group.
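The fix is to restrict the control group to the campaign's own targeting criteria before computing lift. A minimal sketch, with a made-up record format and the LA scenario's targeting as defaults:

```python
# Hypothetical user records: (user_id, gender, dma, was_exposed)
def build_control(users, target_gender="F", target_dma="Los Angeles"):
    """A valid control group: unexposed users who still match the
    campaign's demographic and geographic targeting, so lift is an
    apples-to-apples comparison against the exposed group."""
    return [u for u in users
            if not u[3] and u[1] == target_gender and u[2] == target_dma]
```

A man in Chicago who never saw the ad tells you nothing about whether the ad moved women in LA; this filter keeps him out of the denominator.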

Question #5: How will media planning and placement affect measurement?

Marketers may not realize the impact their media planning decisions can have on location-based measurement. Things such as campaign duration, audience targeting and media placement each affect the ability to measure campaigns precisely and reliably.

For instance, when marketers ask me how long they should run their campaigns in order to get a substantial amount of data and generate valid results, I tell them it depends on the budget and targeting of the campaign; however, for a broadly targeted campaign with a decent budget, one month is generally a good benchmark.

Targeting is important, too. To put it simply, the broader the target, the better the chances of delivering an ad to someone who then visits a location. And while narrowly targeted campaigns limit the size and reach of the exposed group, those unique users may be exposed to campaign ads more frequently.

Lastly, media placement really matters. Many measurement providers rely on matching a user to an anonymized device ID to ensure that when gauging ad exposure, they’re measuring users who actually visited a campaign location. Mobile apps are tied to device IDs, so the majority of ads placed in mobile apps can be readily linked to those IDs. However, most other media channels — such as mobile or desktop web inventory, television or outdoor media — cannot, making those connections trickier.
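At its core, that device-ID matching is a join between two logs. The ID format below is invented for illustration; real providers work with anonymized mobile advertising identifiers.

```python
def matched_exposed_visits(exposed_ids, visiting_ids):
    """Attribution by anonymized device ID: a visit counts as 'exposed'
    only when the same ID appears in both the ad-exposure log and the
    store-visit log."""
    return set(exposed_ids) & set(visiting_ids)

exposed = {"idfa-1", "idfa-2", "idfa-3"}   # devices served the ad
visited = {"idfa-2", "idfa-3", "idfa-9"}   # devices seen in-store
# matched_exposed_visits(exposed, visited) -> {"idfa-2", "idfa-3"}
```

Channels without a persistent device ID (desktop web, television, out-of-home) have no clean key for this join, which is why attribution there is trickier.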

I look forward to seeing smart marketers ask their location measurement providers questions like these. If the mobile location data industry is to truly become transparent, we’ll need marketers to step in, ask questions and hold their providers accountable.


Contributing authors are invited to create content for MarTech and are chosen for their expertise and contribution to the martech community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. The opinions they express are their own.


About the author

Gladys Kong
Contributor
Gladys Kong, CEO of UberMedia, is an expert in mobile technology and data solutions. Gladys is dedicated to innovating and developing new ideas within technology startups. Since joining UberMedia as Chief Technology Officer (CTO) in 2012, Gladys has been responsible for taking UberMedia from a social media app development company to a leading mobile advertising technology company, and for recruiting one of the best data science teams dedicated to consistently producing data solutions that anticipate and respond to today's diverse marketplace. Gladys's tenure in technology is extensive: She was CEO and co-founder at GO Interactive, a social gaming firm. Prior to that she was VP of Engineering at Snap.com, and VP of R&D at Idealab, where she helped create numerous companies, including Evolution Robotics, Picasa, X1 Technologies, and many more.
