Why your media decisions matter for campaign location attribution
Vendors may seem to hold all the cards for location-based campaign measurement, but contributor Gladys Kong explains how your media buys have a substantial influence.
Marketers using location data to gauge the effectiveness of their campaigns need to be able to trust their location attribution measurement providers, and that means the location data industry desperately needs more standardization and transparency to inspire that trust.
For that to happen, marketers must call on providers to give them more detail about how visits are reported and how control groups are built. In the past couple of months, I’ve shared tools they can use to do just that.
However, measurement companies are not the only ones holding the reins. Marketers themselves, perhaps unknowingly, already control several important elements of their campaigns which have a direct impact on the location data and, by extension, what it reveals. Yes, as far as location-based attribution is concerned, the media-related decisions marketers make have an inherent effect on the precision and reliability of the numbers in their campaign measurement reports.
In fact, whether mobile campaign ads run in-app or on the mobile web makes a big difference when it comes to measuring with precision.
Let’s say a regional Toyota dealership group delivers ads for its spring sales event inside mobile apps only and uses a location data measurement partner to track whether people who saw those ads went to its dealerships. In this case, the advertiser would have certainty that the same devices that showed up at the dealership were served an ad.
To in-app or not to in-app?
The reason is technical, but it’s not difficult to grasp. Because mobile apps are installed and therefore tied to device IDs, the majority of ads placed in mobile apps can be linked readily to those IDs, usually when a measurement provider’s SDK (software development kit) is integrated with the app. This allows measurement providers to easily connect ad exposure to visits associated with a specific user’s device.
However, people sometimes use mobile browsers instead of mobile apps on their phones, and advertisers want to reach them on the desktop, on television and in outdoor media, too. That’s where things get trickier. If that Toyota dealership group includes ads served via TV, desktop and mobile web in its media plan, the IDs of devices that were served ads typically are not known.
So, an additional step is required to determine whether a device tracked in the dealership is associated with someone who saw their ad on the desktop or mobile web. Measurement services often work with cross-device connection or bridging technologies that allow measurement providers to tie ad exposure to device IDs through IP addresses and cookie information. While these systems are highly complex and provide reliable matches, they are primarily based on probabilistic estimates rather than directly tracked device ID information.
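To make the idea of probabilistic bridging concrete, here is a deliberately simplified toy sketch: a mobile-web ad exposure known only by its cookie and IP address gets linked to the device IDs seen on that same IP. All identifiers and the matching logic are invented for illustration; real bridging systems use far richer signals and scoring.

```python
# Toy illustration of probabilistic cross-device bridging. A shared IP
# (e.g., a home Wi-Fi network) only *suggests* the same household or
# person, which is why these matches are estimates, not certainties.
web_exposures = [
    {"cookie": "ck-001", "ip": "203.0.113.7"},
    {"cookie": "ck-002", "ip": "198.51.100.4"},
]
device_sightings = [
    {"device_id": "IDFA-AAA", "ip": "203.0.113.7"},
    {"device_id": "IDFA-BBB", "ip": "203.0.113.7"},
    {"device_id": "IDFA-CCC", "ip": "192.0.2.55"},
]

def bridge(exposures, sightings):
    """Map each exposed cookie to candidate device IDs sharing its IP."""
    by_ip = {}
    for s in sightings:
        by_ip.setdefault(s["ip"], []).append(s["device_id"])
    return {e["cookie"]: by_ip.get(e["ip"], []) for e in exposures}

print(bridge(web_exposures, device_sightings))
# {'ck-001': ['IDFA-AAA', 'IDFA-BBB'], 'ck-002': []}
```

Note that "ck-001" matches two candidate devices and "ck-002" matches none, which is exactly the ambiguity that makes mobile-web attribution less precise than in-app attribution.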
Because of all this, the precision of measurement results is affected by whether or not marketers include in-app ads in a campaign. And because device ad exposure for in-app ads can be tracked directly, in-app campaigns may require lower impression minimums than mobile web and desktop campaigns do to achieve reliable results data.
The choices marketers make regarding media placement also affect whether measurement providers can obtain enough valid results data. For example, most measurement providers require marketers to reach impression volume minimums to ensure that there is a statistically significant amount of data. If an advertiser delivers a couple of million impressions to a small number of locations with limited foot traffic, there simply may not be enough data to produce a legitimate measure of a campaign.
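The statistical-significance point can be sketched with a standard two-proportion z-test comparing exposed and control visit rates. The thresholds and visit counts below are illustrative assumptions, not any provider's actual minimums:

```python
import math

def lift_is_significant(exposed_visits, exposed_devices,
                        control_visits, control_devices, z_crit=1.96):
    """Two-proportion z-test: is the exposed group's visit rate
    significantly higher than the control group's (~95% confidence)?"""
    p1 = exposed_visits / exposed_devices
    p2 = control_visits / control_devices
    pooled = (exposed_visits + control_visits) / (exposed_devices + control_devices)
    se = math.sqrt(pooled * (1 - pooled)
                   * (1 / exposed_devices + 1 / control_devices))
    return (p1 - p2) / se > z_crit

# A large campaign with a modest lift clears the bar...
print(lift_is_significant(1200, 100_000, 900, 100_000))  # True
# ...but the same visit rates over a tiny footprint do not.
print(lift_is_significant(12, 1_000, 9, 1_000))          # False
```

The second call shows why low foot traffic undermines measurement: the visit *rates* are identical to the first call, but the sample is too small for the lift to be distinguishable from noise.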
Getting enough data
Ultimately, it’s all about creating enough data, so things like campaign duration and targeting matter, too. As a rule of thumb, longer campaigns are better when it comes to generating a substantial amount of data to support analysis. If marketers want a report just a week into a campaign, the results may not be very useful. Typically, measurement providers will have enough data after one month, and at that point, marketers can get valid campaign reports more frequently.
There are exceptions, such as campaigns promoting movie opening weekends or special weekend sales during which marketers run a high volume of impressions in a short period of time to drive people to an event. Due to the volume of ads being served, those types of short-term campaigns likely will generate enough visits to provide statistically significant results.
As for targeting, the broader the target, the more people can be reached, and the more likely it is that an ad will be delivered to someone who then visits a location. Consider this scenario: that Toyota dealership group wants to target only moms with at least three children who already own a minivan in a certain DMA. While the target segment may be more likely to be interested, the narrowness of the target would limit the size and reach of the exposed group. So, while those same unique users would probably see the ads more frequently, the number of them who might visit a dealership is also limited.
Frequency capping factors in, too. Most mobile ad firms require advertisers to limit the number of ads a unique user sees in a 24-hour period. If a target audience is relatively niche and each targeted device can be served an ad only three times per day, the total number of impressions that can be delivered in a given period shrinks. Keep in mind that frequency capping also spreads delivery across more unique users, which can have a positive impact on measurement results.
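The interaction between audience size, frequency caps and flight length is simple arithmetic. The numbers below are hypothetical, but the calculation shows why a niche audience plus a cap puts a hard ceiling on deliverable impressions:

```python
def max_impressions(audience_size, daily_cap, days):
    """Upper bound on deliverable impressions under a frequency cap:
    every targeted device served the maximum allowed ads every day."""
    return audience_size * daily_cap * days

# A niche audience of 50,000 devices, capped at 3 ads/day,
# over a two-week flight:
print(max_impressions(50_000, 3, 14))  # 2100000
```

If a measurement provider's impression minimum sits above that ceiling, the marketer must broaden the audience, lengthen the flight or raise the cap before reliable measurement is even possible.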
So, while measurement providers have a lot of say over how mobile visits are measured and how control groups are built, marketers have more control than they think over important factors affecting the outcome of their location-targeted campaigns.
Opinions expressed in this article are those of the guest author and not necessarily MarTech.