6 questions to help you evaluate an attribution modeling vendor
How do you choose the right attribution modeling vendor? Columnist Alison Lohse offers six questions to help you attain transparency and navigate the tricky waters of attribution.
Full disclosure: I am not a data scientist, yet below I’ll talk about modeling for marketing measurement.
Disclosures are tricky. Facebook is being sued over its disclosure of the snafu in its video metrics. Would they be in this legal mess if they had not revealed the mix-up? I think the answer is yes, eventually, as truth cannot stay hidden, and marketers demand and deserve transparency.
This is especially true for marketing measurement modeling. You get transparency by asking the right questions.
Here is a list of questions to ask your vendor or data scientists to get to the truth and transparency in attribution.
1. What algorithm(s) is the model using?
Ideal answer: Best-of-breed predictive machine learning algorithms like gradient boosting machines, FTRL, neural networks, game-theoretic methods such as Shapley value attribution, and logistic regression.
Just as technology advances, so does data science. Many non-mathematicians, including me, love logistic regression, as it's one of the methods we understand best. It produces an equation with one dependent variable and many independent variables, and each independent variable's coefficient describes how variation in that input drives variation in the dependent variable, such as sales.
Such a perfect scenario — if I increase X budgets in search, video and TV, my sales will be Y, as foreseen by the amazing know-it-all equation. So what happens if seasonality affects sales? What effect does cutting TV ads have on the overall marketing portfolio?
The algorithms mentioned can help factor in seasonality and interaction effects between channels. This brings me to the next question…
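To make that "equation at the end" concrete, here is a minimal logistic regression trained with gradient descent in plain Python. The channel names, spend figures and conversion labels are made up for illustration; a production attribution model would use a vetted library and real user-level data.

```python
import math

# Hypothetical toy data: weekly spend in two channels (search, video)
# and whether sales exceeded target that week (1) or not (0).
data = [
    ([1.0, 0.2], 0), ([2.0, 0.5], 0), ([3.0, 1.0], 1),
    ([4.0, 1.5], 1), ([2.5, 2.0], 1), ([0.5, 0.1], 0),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(weights, bias, x):
    """Probability that the target is hit, given channel spends x."""
    return sigmoid(sum(w * xi for w, xi in zip(weights, x)) + bias)

def train(data, lr=0.5, epochs=2000):
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            p = predict(weights, bias, x)
            err = p - y  # gradient of the log-loss for this record
            weights = [w - lr * err * xi for w, xi in zip(weights, x)]
            bias -= lr * err
    return weights, bias

weights, bias = train(data)
```

The learned weights play the role of the coefficients in the regression equation: each one says how much a unit of spend in that channel moves the predicted probability.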
2. How many algorithms are being used?
Ideal answer: Blend of performance from at least three algorithms.
All algorithms have advantages and disadvantages, and it’s best not to place all your eggs in one basket. Every industry is different, and even within an industry, each client is different.
For example, within the same industry we saw that on one client's data set, logistic regression got it right 90 percent of the time, while for another of our clients, logistic regression predicted only 30 percent of the variation.
Experimental design or A/B testing is rolled into many of the advanced machine learning algorithms. These machine learning algorithms pick up variations that work, at scale, and enhance their predictions.
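One simple way to blend at least three algorithms is to weight each model's prediction by how well it performed on validation data. The model names, accuracies and probabilities below are illustrative numbers, not real results.

```python
# Hypothetical validation accuracies of three stand-alone models.
accuracies = {"logistic": 0.90, "gbm": 0.84, "neural_net": 0.78}

# Each model's predicted conversion probability for one user (illustrative).
predictions = {"logistic": 0.62, "gbm": 0.71, "neural_net": 0.40}

def blend(predictions, accuracies):
    """Weighted average of predictions, weighting by validation accuracy."""
    total = sum(accuracies.values())
    return sum(predictions[m] * accuracies[m] / total for m in predictions)

blended = blend(predictions, accuracies)
```

The blend stays between the most pessimistic and most optimistic model, and it leans toward whichever algorithm has earned more trust on this client's data — which matters precisely because, as above, the best algorithm varies client by client.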
3. How are the algorithms combined?
Ideal answer: Algorithms are combined to create the model itself.
To truly leverage the strengths of different algorithms, a combined model is better than stand-alone models that talk to each other through variable exchange. But to combine the algorithms, data scientists with experience in feature engineering are required.
Domain understanding and in-depth expertise in advanced algorithms available for modeling don’t hurt either!
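A common way to build one combined model, rather than averaging stand-alone outputs, is stacking: the base models' predictions become engineered features for a second-stage combiner. The sketch below assumes three hypothetical base models whose out-of-fold probabilities are already computed; the interaction feature is one small example of the feature engineering mentioned above.

```python
import math

# Assumed inputs: out-of-fold predictions from three base models,
# paired with the actual conversion outcome (illustrative numbers).
rows = [
    ((0.9, 0.8, 0.7), 1),
    ((0.2, 0.3, 0.4), 0),
    ((0.8, 0.9, 0.6), 1),
    ((0.3, 0.1, 0.2), 0),
]

def meta_features(p):
    """Feature engineering on base-model outputs, incl. an interaction."""
    p1, p2, p3 = p
    return [p1, p2, p3, p1 * p2]

def train_combiner(rows, lr=0.5, epochs=500):
    """Fit a logistic combiner over the engineered meta-features."""
    w, b = [0.0] * 4, 0.0
    for _ in range(epochs):
        for p, y in rows:
            x = meta_features(p)
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = 1.0 / (1.0 + math.exp(-z)) - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

w, b = train_combiner(rows)

def combined_predict(p):
    x = meta_features(p)
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))
```

The combiner learns how much to trust each base model, and the interaction term lets it capture cases where two models agreeing is more informative than either alone.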
4. How is the model validated?
Ideal answer: It is validated with sample data that the model has not used in training.
Just as an athlete trains for an event like the Olympics, the model is trained for predictions by showing it real-world data to help it learn the patterns. It tries to understand the variations in the data so it can apply them to future scenarios.
If an athlete is trained for a 1,500-meter race and then tries out a sprint, the person may or may not succeed. But algorithms are expected to be prepared for all scenarios, and if they memorize a scenario, their prediction may only work if that same situation is recreated in real life, or else it can be really off and lead to disastrous results.
So validation of performance on unseen data, as well as regular monitoring, is required.
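The split itself is straightforward: set aside a slice of the data that the model never trains on, and score on that. The sketch below uses a made-up conversion rule so it is reproducible, and deliberately shows why a model that merely memorizes its training data fails this test.

```python
import random

random.seed(7)

# Hypothetical user-level records: ([search_spend, video_spend], converted).
# The conversion rule is made up purely so the example is reproducible.
records = []
for _ in range(100):
    x = [random.random(), random.random()]
    records.append((x, 1 if x[0] + x[1] > 1 else 0))

# Hold out 20 percent of the data; the model never trains on it.
random.shuffle(records)
split = int(len(records) * 0.8)
train_set, holdout = records[:split], records[split:]

def accuracy(predict, data):
    """Fraction of records the model classifies correctly."""
    return sum(predict(x) == y for x, y in data) / len(data)

# A model that memorizes training rows looks perfect on the training set...
memorized = {tuple(x): y for x, y in train_set}
train_acc = accuracy(lambda x: memorized.get(tuple(x), 0), train_set)

# ...but the holdout set exposes that it learned no general pattern.
holdout_acc = accuracy(lambda x: memorized.get(tuple(x), 0), holdout)
```

This is the athlete analogy in code: perfect recall of the 1,500-meter training course says nothing about the sprint it has never run.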
5. How frequently is the model updated or validated?
Ideal answer: Every time it is used to provide predictions.
Once the model is created, it’s rarely revisited by most organizations. With changing business conditions each day, I recommend continuous monitoring of performance of the model.
In fact, every time the model is in use, the performance has to be tracked and decline in performance rectified so that on-the-fly optimizations are accurate. This also enables us to measure the brand equity in terms of a baseline, which can be adjusted hourly as well.
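Continuous monitoring does not require anything exotic; a rolling window of recent hits and misses is enough to flag decay. The class below is an illustrative sketch, not a vendor's actual implementation, and the window and threshold values are arbitrary.

```python
from collections import deque

class ModelMonitor:
    """Track recent prediction accuracy and flag performance decline."""

    def __init__(self, window=50, threshold=0.75):
        # deque with maxlen automatically drops the oldest outcome.
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, predicted, actual):
        """Log whether the latest prediction matched the real outcome."""
        self.outcomes.append(predicted == actual)

    def needs_retraining(self):
        """True once a full window of evidence falls below the threshold."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.outcomes) / len(self.outcomes)
        return accuracy < self.threshold

monitor = ModelMonitor(window=10, threshold=0.75)
```

Calling `record` every time the model serves a prediction is what makes "validated every time it is used" practical rather than aspirational.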
6. How granular are the insights available?
Ideal answer: As granular as the user-level data available.
Granularity is the need of the hour for performance marketers making daily or sub-daily adjustments to their campaigns to adapt to changing trends.
It’s harder for algorithms to predict small, short-term variations than long-term trends. This is as true of daily stock prices as it is of marketing measurement. Long-term predictions, such as next year’s budget needed to achieve certain revenue goals, are more likely to be accurate when the algorithms can also predict short-term outcomes accurately.
To stay agile and top-of-mind for the cross-channel consumer, a very granular creative or keyword-level recommendation is required, and hence, the data to provide it.
As marketers try to navigate the ins and outs of the technology available, analysts and the data science community can come forward to smooth the road ahead. I have traversed the path many times with clients, from Fortune 100 businesses to ecommerce startups. Question the experts, and get the information you need to make the right decisions.
Opinions expressed in this article are those of the guest author and not necessarily MarTech. Staff authors are listed here.