Three golden rules for forecasting
Accurate forecasting can serve to inform the direction of your digital marketing campaigns and properly set client expectations. Columnist David Fothergill provides some food for thought to those new to the practice.
Ah, forecasting — it’s a task I personally enjoy, but I know it’s not a universally loved process. That said, forecasting demand is an inevitability for anyone working in any form of marketing.
If you or a client is going to invest money, you’ll want to have at least some idea of what your ROI will be.
If you are setting targets for the year/month, it helps if you are well-informed, rather than taking a stab in the dark (or worse, just setting your ideal figure blindly).
I didn’t want to provide another practical “how-to” guide, but rather give a conceptual view of three golden rules to help make sure your process of forecasting is as valuable and actionable as possible.
There are lots of posts and resources on the best methods and data sources to use for forecasting, so I won’t cover that here.
1. Forecast across a range, not a point
> "We must become more comfortable with probability and uncertainty." (Nate Silver)
By its nature, forecasting is a step into the unknown. There is very little chance that your forecast will be 100 percent correct. Embrace this fact, and you’ll find that it’s both educational and rewarding to start thinking about your forecast as a “range” of possibilities.
Here’s why I say educational: If you are forced to consider how much you don’t know, how much your metrics can vary, what factors you are reliant upon and so on, then you are forced to understand more about the dynamics of your market. This immersion in the influences is wholly valuable in terms of the knowledge it will provide you.
You may build in your ranges by confidence intervals (statistically derived measures of uncertainty) or by simple scenarios (e.g., “What does it look like if we manage to increase conversion rate by 0.5 percent?” and “What does it look like if conversion decreases by 0.5 percent?”).
The benefit of the latter approach is that you are now armed with a sketch of the playing field for many eventualities ("Argghhh, conversion rate has dropped, what does this mean in the context of the target?" *pulls out scenario forecast*). Of course, it doesn't solve your problem, but it puts you in a good position: you know what the impact will be, the size of the task at hand and the tangible scope of the traffic/revenue increase to chase.
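The scenario approach above can be sketched in a few lines of code. This is a minimal illustration with entirely hypothetical inputs (the session volume, baseline conversion rate and average order value are made-up figures, and I'm reading the "0.5 percent" swing as percentage points), not a production forecasting model:

```python
# Sketch of a scenario-based forecast range (all figures are hypothetical).
sessions = 120_000          # forecast monthly sessions (illustrative)
base_cvr = 0.020            # baseline conversion rate of 2.0% (illustrative)
aov = 55.0                  # average order value (illustrative)

# Scenarios: conversion rate moves up or down by 0.5 percentage points.
scenarios = {
    "pessimistic": base_cvr - 0.005,
    "expected":    base_cvr,
    "optimistic":  base_cvr + 0.005,
}

for name, cvr in scenarios.items():
    revenue = sessions * cvr * aov
    print(f"{name:>11}: CVR {cvr:.1%} -> revenue {revenue:,.0f}")
```

Even a toy table like this gives you the "range, not a point" view: when reality lands somewhere between the pessimistic and optimistic rows, you already know roughly what that means for revenue.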
2. Be clear about assumptions made
> "It is useless to attempt to reason a man out of a thing he was never reasoned into." (Jonathan Swift)
Knowingly or not, you will be making many, many assumptions when forecasting.
For example, there are lots of step-by-step tutorials on how to grab some keyword data from Google, crunch the numbers and voila, there’s your forecast. This approach is fine to follow, but two huge assumptions are baked in:
- The Keyword Planner data and click-through-rate data you've used accurately reflect reality.
- Things will continue as they are in the future.
My advice here is not to change approach or try to guard against it, but simply to acknowledge that you’ve made these assumptions, and annotate your forecast to this effect.
Making your assumptions explicit will be useful both to yourself and to others for future reference. From the get-go, it enriches the raw data with useful context (e.g., "This assumes that the temperature during key months follows the pattern of the last five years."). It also ensures that everyone with an interest in your forecast is under no illusions about the uncertainty involved.
3. Revisit your forecasts
> "Watch every detail that affects the accuracy of your work." (Arthur C. Nielsen)
There is a fine line when it comes to revisiting forecasts: I always try to warn against the "re-forecast because we're not meeting target" trap. Not that it should never be done, just that jumping straight into re-forecasting means you'll pass over opportunities to draw out information by comparing the forecast data with reality.
Revisiting a forecast can be highly valuable when it comes to uncovering operational or strategic insight. To give a couple of examples:
- Accuracy. Measuring how accurate your model is in the face of reality can tell you much. Did you underestimate the impact of an important factor? Is variability much higher than expected? Is the method used totally unreliable? Introspection of the approach and results is one of the single best ways to improve future forecasting performance. (Best-practice accuracy measures, such as Mean Percentage Error, are beyond the scope of this article, but they are well documented elsewhere.)
- Assessing underlying causes. Assuming you have confidence in your model, investigating the reasons that you are above or below your expected range can be very useful. Say you decided to reinvest a greater percentage of your budget in programmatic display than you have done historically, and overall your revenue increased vs. prediction. Seeing that the ROI improved is one thing, but seeing the overall impact in terms of beating expectations is a much more powerful message.
Hopefully, what this post has lacked in practicality, it’s made up for in food for thought. I’ll wrap up by saying that forecasting will always be an interesting blend of science and judgement.
Embracing both in equal measure should allow you to create more useful forecasts for yourself and to keep extracting value well beyond the initial number-crunching.