Why MMM makes marketers nervous — and why you should use it anyway

Forget the hype and the horror stories. Here’s how to use MMM the right way without turning it into another attribution debate.


Mention marketing mix modeling (MMM) to a performance marketer and you’ll get a strong reaction — either total excitement because it’s all anyone’s talking about or the look my client gave me when I brought it up, like they’d just swallowed something bad.

The reactions rarely reflect what MMM actually is or how it should be used. The excited crowd sees it as the cure for every attribution problem — a way to finally bring clarity to messy data. They’re often channel managers burned by last-click reporting.

The skeptics usually had a bad experience. In my client’s case, the problem wasn’t MMM itself — it was how they used it. The person running the model also purchased the TV media and conveniently made TV look like the hero.

I’ve seen this happen countless times when leading cross-channel measurement projects. 

  • Search teams defend last-click. 
  • Social and programmatic folks lean on platform reports because they flatter their numbers.
  • CTV or TV buyers push incrementality tests or MMM because it’s all they have.

That’s the real danger: when every team picks the measurement system that makes them look best, no one gets the whole picture. The business ends up with conflicting numbers and no clear answer to the only question that matters — how do we optimize campaigns and budgets to drive real growth?

Why MMM feels both scary and antiquated

MMM gets a bad rap. It feels old-school — and for years, performance marketers have laughed at MMM decks while high-fiving the CFO over attribution reports. But here’s the twist: that old-school vibe is precisely why it matters again.

Privacy laws and tracking changes are making the web look a lot more like the past, when you couldn’t follow every user across devices and platforms. MMM doesn’t rely on stitching together user paths, and at this point, multi-touch attribution (MTA) can’t reliably stitch them either. That makes MMM newly relevant.

Why does it feel scary? Because marketers treat it like a replacement for last-touch attribution — their old (and false) single source of truth. With MMM, you could run 10 different models — all with excellent statistical fit — and get 10 different stories about where to spend. 

That ambiguity freaks people out when they’re used to seeing one neat number in one report. It’s the same discomfort teams have with confidence intervals in incrementality testing (despite my belief that they are a feature, not a bug). 

Even worse, once teams accept that MTA belongs in the past, they rush to sign an MMM vendor to solve all their problems. When finance pushes back on cost, MMM gets put on a pedestal as the savior instead of what it actually is — a tool. One that works best when paired with incrementality testing to validate what the model is really telling you.

Dig deeper: Unlocking the power of marketing mix modeling solutions

Making MMM less scary: A workflow that actually works

Here’s how I like to make MMM less intimidating and more useful.

1. Start with a go-dark incrementality test

Run a full program holdout. How much revenue disappears when media is paused? Is that topline contribution worth it, given your margins and the spend to generate it? This exercise helps build credibility with financial stakeholders.

Create simple models in Excel or Google Sheets: if product margins are 50% and $1 million in media only drives $1.5 million in topline, you’re underwater. That clarifies the real use case for MMM — aligning with finance on whether the goal is topline growth (even at the expense of margin), profitable growth or cutting fat to boost earnings.
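That spreadsheet math can be sketched in a few lines. This is a minimal illustration using the article’s own numbers (50% margin, $1 million in spend, $1.5 million in topline); the function name is mine, not a standard formula.

```python
# Sketch of the go-dark profitability check: does media-driven topline,
# after product margin, cover the media spend that generated it?
def media_profit(media_spend, incremental_topline, margin):
    """Gross profit contributed by media after subtracting the spend."""
    return incremental_topline * margin - media_spend

# $1M in media drives $1.5M in topline at a 50% product margin:
profit = media_profit(1_000_000, 1_500_000, 0.50)
print(profit)  # -250000.0 -> underwater by $250K
```

At a 50% margin, $1.5 million in topline is only $750,000 in gross profit, so the $1 million in spend leaves you $250,000 underwater; that is the number to put in front of finance.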

In this context, MMM pinpoints where to redistribute or reduce media spend when current efforts are dragging down the P&L.

2. Use the quiet time to get your data house in order

While the test runs, you’ve got four to six weeks to clean your data and build a hierarchy. This step is critical. To feed a model, you need to group your campaign data thoughtfully — broad enough to show curves, but specific enough to be actionable. 

“All of Facebook” is too broad, and “every campaign name” is too granular. Consider prospecting versus retargeting, or branded versus non-branded search. The right level of detail depends on your media scale, but the goal is to mirror how you plan budgets and where performance naturally varies.
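One hedged sketch of what that grouping step can look like in practice: mapping raw campaign names into a small set of model-ready buckets. The campaign names and group labels here are hypothetical; your hierarchy should mirror how you actually plan budgets.

```python
# Hypothetical sketch: collapse raw campaign names into model-ready groups,
# broad enough to fit response curves but specific enough to act on.
def assign_group(campaign_name):
    name = campaign_name.lower()
    if "facebook" in name or "meta" in name:
        return "meta_retargeting" if "retarget" in name else "meta_prospecting"
    if "search" in name:
        # Check "nonbrand" first, since it contains the substring "brand".
        if "nonbrand" in name or "non-brand" in name:
            return "search_nonbrand"
        return "search_branded" if "brand" in name else "search_other"
    return "other"

campaigns = [
    "Meta_US_Prospecting_Video",
    "Facebook_Retargeting_DPA",
    "Search_Brand_Exact",
    "Search_NonBrand_Shopping",
]
print([assign_group(c) for c in campaigns])
```

The point isn’t the string matching; it’s that every dollar of spend rolls up to a level you would actually rebudget at.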

This is also the time to outline your marketing calendar so product launches, promotions and other events are captured. You don’t want random campaigns that ran during a big sale to get overcredited. It’s easy to overthink this step, but start simple. You already know which promotions and launches skew your numbers. Review recent reports where you’ve noted those spikes and flag them for the model.

Dig deeper: 4 steps to kickstart incrementality without overcomplicating it

3. Run the models, then don’t freak out

Whether you’re using an open-source package or a vendor, you’ll probably get several well-fit models. Don’t panic if they disagree. MMM is just math — it has no outside context. That’s why you start with incrementality testing: it’s your north star for which model to trust.

Look for the one whose baseline (or intercept) aligns directionally with what your go-dark test revealed about organic vs. media-driven revenue. That’s your anchor point. Then apply some institutional knowledge — carefully. You’re not trying to make the model tell the story you want to be true, only to confirm it reflects how your business actually operates.

4. Run channel decomposition and test planning

Once you have a model you trust, use it carefully. The goal isn’t to declare the MMM finished. Like any measurement tool, its value comes from action, not insight. Your job now is to make smarter bets, validate them outside the model and feed those learnings back in. That’s the MMM flywheel.

A few examples:

  • Meta prospecting looks like it’s contributing 5% of the topline with room before diminishing returns? Perfect. Push spend and see if topline scales along the curve. Too large to notice at a national level? Run a geo test by increasing spend in some markets and holding others steady.
  • CTV looks like the hero? Hold it out in a few designated market areas and see if your incremental ROAS estimate falls within the model’s confidence interval.
  • A channel shows no diminishing returns? Don’t trust it unquestioningly. Run a scale test — keep some markets steady, increase others by 50% and double spend in a few more. Chances are the model just hasn’t seen that level of spend before.
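The scale test in the last bullet can be sketched as a simple geo-split plan: hold some markets steady, raise others by 50% and double a few more. The market list, seed and spend figure are illustrative assumptions.

```python
# Hypothetical sketch of a geo scale test: assign markets to
# hold / +50% / 2x spend cells to probe unseen spend levels.
import random

markets = ["NYC", "LA", "CHI", "DAL", "ATL", "SEA", "MIA", "DEN", "PHX"]
random.seed(7)            # reproducible assignment
random.shuffle(markets)

cells = {
    "hold":    markets[0:3],  # keep spend steady (control)
    "plus_50": markets[3:6],  # increase spend by 50%
    "double":  markets[6:9],  # double spend
}
multipliers = {"hold": 1.0, "plus_50": 1.5, "double": 2.0}

base_spend = 10_000           # illustrative weekly spend per market
plan = {m: base_spend * multipliers[cell]
        for cell, ms in cells.items() for m in ms}
print(plan)
```

Randomizing the assignment (rather than hand-picking markets) keeps the hold cell honest as a control, which is what lets you check the model’s curve against observed lift.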

Dig deeper: The smarter approach to marketing measurement

The point

MMM isn’t the end-all, be-all. It won’t save you from hard attribution conversations. It’s a tool — one of several — built to help you make more confident, data-driven decisions. With realistic expectations and a clear framework for turning insights into action, MMM becomes a valuable part of your performance marketing workflow, rather than another black box to debate.

Used well, it creates a shared language between marketing and finance. It gives marketers a structured way to quantify impact and test hypotheses that attribution can’t — and that incrementality testing alone isn’t practical for. It also provides finance leaders with more confidence that marketing dollars are being spent wisely.

The value of MMM isn’t in the output deck or the R-squared value. It’s in how you use it — to make smarter bets, validate what’s working and align the business around what truly drives growth.

Dig deeper: Rethinking media mix modeling for today’s complex consumer journey


Contributing authors are invited to create content for MarTech and are chosen for their expertise and contribution to the martech community. Our contributors work under the oversight of the editorial staff and contributions are checked for quality and relevance to our readers. MarTech is owned by Semrush. Contributor was not asked to make any direct or indirect mentions of Semrush. The opinions they express are their own.


About the author

Tom Leonard
Contributor
Tom helps brands build more profitable performance marketing programs by operationalizing MMM and incrementality testing. He firmly believes incrementality testing is the cornerstone of any successful performance marketing program and that brands need to ensure insights don't stay insights, but are turned into changed behavior.

After spending 7 years at performance marketing agencies running search & social campaigns, leading a 20+ person Programmatic team, and building a performance CTV product, Tom now works with 2-4 brands at a given time as a consultant. 

What doesn't come across on Zoom or Hangouts is a 6'8" frame that played Division 1 volleyball at the University of Southern California.

When not ranting about marketing measurement, Tom enjoys the Pure Michigan outdoors with his wife and daughters.