How small pilots and sprint roadmaps turn AI decisioning into ROI
Four expert marketers discuss why the next jump in performance depends on data discipline, human guardrails and small, fast wins.
At the September MarTech Conference, MarTech contributor Loren Shumate, global VP of marketing, QAD Supply Chain Group, convened three practitioners to demystify AI decisioning:
- Jonathan Moran, head of martech solutions marketing, SAS.
- Katie Robbert, CEO, Trust Insights.
- Kara Alcamo, founder and CEO, Alcamo Marketing.
The panel explained what AI decisioning is, why data hygiene suddenly really matters, how to set guardrails and how to build a credible roadmap that proves ROI.
What is AI decisioning?
The panel agreed AI decisioning moves brands beyond brittle, pre-mapped logic.
- Alcamo: “AI decisioning is more closely aligned with how human decision-making goes. It uses real-time data and patterns to determine the best action — instead of legacy if/then logic that requires you to pre-map every route.”
- Robbert: “It’s about patterns and trends — and giving AI the tools it needs to make the decisions you’ve decided it can make on your behalf.”
- Moran: “It’s the evolution of traditional enterprise decisioning. AI learns from structured and unstructured data — patterns, interactions, trends — to adapt decisions over time.”
The live poll during the session showed a split audience: some were “doing it well,” others were “working on data hygiene,” and a sizable group was “not sure where to begin.” That spread framed the rest of the hour.
Data hygiene: the non-negotiable baseline
If AI is so smart, why does hygiene matter? Because inference fills gaps — and sometimes invents them.
Dig deeper: How AI decisioning will change your marketing
Robbert offered a crisp answer: “Data hygiene is step one. You don’t want AI making assumptions — especially in decisioning.” She uses the Six Cs data-quality framework to audit inputs:
- Clean (free of errors).
- Complete (no missing info).
- Comprehensive (actually covers the question you’re asking).
- Calculable (structured so business users can work with it).
- Chosen (no irrelevant clutter).
- Credible (collected in a valid way you can defend).
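A checklist like the Six Cs can be turned into an automated pre-flight audit before records ever reach a decisioning model. The sketch below is illustrative only: the field names, required set and validity rules are assumptions, not part of the framework itself.

```python
# Hypothetical pre-flight audit inspired by the Six Cs; field names and
# thresholds are illustrative, not part of Trust Insights' framework.
REQUIRED = {"email", "last_touch", "segment"}    # Complete: no missing info
ALLOWED = REQUIRED | {"consent_source"}          # Chosen: no irrelevant clutter

def audit_record(record: dict) -> list[str]:
    """Return a list of Six Cs violations for one record."""
    issues = []
    missing = REQUIRED - record.keys()
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    clutter = record.keys() - ALLOWED
    if clutter:
        issues.append(f"not chosen: drop {sorted(clutter)}")
    email = record.get("email", "")
    if email and "@" not in email:               # Clean: basic validity check
        issues.append("not clean: malformed email")
    if not record.get("consent_source"):         # Credible: defensible origin
        issues.append("not credible: unknown consent source")
    return issues

bad = {"email": "no-at-sign", "fax": "x"}
print(audit_record(bad))
```

Records that fail the audit get fixed or excluded before the model sees them, so the AI never has gaps to fill with assumptions.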
Alcamo added a cautionary tale from an internal project-manager agent. Even with good hygiene, the agent "hallucinated the folder ID we explicitly provided," then failed the job. The fix? Make parts of the flow deterministic (fetch the right folder programmatically), then hand the clean results to AI to summarize. "Hygiene is the baseline," she said. "Then be strategic about where AI belongs — and where it doesn't."
Bottom line: Hygiene first, then right-sized AI. Or as Robbert put it, “Step zero is why; step one is good data hygiene.”
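Alcamo's split — deterministic retrieval, AI only for summarization — can be sketched as two clearly separated steps. The function names and data store below are hypothetical stand-ins; only the division of labor comes from the panel.

```python
# Sketch of "deterministic where it matters, AI where it helps."
# fetch_folder and summarize are hypothetical; the key idea is that the
# lookup is exact code, so the AI step never has to guess an ID.
def fetch_folder(folder_id: str, store: dict) -> list[str]:
    """Deterministic step: look up documents by the exact ID provided."""
    if folder_id not in store:
        raise KeyError(f"folder {folder_id!r} not found")  # fail loudly, no guessing
    return store[folder_id]

def summarize(docs: list[str]) -> str:
    """Placeholder for the AI step: it only ever sees clean, verified inputs."""
    return f"{len(docs)} document(s): " + "; ".join(docs)

store = {"proj-42": ["kickoff notes", "sprint plan"]}
print(summarize(fetch_folder("proj-42", store)))
```

Because the lookup either succeeds exactly or raises an error, a hallucinated folder ID can no longer silently derail the job.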
Standards and guardrails: privacy, bias and the human in the loop
Treat AI like a colleague with least-privilege access.
Alcamo: “When our agent writes weekly status updates, we don’t grant access to every client file. It only gets the minimum data needed for that task. It’s the same way you’d permission a co-worker.”
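That permissioning idea can be made concrete: each agent task declares the fields it needs, and everything else is filtered out before the handoff. The scope map and field names below are assumptions for illustration.

```python
# Hypothetical least-privilege scoping: the task declares what it needs,
# and only those fields are handed to the agent. Scope map is illustrative.
TASK_SCOPES = {"weekly_status": {"tasks", "milestones"}}

def scope_data(task: str, client_data: dict) -> dict:
    """Return only the fields the named task is permitted to see."""
    allowed = TASK_SCOPES[task]
    return {k: v for k, v in client_data.items() if k in allowed}

data = {"tasks": ["draft email"], "milestones": ["launch"], "billing": "..."}
print(scope_data("weekly_status", data))
```

The agent writing the weekly status update never receives billing data at all, mirroring how you would permission a co-worker.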
The governance gap is real. Moran cited SAS research of AI leaders: while “80%–85% use AI daily,” only 7% reported a well-established governance framework, 5% had training and 9% felt fully prepared to comply with regulations. “That leaves a major gap between usage and readiness,” he said. The top concerns? Data privacy, security and governance — even above accuracy and cost.
What should standards cover?
- Clear use-case boundaries and acceptable outputs.
- Bias detection and mitigation procedures.
- Privacy and data protection rules tied to law and policy.
- Output testing to catch hallucinations and drift.
- Human-in-the-loop checkpoints and escalation paths.
- A review cadence (e.g., an AI ethics board) for ongoing oversight.
Robbert offered a simple lens: RAFT — Respect, Accountability, Fairness, Transparency. “Think about the standards you hold a human to,” she said. “Then codify them for AI. And tell customers how you’re using it — transparency matters.”
Dig deeper: Marketers are expanding their use of genAI and seeing returns
Roadmap to ROI: Test, scale, measure
The panel's consensus: build momentum with tight, high-value use cases, measure them rigorously, then scale across functions.
Start where the loop closes. Moran suggested piloting decisioning where outcomes are immediate and observable, like contact centers. “Serve adaptive decisions to agents first (with human oversight). When the model proves itself, let the bot act on some decisions. Then scale to other departments.”
Prove the business case with the Five Ps. Robbert’s Five P Framework translates vague ambition into measurable work:
- Purpose: What problem are we solving? What decision will change?
- People: Who’s involved (internal/external) and who approves?
- Process: How do we do it today? Where does AI fit tomorrow?
- Platform: Which tools and integrations are required?
- Performance: What KPIs define success? How will we measure time saved, lift, or reduced cost?
A hard truth from Robbert: most companies can’t quantify “before” time and cost. “If you don’t measure it today, you can’t claim a return tomorrow,” she said. Document baselines first, then introduce AI.
Bias for action over perfect plans. Alcamo argued for sprint-sized roadmaps: “Technology is moving too fast for a six-month plan. Ask: What’s the one thing we can ship in two to four weeks that will move a KPI? Build that, measure it, then plan the next sprint.”
Scope data to the use case. You don’t need to “clean the lake.” For their PM agent, Alcamo said, “We only needed project data. Not HR, not finance. We started with one project, proved the joins and flow, then scaled.”
Evolving the stack (without buying the internet)
Don’t start by shopping; start by mapping.
Moran positioned AI decisioning as an evolution of enterprise decisioning, not a wholesale replacement. Traditional rules (“if visited pricing page, send offer”) become reinforcement-learning loops that adapt by observing patterns among similar customers and cohorts. But that evolution still relies on:
- High-quality data (and event streams where speed matters).
- Feature and model ops (to train, deploy, and monitor).
- Channel integrations (to deliver the decision in the moment).
- Feedback loops (to measure outcomes and refine policies).
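The shift Moran describes — from a fixed rule to a policy that adapts from feedback — can be illustrated with a minimal epsilon-greedy bandit. This is a generic stand-in for a reinforcement-learning loop, not any vendor's decision engine; the offers and exploration rate are assumptions.

```python
import random

# Fixed rule vs. learning loop. The epsilon-greedy bandit is an
# illustrative stand-in for reinforcement learning, not a real engine.
def rule_based(visited_pricing: bool) -> str:
    """Legacy if/then logic: every route is pre-mapped."""
    return "send_offer" if visited_pricing else "do_nothing"

class OfferBandit:
    """Pick the best-performing offer, exploring 10% of the time."""
    def __init__(self, offers):
        self.stats = {o: [0, 0] for o in offers}  # offer -> [conversions, trials]

    def choose(self) -> str:
        if random.random() < 0.1:                 # explore occasionally
            return random.choice(list(self.stats))
        return max(self.stats, key=lambda o: self.stats[o][0] / (self.stats[o][1] or 1))

    def record(self, offer: str, converted: bool):
        """Feedback loop: observed outcomes refine the policy."""
        s = self.stats[offer]
        s[0] += converted
        s[1] += 1

bandit = OfferBandit(["discount", "demo", "whitepaper"])
bandit.record("demo", True)
print(bandit.choose())
```

The rule never changes; the bandit drifts toward whatever the feedback loop shows is working, which is why the data, ops and integration bullets above remain prerequisites.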
Robbert urged process documentation before automation. “Don’t hand decisions to AI you don’t fully understand. Great process docs get you to doing faster.”
Alcamo underscored the word evolve: “You didn’t ask ‘what should we add?’ You asked how to evolve. Exhaust what your stack can already do. Only buy when a validated use case requires a new capability. Otherwise, you get tech bloat.”
Decisions are smaller than you think
One of the day’s most useful reframes: “decisioning” isn’t one big decision; it’s many tiny ones executed in the correct order.
Alcamo described breaking “write a weekly status report” into atomic steps: get the project ID, fetch ClickUp tasks, read Google Drive docs, scan Slack threads, prioritize updates, draft bullets, format the email, route for approval. “AI struggled when we asked for the whole thing,” she said. “It succeeded when we gave it one tiny step at a time and sequenced those steps.”
Robbert’s analogy: baking. “You don’t dump everything in a bowl. You cream butter and sugar first for a reason. Order matters.”
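The pattern both panelists describe — many tiny steps executed in a fixed order — can be sketched as a pipeline of small functions. The step names mirror Alcamo's example; the bodies are placeholders, not a real implementation.

```python
# "Many tiny decisions, in order": each step is a small function, and the
# sequence is explicit. Step bodies are placeholders for illustration.
def get_project_id(ctx):     ctx["project_id"] = "proj-42"; return ctx
def fetch_tasks(ctx):        ctx["tasks"] = ["ship landing page"]; return ctx
def draft_bullets(ctx):      ctx["bullets"] = [f"- {t}" for t in ctx["tasks"]]; return ctx
def route_for_approval(ctx): ctx["status"] = "awaiting human review"; return ctx

PIPELINE = [get_project_id, fetch_tasks, draft_bullets, route_for_approval]

ctx = {}
for step in PIPELINE:        # order matters: each step sees prior results
    ctx = step(ctx)
print(ctx["status"])
```

Asking an AI for "the whole report" hides this sequence; making each step explicit lets you decide which steps get AI and which stay deterministic, and the final step keeps a human in the loop.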
Tactical vs. strategic data: which to clean first?
A viewer asked whether to fix micro (tactical) or macro (strategic) data first. Moran’s take: tactical wins. “Strategic success criteria are great, but the decision engine needs behavioral, demographic and channel data — the stuff that actually feeds model features and drives offers.” Fix the data that fuels today’s decisions; you can align broader strategy data in parallel.
Are customers ready for AI decisioning?
It depends — and you’re probably using it already. “If you’re using automated bid strategies, you’re already in AI decisioning,” Alcamo noted. The broader cultural divide remains: some teams are “all in,” others reluctant. Either way, adoption should be visible and consent-aware. “Marketing is only ‘creepy’ when it’s irrelevant,” said Vega in an earlier session; here, the panel echoed the sentiment: relevance with respect wins trust.
The takeaway
AI decisioning isn’t a magic brain that replaces your strategy; it’s a faster feedback machine that improves when your data, processes, and guardrails are precise. Start with hygiene and small wins, be explicit about privacy and bias, and keep a human hand on the tiller. From there, the evolution from rules to reinforcement becomes not only possible — it becomes measurable.