When testing, look at the big picture
Optimizing for MQLs, for example, can have big implications for downstream processes
A common martech activity is testing different UI/UX elements (content, design, flow, etc.) in order to optimize a metric like lead form conversion rate. While responsibility for conducting such a test usually falls to a conversion rate optimizer, other practitioners (whether they're marketers, makers, modelers, or maestros) are worth involving, since tests can have broad ramifications.
Indeed, A/B and multivariate tests typically have implications that reverberate throughout a martech stack, and sometimes beyond it. In some cases that fact presents itself during the experiment phase; in others, the implications pop up later, during implementation. That's why it is important for martech practitioners to understand data flows.
Marketing and sales
The interaction between marketing and sales activities is one factor to consider. Definitions for marketing qualified leads (MQLs) and sales qualified leads (SQLs), while related, differ for good reasons: marketing and sales have different procedures, metrics, and goals.
For instance, a commonly advocated way to boost lead form conversion is to remove friction (in other words, fields). In many cases, people are more likely to complete a form that requires little input and interaction than one that requires a lot. But simplifying a form can increase the quantity of leads while lowering their quality. What's good MQL-wise is not necessarily beneficial from the perspective of SQL metrics.
Some of the “expendable” fields are likely for lead scoring. They cause friction for marketing purposes, but they can help the sales team better understand a potential customer. This enables salespeople to focus on targets who show strong intent to purchase, as well as tailor their efforts to the lead’s circumstances.
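To make that concrete, lead scoring often boils down to weighting answers from form fields. A minimal sketch, where the field names, weights, and threshold are entirely hypothetical:

```python
# Minimal lead-scoring sketch. The fields, weights, and threshold
# below are hypothetical, purely for illustration.
def score_lead(lead: dict) -> int:
    score = 0
    if lead.get("company_size", 0) >= 100:
        score += 30
    if lead.get("job_title", "").lower() in {"director", "vp", "cmo"}:
        score += 25
    if lead.get("budget_usd", 0) >= 10_000:
        score += 25
    if lead.get("timeline_months") is not None and lead["timeline_months"] <= 3:
        score += 20
    return score

def is_sales_ready(lead: dict, threshold: int = 50) -> bool:
    return score_lead(lead) >= threshold
```

Note what happens if a "friction" field like `budget_usd` is dropped from the form: that signal can never contribute to the score, so some leads salespeople would have prioritized become indistinguishable from the rest.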
So it’s entirely possible for an experiment variation that has fewer lead scoring fields to boost marketing conversions while inhibiting salespeople from closing deals. Thus, when conducting such an experiment, it’s important to consider and monitor downstream metrics.
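One way to monitor those downstream metrics is to track each experiment variant through the whole funnel, from visitor to closed deal, rather than stopping at form submission. A sketch of that accounting (all counts are made-up illustration data):

```python
# Per-variant funnel tracking: compare conversion at each stage,
# not just at the form. All counts are made-up illustration data.
def funnel_rates(variant: dict) -> dict:
    return {
        "form_cvr": variant["leads"] / variant["visitors"],
        "close_rate": variant["deals"] / variant["leads"],
        "visitor_to_deal": variant["deals"] / variant["visitors"],
    }

control = {"visitors": 10_000, "leads": 300, "deals": 30}
shorter_form = {"visitors": 10_000, "leads": 450, "deals": 27}

# In this made-up data, the shorter form wins on form conversion
# (4.5% vs 3.0%) but loses on visitor-to-deal (0.27% vs 0.30%):
# the downstream view reverses the verdict.
```

Judged only on form conversion rate, the shorter form looks like a clear winner; judged end to end, it costs deals.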
Another factor to consider is how and when to collect required information. Removing or moving data collection points in order to boost MQL or SQL metrics is likely to have downstream ramifications, even after the sales process is completed.
For example, when selling insurance (health, dental, life, etc.), calculating the final cost of a policy requires a lot of detailed, sensitive information about the person covered. Does it make sense to ask for much of that information on the initial lead form, so that when a salesperson follows up they can give the prospect a price estimate with some certainty? Or does it make sense to gather just general information and let the salesperson provide a ballpark estimate along with an explanation of how various factors will influence the cost? There are pros and cons to both approaches, and either could be pursued depending on whether more total leads or more qualified leads are called for.
However, it’s important to know what information is ultimately required. A prospect will have to provide all of it at some point to turn into a customer. Thus, toggling between those collection options will require a bigger picture view. If the decision is to remove a field from a lead form, when and where downstream will that information be collected?
This may involve teams and systems that are outside of the martech stack, and therefore could require multi-departmental orchestration. Then on top of that, if the decision is made to switch when certain information is collected, what happens to people who are in between the old and new collection points at the time of the switch?
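For instance, if a field moves from the lead form to a later step in the process, leads captured before the switch may lack data the new flow assumes is already present. A simple audit sketch for catching those in-between leads (field and record names are hypothetical):

```python
# Flag in-flight leads that are missing a field the new flow assumes
# was collected upstream. Field and record names are hypothetical.
REQUIRED_AFTER_SWITCH = {"email", "coverage_type"}

def missing_required(lead: dict) -> set:
    return {f for f in REQUIRED_AFTER_SWITCH if not lead.get(f)}

in_flight = [
    {"id": 1, "email": "a@example.com", "coverage_type": "dental"},
    {"id": 2, "email": "b@example.com"},  # captured before the switch
]

# IDs that need backfilling before the new flow can process them
needs_backfill = [lead["id"] for lead in in_flight if missing_required(lead)]
```

An audit like this gives the teams involved a concrete list of records to reconcile, rather than discovering the gap when a downstream system errors out.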
Consider the big picture
It is crucial to consider the ramifications of common martech activities like conversion testing; that perspective helps ensure success. There's more to consider than MQL metrics, and at times an improvement in that one factor may come at a great cost down the line or in another crucial area.
As many practitioners can attest, changes are sometimes more complex than they appear. It's frustrating to invest effort into finding a way to boost an upstream metric only to discover that it won't work due to downstream issues. Understanding data flows within and beyond the martech stack is one way to avoid wasting time and effort on UI/UX tests that ultimately won't pan out.