Continuous Metrics Optimization: I Before E, Except After C
“I Before E, Except After C” — do you remember that from 4th grade spelling? Well, it works just as well when you’re concentrating on your metrics and optimization program.
What do the letters mean for metrics optimization?
- I is for Insight — learnings you derive from your continuous optimization efforts
- E is for Earnings — which is presumably why you’re going through the optimization process
- C is for Conversion — which is how you’re measuring the efficacy of your past I on your current E
Think about it this way: every time you perform a test — heck, it doesn’t even have to be a formal test — you’re using your past experience, learnings and insight to guide your decisions as to what you should be testing.
And this is not a science lab, where one might run a what-if experiment to tease out deep rules of the universe. The reason you’re doing the test at all is that something about what you’re proposing to test — based on your experience — might be better than what you currently have. No one on the business side runs an experiment to try to make things worse (though from some of the testing schedule proposals I’ve seen at companies, I’m starting to question that!).
And the actual improvement your company nets from your optimization efforts — your Earnings — comes after you’ve applied your Insight. Fair enough, especially on the net… because everyone takes credit when there’s a net increase in revenue.
You know the old adage: “success has many fathers, but failure is an orphan.” So when you propose a test, perform it well, and it results in a net positive for the company, anyone who’s touched any part of the effort is surely going to be in line for some of the credit.
Note: If you’re at a big enough company, you may well notice that the total amount of credit claimed is often far in excess of the total amount of effort put in. Ah, but I digress…
Rewarding Smart Failure
What happens when your testing efforts don’t result in a net gain for the company — when the test fails? Well, that’s precisely where you drive future Insight.
Your learnings come from your failings. Even better, after a failure there isn’t a whole line of cousins thrice-removed claiming they were instrumental to the test and competing with you for the new learnings. You often have that insight all to yourself.
This is a very human sort of occurrence, going back to the first caveman who put his hand in a fire. At several of the large companies I consult with, I encourage them to publicly reward folks who propose tests that don’t pan out — assuming the test ideas weren’t idiotic in the first place — because the insight that comes from great test ideas that fail is the basis for the great test ideas that succeed.
In other words, “I Before E, Except After C” = “Insight comes before Earnings, Except after Conversion improvements.”
Great companies embrace failure because it makes them smarter. How does your company reward smart failure?