On being profitably wrong
I loved this quote from Joseph Tussman:
“What the pupil must learn, if he learns anything at all, is that the world will do most of the work for you, provided you cooperate with it by identifying how it really works and aligning with those realities. If we do not let the world teach us, it teaches us a lesson.”
The final line is a bit clumsy, but I think it means he'd agree with me on this:
Every business and every design choice is part of an experiment – whether or not we want it to be.
What I mean is that if we don't define our experiment variations and outcomes, the world will define some variations and outcomes for us – in the form of competition or changes in the marketplace.
We don't get to choose whether or not what we put out there is part of an experiment.
What we do get to choose is this:
A) we can stick to doing only the thing we believe should work. Lots of businesses do just this and, you never know, we may be among the lucky handful that thrive for a while on that approach. (More likely, we'll find ourselves outpaced by competitors, or pulling down the shutters after losing customers and revenue.)
Or:
B) we can invite the world to teach us what really works and align ourselves with it. We can define our own variations and measure their outcomes. We can challenge our cherished assumptions.
Now, the hard part about inviting the world to teach us is that we have to accept that our ideas about how it works might be — gasp! — wrong. This is against most of our training in the modern world: schools and corporations train us to have "the right answer" and our confirmation biases wire us up to look first for evidence that we're right.
Until we truly get used to the fact that we're all wrong about a lot of things, it's painful to find out we're wrong as plainly as we do when we A/B test or conduct user research.
Let me give you an example
Some time ago, when I was younger and even more foolish, one of my clients decided to change a headline.
It was the headline on a key sales page.
I wanted to A/B test the change but, in the end, we didn't, for two reasons:
A) testing on this page was technically tricky and time-consuming, and they didn't want to waste any time because...
B) the idea had been lifted from other websites in the industry. It was obviously better.
The product owner was adamant: "we should save our A/B testing for things where we aren't sure. This new headline is a guaranteed licence to print money!"
And so we threw our A/B testing efforts into a part of the sales sequence where it was easier to test. (It was also a lot less effective, but that's a story for another day.)
Now the screen goes wobbly and blurry – we cut to about a year later.
A new employee at the firm did some analysis on the page with the headline and teased out a startling realisation:
The new "better" headline wasn't selling as well as we thought it would.
In fact, it was selling about half as much as the old "crappy" headline.
(To be fair, we can't be 100% sure it was the headline's fault, because we didn't A/B test and lots of external factors change over time.)
At this point, the business could A/B test the headline in reverse to find out the truth about the value of that change. But when that truth could be, "yeah, that lost us 50 million in revenue," perhaps they'd choose not to.
What do we take away from this?
I don't think it was the product owner's fault. Not really. It was the result of our training to see our ideas as right.
In the world of business experts, ideas are currency. But ideas are a strange currency, because the value of any idea cannot be known until its outcomes have been measured.
Many glittering ideas are fool's gold: shiny, exciting, worthless.
What we should have done (and what I do now) is step back from the glittering idea and look at its context:
is this really the best place to test? (The biggest error in A/B testing is testing in the wrong place.)
does changing this headline make any difference to the outcomes we care about?
The answers to these questions tell us if this test is worth running. Now, in our headline's case, the test was indeed worth running. We should have committed to the experiment.
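One way to ground "is this test worth running?" is a back-of-envelope sample-size check before committing. The post doesn't prescribe a method, so this is just a sketch using the standard two-proportion formula; the conversion rates below are illustrative, not from the story:

```python
import math

def sample_size_per_variant(baseline_rate, target_rate,
                            z_alpha=1.96, z_beta=0.8416):
    """Visitors needed per variant to detect a lift from baseline_rate
    to target_rate, using the two-proportion normal approximation
    (z_alpha = 1.96 for 95% confidence, z_beta = 0.8416 for 80% power)."""
    variance = (baseline_rate * (1 - baseline_rate)
                + target_rate * (1 - target_rate))
    effect = (target_rate - baseline_rate) ** 2
    return math.ceil((z_alpha + z_beta) ** 2 * variance / effect)

# e.g. detecting a lift from a 5% to a 6% conversion rate:
print(sample_size_per_variant(0.05, 0.06))  # thousands of visitors per variant
```

If the page doesn't get enough traffic to reach that number in a reasonable time, it may genuinely be the wrong place to test – which is different from skipping the test because we're sure we're right.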
Next, what's the idea behind the idea?
How can we explore the world more completely by adding other variations to contrast and compete with the shiny idea we had?
In this case, we were making the headline more encouraging and pushy. What if we made it less pushy? What if we made it incredibly plain and straightforward? What if it were a bit dismissive? What if it were more specific?
In every experiment, we're looking for at least five variations to test simultaneously so we can explore the whole space and not just one idea.
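Comparing several variations at once can be as simple as putting a confidence interval around each one's conversion rate, rather than eyeballing raw numbers. This is a minimal sketch; the variation names and counts are made up to match the headline example, not real data:

```python
import math

def wald_ci(conversions, visitors, z=1.96):
    """Conversion rate with an approximate 95% Wald confidence interval."""
    rate = conversions / visitors
    margin = z * math.sqrt(rate * (1 - rate) / visitors)
    return rate, max(0.0, rate - margin), min(1.0, rate + margin)

# Hypothetical results for five headline variations:
results = {
    "pushy":      (230, 5000),
    "less pushy": (260, 5000),
    "plain":      (310, 5000),
    "dismissive": (190, 5000),
    "specific":   (340, 5000),
}

for name, (conversions, visitors) in results.items():
    rate, lo, hi = wald_ci(conversions, visitors)
    print(f"{name:11s} {rate:.1%}  (95% CI {lo:.1%} to {hi:.1%})")
```

Where the intervals overlap heavily, the world hasn't taught us anything yet and we keep collecting data; where one variation's interval sits clearly above the rest, we've learned something – possibly that our favourite idea was the loser.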
Opening ourselves up like this is one way we can invite the world to teach us what really works.
When we set our egos aside and accept that our ideas are probably wrong, we stop trying to "validate" and we start learning. When we do this, we dramatically increase our chances of getting better outcomes.
We allow ourselves to be profitably wrong.
So, have you ever been interestingly or profitably wrong in an A/B test?
Comment – I'd love to hear your story!
– Tom x
P.S. If you know someone who's interested in these kinds of ideas, please do share.