Most of us at some point in our lives have experienced that creeping, irrational fear of failure, of being an imposter in our chosen profession or of being deemed “a Loser” for not getting something right the first time. This is especially true for marketers who work in A/B testing and conversion optimization.
We are constantly tasked with creating new, better experiences for our company or client and in turn the customers they serve. Yet unlike many business ventures or fire-and-forget ad agency work, we then willingly set out to definitively prove that our new version is better than the old, thus throwing ourselves upon the dual fates of customer decision making and statistical significance.
And that’s when the sense of failure begins to creep in: when you have to present a losing test to well-meaning clients or peers who were so convinced that this was a winner, a surefire hit. The illusion they held, that you knew all the right answers, is clinically shattered by that negative percentage sign in front of your results.
Yet of course herein lies the mistake, both of the client or peer who understandably wants quick, short-term results, and of the marketer whose bravado tells them they can always get it right the first time.
A/B testing and conversion optimization, like the scientific method these disciplines apply to marketing, is merely a process to get you to the right answer, and to view it as the answer itself is to mistake the map for the territory.
I was reminded of this the other day while listening to one of my favorite science podcasts, “The Skeptics’ Guide to the Universe,” hosted by Dr. Steven Novella, which ends each week with a relevant quote. That week they quoted the Brazilian-born, British, Nobel Prize-winning zoologist Sir Peter B. Medawar (1915–1987), from his 1979 book “Advice to a Young Scientist”: “All experimentation is criticism. If an experiment does not hold out the possibility of causing one to revise one’s views, it is hard to see why it should be done at all.”

This quote captures many of the truisms I’ve learnt in my experience as a conversion optimization marketer, and it addresses much of the confusion I see among MECLABS Institute Research Partners and colleagues who are less familiar with the nature and process of conversion optimization.
Here are four points to keep in mind if you choose to take a scientific approach to your marketing:
1. If you truly knew what the best customer experience was, then you wouldn’t test
I have been asked before, after presenting a thoroughly researched outline of planned testing, whether, much as the methodical process to learning we had just outlined was appreciated, we knew of a shortcut we could take to get to a big success.
Now, this is a fully understandable sentiment, especially in the business world, where time is money and everyone needs to meet their targets yesterday. That said, the question fundamentally misses the value of conversion optimization testing, if not the value of the scientific method itself. Remember, it is this method of inquiry, through experimentation and the repeated failure of educated but ultimately false hypotheses, that allows us to finally arrive at the correct hypothesis and understanding of the available facts. As a result, we are able to cure disease, put humans on the moon and develop better-converting landing pages.
In the same vein, as marketers we can do in-depth data and customer research to get us closer to identifying the correct conversion problems in a marketing funnel and to work out strong hypotheses about what the best solutions are, but ultimately we can’t know the true answer until we test it.
A genuine scientific experiment should be trying to prove itself wrong as much as prove itself right. It is only by testing and discarding our false hypotheses that we as marketers can confirm the true hypothesis, the one that represents the correct interpretation of the available data and understanding of our customers, and so achieve the big success we seek for our clients and customers.
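For the more technically minded, here is a minimal sketch of what “throwing yourself upon the fate of statistical significance” looks like once a test has run, using a standard two-proportion z-test. All visitor and conversion counts below are hypothetical, purely for illustration. Note how an apparently healthy 10.9% relative lift can still fail to reach significance, which is exactly why we run the test rather than trust the hypothesis.

```python
# A minimal sketch of judging a finished A/B test with a standard
# two-proportion z-test. All counts here are hypothetical.
from math import sqrt
from statistics import NormalDist

# Hypothetical results: control (A) vs. treatment (B)
visitors_a, conversions_a = 10_000, 320   # 3.20% conversion rate
visitors_b, conversions_b = 10_000, 355   # 3.55% conversion rate

rate_a = conversions_a / visitors_a
rate_b = conversions_b / visitors_b

# Pooled rate under the null hypothesis that A and B perform the same
pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
std_err = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

z = (rate_b - rate_a) / std_err
p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-tailed

print(f"Observed lift: {(rate_b - rate_a) / rate_a:+.1%}")  # +10.9%
print(f"z = {z:.2f}, p = {p_value:.3f}")                    # p ≈ 0.17, not significant
```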
2. If you know the answer, just implement it
This particularly applies to broken elements in your marketing or conversion funnel.
An example of this from my own recent experience with a client was when we noticed, in our initial forensic conversion analysis of their site, that the design of their cart made it almost impossible to convert on a small mobile or desktop screen if the visitor had more than two products in their cart.
Looking at the data and the results from our own user testing, we could see that this was clearly broken and not just an underperformance. So we just recommended that they fix it, which they did.
We were then able to move on and optimize the now-functioning cart and lower funnel through testing, rather than wasting everyone’s time with a test that was a foregone conclusion.
3. If you see no compelling reason why a potential test would change customer behavior, then don’t do it
When creating the hypothesis (the supposition that can be supported or refuted via the outcome of your test), make sure it is a hypothesis based upon an interpretation of available evidence and a theory about your customer.
Running the test should teach you something about both your interpretation of the data and the empathetic understanding you think you have of your customer.
If running the test will do neither, then it is unlikely to be impactful and probably not worth running.
4. Make sure that the changes you make are big enough and loud enough to impact customer behavior
You might have data to support the changes in your treatment and a well-thought-out customer theory, but if those changes are implemented in a way that customers won’t notice, then you are unlikely to elicit the change you expect to see, and you lose any possibility of learning something.
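To put some rough numbers behind “big enough and loud enough,” here is a sketch of the standard two-proportion sample-size formula (the baseline conversion rate and lift values below are hypothetical). Under these assumptions, detecting a subtle 2% relative lift on a 3% baseline takes over a million visitors per variant, while a bold 20% lift can be detected with around 14,000.

```python
# A rough sketch of the visitors needed per variant to detect a given
# relative lift, using the standard two-proportion power formula.
# The baseline rate and lift values below are hypothetical.
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variant(baseline, relative_lift, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant for a two-tailed test."""
    p1 = baseline
    p2 = baseline * (1 + relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
         / (p2 - p1) ** 2)
    return ceil(n)

# A subtle 2% relative lift on a 3% baseline needs enormous traffic ...
print(visitors_per_variant(0.03, 0.02))   # ~1.3 million per variant
# ... while a bold 20% lift is detectable with far less.
print(visitors_per_variant(0.03, 0.20))   # ~14,000 per variant
```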
Failure is a feature, not a bug
So next time you are feeling like a loser, when you are trying to explain why your conversion optimization test lost:
- Remind your audience that educated failure is an intentional part of the process.
- Focus on what you learnt about your customer and how you have improved upon your initial understanding of the data.
- Explain how you helped the client avoid implementing the initial “winning idea” that, it turns out, wasn’t such a winner — and all the money this saved them.
Remember, like all scientific testing, conversion optimization might be slow, methodical and paved with losing tests, but it is ultimately the only guaranteed way to build repeatable, iterative, transferable success across a business.
Related Resources:
Optimizing Headlines & Subject Lines
Consumer Reports Value Proposition Test: What You Can Learn From A 29% Drop In Clickthrough
MarketingExperiments Research Journal (Q1 2011) — See “Landing Page Optimization: Identifying friction to increase conversion and win a Nobel Prize” starting on page 106