I am not advocating failing on purpose.
Just like the stock market, a big fail creates big opportunities. No one really wants a crash (hopefully), but a fail is not the end if you are smart.
When you run an A/B or multivariate test, your boss's goal is to increase revenue; your goal should be to learn more about the users on your website. But if your test variation loses big, is that bad?
Failing is Learning
Every test should teach you something about your users, whether or not more of them buy your product or service. As a Website Optimization Analyst, you are in a unique position to craft the kind of test that helps you learn more. Not all tests will do this; you have to follow a few basic steps to learn from each one.
#1 Hypothesize
Come up with some assumptions about your users based on whatever data you can find; seasonality, age group, and industry are just a few dimensions to consider. Once you have identified the relevant data, you can decide which conversion roadblocks are keeping your users from taking the action you want.
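As a rough illustration, here is a minimal sketch of the kind of segment comparison that can feed a hypothesis. It assumes a hypothetical CSV export of visit-level analytics data; the file name and column names are placeholders, not the output of any particular analytics tool.

```python
import pandas as pd

# Hypothetical visit-level export; columns are illustrative placeholders:
# visit_id, industry, age_group, month, converted (0 or 1)
visits = pd.read_csv("visits.csv")

# Compare conversion rates across segments. Large gaps between segments
# are candidate conversion roadblocks worth hypothesizing about.
for dimension in ["industry", "age_group", "month"]:
    rates = visits.groupby(dimension)["converted"].agg(["mean", "count"])
    print(f"\nConversion rate by {dimension}:")
    print(rates.sort_values("mean"))
```

A segment that converts far below the others is a good place to start asking why, and that "why" becomes your hypothesis.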
#2 Pick Your Test

Just because you feel your value proposition is lacking does not mean you have figured out how to test improvements to it. MECLabs can help you formulate a solid value proposition, but you still need to figure out how effective your new creation will be. I recently built a value proposition with the help of MECLabs and one of our writers at MasterControl Inc, but we will not know whether it is a success until we figure out how to test it and then run that test. You should always decide whether a test is worthwhile before launching one: weigh the estimated time and resources needed to run the test against the anticipated outcome.
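One way to weigh that decision is a rough sample-size estimate: how many visitors you would need to detect the lift you are hoping for, and therefore how long the test would tie up traffic and resources. Below is a minimal sketch of the standard two-proportion power calculation; the baseline rate, hoped-for lift, and daily traffic are assumptions you would replace with your own numbers.

```python
from math import ceil, sqrt
from statistics import NormalDist

def visitors_per_variation(baseline_rate, min_relative_lift, alpha=0.05, power=0.8):
    """Rough per-variation sample size to detect a relative lift in conversion rate."""
    p1 = baseline_rate
    p2 = baseline_rate * (1 + min_relative_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance
    z_beta = NormalDist().inv_cdf(power)            # desired power
    pooled = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * pooled * (1 - pooled))
          + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / (p2 - p1) ** 2
    return ceil(n)

# Hypothetical inputs: 3% baseline conversion, hoping to detect a 20% relative lift,
# with about 500 visits per day split evenly across two variations.
n = visitors_per_variation(0.03, 0.20)
print(n, "visitors per variation, roughly", ceil(2 * n / 500), "days of traffic")
```

If the estimated duration dwarfs the anticipated upside, the test may not be worth running yet.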
Try a number of testing methods, like placing a simple value proposition on form pages and landing pages. In my testing at MasterControl, I am also trying to gain further insight by placing variations of our test in our PPC ads. Time will tell whether we built the value proposition correctly.
#3 Launch Your Test
We set out to launch our various tests by reaching out to the right people (web designers and developers, PPC managers, and so on) and implementing PPC ads and simple HTML text changes. Because we were low on development and graphic design resources, we opted to keep things simple. That is a severe handicap, but as I said in my previous article (Your Website Is Bleeding Out), first aid is better than no aid.
This portion of the testing process is critical. If you launch the test without paying special attention to the variation and its implementation, you may find yourself learning nothing because a rushed job left out critical components.
#4 Analyze The Results
Once the results are in, you should have a new perspective on your users and how they reacted to the changes you made. If your test results show an improvement, then hooray! You and your boss are happy campers. If your test variation does not show an improvement, it is better to see a big fail than a small one. If your results do not reach statistical significance in either direction and you see no adverse reaction to your test, the only thing you have likely learned is that users did not notice enough of a difference.
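To make "statistical significance in either direction" concrete, here is a minimal sketch of a two-proportion z-test on raw conversion counts. Most testing tools report this for you; the function and the counts below are hypothetical.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visits_a, conversions_b, visits_b):
    """Two-sided z-test comparing the conversion rates of two variations."""
    p_a = conversions_a / visits_a
    p_b = conversions_b / visits_b
    p_pool = (conversions_a + conversions_b) / (visits_a + visits_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visits_a + 1 / visits_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical "big fail": control converted 120 of 4,000 visits,
# the variation converted only 90 of 4,000 visits.
z, p = two_proportion_z_test(120, 4000, 90, 4000)
print(f"z = {z:.2f}, p = {p:.4f}")  # a small p-value means users really did react
```

A significant drop is still a signal; an insignificant result in either direction is the outcome that teaches you the least.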
Conclusion
If your users have an adverse reaction to your test, then you can hypothesize that doing the opposite should have an equally positive influence on them. In other words, a big fail is the second-best result in testing, because it means you are likely only one test away from a big win and from understanding exactly how your users perceive your website.