Thursday, 28 October 2010

Good old button testing

Everyone, at some point, does some optimisation testing of button designs. Some people think it's a trivial exercise to undertake when there are bigger fish to fry. Well, I disagree: button design testing is exactly the kind of thing you can do quickly and easily with Google Optimiser or similar. We've done loads of button testing in the past, trying colours, sizes, 'Apply' text and so on, but I read an interesting article from Get Elastic on how unusual button designs can give you an easy uplift in conversion.

So, over the course of a couple of months, I tested all the designs you see here on a landing page. No. 1 was the default design, and the winner was... No. 3, the 'boxed arrow' design, with a 32% uplift in click-to-apply rate. The arrow-based designs were at the upper end of the results overall, though the *cough* phallus-based designs stole an early lead before fading. Give it a go on your website; it's quick, easy and surprisingly fun.
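If you want to sanity-check an uplift like that before declaring a winner, a standard two-proportion z-test does the job. Here's a rough sketch in Python; the visitor and click counts are made up purely for illustration (I'm assuming 10,000 views per design), not figures from my actual test.

```python
import math

def uplift_z_test(clicks_a, views_a, clicks_b, views_b):
    """Compare click-to-apply rates of two button designs.

    Returns the relative uplift of B over A and the z-statistic
    (|z| > 1.96 is roughly 95% confidence)."""
    p_a = clicks_a / views_a
    p_b = clicks_b / views_b
    # Pooled rate under the null hypothesis that both designs perform equally
    p = (clicks_a + clicks_b) / (views_a + views_b)
    se = math.sqrt(p * (1 - p) * (1 / views_a + 1 / views_b))
    z = (p_b - p_a) / se
    uplift = (p_b - p_a) / p_a
    return uplift, z

# Hypothetical counts: default design vs the 'boxed arrow' variant
uplift, z = uplift_z_test(200, 10_000, 264, 10_000)
print(f"uplift: {uplift:.0%}, z: {z:.2f}")  # uplift: 32%, z: 3.01
```

At that sample size a 32% relative uplift clears the usual significance bar comfortably; with only a few hundred views per design it wouldn't, which is why these quick tests still need to run for a while.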

Thursday, 21 October 2010

Google Experiments: Follow-Up Experiments



First off - What is a follow-up experiment?



Google says:



"When the results of an experiment suggest a winning combination, you can choose to stop that experiment and run another where the only two combinations are the original and the winning combination. The winning combination will get most of the traffic while the original gets the remaining. This way, you can effectively install the winner and check to see how it performs against the original to verify your previous results."



And why should I run a follow-up experiment?





"Running a follow-up experiment will give you two benefits. First, it will enable you to verify the results of your original experiment by running a winning combination alongside the original. Second, it will maximize conversions, by delivering the winning combination to the majority of your users. We encourage you to run follow-up experiments to get the best, most confident results for any changes you make to your site."


But what happens when a follow-up experiment delivers contradictory results?






The screenshot below shows the original MVT test results...

I commenced a follow-up test running the winning variant from this test head-to-head with the original default. And this is what happened...

The blue line is the original design beating the first test's winning variant. This has happened time and again with my follow-up experiments. Then I noticed something. When you set up a follow-up experiment, it's easy to overlook the weightings setting, i.e. the 'choose the percentage of visitors that will see your selected combination' option. By default it's set to 95% for your selected combination.

Now I can't offer any explanation, but from previous testing with other tools such as Maxymiser we've seen that when you up-weight a particular variant in a test in favour of another, invariably its conversion performance goes down, sometimes radically so. I recommend only ever using a 50/50 weighting in a follow-up experiment, because for whatever reason an unequal weighting seems to skew performance. Just be aware of this possibility and you'll be fine :)

If anyone can offer me a scientific explanation for this behaviour I'm all ears!
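For what it's worth, there is at least one purely statistical effect that's easy to demonstrate, even if it may not be the whole story: with a 95/5 split, the low-traffic arm's measured conversion rate is based on far fewer visitors, so it swings about much more from experiment to experiment. Here's a quick simulation sketch (Python, with made-up visitor numbers and conversion rates) where both arms convert at exactly the same true rate, and only the traffic share differs.

```python
import random

def simulate_spread(share, true_rate=0.05, visitors=2000, runs=500, seed=42):
    """Run many simulated experiments where the arm converts at the SAME
    true rate every time, and report how widely its observed conversion
    rate varies (max minus min across the runs)."""
    rng = random.Random(seed)
    observed = []
    for _ in range(runs):
        n = max(1, int(visitors * share))  # visitors allocated to this arm
        conversions = sum(rng.random() < true_rate for _ in range(n))
        observed.append(conversions / n)
    return max(observed) - min(observed)

# Identical 5% true conversion rate; only the traffic allocation changes
print("spread with  5% of traffic:", simulate_spread(0.05))
print("spread with 50% of traffic:", simulate_spread(0.50))
```

The arm on 5% of traffic shows a much wider spread of observed rates than the same arm on 50%, so a 95/5 follow-up can easily make the minority combination look like it's winning or losing by a large margin through noise alone. That doesn't explain a *consistent* drop for the up-weighted variant, mind, so I'd still love a proper explanation.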

By the way, below shows the test after the weightings are reset to a 50/50 split. Bit different from the original follow-up experiment, no?






Thursday, 14 October 2010

Give a Huq?

It's good to see some examples of AB testing that aren't just about which web page works best. And here's another example of Magazine cover split testing. This months issue of Company magazine is running with two variations of it's cover, one with presenter Fearne Cotton, the other with presenter Konnie Huq (who's married to this guy by the way).  As you can see both covers are the same bar mention of the featured presenter and the hero shot; even the poses are almost identical. I've mentioned magazine cover testing before in a previous article here but I havn't seen any recent evidence of the Press engaging in this kind of marketing test in recent years. Would be (mildly) interesting to see who wins this particular test.