Saturday, 3 December 2011

Popular posts

Thought I'd compile a list of the most popular posts on this blog based upon web stats to date:
1.  Is FatWire any good? 
2. Culling multivariate test variants
3. The Gutenberg Rule
4. Above the fold - The Google Browser size tool
5. How do you calculate uplift in a multivariate test?
6. Review of Google Website Optimizer
7. The Law of diminishing returns in testing.
8. Riding the tsunami - testing during a traffic spike.
9. Short wave MVT testing.
10. An offline call to action.

Saturday, 29 October 2011

Maxymiser - Multiple KPIs

A natty little feature of the Maxymiser reporting interface is the ability to report on multiple KPIs within the same report. Below is a screen grab of a conversion report covering both the 'Application submit rate' (KPI 1) and the secondary 'Click to Apply rate' (KPI 2) for the same test combinations. In the past you would have to flick between separate reports for each KPI, whereas this is no longer the case.

To enable this feature you simply select multiple KPIs or Actions in the report filter drop-down menu within the Maxymiser console. See the illustration below.

Charlotte 29th Oct 2011

Saturday, 22 October 2011

Google Analytics - Map Overlay

Lessons learnt from Retail* #1

I know Google Analytics (GA) has had its Map Overlay function for some time now: a report which shows geographically where your online traffic comes from. Recently though I've had a degree of exposure to the retail sector as opposed to just the financial sector. With this particular brand it was interesting to see how accurately the online traffic matched up with the branch locations. See the image below: on the left is the GA map overlay report for the brand, and on the right its store locations. An incredibly obvious relationship appears between online activity and store location, illustrating how in retail branch locality drives online brand awareness.

This kind of thing just doesn't happen in the financial sector, where online traffic bears no immediate correlation with the offline world and the company's branches.
Where there is a match, though, there's a geotargeting opportunity if you choose to act upon that information. This is taken to a higher level with augmented reality applications in the mobile world, where there are great marketing opportunities when GPS technology meets the high-street retail opportunity. I will cover this in more detail in a later post.

*I've been privileged to get access to the web analytics of a retail brand lately. In a brief series I will cover insights from comparing the retail world with the financial sector, where I specialize as an online optimization expert.

Friday, 26 August 2011

Tracking uplift post ab testing

Once you've run either an A/B test or a multivariate (MVT) test on your website and your testing tool of choice tells you the test has reached statistical conclusion, how do you continue to measure performance, and, more importantly, should you continue to monitor performance at all?

I know it sounds reckless; continuing to track uplift is the responsible thing to do, right? But in reality there's an important aspect of optimisation testing that needs to be taken into consideration here. A test outcome is usually the result of the following variables:

product benefit + moment in time + market position + customer experience

Can you continue to accurately measure and account for each of these factors after you've conducted a test? The honest answer is probably no. I also don't know many people with the personal bandwidth to monitor every single test once it's finished. If I wanted to double my workload I certainly would!

The important thing is to ensure your test has been given enough testing time and traffic volume in the first place before you conclude it, as the post Don't Fool Yourself with A/B Testing reasonably argues. If you've done that, you should have a reasonable level of confidence in its future performance.
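As a rough sanity check of statistical confidence before concluding a test, a textbook two-proportion z-test can be run with nothing but Python's standard library. This is a normal approximation, not a substitute for your testing tool's own statistics; the function name and figures below are purely illustrative:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-score and two-sided p-value for a difference in conversion
    rates, using the pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# e.g. 500 conversions from 10,000 visitors vs 570 from 10,000:
z, p = two_proportion_z(500, 10_000, 570, 10_000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A p-value below your chosen threshold (conventionally 0.05) suggests the difference is unlikely to be noise, provided the test also ran long enough to cover normal traffic cycles.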

If you still want to track uplift after testing I would suggest the following are available options for you:

1. Set up a Google Analytics Goal. This gives you the ability to track the performance of a specific customer journey within your normal web analytics. Yes, you have to use Google for this one, but any web metrics tool worth its salt will have the same functionality.

2. Leave your test running. This, to me, is the fail-safe option. Once you have a test winner, up-weight it over the default content, but leave a small percentage of your traffic going to the default as a benchmark for continued performance. I usually leave 5% going to the default for a period of time where possible to ensure I've made the right decision.

3. Run a Follow-up experiment. This is a great feature in Google Optimizer but you can do the same in any other testing tool if you have the resource to do it and there's lingering doubt about the original test outcome.

4. Bespoke tracking. On the pages I optimise I append tracking values to the application form which, when submitted to a sales database, can be used to tie sales back to a specific landing page. Using this approach you can monitor conversion rate performance before, during and after a test. I can't recommend this approach enough, though whether you can implement it depends entirely on the particular design of your online application forms.
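As a minimal sketch of option 4 (the field name, page identifiers and URL below are hypothetical; the real implementation depends on your form's design): tag each landing page's Apply link with an identifier, then aggregate sales by that tag downstream.

```python
from urllib.parse import urlencode
from collections import Counter

def tag_apply_link(base_url, page_id):
    """Append a tracking value identifying the landing page variant."""
    return f"{base_url}?{urlencode({'src_page': page_id})}"

# The Apply link served on each landing page carries its own tag:
link = tag_apply_link('https://example.com/apply', 'lp_variant_2')
# https://example.com/apply?src_page=lp_variant_2

# Downstream, sales records that kept the tag can be tied back to
# the page that produced them, before, during and after a test:
sales = [
    {'product': 'loan', 'src_page': 'lp_variant_2'},
    {'product': 'loan', 'src_page': 'lp_default'},
    {'product': 'loan', 'src_page': 'lp_variant_2'},
]
sales_by_page = Counter(s['src_page'] for s in sales)
```

The tag typically travels in a hidden form field so it survives through to the sales database alongside the application itself.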

That's about it really. If I think of any other methods for ongoing tracking I'll add them in.

Happy testing!

Tuesday, 16 August 2011

1st Direct Bank throw it out there

First Direct have recently launched First Direct Labs, a new section of their site which shows what new ideas and concepts they are testing for their online experience. Current tests include QR code functionality, redesign concepts and mobile apps. Visitors are encouraged to rate these concepts and designs as well as make suggestions of their own for new tests. The missing link here is that this feedback is obviously a channel for qualitative testing of these ideas, and to my mind seems to be their only current means of gauging how effective these new ideas will be. Obviously I'm not privy to their whole web testing strategy, but I would hope there is more in their testing toolbox than just this focus-group approach. Either way, First Direct are to be commended for divulging part, if not all, of their testing strategy. It's a safe thing to do, as experience has shown that competitors can rarely benefit from implementing test findings vicariously without first doing their own comprehensive testing. I'll address vicarious testing in greater detail in a later post.

Continuing with my ongoing retrospective theme it's worth pointing out that I haven't been averse to going public with test ideas and designs in the past as seen in this post from 2009 where I ask the general public to rate our page designs following a non-conclusive round of MVT testing.

Monday, 15 August 2011

Testing During a Traffic Spike revisited

In a post I wrote back in 2009, Riding the tsunami, I talked about the benefits of combining MVT testing with campaign activity. Coming late to the party, but nonetheless getting there in the end, is Get Elastic, a very good web optimisation site with some valuable testing ideas and concepts, endorsing this very same approach after a bit of soul searching in an article titled "Should You Avoid Testing During a Traffic Spike?" Definitely worth a read.

I think fundamentally the message remains the same: it's okay to be MVT testing during a campaign if you're trying to optimize that campaign and not the long-term web experience of your visitor. As ever, test results are usually the product of the following variables:

 product benefit + moment in time + market position + customer experience

Monday, 1 August 2011

The law of diminishing returns

Once you've optimised a page using either multivariate testing (MVT) and/or split testing (A/B testing) and managed to achieve a respectable uplift in sales conversion, when it comes to revisiting that page with further testing you're likely as not entering the realm of diminishing returns.

This is historically an economics term, but it also applies to web optimisation testing: subsequent rounds of testing or optimisation prove less rewarding, in terms of finding web content that works, than the original or earlier rounds.

This was illustrated last week when a colleague produced a new version of an optimised landing page. He wisely wanted to test that it could perform as well as the existing page, or even better. The original page was the result of several rounds of previous optimisation testing and was already proving very good at converting visitors to submit an online application. The image below shows the ongoing split test as conducted in Google Website Optimizer. The original (default) page is proving hard to beat; the new page (variant 1) is bettering the original, but it's not pulling away with the massive uplift you might see in a first or second round of testing.

It's important to realise that while you should always look to be testing your pages frequently, previously tested or otherwise, as part of a continued programme of testing, the big headline results of earlier testing will start to decline test on test. This is a sign of successful testing, indicative that you're starting to get things right from the visitor conversion perspective.

As a very rough guide, I would say the following is true for a successful roadmap of testing. Let's call it the ARSSS approach (sorry, I'm such a child!):
  1. Analyse your site metrics, establish user journeys, understand what's going on.
  2. Rationalise your site. Remove unnecessary  pages and clicks. Remove obvious leakage points in your sales funnel.
  3. Start MVT testing. Use this to get under the skin of the user experience. Do as many rounds of testing as it takes to answer your questions and, hopefully, start to improve your conversion. In essence you're starting to narrow and hone your sales funnel.
  4. Start split testing. Once you know what works on a page, element by element, through MVT, you can start to use A/B testing for look & feel testing of entire pages.
  5. Segment Users. Once you've done all of the above start to get into User Segmentation, ie. start to group your customers into segments based on behaviour (I'll be writing a more in-depth post on this in the future).
Happy testing!

Tuesday, 31 May 2011

Last week I spoke on the merits of creating a testing culture at the financial services web conference Net.Finance in Chicago on behalf of Maxymiser. See their blog here for details Max Blog. It was a great trip and an extremely exhausting week due to my own crazy schedule but I absolutely loved every second of it.

Key take-aways from the conference? 

  • Multivariate testing is yet to really take off in the States, especially in the financial sector. It's ripe for growth on an unprecedented scale.
  • US marketeers are really excited about 'Mobile Payments' and see it as a potential bank killer. The question remains which big names will forge alliances to make it finally happen.
  • The buzzword is 'mobile' and has been for a while, but everyone, including the main players, is waiting to see who goes really big on mobile first, ready to quickly follow suit. The US web sector has been stung before by 'novelties' that failed to bear fruit, so there's a vast amount of tangible caution in the average US eCommerce department; dollars are quite rightly spent sparingly and wisely.
  • There are a lot of companies offering what is perceived as real 'added value' but which is really not worth the huge initial dollar outlay once you dig deep into their technical claims. Qualitative testing and research continues to commit multiple crimes in the name of informed user feedback, and falls vastly short of continuous multivariate testing.
It would be interesting to see if anything changes in the US market in the next 12 months, and especially whether mobile becomes the hunting ground of the web marketeer as predicted.

Wednesday, 25 May 2011

Calculating Uplift

This is fairly basic but a common query nonetheless:

Q. How do you calculate conversion uplift?
A. (Winning% - Old%) / Old% x 100 = UPLIFT%
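The same calculation as a worked example in Python (the function name is my own):

```python
def uplift_percent(winning_rate, old_rate):
    """Relative conversion uplift: (Winning% - Old%) / Old% x 100."""
    return (winning_rate - old_rate) / old_rate * 100

# e.g. a variant converting at 5.5% against an original at 5.0%:
print(round(uplift_percent(5.5, 5.0), 1))  # 10.0
```

Note the uplift is relative to the old rate, so a move from 5.0% to 5.5% is a 10% uplift, not 0.5%.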


Thursday, 21 April 2011

Above the fold - The Google Browser Size Tool

Here's one I'd stumbled across a couple of years back now and had subsequently forgotten about, but Lord knows why, because it's so useful and so simple in its conception. Google Labs has a Browser Size tool that lets you overlay a summary of browser size (based on visits to the Google homepage) over any web page. For a while people had spoken about there being no page fold when it came to web design. Well, that's poppycock: if multivariate testing and UX testing have taught us anything, it's that if you have content people have to scroll down to, a lot of people will either not bother or just not realize the content is there in the first place. In testing I've found that if you can't get away from a lengthy page, you need to make the design imply that it's worth scrolling to the content, or try to bring everything back above the fold through tabbed design etc.

Monday, 28 March 2011

The conception of Short Wave Testing

Right, well, this is really a work in progress. I think I've invented a new form of multivariate testing on the web. And for clarity, this has nothing at all to do with Short Wave Radio. However, a couple of points to start off with: A) I'm not entirely sure it hasn't been done before, and B) it's a valid test methodology either way.

Well, hang it, this blog is all about being a testing 'Maverick', so here goes nothing...

First off, let's not get confused by Iterative Wave Testing as used by Optimost. I think I'm right in saying that's where you test the same variants over a sustained period in 'waves' of testing to ensure what you have is validated and statistically significant. All very worthy, good stuff.

What I've been experimenting with is trying a set of test variants in one brief wave of testing, then ditching or culling any negative or lesser-performing variants in favour of entirely new variants in a new wave of testing, while carrying forward the positive or successful variants from the last wave. The whole process is repeated for as many waves as it takes to get a robust set of variants that out-perform everything else pitted against them. The only qualifying criterion for a variant to be carried forward to the next wave of testing is that it either continues to outperform the original default design or betters the performance of anything that has gone before it, i.e. anything that has been previously removed.

I hope this simple(ish) diagram illustrates how short wave testing works. Below we have 4 test areas in a web page and 4 phases of testing. As we can see in Test Area 1, Variant A is successful enough never to be culled from the test and ultimately becomes the winner for Test Area 1. Test Area 2 shows an initially unsuccessful Variant A that is culled after the first phase of testing and replaced with a new Variant B, which goes on to be the winning variant for Test Area 2. Test Area 3 has a different story: in the end it takes 4 different variants over 4 phases of testing to find one positive enough to be declared a winner. And Test Area 4 arrives at a winner in the third phase of testing with Variant C.
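The cull-and-carry-forward loop for a single test area can be sketched as a toy model in Python. The conversion figures are made up, and I've simplified the keep criterion to "beats the default"; it's a sketch of the idea, not a testing tool:

```python
def short_wave_test(default_rate, candidate_rates, max_waves):
    """Toy model of short wave testing for one test area: each wave
    keeps the live variant if it beats the default, otherwise culls
    it and pulls in the next candidate idea from the queue."""
    queue = list(candidate_rates)
    current = queue.pop(0)
    log = []
    for wave in range(1, max_waves + 1):
        if current > default_rate:
            log.append((wave, 'kept', current))    # carried forward
        else:
            log.append((wave, 'culled', current))  # replaced next wave
            if not queue:
                return None, log                   # ran out of ideas
            current = queue.pop(0)
    return current, log

# Like Test Area 3 in the diagram: three variants fail against a
# default converting at 30% before a fourth wins in wave 4.
winner, log = short_wave_test(0.30, [0.20, 0.22, 0.28, 0.35], max_waves=4)
```

The point the model makes is that each wave spends traffic only on ideas that are still live, so weak variants are thrown out quickly rather than running for the whole test.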

Now I'm aware that this form of testing is both labour-intensive and resource-heavy in its undertaking. I was able to do this kind of testing because I was motivated enough to dedicate resource to it and had enough ideas in the locker for each test area and test wave. I used Google Optimizer to do it and coded the variants myself, and the outcome has been, well, staggering. A sustained uplift in the region of 18% for product purchase has been achieved (a personal best, BTW), and I am reasonably confident in the results because the final variants consistently reported the same uplift over 9 separate waves of testing.
What I'm hoping for now is the counter-argument from my testing peers (drop me a line). I'm aware of the shortcomings of this approach but want others to have their say on this kind of testing methodology. Here's my bonfire; feel free to piddle all over it :) Happy Testing!

UPDATE: One thing worth noting with this testing approach is that if it goes right, your conversion rate for the test variants should improve with each wave as you attain, keep or build on positively performing variants, but at the same time you will see a diminishing uplift for each wave. This is because you are continually testing against improved, stronger-performing variants in the test segment. Ultimately, though, you should still see a good uplift against the underlying original default design.

Friday, 25 March 2011

No need to shout about it

I've been running an MVT test on a comparison page and recently introduced an 'Ends Soon' label next to the product call to action. Initially the presence of this message was negative. I then halved the size of the image and the conversion results improved markedly, illustrating that sometimes people just don't want to be shouted at :)

Update: Although this 'hurry message' didn't work well on this particular page, a product comparison page, the same image used on an already optimized product page (tested in Maxymiser) has led to a 44% uplift in product application submit rate.

Thursday, 10 February 2011

An Offline Call To Action

A recent MVT test using Google Website Optimizer answered the question:

"Exactly what impact does having an offline Call To Action next to an online one have?"

In this test I measured the impact on the click-to-apply rate of a landing page where, using MVT, a section of the page's visitors were served a link to a pop-up window showing both a telephone sales number and a branch locator.

During the test period, in addition to monitoring the test console results I monitored the Google Analytics report for the pop-up window.

Here's the summary of results:

675 visitors saw the default (no offline CTA);
248 of them clicked Apply = 36.7% conversion rate.

678 visitors saw the offline CTA variant;
205 of them clicked Apply = 30.2% conversion rate.

The offline CTA variant is down 17.7% in conversion rate against the default page.

The offline CTA pop-up received 569 unique views in the test period, so 83.9% of the people who saw the offline CTA clicked it.
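The arithmetic behind that summary can be reproduced in a few lines of Python (the counts are from the test above; the variable names are mine):

```python
default_visitors, default_applies = 675, 248
variant_visitors, variant_applies = 678, 205
popup_unique_views = 569

default_rate = default_applies / default_visitors * 100      # ~36.7%
variant_rate = variant_applies / variant_visitors * 100      # ~30.2%
# Relative change in conversion rate, variant vs default
relative_change = (variant_rate - default_rate) / default_rate * 100
# Share of variant visitors who clicked through to the pop-up
popup_click_rate = popup_unique_views / variant_visitors * 100

print(f"{default_rate:.1f}% vs {variant_rate:.1f}%, "
      f"change {relative_change:.1f}%, pop-up CTR {popup_click_rate:.1f}%")
```

Note the 83.9% figure is pop-up views divided by visitors who saw the variant, i.e. nearly everyone exposed to the offline CTA clicked it, even as overall conversion fell.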

Monday, 10 January 2011

The Gutenberg Rule

Recently a couple of people have reminded me how we'd used this design principle during MVT testing and yielded some good results and insights. So I thought I'd commit some learnings to a post on the subject.

The Gutenberg Rule is a design principle named after Johannes Gutenberg, inventor of the printing press. It suggests that people read content top to bottom and left to right. You can therefore split a page into four quadrants: the “Primary Optical Area” top-left, the “Strong Fallow Area” top-right, the “Weak Fallow Area” bottom-left and the “Terminal Area” bottom-right. Splitting a web page into four quadrants as illustrated below, we tested the various positions of a product offer by rotating it through these 4 positions (in more than one test).

This testing confirmed that
  • position 1 yielded the highest uplift
  • position 2 the second highest
  • position 3 the third most profitable position
  • position 4 the least uplift
  • Additionally, just below position 2 proves to be the ideal location to place a Call To Action in numerous optimisation exercises
Horizontal Positioning

Extending from this principle, it's also worth noting that horizontal positioning is of equal significance, borne out by the following test example. On a landing page we rotated three product benefits through a horizontal layout as follows and monitored the effects on click-to-apply rate. Swapping the 'Great rate' benefit to second position, after a cash back offer, yielded a 3.24% uplift.

Again, swapping the Overdraft benefit with the Cash back tile yielded an even greater uplift of 3.69%. I guess you could call this the "Gutenberg Horizontal Positioning rule".

So in conclusion: the positioning of messages and offers can be absolutely crucial to the success or failure of a web design, based upon some well-established design principles.