Over the last couple of days I have looked into A/B testing to prepare myself for some work on the Choice Skills website. Though A/B testing does not seem like an incredibly difficult concept in theory, I am realizing that how we implement it will take some careful thought.
The first decision that needs to be made is what exactly we are going to change. Before we test, we need to identify the aspects of the website that need improvement, and that decision must be based on current SiteCatalyst data. Testing for the sake of testing will be worthless unless we have found some key data in SiteCatalyst that compels us to tinker with the layout or design of the Choice Skills website. I think we have found a couple of things; we'll just need to decide where to focus our energies at the beginning.
I suppose that brings me to the next point: we need to be very specific about what we are changing. Now that I think about it, a total overhaul of www.choiceskills.com may not be a terrible way to start (we've already found a number of things to improve), but typically an A/B test is implemented to test specific changes that can be measured individually. If you change a whole bunch of things at once, you never really know which modifications are causing the increase or decrease in conversions, and that is a serious problem. If we're looking for complete optimization, changing individual pieces of the site will more effectively lead us toward success.
Considering that it is in our best interests to modify a single aspect of the site at a time, it is also extremely important that we designate a time frame for the testing we will be doing. Omniture suggests accumulating about 500 conversions before a test can be read as an accurate measurement. With a budding site like www.choiceskills.com, we simply don't have the traffic to reach 500 conversions quickly. Finding a valid period of time in which to measure how effective the modifications are is going to be extremely difficult.
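To make the duration question concrete, here is a minimal sketch of the arithmetic: given a daily visitor count and a conversion rate, how long until we hit the 500-conversion target Omniture suggests. The traffic and conversion-rate figures below are invented placeholders, not real numbers for www.choiceskills.com.

```python
import math

def days_to_target(daily_visitors: int, conversion_rate: float,
                   target_conversions: int = 500) -> int:
    """Days the test must run before it accumulates the target conversions."""
    daily_conversions = daily_visitors * conversion_rate
    return math.ceil(target_conversions / daily_conversions)

# Assumed example: 200 visitors/day converting at 2% is only
# 4 conversions/day, so 500 conversions takes about 125 days.
print(days_to_target(200, 0.02))  # -> 125
```

Even rough placeholder numbers make the point: at a small site's traffic levels, the 500-conversion benchmark can stretch a single test over several months.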
Not only is the duration of the test a challenge, but deciding how we want to implement an A/B test is another important decision. The easiest approach would be a Consecutive Test, where we run site A for a given period of time and then site B for a given period of time, sending all site traffic to only one version at a time. The more effective approach is a Synchronous Test, where we run both sites simultaneously and send only a portion of our traffic to the newer design. Deciding between these two methods will come down to ease of implementation versus the quality of the data we want to extract.
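For the Synchronous Test, the key mechanical piece is the traffic split. One common way to do it, sketched below, is to bucket each visitor deterministically by hashing a stable visitor ID, so the same person always sees the same variant across visits. The 20% share sent to design B is an assumed figure, not anything we have decided.

```python
import hashlib

def assign_variant(visitor_id: str, b_share: float = 0.20) -> str:
    """Return 'A' or 'B' for a visitor, stable across visits."""
    digest = hashlib.md5(visitor_id.encode("utf-8")).hexdigest()
    # Map the first 8 hex digits to a roughly uniform value in [0, 1].
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return "B" if bucket < b_share else "A"

# The same (hypothetical) visitor ID always maps to the same variant:
print(assign_variant("visitor-1234") == assign_variant("visitor-1234"))  # True
```

The deterministic hash matters: if a returning visitor bounced between designs, their behavior would contaminate both buckets and the conversion numbers for each variant would be meaningless.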
If we decide to run a Synchronous Test, we will also need to decide how exactly to implement it in SiteCatalyst. My thought is that we should use Custom eVars to track the information coming from the variations of the site. Custom eVars may not be the most elegant or detailed implementation, but considering the short time we have to work with, and the fact that a Synchronous Test is already the more difficult implementation, I vote that we try to keep it relatively simple at this stage in the game. The Custom eVars will simply allow us to track conversions coming from the specific pages (i.e. Homepage A, Homepage B).
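The end product of that eVar tagging is just a per-variant conversion tally. Here is a minimal sketch of the report we'd want, with invented placeholder records standing in for what SiteCatalyst would actually collect against the eVar:

```python
from collections import Counter

# Hypothetical event records: each visit carries the variant label set
# in the Custom eVar, plus whether that visit ended in a conversion.
events = [
    {"eVar1": "Homepage A", "converted": True},
    {"eVar1": "Homepage B", "converted": False},
    {"eVar1": "Homepage B", "converted": True},
    {"eVar1": "Homepage A", "converted": True},
]

# Tally conversions per variant label.
conversions = Counter(e["eVar1"] for e in events if e["converted"])
print(dict(conversions))  # -> {'Homepage A': 2, 'Homepage B': 1}
```

Once the conversions are keyed to the variant label like this, comparing the two designs is a straightforward ratio of conversions to the traffic each variant received.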
Though I don't completely understand the implementation of A/B testing at this point, I think I have the right frame of mind: settle on the right test design before worrying about the specifics of implementation. I don't want to spend a ton of time analyzing the data coming out of our testing unless I know I can trust it.
Tuesday, March 20, 2007