The purpose of website testing, in general, is to improve results – whether that be site engagement, form completions, ecommerce sales, etc. However, if we’re not smart about our testing plan, we will end up with inconclusive tests, failed tests, and tests that have no impact on bottom-line business results. Consider the following examples that illustrate this:
Let’s imagine that we are a B2B company looking to drive more leads and inquiries through form submissions. We set up our testing plan and methodology with a thorough approach where we will test various elements on the page, including the layout, imagery, supporting text, and the form itself (number of fields, etc.).
As our tests conclude, we notice that the results consistently show little to no impact on conversion rates, even for tests with dramatically different layouts and designs. Why is this?
When we dig deeper, we find that 80% of our traffic comes from mobile devices, yet our tests are all designed for desktop experiences. Because of this, we ended up testing elements and designs that the majority of visitors never saw, because they were all stripped out of the page when it was reduced to a mobile screen size. It’s as if we were running an A/A test for 80% of our traffic. Knowing this, we can now create mobile-specific tests that will directly impact the majority of our visitors and drive real, impactful results.
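A quick back-of-the-envelope sketch shows how badly mobile traffic dilutes a desktop-only test. All of the numbers below are hypothetical, chosen only to match the 80% mobile share in the example:

```python
# Sketch of how mobile traffic dilutes a desktop-only test.
# All numbers are hypothetical, for illustration only.
mobile_share = 0.80            # 80% of visitors are on mobile
desktop_share = 1 - mobile_share
desktop_lift = 0.20            # a strong 20% lift among desktop visitors
baseline_cr = 0.05             # 5% baseline conversion rate

# Mobile visitors never see the change, so their lift is 0 (an A/A test).
overall_lift = desktop_share * desktop_lift + mobile_share * 0.0
new_cr = baseline_cr * (1 + overall_lift)

print(f"Site-wide lift: {overall_lift:.1%}")                  # 4.0%
print(f"Conversion rate: {baseline_cr:.2%} -> {new_cr:.2%}")  # 5.00% -> 5.20%
```

Even a dramatic 20% win for desktop visitors shows up as a barely detectable 4% site-wide lift, which is why the tests looked inconclusive.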
Now imagine that we are an ecommerce company looking to increase revenue, so we decide to test our shopping cart experience in order to get more people to complete orders. Again, we create a testing plan rigorously designed around high-impact elements in the shopping cart and checkout experience.
As we conclude tests, we see great results – conversion rate increases of 60–80%. However, our tests are also taking months to conclude, and upper management sends word that revenue remains flat. Why is this?
When we analyze further, we find that most visitors are leaving the site before putting anything into their shopping cart. Even though we had amazing test results, they had little impact on the company’s bottom-line results because we were improving a part of the funnel that most visitors were not even getting to.
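The thin funnel also explains why the tests dragged on for months. A rough duration estimate (using a common rule-of-thumb sample size for a two-variant test at roughly 95% confidence and 80% power – all traffic numbers here are hypothetical) makes the problem concrete:

```python
import math

# Rough duration estimate for a two-variant test (~95% confidence, 80% power).
# All traffic numbers are hypothetical; the point is how few visitors
# ever reach the checkout step we chose to test.
def sample_size_per_variant(p, relative_mde):
    """Common rule-of-thumb: n ~= 16 * p * (1 - p) / delta^2 per variant."""
    delta = p * relative_mde
    return math.ceil(16 * p * (1 - p) / delta**2)

daily_visitors = 5_000
add_to_cart_rate = 0.03     # only 3% of visitors ever reach the cart
checkout_cr = 0.20          # conversion rate among cart-holders

n = sample_size_per_variant(checkout_cr, 0.10)     # detect a 10% relative lift
daily_cart_visitors = daily_visitors * add_to_cart_rate  # 150 per day
days = math.ceil(2 * n / daily_cart_visitors)

print(f"~{n} cart-holders needed per variant -> ~{days} days to conclude")
```

Because only a small slice of traffic ever reaches the cart, a test that would finish in days on a high-traffic page takes months deep in the funnel.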
Continuing as the ecommerce company, we realize that we need to increase the number of visitors placing products in their carts. We conduct tests on pages with high traffic in order to conclude tests faster and have a greater impact on performance.
We set up our testing plan and again see great results as we conclude our tests much faster this time. However, management still comes to us saying that revenue remains flat – how can this be?
We have greatly improved the percentage of people placing items in their cart, and thanks to our previous testing we have a high purchase rate as well. When we analyze further, though, we find that the high-traffic pages and products we chose to test actually had the lowest profit margins and highest overhead costs. While we achieved great test results, there was unfortunately no impact on the company’s bottom line.
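A simple margin-weighted comparison shows why traffic alone is the wrong test-selection criterion. The catalog below is entirely hypothetical, but it captures the pattern: the highest-traffic pages can carry the worst margins.

```python
# Hypothetical two-product catalog: high traffic does not mean high profit.
products = [
    # (name, daily_visitors, conversion_rate, avg_order_value, profit_margin)
    ("clearance", 8_000, 0.040, 25.0, 0.02),    # high traffic, 2% margin
    ("flagship",  1_000, 0.030, 120.0, 0.35),   # low traffic, 35% margin
]

def daily_profit(visitors, cr, aov, margin):
    """Daily profit contribution of one product page."""
    return visitors * cr * aov * margin

for name, v, cr, aov, m in products:
    base = daily_profit(v, cr, aov, m)
    lifted = daily_profit(v, cr * 1.5, aov, m)   # a 50% conversion lift
    print(f"{name}: +${lifted - base:,.2f}/day from a 50% lift")
```

In this sketch, a 50% conversion lift on the busy low-margin page adds far less daily profit than the same lift on the quieter high-margin page – which is exactly how great test results can leave the bottom line flat.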
Hopefully, these real-world examples help to illustrate that where and how we test is just as important as what we test, if not more so. If we take the time to truly understand our website data and success metrics, those insights will allow our tests to significantly impact business results – which is what we all strive to achieve.