Enterprise A/B Testing Software

Your Ultimate Digital Customer Experience Tool

Conversion rate optimization is one of the most powerful strategies for transforming your website’s performance. A/B testing is a key component of a website personalization strategy, offering powerful insights into which design ideas will bring in the most revenue for your enterprise. That makes it tempting to dive straight in and start running tests without putting enough thought into what to test or how to measure success.

But a lack of planning means many enterprise businesses end up throwing money away on A/B testing, thinking it will instantly deliver the silver bullet that will send their online sales through the roof. Optimization is not just about making websites perform better; it’s about making businesses perform better. That might mean faster customer acquisition, increased order value, identifying inefficiencies, or a combination of all these and more. But achieving these results requires a thoroughly planned and carefully measured approach.

Ultimately, your testing program should help your business grow, so let’s take a deep dive into some of the ways you can make sure it does.

Conversion rate optimization (CRO) is a slightly misleading term. It has become a catch-all title for a process of website optimization that often involves improving much more than just your conversion rate. Of course, conversion rate is an important metric to measure, but it’s not the only metric.

What’s important is the bottom line. Are the changes you’re testing going to make more money for your business in the long run? You could slash all the prices on your website by 50% tomorrow and your conversion rate would most likely go up very quickly. Or promise to give away a speedboat to everybody who fills in a callback form and your submissions will probably skyrocket. But in both cases, you’re likely to lose money, sacrificing your profit margins just to get people through the door.

While these may seem like extreme examples, margins and lead quality are crucial considerations for many businesses, so it would be risky to ignore these and concentrate blindly on your conversion rate.

So, what does this mean for an enterprise A/B testing plan? It means looking beyond the vanity metrics and headline figures. Don’t take an improved conversion rate as proof that a winning variation will bring long-term benefit to your business.

It can also be tempting to rely on averages when looking at your results, but you will need to segment your data to get real insights. A simple example is making sure you know how your A/B tests affect users on different devices, or observing the impact on new versus returning visitors. Understand which audience segments are most important to your business and make sure you drill down to analyze your results across these segments.
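
For illustration, here is how that kind of segmented read-out might look in Python with pandas. The file name and column layout are assumptions for the sketch, not any particular platform’s export format:

```python
import pandas as pd

# Hypothetical export of raw test data; the columns
# (variation, device, is_new_visitor, converted) are illustrative.
df = pd.read_csv("ab_test_results.csv")

# Conversion rate per variation, broken down by device and visitor type,
# rather than a single blended average across all traffic.
segmented = (
    df.groupby(["variation", "device", "is_new_visitor"])["converted"]
      .agg(visitors="count", conversions="sum")
)
segmented["conversion_rate"] = segmented["conversions"] / segmented["visitors"]
print(segmented)
```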

It’s vital that you understand your business objectives before running tests. This will help you define clear and actionable goals, which in turn will help you define the KPIs that will show you how well your website is performing.

You also need to give some context to these KPIs, so that you know what counts as a ‘good’ performance. Your analytics platform may tell you how many more sales your winning A/B test has generated, but make sure you also know how many sales you need to hit to be profitable. Better still, make sure you can analyze the profitability of those sales—and look at what those uplifts will mean to your business in three, six and 12 months’ time. Advanced A/B testing platforms can track your business’s real revenue against each variation being tested, by allowing you to feed in the value of sales or inquiries from your CRM. Evolv AI lets you choose the metric that is most critical to your business and helps you optimize against that measure, whether it’s leads, conversions or revenue.
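
To make that concrete, here is a minimal sketch of joining CRM sale values back to variation assignments to compare real revenue per variation. This is a generic illustration, not Evolv AI’s actual API; the file names and columns are assumptions:

```python
import pandas as pd

# Illustrative inputs: which variation each user saw, and what they
# later spent according to the CRM (assumes at most one sale per user).
assignments = pd.read_csv("variation_assignments.csv")  # user_id, variation
sales = pd.read_csv("crm_sales.csv")                    # user_id, sale_value

revenue = (
    assignments.merge(sales, on="user_id", how="left")
               .fillna({"sale_value": 0.0})             # non-buyers count as zero
               .groupby("variation")["sale_value"]
               .agg(visitors="count",
                    total_revenue="sum",
                    revenue_per_visitor="mean")
)
print(revenue)
```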

However you measure it, the key point is that you need to know the real impact your website is having on your business. Setting clear and measurable targets is essential to ensuring your A/B testing software delivers positive results.

Measure your costs against revenue uplifts

Running tests and implementing changes can seem expensive. Not only do you need to invest in an A/B testing platform, but there’s also the time, resources and tools required to carry out meaningful research and generate ideas, as well as setting up, monitoring and reporting on experiments. You may also need to pay designers to draw up design treatments or wireframes and source developers to build your tests—as well as implement permanent changes on your site once you’ve discovered the winning variations.

When done properly, this process is an investment that will bring in more money than it costs, boosting profit margins in the short, medium and long term. But effective budgeting is an essential part of an effective CRO program.

And bear in mind that not all your tests will win. In fact, studies show that just one in seven A/B tests is a winning one. That means some tests will lose your business money—so you need to make the winners count.
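
A quick back-of-envelope calculation shows why the winners have to carry the whole program. Every figure below is made up purely for illustration:

```python
# Rough testing-program economics under a one-in-seven win rate.
win_rate = 1 / 7
tests_per_quarter = 12
cost_per_test = 2_000        # research, design, build and analysis (illustrative)
value_of_a_win = 25_000      # annualized uplift from one winner (illustrative)

expected_wins = tests_per_quarter * win_rate            # about 1.7 winners
expected_return = expected_wins * value_of_a_win
total_cost = tests_per_quarter * cost_per_test
print(f"Expected net value per quarter: ${expected_return - total_cost:,.0f}")
```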

There may be some occasions when the cost of setting up a test outweighs its potential financial benefits. Only if you have a clear handle on your business objectives and the impact your website can have on income will you be able to make this judgment.

For smaller businesses, especially those without in-house developers, budget restrictions often hamper an A/B testing program. But A/B testing done properly should deliver a measurable ROI, allowing you to steadily grow your business and make informed decisions on how much to invest along the way. Many of the costs associated with testing are scalable, meaning they only go up in proportion to your business’s growth. For example, most testing software, Evolv AI included, uses a tiered pricing model based on your traffic levels. This means the costs are relative to the number of visitors your site gets, making A/B testing more affordable for growing businesses.

Those who invest more time and money in CRO tend to see better results. Testing is an investment, but one that keeps paying off when done properly.

Run your tests for the right length of time

A common mistake in A/B testing is to declare a winner too early, without enough data for the results to be reliable. It can be a costly move: choosing the wrong winner means losing money in the long run, even if your new design appeared to drive a spike in its first few days.

Statistical significance is one of the key concepts that you need to understand to ensure that your A/B tests are worthwhile. Conversion optimization should be treated as an exact science—that means guessing, assumptions and declaring winners too early are not good enough.

In short, a result that is statistically significant means you can be confident that any performance swings (good or bad) are not just down to chance, but are due to the changes you have made. That means when you’ve declared a statistically significant winning variation, you can be relatively safe in the knowledge that permanently implementing those changes will bring long-term gains.

If you aim for a minimum significance level of 95% with your tests, there is only a 5% chance that a result as strong as the one you observed would have appeared through luck alone—those are good odds on which to make business decisions.

You also need to ensure that you have tested against a large enough sample of site visitors for that significance level to mean something. But how do you know how much traffic you need? The required sample size can vary greatly, depending on the baseline conversion rate, how big a change you expect to see and what level of statistical significance you want to achieve. A couple of basic rules of thumb: the higher the statistical significance you demand, the more traffic you need, but the more reliable the results will be; and the smaller the improvement you want to detect, the more traffic you will need.

But don’t worry if math isn’t your strong point: there are various calculators out there to help you determine sample size. Evolv AI has one built in, helping you work out in advance how long to run your test and letting you keep tabs on the progress of your live experiments.
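
For the curious, here is a minimal sketch of the standard two-proportion sample-size formula that such calculators typically implement; the exact method behind any given tool, Evolv AI’s included, may differ:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_per_variation(baseline, uplift, alpha=0.05, power=0.8):
    """Visitors needed per variation for a two-proportion z-test.

    baseline: current conversion rate, e.g. 0.05 for 5%
    uplift:   relative improvement to detect, e.g. 0.10 for +10%
    alpha:    1 - significance level (0.05 means 95% significance)
    power:    probability of detecting a real effect of that size
    """
    p1 = baseline
    p2 = baseline * (1 + uplift)
    p_bar = (p1 + p2) / 2
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# A 5% baseline, aiming to detect a 10% relative uplift:
print(sample_size_per_variation(0.05, 0.10))              # roughly 31,000 per variation
print(sample_size_per_variation(0.05, 0.10, alpha=0.20))  # roughly 18,000 at 80% significance
```

Note how relaxing the significance level from 95% to 80% shrinks the required sample considerably, which is exactly the trade-off lower-traffic sites face (more on that below).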

One more thing to bear in mind when deciding when to stop your tests is the calendar. Usually, you will want to run through one or two full business cycles (often a full calendar week each), as results can look very different on a Sunday compared to a Wednesday, so it’s important that your test has taken the peaks and troughs into account.

Beware of external factors that can affect your results during this time too. For example, seasonality can have a significant impact on various businesses, so a test in the build-up to Christmas might see very different results compared to running it in April. Other marketing activity can also affect website behavior. Has your startup just been featured on Product Hunt or the New York Times? Activity like this can improve conversion rates by boosting your authority and visibility, so ensure you are taking a holistic view of anything that can affect your business when you review your test results. While you may want to take credit for all the uplifts, ego should be put to one side if you want to be successful at A/B testing.

But what should you do if it feels like your tests will never finish? Lower traffic or lower conversion numbers are common barriers to entry for smaller businesses looking to get into A/B testing. And it’s true, some tests on low-traffic sites can take weeks, months or even years to reach statistical significance. You need traffic and conversions to be able to run A/B tests, but that shouldn’t stop you applying CRO principles to your research and design process.

Testing is not completely off the table for you either. You may need to lower your risk threshold, but if you’re prepared to accept a statistical significance of, say, 80%, then you will get results much more quickly, with fewer visitors. It does mean there’s a 20% chance that the test results will be misleading, but there’s still a four-in-five chance that implementing a winning design will bring you more money.

Test more, test quicker

The longer each test takes to achieve significant results, the greater the opportunity cost to the business. And any time spent not testing, or testing the wrong things, is essentially time wasted. Going through the process of setting up a CRO program and then only running one test a month is unlikely to generate the sort of returns you want.

For any size of business, it is good to approach A/B testing almost with a lean startup mentality—this is iterative testing: research, identify your problems, test solutions quickly, analyze the results, then rinse and repeat.

As we’ve already explored with statistical significance, the smaller your tweaks, the longer it will take for you to get reliable results. For very high traffic sites, small tweaks can have a huge impact. And even if you’re not on a par with Amazon’s traffic levels, the power of compounding means that small gains can snowball into a major boost over time. For example, a 3% uplift every month would add up to more than 40% improvement over the course of a year. But just bear in mind that the smaller the potential increase, the longer it will be until you reach statistical significance.
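
The arithmetic behind that compounding claim is easy to check:

```python
# A 3% uplift compounded monthly for a year:
annual_uplift = 1.03 ** 12 - 1
print(f"{annual_uplift:.1%}")  # 42.6%, i.e. comfortably more than 40%
```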

For any size of business, running just one test at a time and taking weeks to get results can lead to major inefficiencies in your testing program. Not only are you missing out on the opportunity to find more winning variations during that time, but you also need to secure some big wins to generate a positive ROI. At the very least, you need to be making more money for the business than your testing is costing to plan, implement and analyze. Volume and velocity of tests, or rather a lack thereof, can be one of the biggest blockers to a successful CRO strategy. On the other hand, you need to be careful when trying to run multiple tests at once. If traffic overlaps between your tests then you are in danger of contaminating your results—so you need to find the right balance in your testing schedule.
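
One common way to keep concurrent tests from sharing traffic is to hash each visitor deterministically into exactly one experiment. This is a generic technique sketch, not how any particular platform allocates traffic, and the experiment names are invented:

```python
import hashlib

EXPERIMENTS = ["homepage_hero", "checkout_flow", "pricing_page"]  # illustrative names

def assign_experiment(user_id: str) -> str:
    """Route each visitor to exactly one experiment, deterministically,
    so concurrent tests never overlap and contaminate each other."""
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    return EXPERIMENTS[int(digest, 16) % len(EXPERIMENTS)]

print(assign_experiment("visitor-42"))  # same visitor, same experiment, every time
```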

Evolv AI solves these problems by letting you test more in a shorter space of time. By using artificial intelligence to mimic the evolutionary process, it allows you to run multiple variations of tests and multiple tests across the same sales funnels without the risk of skewed results, while constantly learning what changes have had the biggest impact.

Be ambitious and go for big wins

Test complexity is another factor that can impact success. Being too conservative with what you test can stand in the way of progress. We’ve seen that smaller tweaks take longer to measure, and you may find they make no discernible difference to your site’s performance, even after weeks of waiting. One small test at a time is unlikely to move the needle for your business.

There are a couple of different ways to tackle this issue. The first, if testing one variation against another, is to test a radical change—a significant redesign or feature update that will bring noticeable results more quickly. So if the changes have a positive impact, you could be celebrating a big uplift much sooner. The trade-off is that you will lose some insight into what changes actually made the difference. Was it the tweaked headline, the new image or the redesigned call-to-action button? That’s why many optimizers tread more carefully with A/B testing and just change one thing at a time, keeping a closer eye on the impact of each individual change. This isn’t an issue in Evolv AI, however, which can test numerous variations of a radical redesign at the same time, while always learning which elements are having an impact.

Another approach is to combine multiple changes and test them as different combinations at the same time, rather than one after the other. This is known as multivariate testing. While A/B tests generally test one variable, multivariate tests involve changing various elements on a page to create multiple versions of that page. Say you wanted to test three different images as well as three different headlines: running these changes as a multivariate test would produce nine (3 x 3 = 9) different versions of the page. Testing each combination at the same time significantly reduces the time it takes to find the optimal design, compared to running A/B tests one after the other.
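
Enumerating those combinations is straightforward; the headlines and image names below are placeholders:

```python
from itertools import product

headlines = ["Save time", "Save money", "Start free"]    # placeholder copy
images = ["hero_a.jpg", "hero_b.jpg", "hero_c.jpg"]      # placeholder assets

# Full-factorial multivariate test: every headline paired with every image.
variants = list(product(headlines, images))
print(len(variants))  # 3 x 3 = 9 page versions
for headline, image in variants:
    print(f"{headline} + {image}")
```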

Multivariate testing has often seemed out of reach for many businesses, as it generally requires high traffic levels to record significant results within a reasonable timeframe. But Evolv AI makes multivariate testing accessible to sites that were previously only able to run A/B tests. Its evolutionary computation technology cuts out the need to test variations one at a time, intelligently generating multiple combinations to work out which elements perform best together and reaching positive results much more quickly than traditional A/B testing.

This reduces the guesswork and assumptions involved in a radical redesign by breaking it down into smaller elements and understanding how the relationships between those elements impact conversion. If incremental changes can add up to a big impact on the bottom line, then testing many variations at once can deliver that impact even faster.

Prioritize your testing ideas properly

A lack of structure behind your testing is a sure route to failure. Those who implement a robust process and back up tests with solid research, data and clear hypotheses are more likely to come up with winning ideas.

Research is one of the most crucial parts of a successful CRO strategy. Qualitative research, such as user testing and customer surveys, should be combined with quantitative data, like the hard numbers available in your analytics platform, to paint a picture of the areas where your site is underperforming and provide insight as to why.

This research phase allows you to identify your customers’ pain points and the problems that you need to solve. Using this information will help you to generate a solid hypothesis—identifying what changes you can make to encourage site visitors to take the desired action, and why.

You can also use this research to make informed estimates about how big an impact your proposed changes might have, weigh up the costs of implementation against the potential returns, and use this to prioritize what to test next.
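
One widely used way to formalize that weighing-up is a simple scoring model such as ICE (Impact, Confidence, Ease). The ideas and scores below are purely illustrative:

```python
# Hypothetical backlog scored with ICE (each dimension rated 1-10).
ideas = [
    {"name": "Simplify checkout form", "impact": 8, "confidence": 6, "ease": 4},
    {"name": "Rewrite hero headline",  "impact": 4, "confidence": 7, "ease": 9},
    {"name": "Add trust badges",       "impact": 5, "confidence": 5, "ease": 8},
]

for idea in ideas:
    idea["ice"] = idea["impact"] * idea["confidence"] * idea["ease"]

# Highest-scoring ideas go to the top of the testing queue.
for idea in sorted(ideas, key=lambda i: i["ice"], reverse=True):
    print(f'{idea["ice"]:>4}  {idea["name"]}')
```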

Correctly prioritizing tests can be tricky for some teams, especially those just starting out on their optimization program. They may overestimate the potential impact, or miss out on bigger wins by playing it safe. Evolv AI can remove some of this guesswork by allowing you to upload multiple test hypotheses and letting the platform learn which ideas work.

Learn from all your tests

Good optimizers always have an extensive list of test ideas in their pipeline. But that doesn’t mean these ideas are set in stone. An effective optimization strategy is flexible and adaptive. New tests should not exist in a bubble; each one should be influenced by the results of previous experiments. Make sure you analyze all your results, both winners and losers, to understand why a variation had an impact.

If you learn from each test and apply those learnings to your future ideas, then there is no such thing as a wasted test. Your knowledge of your customers and understanding of what is important to them grows with every test, allowing you to develop increasingly robust and data-driven hypotheses. Using AI A/B testing software and effectively tracking the results allows you to confirm these hypotheses and measure their impact.

Evolv AI is always learning too, but at a much faster rate than traditional A/B testing models. Combining algorithms with statistics allows the personalization platform to test permutations that may never have even made it to wireframe stage, by learning what elements work together and resonate with your customers. This efficient, evolutionary process allows you to try out thousands of page combinations, generated from dozens of ideas, taking iterative and user-centered design to another level.

Effective optimization is about data-driven design decisions. Through solid research, a robust testing schedule and a disciplined approach to interpreting results, you can learn more about your website’s role in your business than ever before. Combining this understanding with the statistical power and analysis offered by your multivariate A/B testing software provides a major opportunity to deliver ongoing improvements at an impressive scale.