Aaron Gallant
Data Science Curriculum Lead

Chukwuemeka Okoli
ML engineer at Ledios
Former Petroleum engineer

Let’s say you are a data analyst at a company that’s developing a mobile app: a neural network that draws art based on descriptions.

The business isn’t going too well: approximately 30% of potential users don’t complete registration (smells like a disaster!), and you have no idea how to fix it. Sure, coworkers offer ideas derived from intuition and personal experience. But what if those ideas make things even worse? Such solutions often don’t match reality, which makes them ineffective. How, then, would you assess the effectiveness of a future update?

Before you make changes, it’s a good idea to test them. A/B testing is one way to do this successfully.

What is A/B testing?

A/B testing is an experiment that allows you to compare two versions of something and find out which one is better. Professionals might say that A/B testing is really statistical hypothesis testing: you state a hypothesis about a change, run an experiment, and check whether the data supports it.

The method is used for many digital products: websites, mobile apps, cloud services, and others. It is widely applied in user experience and usability research.

For example, suppose developers working on an app hit a wall: they don’t know whether to use black or white buttons. An A/B test helps answer this question.

The process can be applied to almost any situation. A/B testing can help find out which website navigation style is best, or which registration flow is least likely to deter users.

So an A/B test solves the following tasks:

  • It helps understand the real needs, habits, and behavior of users and the objective factors that affect them
  • It reduces the risk that decisions are driven by the developer's subjective perceptions
  • It helps to properly allocate resources to implement effective solutions

The overall purpose of A/B testing is to conduct a randomized controlled experiment, similar to a drug trial: one group of participants gets the real pill and the other gets a placebo.

Why people love A/B tests

A/B testing has earned a solid reputation, which makes it one of the basic skills of data scientists and data analysts. Here are its main advantages:

Versatility. It can be used in different fields, from marketing to medicine.

For instance, William Sealy Gosset used it to evaluate the quality of beer at Guinness in 1908.

Similarly, in 2000, Google performed its first A/B test to determine the optimal number of results to display in a search. 

Accuracy. A/B testing is one of the most accurate methods in market research. Before an experiment is conducted, analysts first look at the baseline conversion rate.

Conversion is a simple marketing metric: the percentage of users who perform a desired action, whatever action the marketer chooses. It can be the percentage of website visitors who buy something, the percentage of users who install an app on their smartphone, register on the website, and so on.
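For illustration, here is how that metric is computed in Python (the numbers below are made up):

    # Conversion rate: the share of visitors who completed the target action.
    visitors = 25_000        # hypothetical number of app visitors
    registrations = 1_000    # hypothetical number of completed registrations

    conversion_rate = registrations / visitors * 100
    print(f"Conversion rate: {conversion_rate:.1f}%")  # prints 4.0%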

Let's go back to the example from the beginning. The registration conversion rate in our neural network app is 4%. A marketer who watched a webinar on color perception suggests changing the color of the “Buy” button from an aggressive red to a more pleasant green, so the user feels less pressured.

The assumption is that this can increase the conversion rate by 2 to 2.5 times. And you, as a data analyst, have to investigate.

As a result, the hypothesis will be as follows: “If you change the color of the 'Buy' button from red to green, the conversion rate will increase from 4% to 10%.” A/B testing will confirm or disprove this hypothesis. Conversion on the page with the green button will either increase from 4% to 10%, decrease, stay the same, or change by only 0.5–1%.
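Since this is statistical hypothesis testing, the comparison itself is usually a two-proportion test. Here is a minimal sketch using statsmodels; the visitor and registration counts are invented for illustration:

    from statsmodels.stats.proportion import proportions_ztest

    # Hypothetical experiment results: group A saw the red button,
    # group B saw the green one.
    conversions = [400, 1_000]     # registrations in groups A and B
    visitors = [10_000, 10_000]    # size of each group

    # Two-sided z-test: do the two conversion rates differ?
    stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
    print(f"z = {stat:.2f}, p-value = {p_value:.4f}")

    # At the conventional 5% significance level, a p-value below 0.05
    # means the difference is unlikely to be pure chance.
    if p_value < 0.05:
        print("Reject the null hypothesis: the button color matters.")
    else:
        print("No statistically significant difference detected.")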

When A/B testing is not an option

Occasionally, you will have to think ten times before you start preparing an A/B test, because in some situations it’s better to choose a different method. Here’s why:

Expense. A/B testing is one of the most expensive and resource-intensive research methods. The result highly depends on the quality of data collection: missing the smallest detail can result in months of wasted work.

Let's go back to the example of the marketer choosing the color of the buttons.

To get accurate results, you need to correctly calculate the audience size. How many people are participating in the test? Does it include all the users or a limited sample?

Let’s imagine that the marketer decided to test everybody. Good news: there will be a lot of data. Bad news: an experiment costs money regardless of the outcome, so the company will have to spend considerable resources on testing only to potentially find out that the idea doesn’t work. If the hypothesis is wrong and the color of the button doesn’t matter, the company will have wasted time and money on useless testing. In practice, this happens frequently, even with experienced data analysts. There are many potential reasons: ending the experiment ahead of time, inflated expectations, relying on someone else’s experience, technical errors, and so on.

That is why the test version is usually sent to a smaller portion of the user base.

Challenges due to small sample size. Imagine dividing four app users into two groups, so each participant represents 25% of the total result.

Each group got a different version of the in-app registration form. During the experiment, one user got divorced and another got fired. Bad mood, stress, and other factors could affect their decisions, distorting the result; the company would then arrive at the wrong conclusion and make the product worse.

Thus, the company will have wasted money on a test that turned out to be useless.

Meanwhile, 400 or 4,000 participants could provide truly representative data. That is because the law of large numbers comes into play: a mathematical theorem which states that, as the sample grows, the sample average gets closer to the “real” average. In other words, the accuracy of the experiment depends on how large the sample is.
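You can watch the law of large numbers at work in a few lines of Python. This sketch simulates users whose “true” probability of registering is 4% and shows how the measured conversion rate stabilizes as the sample grows (the sample sizes are arbitrary):

    import random

    random.seed(42)
    TRUE_CONVERSION = 0.04  # the "real" conversion rate we want to measure

    for n in (4, 400, 4_000, 400_000):
        conversions = sum(random.random() < TRUE_CONVERSION for _ in range(n))
        print(f"n = {n:>7}: measured conversion = {conversions / n:.2%}")

    # With n = 4, the measured rate can only be 0%, 25%, 50%, 75%, or 100%.
    # With n = 400,000, it lands very close to the true 4%.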

You can calculate the optimal sample size using a special calculator. For example, say the initial conversion rate is 10% and the task is to detect an increase to 11%. In that case, each page should be viewed by approximately 14,313 visitors.

Sample size calculator
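If you’d rather not rely on an online calculator, statsmodels can produce a similar estimate. Here is a minimal sketch for the 10% to 11% scenario; note that the exact figure depends on the significance level and statistical power you pick, so it won’t necessarily match the calculator’s 14,313:

    from statsmodels.stats.power import NormalIndPower
    from statsmodels.stats.proportion import proportion_effectsize

    # We want to detect a lift in conversion from 10% to 11%.
    effect = proportion_effectsize(0.10, 0.11)

    # Common defaults: 5% significance level, 80% power.
    n_per_group = NormalIndPower().solve_power(
        effect_size=effect, alpha=0.05, power=0.8, alternative="two-sided"
    )
    print(f"Visitors needed per variation: {n_per_group:.0f}")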

A/B testing will be effective if we have large samples that produce a lot of data. Otherwise, it may be better to use other methods. But that's another story.

A/B testing types

A single hypothesis is often tested using different variations. That is why there are three types of A/B testing. Data scientists and data analysts choose the type based on the specifics of the situation.

Simple A/B testing, or a split test. A classic of the genre. It compares two versions: the control and the test. The control version is the original, and the test version is the new one. They usually have only one difference between them.

For example, a data analyst ran an A/B test to find the ideal size for the “Buy” button on an online store’s website. They divided the entire user traffic, 2,000 users per day, into two groups of 1,000 users each. The first group was shown the control version of the landing page with the small button; the second was shown the test version with the large button.

This method is used when planning point changes that will not have a major effect on the operation of the website.
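One common way to implement such a split is deterministic bucketing: hash the user’s ID so the same person always sees the same version. A minimal sketch; the function and experiment names are invented:

    import hashlib

    def assign_variant(user_id: str, experiment: str = "buy-button-size") -> str:
        """Deterministically assign a user to 'control' or 'test'."""
        # Hashing (experiment + user_id) keeps the assignment stable across
        # visits and independent from other experiments.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100               # a number from 0 to 99
        return "control" if bucket < 50 else "test"  # 50/50 traffic split

    print(assign_variant("user-123"))  # same user, same answer, every visit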

Multivariate testing. Multivariate testing resembles a split test, but it compares a larger number of variables. The big difference: it tests different combinations of those variables against the hypothesis.

For example, you can simultaneously analyze the target action button, feedback block, and logo. The goal is to find which combination of variations performs the best.

This test is more complex, but it helps to analyze how the combination of different elements affects the audience.
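Because multivariate testing covers every combination, the number of page versions grows multiplicatively. A small sketch that enumerates them (the element names are hypothetical):

    from itertools import product

    # Three page elements, each with two variations.
    buttons = ["red", "green"]
    feedback_blocks = ["sidebar", "footer"]
    logos = ["classic", "minimal"]

    variants = list(product(buttons, feedback_blocks, logos))
    print(f"{len(variants)} combinations to test")  # 2 * 2 * 2 = 8
    for combo in variants:
        print(combo)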

A/B/n testing. Data scientists and data analysts use A/B/n testing to assess several versions of a single change. For example, they need to check which button design on their website is more attractive to users, so they create several web pages with rectangular, round, and triangular buttons.

This testing method also has one control version and several test versions. The next step is to split the traffic into three groups and show these versions to users.

A/B/n testing allows you to choose a suitable solution from several proposed options, which often makes it a preliminary step before more advanced multivariate testing: multivariate testing means testing all the combinations of variables, while A/B/n testing is about choosing a subset.
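With three or more versions in play, one way to check whether any of them performs differently is a chi-square test on the conversion counts. A minimal sketch using scipy; all the numbers are invented:

    from scipy.stats import chi2_contingency

    # Rows: rectangular (control), round, and triangular buttons.
    # Columns: [converted, did not convert] out of 1,000 users each.
    observed = [
        [50, 950],   # rectangular
        [65, 935],   # round
        [48, 952],   # triangular
    ]

    chi2, p_value, dof, _ = chi2_contingency(observed)
    print(f"chi2 = {chi2:.2f}, p-value = {p_value:.4f}")
    # A small p-value suggests at least one button shape converts differently;
    # pairwise follow-up tests would show which one.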

The difference between the methods is easy to observe in practice; for a real-life example of a split test, see the article "A Beginner’s Guide to A/B Testing (Includes a Real Case!)".

How to learn A/B testing

You can become an A/B testing guru by constantly practicing the method. You will also need to know Python.

You can get all that and more with Practicum’s Data Science Bootcamp, our nine-month online program that teaches beginners essential IT skills while an experienced tutor, code reviewers, and tech support help you level up.

Unlock the potential of A/B testing

Data analysts, data scientists, and marketers all consider A/B testing an irreplaceable tool. It allows you to see, for example, what ads drive the most conversions, what offers your audience responds to, or which app interface is more attractive for users, among a thousand other things.

With Practicum, you can become a data analyst after just nine months of training. Our career coaching then helps you find a job. And if you don’t get one within six months of graduating Practicum, you get 100% of your tuition back.
