A/B Testing: Online Experiment Design and Analysis

Thank you for signing up for the course! We look forward to working with you and hearing your feedback in our forums. Let's get started!


Course Resources

Tools for A/B Testing

These tools were used in class to help with various parts of the A/B testing process:

Additional Learning

The following resources may also be helpful in learning more about A/B Testing:

Course Syllabus

Prerequisite Knowledge

This course requires introductory knowledge of descriptive and inferential statistics. If you haven't learned these topics, or need a refresher, they are covered in the Udacity courses Inferential Statistics and Descriptive Statistics.

Prior experience with A/B testing is not required, and neither is programming knowledge.

Lesson 1: Overview of A/B Testing

This lesson will cover what A/B testing is and what it can be used for. It will also cover an example A/B test from start to finish, including how to decide how long to run the experiment, how to construct a binomial confidence interval for the results, and how to decide whether the change is worth the cost of launching it.
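
Although the course itself requires no programming, a short sketch can make the confidence-interval step concrete. Below is a minimal Python example of a normal-approximation (Wald) binomial confidence interval; the click and pageview counts are hypothetical.

```python
import math

def binomial_confidence_interval(successes, trials, z=1.96):
    """Normal-approximation (Wald) confidence interval for a binomial
    proportion such as a click-through rate; z=1.96 gives ~95% coverage."""
    p_hat = successes / trials
    margin = z * math.sqrt(p_hat * (1 - p_hat) / trials)
    return p_hat - margin, p_hat + margin

# Hypothetical results: 100 clicks out of 1,000 pageviews
low, high = binomial_confidence_interval(100, 1000)
print(f"95% CI for click-through rate: ({low:.4f}, {high:.4f})")
```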

Lesson 2: Policy and Ethics for Experiments

This lesson will cover how to make sure the participants of your experiments are adequately protected, and what questions you should ask about the ethics of your experiments. It will cover four main ethics principles to consider when designing experiments: the risk to the user, the potential benefits, what alternatives users have to participating in the experiment, and the sensitivity of the data being collected.

Lesson 3: Choosing and Characterizing Metrics

One of the most important and time-consuming parts of designing an A/B test is choosing and validating the metrics you will use to evaluate your experiment. This lesson will cover techniques for brainstorming metrics, what to do when you can't measure what you want directly, and which characteristics of your metrics to consider when validating them.
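
To illustrate one way of characterizing a metric's variability when no analytic formula applies, here is a rough bootstrap sketch in Python; the latency values and the choice of the median as the metric are hypothetical.

```python
import random
import statistics

def bootstrap_metric_std(values, metric, n_resamples=1000, seed=0):
    """Estimate a metric's variability (the standard deviation of its
    sampling distribution) by bootstrap resampling; useful for metrics
    like the median that have no simple analytic variance."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(n_resamples):
        resample = [rng.choice(values) for _ in values]
        estimates.append(metric(resample))
    return statistics.stdev(estimates)

# Hypothetical page-load times in seconds; how variable is the median?
latencies = [0.8, 1.1, 0.9, 2.5, 1.0, 0.7, 1.3, 0.95, 1.2, 3.1]
print(bootstrap_metric_std(latencies, metric=statistics.median))
```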

Lesson 4: Designing an Experiment

This lesson will cover how to design an A/B test. This includes how to choose which users will be in your experiment and control groups: you will learn the different online definitions of a "user" and what effects that choice has on your experiment. It will also cover when you should limit your experiment to a subset of your entire user base, how to calculate how many events you will need in order to draw strong conclusions from your results, and how that number translates into how long you should run the experiment. Finally, the lesson will cover how various design decisions affect the size of your experiment, so you will know which decisions to revisit if you need results more quickly.
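
As a sketch of the sizing calculation, the Python snippet below approximates the per-group sample size for detecting a change in a proportion, using the usual normal approximation for a two-sided test; the baseline rate, minimum detectable effect, significance level, and power are all hypothetical choices.

```python
import math
from scipy.stats import norm

def required_sample_size(p_baseline, min_detectable_effect,
                         alpha=0.05, beta=0.20):
    """Approximate per-group sample size to detect an absolute change of
    min_detectable_effect in a proportion, with significance level alpha
    and power 1 - beta, via the normal approximation."""
    p1 = p_baseline
    p2 = p_baseline + min_detectable_effect
    p_bar = (p1 + p2) / 2
    z_alpha = norm.ppf(1 - alpha / 2)   # e.g. 1.96 for alpha = 0.05
    z_beta = norm.ppf(1 - beta)         # e.g. 0.84 for 80% power
    numerator = (z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(numerator / min_detectable_effect ** 2)

# Hypothetical example: 10% baseline conversion, detect a 2-point
# absolute change with 95% confidence and 80% power
print(required_sample_size(0.10, 0.02))  # users needed per group
```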

Lesson 5: Analyzing Results

This lesson will cover how to analyze the results of your experiments. Step one is always to run some sanity checks to catch problems with your experiment set-up. Then you will learn how to check your conclusions with multiple methods, including a hypothesis test on the effect size and a binomial sign test, in case you get results that surprise you. You will also learn how measuring multiple metrics for the same experiment can complicate analysis, along with some techniques for handling multiple metrics. Finally, you will learn about several analysis "gotchas" and what to do if you see them, including how Simpson's Paradox can affect A/B tests and why even statistically significant results might disappear when you launch.
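
As a sketch of one of the cross-checking methods mentioned above, here is a self-contained binomial sign test in Python; the day-by-day results are hypothetical.

```python
from math import comb

def sign_test_p_value(wins, n):
    """Two-sided binomial sign test: under the null hypothesis of no
    effect, each day the experiment 'wins' with probability 0.5, so we
    compute the probability of a result at least this extreme."""
    extreme = max(wins, n - wins)
    one_tail = sum(comb(n, k) for k in range(extreme, n + 1)) / 2 ** n
    return min(1.0, 2 * one_tail)

# Hypothetical daily results: the experiment group beat the control
# group on 11 out of 14 days.
print(f"Two-sided sign test p-value: {sign_test_p_value(11, 14):.4f}")
```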

Final Project: Design and Analyze an A/B Test

Make design decisions for an A/B test, including which metrics to measure and how long the test should be run. Analyze the results of an A/B test that was run by Udacity and recommend whether or not to launch the change.

Acknowledgements

We'd like to thank Justine Lai for producing and editing the course, Liz Keheler for managing the project, and Calvin Hu and Kim Dryden for visual styling advice. We couldn't have made this course without them.

-- Diane, Carrie, and Caroline