The primary goal of A/B testing, beyond lifting revenue or engagement, is to derive as many insights as possible. These insights can be leveraged to understand different aspects of customer behavior. The platform provides in-built analytics to track all relevant metrics, enabling you to understand the performance of different customer journeys.

What do I do if the A/B test succeeds?

When a test concludes successfully, go to the Performance Metrics tab and analyze all secondary metrics for the journeys. It is important to do this before rolling anything out. Once insights are derived, apply the winning journey to the entire traffic and document the learnings from the experiment.
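Before rolling out a winner, it helps to put the primary and secondary metrics for each journey side by side. The sketch below is a minimal illustration using pandas; the column names and data are hypothetical, not part of the product's export format.

```python
import pandas as pd

# Hypothetical per-user results: which journey (variant) each user saw,
# whether they converted, and a secondary metric (engagement time).
data = pd.DataFrame({
    "variant":      ["A", "A", "A", "B", "B", "B"],
    "converted":    [1, 0, 1, 1, 1, 0],
    "engagement_s": [120, 45, 200, 180, 150, 90],
})

# Summarize the primary metric (conversion) alongside a secondary
# metric (mean engagement time) for each journey.
summary = data.groupby("variant").agg(
    conversion_rate=("converted", "mean"),
    avg_engagement_s=("engagement_s", "mean"),
)
print(summary)
```

A table like this makes it easy to spot a winner on the primary metric that regresses on a secondary one, which is exactly the check worth doing before a full rollout.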

What do I do if the A/B test fails?

There are no failed A/B tests. One can always derive meaningful insights from a test that did not produce a winner. If a test yields inconclusive results, go to the Performance Metrics screen for the journeys, carefully analyze the secondary metrics, and try to deduce why this particular test failed. Document these learnings and leverage them when designing future experiments.
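One common reason a test reads as inconclusive is that the observed difference is within statistical noise. A standard way to check this is a two-proportion z-test on the conversion counts; the sketch below uses only the Python standard library, and the counts are hypothetical.

```python
from math import sqrt, erf

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return (z, two-sided p-value) for a difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that both journeys convert equally.
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

z, p = two_proportion_z(conv_a=120, n_a=1000, conv_b=138, n_b=1000)
print(f"z = {z:.2f}, p = {p:.3f}")  # p > 0.05: not significant at 95%
```

If the p-value is large, the test may simply have been underpowered rather than genuinely negative, which is itself a useful learning for sizing the next experiment.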

Why should I analyze Performance Metrics?

It is important to analyze performance metrics to get a comprehensive understanding of what worked and what did not. For example, a journey might have failed the overall test, but when bifurcated and analyzed across platforms (Mobile, Desktop), it may turn out that the experience performed really well on Mobile. Similarly, a journey may have failed a test whose primary metric was revenue yet shown very high engagement metrics; learnings from that journey can be leveraged to design future tests.
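The platform-bifurcation effect described above can be made concrete with a small sketch. The numbers below are invented to show how a journey can lose the overall comparison while winning within each platform segment (a Simpson's-paradox effect driven by traffic mix):

```python
import pandas as pd

# Hypothetical per-platform results for two journeys, A and B.
rows = pd.DataFrame({
    "variant":     ["A", "A", "B", "B"],
    "platform":    ["Mobile", "Desktop", "Mobile", "Desktop"],
    "visitors":    [2000, 8000, 8000, 2000],
    "conversions": [220, 560, 800, 100],
})
rows["rate"] = rows["conversions"] / rows["visitors"]

# Overall comparison: B looks like the winner...
overall = rows.groupby("variant")[["visitors", "conversions"]].sum()
overall["rate"] = overall["conversions"] / overall["visitors"]
print(overall)

# ...but bifurcated by platform, A has the higher rate in BOTH segments,
# because most of A's traffic landed on the lower-converting Desktop.
print(rows.pivot(index="platform", columns="variant", values="rate"))
```

This is why an overall verdict alone can mislead: always inspect the segment-level rates before discarding a "failed" journey.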

Analytics Overview