Research: When A/B Testing Doesn’t Tell You the Whole Story
When it comes to churn prevention, marketers traditionally start by identifying which customers are most likely to churn, and then running A/B tests to determine whether a proposed retention intervention will be effective at retaining those high-risk customers. While this strategy can be effective, the author shares new research based on field experiments with over 14,000 customers that suggests it isn’t always the best way to maximize ROI on marketing spend. Instead, the author argues that firms should use A/B test data alongside customers’ behavioral and demographic data to determine which subgroup of customers will be most sensitive to the specific intervention that’s being considered. Importantly, the data suggests that this subgroup doesn’t necessarily correspond to the “high-risk” customer group — in other words, it’s very possible that the intervention won’t be as effective at retaining high-risk customers as it will be at retaining some other group of customers. By identifying the characteristics that actually correlate with high sensitivity to a given intervention, marketers can proactively target their campaigns at the customers who will be most receptive to them, ultimately reducing churn rates and increasing ROI.
Every year, marketers spend billions of dollars on campaigns meant to attract, retain, and upsell customers. Yet despite this massive investment, it can be extremely challenging to determine how effective these initiatives actually are, and how they can be improved. One common method of measuring a campaign’s return on investment (ROI) is to run an A/B test: Marketers will target customers with two different interventions, and then compare results between the two groups. With the right approach to analysis, these A/B tests can provide useful insights — but they also have the potential to be highly misleading.
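To make the logic of the approach summarized above concrete, here is a minimal sketch in Python of one common way to estimate per-customer sensitivity from A/B test data, often called a "two-model" uplift approach. Everything in it is an illustrative assumption rather than the author's actual method: the customer features, the simulated dataset, and the choice of gradient-boosted models are all hypothetical.

```python
# A minimal "two-model" (T-learner) uplift sketch on synthetic A/B test data.
# Column names, models, and thresholds are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 14_000

# Synthetic customer features and a random A/B assignment (1 = received intervention).
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 60, n),
    "monthly_usage": rng.gamma(2.0, 10.0, n),
    "support_tickets": rng.poisson(1.0, n),
    "treated": rng.integers(0, 2, n),
})

# Simulated outcome: retention depends on the features, and the intervention
# helps some customers (here, low-usage ones) more than the highest-risk ones.
base = 0.5 + 0.004 * df.tenure_months - 0.05 * df.support_tickets
lift = df.treated * 0.15 * (df.monthly_usage < 15)
df["retained"] = (rng.random(n) < np.clip(base + lift, 0, 1)).astype(int)

features = ["tenure_months", "monthly_usage", "support_tickets"]

# Fit one retention model on the treated group and one on the control group.
m_treat = GradientBoostingClassifier().fit(df.loc[df.treated == 1, features],
                                           df.loc[df.treated == 1, "retained"])
m_ctrl = GradientBoostingClassifier().fit(df.loc[df.treated == 0, features],
                                          df.loc[df.treated == 0, "retained"])

# Estimated sensitivity (uplift) = predicted retention if treated minus if not.
df["estimated_lift"] = (m_treat.predict_proba(df[features])[:, 1]
                        - m_ctrl.predict_proba(df[features])[:, 1])

# Target the next campaign at the customers most sensitive to the intervention,
# which is not necessarily the same group as those most likely to churn.
df["churn_risk"] = 1 - m_ctrl.predict_proba(df[features])[:, 1]
print(df.nlargest(5, "estimated_lift")[["churn_risk", "estimated_lift"]])
```

In this toy simulation the intervention is constructed to help low-usage customers most, so the customers with the largest estimated lift are not the ones with the highest churn risk. That mirrors the article's central point: the most persuadable customers and the most at-risk customers need not be the same group.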