Why offering rewards to stay can drive customers away

Raghuram Iyengar
Marketing professor, Wharton

Most companies work hard to lure customers — and once they have us, they want to keep us. Hence the development of the “retention campaign” — programs of rewards, recommendations and offers designed to compel us to continue our patronage of those companies. On the surface, this strategy makes perfect sense.

Wharton marketing professor Raghuram Iyengar, Martin Schleicher of the IAE Business School in Buenos Aires and Eva Ascarza of Columbia Business School dug into the questions of when, why and on whom these marketing tactics worked. Their paper, “The Perils of Proactive Churn Prevention Using Plan Recommendations,” reports the counter-intuitive finding that these campaigns frequently achieve exactly the opposite of what was intended.

In this interview with Knowledge@Wharton, Iyengar explains what they found, and why “retention” is a lot more complicated than one might expect.

An edited transcript of the conversation appears below.

Don’t remind your customers that they have options:

This particular research [was] looking at the efficacy of retention campaigns. Many companies do retention campaigns targeted towards consumers. They want to keep those consumers, so they might give incentives: For example, you might save on your next purchase; you might get $10 off. These campaigns are very common, and many companies think that they work. But do they really work?

We had an opportunity to work with a cell phone company that was trying to do these retention campaigns. What was great for us is that it acted as a field experiment: For one group of customers who were randomly selected, the company ran a retention campaign — a pricing plan campaign. For another group of customers, it did nothing. The group of customers who were in the retention campaign actually left a lot more — in fact, staggeringly more. This was pretty bad for the company.

What’s the big picture? Do retention campaigns work? Actually, en masse, they might not. What we found was more nuanced: They work, but only if you’re the right target. So, [companies should] do retention campaigns, but do them in a targeted way.

Key takeaways:

As many companies start thinking about these campaigns, the typical thing that most of them do is a mass-marketed campaign — send a retention package to every one of their customers. Why? Because it’s easy to do. They don’t have to think too much. They’ll send it to everyone.

[However,] sometimes sending campaigns to people might actually make them start questioning their own behavior. For instance, in this particular case for the cell phone company, when they sent a retention campaign — which was basically about looking at people’s usage patterns and saying, “Look, there might be other plans that are better for you” — it made customers question whether they were getting a good deal. And once customers start questioning that, why not look elsewhere?

What that suggests is retention campaigns must often be targeted campaigns. Think carefully about who in your customer base might be likely to leave. Don’t do it en masse; do it on a targeted basis.

Surprising conclusions:

On the surface, most people think retention campaigns work. So let me give you some hard numbers.

In the study, we worked with a set of customers from a telecom service and monitored their behavior three months before the campaign and three months after. What we found was [that for] the customers who were in the retention campaign, two months afterwards, 10% had left the service, as opposed to a control group where no campaigns were done, [in which] 6% left the service.

That four-percentage-point difference is a staggeringly high increase in churn.… What we found was that, on average, it’s very hard to find evidence that retention campaigns work, but it was really easy to find evidence for whom they worked. There were many sets of customers [that] had certain characteristics — which are very easy for firms to observe — for whom these campaigns worked.

The surprising fact was, on average they didn’t work. And for many companies, I think this is an eye-opener, because what they should be thinking carefully about is, “How should we customize our retention campaigns?”

How companies should target their campaigns:

I think this is very interesting. In terms of customization, what is very good nowadays — especially [for the] many companies [that] are data driven — [is that] they have a lot of information about their customers. For instance, in the mobile phone context, companies routinely gather information about past consumption. In our case, for example, we had information from the past three months that we had been collecting: What was the level of overage — the amount that people are consuming over the number of plan minutes? We had information about how much variability they had: In one month, were they using 100 minutes? In another month, were they suddenly using 200 minutes or 500 minutes?

These are very observable types of patterns that you can find in your own customer base. Many times, what we found was that segmenting customers using these observable patterns might be a great way to customize.

Let me give you a specific example. In our case, what we found was that people who were consuming a lot, way above the number of minutes that they had; consumers who had a huge amount of variation; consumers who had a negative trend over time — they were consuming less and less — those were the people who were likely to leave. And, for many of these customers, doing a retention campaign actually made them more likely to leave.

Sometimes, it’s best to let sleeping dogs lie.
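The observable warning signs described above — heavy overage, high variation in usage, a declining trend — are simple enough to compute directly from billing records. The sketch below is purely illustrative (the thresholds and field names are hypothetical, not the paper’s actual model):

```python
# Illustrative sketch: flagging at-risk customers from three months of
# usage history, using the signals described in the interview.
# Thresholds are hypothetical and would need tuning on real data.

def churn_risk_flags(monthly_minutes, plan_minutes):
    """Return the observable warning signs for one customer."""
    avg_use = sum(monthly_minutes) / len(monthly_minutes)
    heavy_overage = avg_use > 1.5 * plan_minutes  # consuming way above the plan
    high_variation = (max(monthly_minutes) - min(monthly_minutes)) > plan_minutes
    negative_trend = monthly_minutes[-1] < monthly_minutes[0]  # consuming less and less
    return {
        "heavy_overage": heavy_overage,
        "high_variation": high_variation,
        "negative_trend": negative_trend,
    }

# A customer on a 200-minute plan whose usage swings widely and trends down:
flags = churn_risk_flags([500, 300, 150], plan_minutes=200)
print(flags)  # all three warning signs fire for this profile
```

A firm could then exclude flagged customers from a mass campaign — or at least test a campaign on them separately — rather than risk waking the sleeping dogs.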

Where companies go wrong with their data:

I think one of the interesting things that’s going on nowadays, especially if you think about big data, analytics and all of that, is companies are rapidly experimenting. Many, many companies out there routinely do A-B testing.

What we found here was it’s not enough to just do A-B testing. It’s important to analyze the data correctly. Let me give you a specific example. In our A-B test, one group was the people who received recommendations; the other group was people who received no recommendations. People who received recommendations and accepted them churned less, compared to the control group. Now, one might think the retention campaign worked. But what’s important to remember is that customers decided to accept the campaign. So, there is self-selection there.

For people who were exposed to the campaign and decided not to accept it, they actually churned a lot more. The very fact that they were exposed to the campaign changed their behavior. So it’s important, when companies are doing A-B testing, that they think carefully about what’s randomized and what’s self-selected by consumers. Glossing over that fact can actually lead them to think that some campaigns are successful when they are not. A-B testing is something that we believe is going to rapidly take off in the area of data analytics. But again, doing A-B testing is easy — analyzing and interpreting the results correctly is way more important.
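The self-selection trap can be made concrete with arithmetic. In the toy numbers below (hypothetical, chosen only to mirror the pattern in the study: accepters churn less than control, non-accepters churn far more), comparing accepters to the control group makes the campaign look like a winner, while the correct intent-to-treat comparison — everyone who was *assigned* to the campaign, accepters and non-accepters alike — shows it backfired:

```python
# Toy illustration of the self-selection trap in A-B test analysis.
# All counts are hypothetical.

treated = {
    "accepted":     {"n": 300, "churned": 12},   # 4% churn among accepters
    "not_accepted": {"n": 700, "churned": 88},   # ~12.6% churn among non-accepters
}
control = {"n": 1000, "churned": 60}             # 6% churn

# Biased comparison: accepters only vs. control (accepters self-selected!)
accepter_rate = treated["accepted"]["churned"] / treated["accepted"]["n"]

# Intent-to-treat comparison: everyone randomized into the campaign
treated_n = sum(g["n"] for g in treated.values())
treated_churned = sum(g["churned"] for g in treated.values())
itt_rate = treated_churned / treated_n
control_rate = control["churned"] / control["n"]

print(f"accepters only: {accepter_rate:.1%}")  # 4.0% - seems to beat control
print(f"all treated:    {itt_rate:.1%}")       # 10.0% - the campaign backfired
print(f"control:        {control_rate:.1%}")   # 6.0%
```

Only the intent-to-treat comparison respects the randomization; conditioning on acceptance throws it away.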

Taking ‘self-selection’ out of the equation:

A lot of people have looked at pricing plans and how customers choose among them. Many of these choices are self-selected by consumers. So, from a company’s perspective, if they want to look at the causal impact of what happens when they give pricing plans, it’s difficult to do so a priori, because there’s an aspect of consumer self-selection involved.

How do we get around this? By convincing a company that they should do a field experiment — the gold standard of causal interpretation. What we ended up doing was a field experiment where some people were given pricing plan recommendations and some people were not. That helped us give a causal interpretation, which is very, very hard to do using only secondary data, as many researchers have done.

Next steps — making recommendations work better:

I’d like to continue to work more on this area of pricing plans and recommendations. I’m working with a company down in Austin, Texas, which is starting to look at energy consumption and the whole idea of smart meters: How do we make people understand that when they consume electricity, they’re on different tariffs and different tiers? What we’re trying to do is a field experiment — making things salient to consumers and seeing how they might change their energy consumption over time and over days, depending upon how much they’re consuming now.

This article is published in collaboration with Knowledge@Wharton. Publication does not imply endorsement of views by the World Economic Forum.


Author: Raghuram Iyengar is a Wharton marketing professor. 

Image: People walk through the Mall of Berlin shopping centre during its opening night in Berlin. REUTERS/Thomas Peter 

License and Republishing

World Economic Forum articles may be republished in accordance with the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International Public License, and in accordance with our Terms of Use.

The views expressed in this article are those of the author alone and not the World Economic Forum.
