Monthly user churn is a terrible metric

It’s tempting for companies, especially smaller start-ups that are under-resourced in analytics, to measure user base growth from a macro perspective. Through this lens, it’s fairly easy to observe if a product is growing or not: is the number of new users added to the product each month greater than the number that churn? If so, then MAU should theoretically be increasing and the product is officially growing.
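The macro view reduces growth to a single subtraction. A minimal sketch (the figures are made up for illustration):

```python
def net_mau_change(new_users: int, churned_users: int) -> int:
    """Net change in MAU under the macro view: activations minus churns."""
    return new_users - churned_users

# Example: 3,000 activations against 2,400 churns looks like growth.
print(net_mau_change(3000, 2400))  # 600
```

A positive result is taken as "the product is growing" — which, as the rest of the article argues, hides more than it reveals.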

This sounds straightforward, but it hides a number of complexities that could end up handicapping the product team at some point. Imagine a 30-day DAU chart that looks like this for a product that on-boards the same number of new users each day:

[Figure: 30-day DAU chart]

A product manager would surely be pleased to see a chart like this: DAU is growing and the chart is headed up and to the right. Churn could be calculated through some form of instrumentation, with a broad definition applied to usage (e.g. a user is considered churned if they haven’t been present in the product in 7 days). But looking at the top-line DAU chart, churn wouldn’t be considered problematic: obviously some users are churning, since DAU isn’t growing linearly with new activations and on-boarding, but since the product is growing, it doesn’t seem to matter.
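The 7-day inactivity definition mentioned above can be expressed directly. A minimal sketch, assuming each user's last activity date is available (the dates are hypothetical):

```python
from datetime import date, timedelta

INACTIVITY_WINDOW = timedelta(days=7)  # churn threshold from the definition above

def is_churned(last_seen: date, today: date) -> bool:
    """A user is considered churned if absent from the product for more than 7 days."""
    return (today - last_seen) > INACTIVITY_WINDOW

today = date(2018, 6, 30)
print(is_churned(date(2018, 6, 20), today))  # True: 10 days inactive
print(is_churned(date(2018, 6, 28), today))  # False: 2 days inactive
```

Counting churned users this way produces the broad, top-line churn number that the article argues is insufficient on its own.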

But if we extend the DAU chart to 180 days, we see a different picture: DAU has flatlined and growth has disappeared. The top-level DAU number doesn’t provide much insight, and whatever churn assumptions were made from the 30-day DAU chart were clearly incorrect:

[Figure: 180-day DAU chart]

Breaking this graph out by cohort month — that is, putting each user in the DAU into a bucket based on the month in which they were acquired — adds a little more depth to the analysis. Users are churning out so quickly that the cohorts don’t really compound month-over-month; after some fairly early point in the user lifecycle, cohorts dwindle down to almost nothing, and the DAU base essentially hovers around some level as the same number of users join and leave the product each month:

[Figure: 90-day DAU broken out by cohort month]
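The cohort bucketing described above — assigning each active user to the month in which they were acquired — is a simple aggregation. A sketch with hypothetical user records:

```python
from collections import defaultdict
from datetime import date

# Hypothetical records: (user_id, acquisition_date) for users active today.
active_users = [
    (1, date(2018, 1, 15)),
    (2, date(2018, 1, 28)),
    (3, date(2018, 2, 3)),
    (4, date(2018, 3, 9)),
]

def dau_by_cohort_month(active):
    """Count today's active users per (year, month) acquisition cohort."""
    buckets = defaultdict(int)
    for _user_id, acquired in active:
        buckets[(acquired.year, acquired.month)] += 1
    return dict(buckets)

print(dau_by_cohort_month(active_users))
# {(2018, 1): 2, (2018, 2): 1, (2018, 3): 1}
```

Running this each day and stacking the per-cohort counts yields exactly the kind of stacked-bar chart described above.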

The sharp reversal in the Month 1 cohort, which is represented by the blue bars, underscores the lack of stickiness of the cohorts: DAU shrinks significantly as soon as new users are no longer added to the cohort (because the month has ended).

So in the span of a few months, the product analyst’s mood shifted from elation to dejection: the product was clearly growing after 30 days of user acquisition observation and clearly stagnating after 180 days. The DAU chart and a very general sense of churn (or net additions) misled this analytical exercise and gave false hope to the product team.

Reverse-engineering the model used to build those charts provides a better analytical starting point for analyzing the growth trajectory of a product. Each cohort in the above charts is projected out using a retention profile with a fairly steep early-stage decline (which is common in freemium consumer apps):

[Figure: retention profile]
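Projecting a cohort forward with a retention profile — as was done to build the charts above — is straightforward. A sketch using an illustrative profile with a steep early decline (the fractions are assumptions, not the article's actual numbers):

```python
# Illustrative retention fractions at ages 0..5; note the steep early drop
# common in freemium consumer apps.
retention_profile = [1.00, 0.40, 0.25, 0.18, 0.14, 0.12]

def project_cohort(cohort_size: int, profile) -> list:
    """Expected active users from one cohort at each age step."""
    return [round(cohort_size * r) for r in profile]

print(project_cohort(1000, retention_profile))
# [1000, 400, 250, 180, 140, 120]
```

Summing these projections across overlapping cohorts reproduces total DAU, which is why the profile, not top-line churn, is the right modeling primitive.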

The retention profile tells the analyst everything they need to know about churn. And what’s more, a retention profile is more flexible than a broad churn metric calculated off of top-line DAU: since the composition of DAU in any given month can vary based on user acquisition activities and the ages of the cohorts represented in DAU, it makes no sense to draw conclusions from net changes in DAU, even when DAU is growing. What if an app was featured this month and those users are less valuable (i.e. churn faster) than the other users in the DAU base? What if the product has only been released for a few months and the golden cohort represents a large portion of the product’s DAU?

By not breaking out DAU by cohort (and acquisition channel, geography, etc.), the analyst assumes that every user in the DAU base is the same and is equally likely to churn from one month to the next. Consider the case where the quality of traffic changes considerably for the product over time: from Days 1 – 60 the traffic is high quality and produces users with the retention profile in red below, from Days 61 – 120 the traffic quality is reduced and produces the retention profile in blue below, and from Days 121 – 180 the traffic quality degrades even further and produces the retention profile in yellow below:

[Figure: three retention profiles by traffic quality]
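The mixed-quality scenario can be simulated by summing cohorts with progressively weaker profiles. A sketch — all profiles and cohort sizes are illustrative assumptions:

```python
# Assumed retention profiles at ages 0..5 for each traffic-quality tier.
profiles = {
    "high":   [1.00, 0.50, 0.35, 0.28, 0.24, 0.21],
    "medium": [1.00, 0.35, 0.20, 0.14, 0.11, 0.09],
    "low":    [1.00, 0.20, 0.09, 0.05, 0.03, 0.02],
}
# (start_period, cohort_size, profile key) — successive acquisition periods.
cohorts = [
    (0, 1000, "high"),
    (2, 1000, "medium"),
    (4, 1000, "low"),
]

def total_dau(periods: int) -> list:
    """Blend overlapping cohorts of differing quality into total DAU per period."""
    dau = [0.0] * periods
    for start, size, key in cohorts:
        profile = profiles[key]
        for t in range(start, periods):
            age = t - start
            if age < len(profile):
                dau[t] += size * profile[age]
    return [round(d) for d in dau]

print(total_dau(6))  # [1000, 500, 1350, 630, 1440, 550]
```

The output spikes whenever a new cohort lands and sags as weak cohorts decay — the "lumpy" DAU graph described next.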

The overall DAU graph looks lumpy; a broad churn calculation (“we’re losing more users per month than we are gaining!”) would alert the analyst to the existence of a problem but wouldn’t be very helpful in providing guidance beyond that:

[Figure: 180-day DAU chart, mixed traffic quality]

Breaking the DAU out by cohort month provides some direction on the problem: new cohorts aren’t retaining as well as older cohorts and the DAU base is shrinking over time.

[Figure: DAU by cohort month, mixed traffic quality]

But it’s not until each of these cohorts is broken out into retention profiles that a product team would really understand what’s happening (early-stage retention is terrible and has declined over time). Broad, monthly churn metrics can’t help the team narrow down the problem scope beyond “the product loses more people than it gains.”

That’s a very vague clue to pursue. Growth models should be forward-looking on the basis of projected retention, not on top-level DAU changes that don’t consider the makeup of the user base over time. If the product team isn’t aware of what user retention profiles look like — broken out by geography, acquisition source, platform, and over time — then it can’t understand why slowdowns and accelerations of growth happen. And if growth isn’t understood, it can’t be managed.

Photo by Agence Olloweb on Unsplash


