And if you think that sucks, wait until you either a) use CLV to justify marketing spend or b) use CLV to justify your valuation with an investor. Oof, self-inflicted damage.
One particular solution is to stop reporting CLV as a single number. I mean, if you happen to know that there are two disjoint sets of Good Customers and Bad Customers, then that is very useful information to anyone who needs the CLV number to do their job. "What can we afford to spend to acquire a new customer? I have a new channel I want to try." "It really depends on whether you're getting Good Customers or Bad Customers. We can only pay $50 for BCs, but for GCs we can go $200+."
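To make the point concrete, here's a minimal sketch (all numbers made up) of why a blended CLV misleads while per-segment CLV answers the acquisition question directly:

```python
# Hypothetical numbers: a blended CLV hides two very different segments.
customers = (
    [{"segment": "good", "clv": 400}] * 30   # 30 Good Customers worth $400 each
    + [{"segment": "bad", "clv": 60}] * 70   # 70 Bad Customers worth $60 each
)

blended_clv = sum(c["clv"] for c in customers) / len(customers)

def segment_clv(segment):
    vals = [c["clv"] for c in customers if c["segment"] == segment]
    return sum(vals) / len(vals)

print(f"blended: ${blended_clv:.0f}")          # $162 -- true for nobody
print(f"good:    ${segment_clv('good'):.0f}")  # $400
print(f"bad:     ${segment_clv('bad'):.0f}")   # $60
```

Nobody in this population is worth $162; a payback target set off the blended number overpays for Bad Customers and starves the Good Customer channels.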
You then get into issues like "How do I tell the difference between a Bad Customer and a Good Customer at an equivalent vintage where neither has churned yet?" It may be the case that there are behaviors which you can use as a proxy for which group someone is likely to fall into. Dharmesh Shah talks often about a Customer Happiness Index that HubSpot uses, which is essentially a regression that predicts churn rate based on measurable customer behavior. It's all sorts of win to find that something like that works for your business. (Hypothetical example: if Dropbox found that customers who used photo sharing are the best possible Dropbox customers, it would make sense to test things like a) biasing marketing to target photo sharers or b) biasing product design to push photo sharing as a feature.)
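The regression idea can be sketched in a few lines. This is not HubSpot's actual index -- just a toy logistic regression fit by gradient descent on synthetic data, where the features (weekly logins, files shared) and the relationship to churn are invented for illustration:

```python
import math
import random

# Toy training data: (weekly_logins, files_shared) -> churned? (1 = churned)
# All numbers are made up; a real index would be fit on your own usage logs.
random.seed(0)
data = []
for _ in range(200):
    logins = random.uniform(0, 10)
    shares = random.uniform(0, 5)
    # Synthetic ground truth: heavier usage means lower churn probability.
    p_churn = 1 / (1 + math.exp(0.8 * logins + 1.2 * shares - 5))
    data.append(((logins, shares), 1 if random.random() < p_churn else 0))

# Fit a logistic regression with plain batch gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.05
for _ in range(2000):
    gw, gb = [0.0, 0.0], 0.0
    for (x1, x2), y in data:
        p = 1 / (1 + math.exp(-(w[0] * x1 + w[1] * x2 + b)))
        err = p - y
        gw[0] += err * x1
        gw[1] += err * x2
        gb += err
    w[0] -= lr * gw[0] / len(data)
    w[1] -= lr * gw[1] / len(data)
    b -= lr * gb / len(data)

def churn_risk(logins, shares):
    """Predicted probability this customer churns -- lower is 'happier'."""
    return 1 / (1 + math.exp(-(w[0] * logins + w[1] * shares + b)))

print(churn_risk(9, 4))  # engaged customer: low predicted risk
print(churn_risk(1, 0))  # disengaged customer: high predicted risk
```

The payoff is exactly the vintage problem above: you can score a three-week-old customer as likely-Good or likely-Bad long before they've had a chance to churn.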
Absolutely - the next step is to calculate the CLV of different acquisition channels. Maybe Organic search results in a favorable mix of good vs mediocre vs bad customers, whereas affiliate marketing results in a poor mix.
You can use this information to help inform a new channel decision (a new paid search channel will likely be more similar to another paid search channel than it will be to an affiliate program).
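Per-channel CLV then falls out of the segment numbers. A sketch, with hypothetical segment values and mix percentages:

```python
# Hypothetical segment values and per-channel mixes (all numbers made up).
SEGMENT_CLV = {"good": 400, "mediocre": 150, "bad": 60}

channel_mix = {
    "organic search":      {"good": 0.40, "mediocre": 0.40, "bad": 0.20},
    "affiliate marketing": {"good": 0.10, "mediocre": 0.30, "bad": 0.60},
}

def expected_clv(mix):
    """Blended CLV for a channel, weighted by its segment mix."""
    return sum(SEGMENT_CLV[seg] * share for seg, share in mix.items())

for channel, mix in channel_mix.items():
    print(f"{channel}: ${expected_clv(mix):.0f}")
# organic search:      $232
# affiliate marketing: $121
```

Same business, same product, nearly 2x difference in what you can afford to pay per acquisition -- which is the whole argument against a single company-wide CLV.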
Behavioral triggers (which Custora uses) get more complicated -- but maybe we'll touch on that in a future post.
Cohort analysis and other types of user segmentation can be useful in evaluating the quality of customers from different marketing channels, but you will still run into issues with averages, as discussed in the original article.