We’re at that moment where our Russian e-commerce brand is serious about entering the US market, and now I’m realizing our entire measurement framework is built on Russian assumptions.
In Russia, we know our customer. We know what “conversion” looks like, what CAC should be, what lifetime value we can expect. We have three years of data that guides us. But the US market? It’s a completely different landscape.
First issue: CAC is higher but LTV is uncertain. In Russia, we acquire customers comparatively cheaply because we know the cultural context, the platforms, the messaging. In the US, CAC is 2-3x higher, but I have no idea what LTV will be. Does this mean the US is a worse market, or are we just in the learning phase?
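One way I’m thinking about this while LTV is unknown: compare markets on CAC payback (months of observed contribution margin needed to recover CAC) instead of full LTV. A minimal sketch, with entirely hypothetical numbers — the US margin in particular is a placeholder until we have real cohort data:

```python
# Hypothetical figures: compare CAC payback instead of full LTV.
# Payback only needs observed contribution margin per month,
# not a long-horizon LTV forecast we can't make yet.
markets = {
    "RU": {"cac": 15.0, "monthly_margin": 5.0},  # assumed USD-equivalent numbers
    "US": {"cac": 40.0, "monthly_margin": 8.0},  # ~2.5x CAC; margin is a placeholder
}

for name, m in markets.items():
    payback_months = m["cac"] / m["monthly_margin"]
    print(f"{name}: CAC paid back in {payback_months:.1f} months")
```

A higher CAC isn’t automatically a worse market if the payback window is still acceptable; that’s the framing I’d bring to leadership rather than a raw CAC comparison.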
Second issue: Attribution rules are different. In Russia, we use last-click for most analyses because it’s simpler and we know our funnel well. In the US, we see longer, more complex customer journeys. People see our ad, don’t click, come back five days later via Google, add to cart, then check the price on another site, then come back and buy. How do I attribute that? Our Russian model would give all the credit to the last touchpoint. But that feels wrong for the US market.
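To make the difference concrete, here’s a small sketch contrasting our last-click model with a position-based (U-shaped) model on a made-up journey — the channel names and the 40/20/40 split are assumptions for illustration, not our production setup:

```python
from collections import defaultdict

def last_click(journey):
    """All credit to the final touchpoint (our current Russian model)."""
    return {journey[-1]: 1.0}

def position_based(journey, first=0.4, last=0.4):
    """U-shaped credit: 40% first touch, 40% last touch, rest spread over the middle."""
    credit = defaultdict(float)
    if len(journey) == 1:
        return {journey[0]: 1.0}
    credit[journey[0]] += first
    credit[journey[-1]] += last
    middle = journey[1:-1]
    remainder = 1.0 - first - last
    if middle:
        for touch in middle:
            credit[touch] += remainder / len(middle)
    else:
        # Two-touch journey: split the middle share between the ends.
        credit[journey[0]] += remainder / 2
        credit[journey[-1]] += remainder / 2
    return dict(credit)

# Hypothetical US journey like the one described above
journey = ["display_ad", "google_organic", "price_comparison", "google_organic"]
print(last_click(journey))       # only google_organic gets credit
print(position_based(journey))   # display_ad's role becomes visible
```

Under last-click the display ad that started the journey earns nothing; the position-based split at least surfaces it, which is closer to how the longer US journeys seem to work.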
Third issue: Benchmarking against nothing. In Russia, industry benchmarks are… not great, frankly. So we built our own. But in the US, there are established DTC benchmarks, and we’re performing 30-40% worse against them. Is that a red flag, or is that expected for a new market entrant?
What I decided to do: create two separate measurement frameworks. One for Russia (our proven model), and one for the US (a test-and-learn model). For the US expansion, I’m tracking:
- Customer acquisition by channel (not worrying about blended CAC yet)
- Repeat purchase rate (proxy for satisfaction)
- Time-to-first-repeat (how quickly do first-time customers come back?)
- Brand awareness lift (using surveys, since we’re new to market)
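The two middle metrics fall straight out of the order log. A minimal sketch with a hypothetical order list (customer IDs and dates are invented) showing how I’d compute repeat purchase rate and time-to-first-repeat:

```python
from datetime import date
from statistics import median

# Hypothetical order log: (customer_id, order_date)
orders = [
    ("c1", date(2024, 1, 5)), ("c1", date(2024, 2, 1)),
    ("c2", date(2024, 1, 10)),
    ("c3", date(2024, 1, 12)), ("c3", date(2024, 1, 30)), ("c3", date(2024, 3, 2)),
]

by_customer = {}
for cust, d in orders:
    by_customer.setdefault(cust, []).append(d)

# Repeat purchase rate: share of customers with 2+ orders
repeaters = {c: sorted(ds) for c, ds in by_customer.items() if len(ds) >= 2}
repeat_rate = len(repeaters) / len(by_customer)

# Time-to-first-repeat: days from first order to second, per repeat customer
gaps = [(ds[1] - ds[0]).days for ds in repeaters.values()]
median_days_to_repeat = median(gaps)

print(f"repeat rate: {repeat_rate:.0%}, median days to first repeat: {median_days_to_repeat}")
```

Both metrics need nothing beyond clean order timestamps, which is exactly why they work in a market where we don’t yet trust our attribution or LTV numbers.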
I’m deliberately NOT trying to calculate full LTV or blended ROI yet. Too much uncertainty. Instead, I’m building the data infrastructure to answer those questions once we have more market maturity.
But I want to hear from people who’ve done this before: when you expand into a completely new market, how long do you run on “learning mode” metrics before switching to your standard performance framework? And how do you convince leadership that lower performance in year one doesn’t mean the market is bad — it means we’re not optimized yet?