We spent six months trying to prove that our UGC campaigns were actually moving the needle on revenue. And the data was all over the place.
The problem wasn’t that we weren’t tracking UGC. We were tracking it meticulously—impressions, engagement rates, click-throughs, conversions. But when we tried to isolate the impact on sales—actual, bottom-line revenue—everything got murky.
Part of it was that we were trying to apply the same attribution model across Russia and the US. But the sales cycles are completely different. In Russia, we'd sometimes see conversion-to-purchase happen in days. In the US, it takes weeks. The customer journey is longer, the touchpoints are different, the paths look nothing alike.
So comparing a 2% conversion rate on UGC in Russia to a 1.5% rate in the US was meaningless. We were measuring apples in one market and oranges in the other.
What actually worked was building separate benchmarks, by market, by product, by season. A winter campaign for a skincare product in Moscow isn’t comparable to a summer campaign in Austin. The audience needs are different, the buying urgency is different, everything’s different.
We also started separating UGC-specific metrics from campaign metrics. Not every metric that moves during a campaign moves because of UGC. Sometimes it's the timing, the influencer's audience, the product itself, the competition, or seasonal demand. We were conflating all of that.
The turning point came when we started tracking a few very specific things:
- Cost-per-acquisition specifically within UGC campaigns vs. non-UGC campaigns, by market
- Repeat purchase rate from customers acquired through UGC
- Time-to-purchase from initial UGC touchpoint to sale
Those three metrics told us whether UGC was actually working or whether we were just seeing noise.
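For anyone who wants to wire this up themselves, here's a minimal sketch of how those three cuts can be computed from a flat list of acquired-customer records. Everything here is hypothetical: the field names (`market`, `channel`, `spend`, `days_to_purchase`, `orders`) and the sample numbers are invented for illustration, not our actual data model.

```python
from statistics import median

# Hypothetical records: one dict per acquired customer, with attributed
# acquisition spend, days from first UGC/ad touchpoint to first sale,
# and lifetime order count. All values are synthetic.
customers = [
    {"market": "US", "channel": "ugc",  "spend": 40.0, "days_to_purchase": 21, "orders": 3},
    {"market": "US", "channel": "paid", "spend": 35.0, "days_to_purchase": 4,  "orders": 1},
    {"market": "RU", "channel": "ugc",  "spend": 12.0, "days_to_purchase": 3,  "orders": 2},
    {"market": "RU", "channel": "ugc",  "spend": 18.0, "days_to_purchase": 5,  "orders": 1},
    {"market": "RU", "channel": "paid", "spend": 10.0, "days_to_purchase": 2,  "orders": 1},
]

def segment_metrics(records):
    """Group by (market, channel); compute CPA, repeat rate, median time-to-purchase."""
    groups = {}
    for r in records:
        groups.setdefault((r["market"], r["channel"]), []).append(r)
    out = {}
    for key, rs in groups.items():
        n = len(rs)
        out[key] = {
            # attributed spend per acquired customer
            "cpa": sum(r["spend"] for r in rs) / n,
            # share of the cohort that purchased more than once
            "repeat_rate": sum(r["orders"] > 1 for r in rs) / n,
            # median days from first touchpoint to first sale
            "median_days_to_purchase": median(r["days_to_purchase"] for r in rs),
        }
    return out

metrics = segment_metrics(customers)
```

The point of keying on `(market, channel)` rather than pooling everything is exactly the benchmark-separation issue above: a UGC CPA only means something next to the same market's non-UGC CPA, never next to another market's.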
One thing that surprised us: repeat purchase rates from UGC customers were significantly higher than from paid ads, even when initial conversion looked similar. That difference didn't show up until we dug into cohort analysis.
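That cohort cut can be sketched like this: take a purchase log, bucket customers by acquisition channel, and track what share of each cohort has made a second purchase by month m. The log format and sample rows below are synthetic, purely to show the shape of the analysis.

```python
from collections import defaultdict

# Hypothetical purchase log: (customer_id, acquisition_channel, month_since_acquisition).
purchase_log = [
    ("c1", "ugc", 0), ("c1", "ugc", 2),
    ("c2", "ugc", 0), ("c2", "ugc", 1),
    ("c3", "ugc", 0),
    ("c4", "paid", 0), ("c4", "paid", 1),
    ("c5", "paid", 0),
    ("c6", "paid", 0),
]

def repeat_rate_curve(log, horizon=3):
    """For each channel cohort: share of customers with a 2nd purchase by month m."""
    cohort = defaultdict(set)      # channel -> customers acquired via it
    purchase_count = defaultdict(int)
    second_purchase_month = {}     # customer -> month of their 2nd purchase
    for cust, channel, month in sorted(log, key=lambda row: row[2]):
        cohort[channel].add(cust)
        purchase_count[cust] += 1
        if purchase_count[cust] == 2:
            second_purchase_month[cust] = month
    return {
        channel: [
            sum(1 for c in custs if second_purchase_month.get(c, horizon + 1) <= m)
            / len(custs)
            for m in range(horizon)
        ]
        for channel, custs in cohort.items()
    }

curves = repeat_rate_curve(purchase_log)
```

Looking at the whole curve, not just a single repeat-rate number, is what surfaces a gap between channels whose initial conversion looks similar: two cohorts can match at month 0 and diverge by month 2.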
The other thing: we had to accept that some markets would never show “perfect” ROI measurement, and that’s okay. Especially when you’re building brand trust in a new market, some of the value is long-term. You can’t always put a number on it in Q1.
But at least now we know what we’re actually measuring, and the data tells us something real instead of just confirming our biases.
When you’re tracking UGC ROI across markets, how are you handling the fact that markets fundamentally move at different speeds? Are you segregating metrics, or are you trying to find a unified model?