Building measurable UGC ROI when your DTC brand operates across completely different markets

We spent six months trying to prove that our UGC campaigns were actually moving the needle on revenue. And the data was all over the place.

The problem wasn’t that we weren’t tracking UGC. We were tracking it meticulously—impressions, engagement rates, click-throughs, conversions. But when we tried to isolate the impact on sales—actual, bottom-line revenue—everything got murky.

Part of it was that we were trying to apply the same attribution model across Russia and the US, but the sales cycles are completely different. In Russia, we'd sometimes see conversation-to-purchase happen in days. In the US, it takes weeks: the customer journey is longer, the touchpoints are different, and the paths look nothing alike.

So comparing a 2% conversion rate on UGC in Russia to a 1.5% rate in the US was meaningless. We were measuring apples in one market and oranges in the other.

What actually worked was building separate benchmarks, by market, by product, by season. A winter campaign for a skincare product in Moscow isn’t comparable to a summer campaign in Austin. The audience needs are different, the buying urgency is different, everything’s different.
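To make that concrete, here's a minimal pandas sketch of what "separate benchmarks by market, product, and season" can look like in practice. Every column name and figure below is invented for illustration; the idea is just that each campaign gets judged against the pooled rate of its own cell.

```python
# Hypothetical sketch: per-segment benchmarks instead of one global number.
# All column names and figures are invented for illustration.
import pandas as pd

campaigns = pd.DataFrame({
    "market":      ["RU", "RU", "US", "US"],
    "product":     ["skincare"] * 4,
    "season":      ["winter", "winter", "summer", "summer"],
    "conversions": [40, 36, 30, 33],
    "clicks":      [2000, 1800, 2000, 2200],
})

# Benchmark = conversion rate pooled within each (market, product, season)
# cell, so a Moscow winter campaign is only judged against its own cell.
totals = campaigns.groupby(["market", "product", "season"])[
    ["conversions", "clicks"]].sum()
benchmark_cvr = totals["conversions"] / totals["clicks"]
```

With these made-up numbers the RU winter cell lands at 2% and the US summer cell at 1.5%, and the point is that neither number is the other's baseline.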

We also started separating UGC-specific metrics from overall campaign metrics. Not every metric that moves during a campaign moves because of UGC. Sometimes it's the timing, the influencer's audience, the product itself, the competition, or seasonal demand. We had been conflating all of that.

The turning point came when we started tracking a few very specific things:

  1. Cost-per-acquisition specifically within UGC campaigns vs. non-UGC campaigns, by market
  2. Repeat purchase rate from customers acquired through UGC
  3. Time-to-purchase from initial UGC touchpoint to sale

Those three metrics told us whether UGC was actually working or whether we were just seeing noise.
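For what it's worth, all three can be computed from a single flat orders table. A rough pandas sketch, with every column name, customer ID, and number invented:

```python
# Hypothetical orders table: one row per purchase, tagged with the market and
# acquisition channel. All names and figures are made up for illustration.
import pandas as pd

orders = pd.DataFrame({
    "customer":    ["a", "a", "b", "c", "d", "d"],
    "market":      ["RU", "RU", "RU", "US", "US", "US"],
    "channel":     ["ugc", "ugc", "paid", "ugc", "paid", "paid"],
    "first_touch": pd.to_datetime(["2024-01-01", "2024-01-01", "2024-01-03",
                                   "2024-01-02", "2024-01-05", "2024-01-05"]),
    "purchased":   pd.to_datetime(["2024-01-04", "2024-02-01", "2024-01-05",
                                   "2024-01-20", "2024-01-25", "2024-02-10"]),
})
spend = {("RU", "ugc"): 100.0, ("RU", "paid"): 80.0,
         ("US", "ugc"): 150.0, ("US", "paid"): 120.0}

# 1. CPA by market and channel: spend / distinct customers acquired.
acquired = orders.groupby(["market", "channel"])["customer"].nunique()
cpa = {key: spend[key] / n for key, n in acquired.items()}

# 2. Repeat-purchase rate among UGC-acquired customers.
per_customer = orders.groupby(["channel", "customer"]).size()
repeat_rate = (per_customer.loc["ugc"] > 1).mean()

# 3. Time-to-purchase: first UGC touchpoint to first sale, per customer.
first = orders[orders.channel == "ugc"].groupby("customer").agg(
    touch=("first_touch", "min"), sale=("purchased", "min"))
ttp_days = (first["sale"] - first["touch"]).dt.days
```

In this toy data the RU customer converts in 3 days and the US customer in 18, which is exactly the sales-cycle gap that makes a single cross-market benchmark misleading.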

One thing that surprised us: repeat purchase rates from UGC-acquired customers were significantly higher than from paid ads, even when initial conversion rates looked similar. That didn't show up until we dug into cohort analysis.
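A cohort cut like that might be sketched as follows (customer IDs, channels, and dates all made up); the point is to compare repeat rates within the same acquisition-month cohort rather than across the whole customer base:

```python
# Hypothetical purchases log: one row per order, tagged with the channel the
# customer was originally acquired through. All data is invented.
import pandas as pd

purchases = pd.DataFrame({
    "customer":    ["a", "a", "b", "c", "c", "d", "e"],
    "acq_channel": ["ugc", "ugc", "ugc", "paid", "paid", "paid", "paid"],
    "order_date":  pd.to_datetime([
        "2024-01-05", "2024-03-02",   # a repeats
        "2024-01-10",                 # b does not
        "2024-01-07", "2024-04-01",   # c repeats
        "2024-01-12", "2024-01-20",   # d and e buy once each
    ]),
})

# Cohort = month of first order; repeat = more than one lifetime order.
first_order = purchases.groupby("customer")["order_date"].transform("min")
purchases["cohort"] = first_order.dt.to_period("M")
orders_per = purchases.groupby(["cohort", "acq_channel", "customer"]).size()
repeat = (orders_per > 1).groupby(["cohort", "acq_channel"]).mean()
```

Here the January UGC cohort repeats at 50% versus 33% for paid, the kind of gap the post describes surfacing only at the cohort level.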

The other thing: we had to accept that some markets would never show “perfect” ROI measurement, and that’s okay. Especially when you’re building brand trust in a new market, some of the value is long-term. You can’t always put a number on it in Q1.

But at least now we know what we’re actually measuring, and the data tells us something real instead of just confirming our biases.

When you’re tracking UGC ROI across markets, how are you handling the fact that markets fundamentally move at different speeds? Are you segregating metrics, or are you trying to find a unified model?

This is a solid breakdown of a problem I see constantly with DTC brands trying to scale across regions.

One thing I’d push back on slightly: you can absolutely build a unified attribution model across markets, but you have to stop thinking about attribution as “conversion percentage” and start thinking about it as “contribution to revenue over a defined window.”

What I mean: instead of comparing 2% conversion in Russia to 1.5% in the US and asking why, you should be asking: of every $1 of revenue, how much can we credibly attribute to UGC touchpoints, and how does that ratio change by market?

Then suddenly the model works. Because you’re not comparing the rates—you’re comparing the contribution.
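A toy illustration of that framing, with the per-sale UGC credit assumed to come from whatever attribution model you run upstream (all figures and column names invented):

```python
# Sketch: compare UGC's share of revenue by market, not conversion rates.
# "ugc_credit" is the fraction of each sale an attribution model assigns to
# UGC touchpoints; every number here is made up for illustration.
import pandas as pd

sales = pd.DataFrame({
    "market":     ["RU", "RU", "US", "US", "US"],
    "revenue":    [50.0, 30.0, 40.0, 60.0, 20.0],
    "ugc_credit": [0.6, 0.0, 0.3, 0.5, 0.0],
})

sales["ugc_revenue"] = sales["revenue"] * sales["ugc_credit"]
totals = sales.groupby("market")[["ugc_revenue", "revenue"]].sum()
ugc_share = totals["ugc_revenue"] / totals["revenue"]
```

The two markets now yield directly comparable numbers: of every dollar of revenue, what fraction did UGC touchpoints credibly drive.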

One specific thing that helped us: we built a simple multi-touch attribution model that weighted different touchpoints by their timing and sequence. A UGC post that was the first touchpoint got weighted differently than one that was the final touchpoint. Once you know the sequence patterns, you can model contribution even when sales cycles are wildly different.
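One way such a model might look in miniature. The U-shaped position weights and seven-day half-life below are invented placeholders, not the actual weights described above:

```python
# Sketch of multi-touch attribution weighted by position and recency.
# The position boost (2x for first/last touch) and the 7-day half-life are
# arbitrary illustration values, not a recommended calibration.
def attribute(touchpoints, half_life_days=7.0):
    """touchpoints: list of (channel, days_before_sale), ordered first-to-last.
    Returns {channel: credit}, with credits summing to 1 per sale."""
    n = len(touchpoints)
    raw = []
    for i, (channel, days_before) in enumerate(touchpoints):
        # First and last touches get a positional boost (U-shaped weighting).
        position = 2.0 if i in (0, n - 1) else 1.0
        decay = 0.5 ** (days_before / half_life_days)  # recency decay
        raw.append((channel, position * decay))
    total = sum(w for _, w in raw)
    credit = {}
    for channel, w in raw:
        credit[channel] = credit.get(channel, 0.0) + w / total
    return credit

# A UGC post opens the journey two weeks out; paid search closes it.
credit = attribute([("ugc", 14), ("email", 5), ("paid_search", 0)])
```

Because the decay is relative to the sale date, the same function handles a 3-day Russian journey and a 3-week US journey without changing the model, which is the property the comment is pointing at.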

The other thing: I’d be careful about over-interpreting repeat purchase rates as a sign of UGC effectiveness. Repeat purchases are influenced by product quality, customer service, email retention—tons of variables outside UGC. What matters more is whether customers acquired through UGC are more likely to repeat than customers acquired through other channels. That’s the real signal.

On your point about long-term brand value: I completely agree that sometimes you can’t put a number on it in Q1. But I’d recommend still trying to quantify brand lift, even if it’s imperfect. We use brand tracking surveys—same survey, same questions, both markets—and correlate UGC spending to shifts in brand perception metrics. It’s not perfect, but it gives you a defensible story for stakeholders when short-term ROI is hard to isolate.

How are you currently handling budget allocation between markets given different ROI profiles? Are you doubling down on the market with higher short-term ROI, or are you still investing in the market that’s slower but potentially higher LTV?

Great post overall. One question: when you built out those separate benchmarks by market/product/season, how many data points did you need before you felt confident the benchmarks were actually predictive vs. just reflecting past performance? That’s always the question I wrestle with—how much historical data is enough to use as a baseline for planning future campaigns?