Measuring UGC campaign ROI across markets—where my early metrics led me totally wrong

I’ve been working with UGC creators for eighteen months now, and I thought I had a handle on how to evaluate performance. Spoiler: I was measuring the wrong things, and it took analyzing campaigns across two markets to figure it out.

Here’s what went wrong initially.

I was obsessed with counting UGC videos and tracking engagement metrics. More videos = more reach, higher engagement = more sales. Seemed logical. I’d brief a creator, they’d deliver ten videos, I’d measure engagement, and then decide whether to continue the relationship. Sounds reasonable, except it was completely disconnected from actual revenue.

When I started running UGC campaigns simultaneously in Russia and the US, something weird happened: a creator’s engagement rates looked mediocre, but revenue was solid. Another creator had decent engagement but revenue was flat. I realized I was looking at engagement in a vacuum instead of connecting it to the bottom line.

So I rebuilt how I evaluate UGC. Here’s what changed:

1. I stopped counting videos and started measuring revenue per video.

Sounds obvious, but it’s harder than it looks. You need clean data on which products were featured in which videos, which videos drove which conversions, and how revenue gets attributed back to each one. I spent a week setting up proper tagging so we could actually track this. Suddenly, I could say: “Video #7 from Creator A drove $2,400 in revenue. Video #3 from Creator B drove $180.” Those are completely different stories about creative quality and audience fit.
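
To make this concrete, here’s a rough sketch of what that tagging-to-revenue join can look like, assuming each video gets its own tracking tag (a UTM parameter or discount code) and orders carry that tag back. The file and column names are illustrative, not our actual setup:

```python
import pandas as pd

# Hypothetical exports: one row per published video, one row per attributed order.
videos = pd.read_csv("videos.csv")  # columns: video_id, creator, product, utm_tag
orders = pd.read_csv("orders.csv")  # columns: order_id, utm_tag, revenue

# Join each order back to the video that drove it via the tracking tag,
# then sum revenue per video and per creator.
attributed = orders.merge(videos, on="utm_tag", how="inner")
revenue_per_video = (
    attributed.groupby(["creator", "video_id"], as_index=False)["revenue"]
    .sum()
    .sort_values("revenue", ascending=False)
)
print(revenue_per_video)
```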

2. I started measuring engagement quality, not just engagement quantity.

In Russia, UGC that went super viral (10K+ views) sometimes had terrible conversion rates. The audience was watching, but they weren’t buying. In the US, creators with smaller reach but more targeted audiences drove better revenue. I started paying attention to comment sentiment and the types of people engaging: are people asking “where to buy?” (a good sign), or just reacting to the entertainment (less relevant)? This required manually sampling comments, which sucked, but it changed how I brief creators.
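
If you want a first pass before reading comments by hand, a crude keyword flag gets you surprisingly far. The phrase list below is made up and a manual sample is still the ground truth, but it’s enough to separate “where to buy?” threads from pure entertainment reactions:

```python
# Crude "engagement quality" signal: share of comments with explicit purchase intent.
# The keyword list is illustrative; tune it per market and language.
PURCHASE_SIGNALS = ("where to buy", "link", "price", "how much", "ship to")

def purchase_intent_rate(comments: list[str]) -> float:
    """Share of comments containing an explicit purchase signal."""
    if not comments:
        return 0.0
    hits = sum(any(s in c.lower() for s in PURCHASE_SIGNALS) for c in comments)
    return hits / len(comments)

# Two videos with similar comment counts but very different intent.
print(purchase_intent_rate(["where to buy this??", "lol", "link please"]))  # ~0.67
print(purchase_intent_rate(["haha", "so funny", "love this"]))              # 0.0
```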

3. I started tracking creator repeatability.

Some creators are one-hit wonders. Others consistently drive revenue. But I wasn’t tracking which was which. I started measuring: “Across this creator’s last five videos, what was the average revenue per video?” That metric—consistency—mattered way more than a single viral video. A creator with 5 videos averaging $400 each is more valuable than a creator with 1 viral video at $2,000 and 4 videos at $50 each: the totals are close, but I can plan the next campaign around the first creator and I can’t around the second.
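
The math behind that call fits in a few lines; the revenue numbers below are just the ones from the example above:

```python
from statistics import mean, pstdev

# Revenue per video for the two creators described above.
consistent = [400, 400, 400, 400, 400]  # steady performer
one_hit = [2000, 50, 50, 50, 50]        # one viral video, then nothing

for name, revenues in [("consistent", consistent), ("one-hit", one_hit)]:
    print(name, "avg:", mean(revenues), "stdev:", round(pstdev(revenues), 1))

# Totals are close ($2,000 vs $2,200), but the spread tells you which creator
# you can actually plan the next campaign around.
```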

4. I created a creator scorecard that combines engagement, revenue, brand fit, and delivery reliability.

This is less about metrics and more about making decisions. I realized I was cherry-picking data. So I built a simple scorecard:

  • Revenue per video (40% weight)
  • Engagement quality (30% weight)
  • Brand alignment (20% weight)
  • On-time delivery (10% weight)

Suddenly, I could actually compare creators fairly and invest in the right relationships.
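
If it helps, the whole scorecard boils down to a weighted sum. A minimal sketch, assuming each dimension is already normalized to a 0-100 scale; the creator numbers below are made up:

```python
# Weighted creator scorecard. Each input is assumed to be pre-normalized to 0-100
# (e.g. revenue per video scaled against the best creator in the roster).
WEIGHTS = {
    "revenue_per_video": 0.40,
    "engagement_quality": 0.30,
    "brand_alignment": 0.20,
    "on_time_delivery": 0.10,
}

def creator_score(scores: dict[str, float]) -> float:
    return sum(scores[name] * weight for name, weight in WEIGHTS.items())

# Hypothetical creators.
creator_a = {"revenue_per_video": 90, "engagement_quality": 60,
             "brand_alignment": 80, "on_time_delivery": 100}
creator_b = {"revenue_per_video": 40, "engagement_quality": 95,
             "brand_alignment": 70, "on_time_delivery": 100}

print(creator_score(creator_a))  # 80.0
print(creator_score(creator_b))  # 68.5
```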

5. I started measuring audience overlap.

This was the thing that really surprised me. Two creators from the same market could have identical engagement rates but completely different audiences. Creator A’s audience was 80% my target demographic. Creator B’s audience was 40% target, 60% outside my funnel. Of course Creator A drove better ROI. But I wasn’t measuring this until I started doing manual audience analysis.
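
The “manual audience analysis” doesn’t have to be fancy. One way to do it: pull a sample of follower attributes (from platform analytics, comment profiles, or a quick survey) and count how many match your target definition. The records and the target rule below are made up for illustration:

```python
# Audience overlap from a manual sample of follower attributes.
# Sampled records and the target definition are illustrative.
sampled_followers = [
    {"age": 27, "interest": "fitness"},
    {"age": 34, "interest": "fitness"},
    {"age": 22, "interest": "fashion"},
    {"age": 41, "interest": "fitness"},
    {"age": 19, "interest": "gaming"},
]

def in_target(follower: dict) -> bool:
    return 25 <= follower["age"] <= 45 and follower["interest"] == "fitness"

overlap = sum(in_target(f) for f in sampled_followers) / len(sampled_followers)
print(f"target overlap: {overlap:.0%}")  # 60%
```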

What’s changed operationally:

  • I spend more time upfront scoping creator partnerships. Instead of just briefing and waiting, I ask creators about their audience, what content performs, what they predict will work for this specific product.
  • I build more feedback loops. After a video drops, I don’t just wait a week and look at final numbers. I check in after 24 hours, then 48 hours. Early engagement patterns predict final revenue pretty well (a quick way to sanity-check that is sketched after this list).
  • I’m way more selective about creator partnerships. I’d rather have 5 creators driving consistent revenue than 20 creators with unpredictable performance.
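
For what it’s worth, that early-signal claim is easy to sanity-check once you have a couple dozen videos with both a 24-hour engagement count and attributed revenue; a one-line correlation shows how much to trust it. The data below is made up:

```python
import statistics  # statistics.correlation requires Python 3.10+

# Does 24-hour engagement track final attributed revenue? Illustrative numbers;
# with ~20+ videos the correlation becomes worth acting on.
engagement_24h = [120, 340, 90, 510, 260, 75]    # e.g. likes + comments at 24h
final_revenue = [310, 980, 150, 1400, 700, 120]  # attributed revenue per video

r = statistics.correlation(engagement_24h, final_revenue)
print(f"Pearson r = {r:.2f}")
```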

The result: our UGC ROI is up 34% year-over-year, but not because we’re doing more UGC. We’re just doing smarter UGC—measuring what matters and investing in creators who consistently move the needle.

My question: How are you actually measuring UGC creator value? Are you looking at engagement metrics, revenue attribution, or something else? And if you’re running campaigns across multiple markets, are you seeing massive differences in what works, or is creator quality pretty consistent across regions?

Finally someone talking about UGC attribution properly. You’ve identified the core issue: engagement is a vanity metric. Revenue per video is what matters. But here’s where it gets tricky—are you controlling for product and pricing differences between markets? A creator pushing a $29 item in the US vs. a $15 item in Russia will have very different revenue numbers even with identical audience quality. And if you’re comparing across products, you need to account for margin, not just revenue. A high-margin UGC campaign at $300 revenue is worth more than a low-margin campaign at $500 revenue.
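
To put that last point in numbers (the margins here are assumed purely for illustration):

```python
# Compare contribution (revenue x margin), not raw revenue. Margins are assumed.
campaigns = {
    "high-margin UGC": {"revenue": 300, "margin": 0.60},
    "low-margin UGC": {"revenue": 500, "margin": 0.30},
}

for name, c in campaigns.items():
    contribution = c["revenue"] * c["margin"]
    print(f"{name}: ${contribution:.0f} contribution")
# high-margin UGC: $180 contribution
# low-margin UGC: $150 contribution
```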

Also curious about your attribution window. How long are you measuring post-UGC conversions? If it’s just same-day, you’re likely missing delayed sales. We use a 14-day window for social-driven UGC, but I’m wondering if you’re doing something different.

The creator repeatability metric is solid, but it’s still correlation. Have you measured causation? Just because Creator A consistently drives revenue doesn’t mean it’s the creator driving it—it could be the product, the timing, or pure luck. We use A/B testing with creators: same creator, different products, and vice versa. It’s more work, but it tells you whether the creator is actually the lever or something else is driving results. Have you done any controlled testing like that?
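
One way to lay that test out is a creator-by-product grid of revenue per video: if a creator’s row is strong across products, the creator is probably the lever; if only one product’s column is strong, it’s the product. The numbers below are made up:

```python
import pandas as pd

# Creator-by-product grid of average revenue per video (illustrative numbers).
results = pd.DataFrame([
    {"creator": "A", "product": "X", "revenue_per_video": 420},
    {"creator": "A", "product": "Y", "revenue_per_video": 380},
    {"creator": "B", "product": "X", "revenue_per_video": 510},
    {"creator": "B", "product": "Y", "revenue_per_video": 90},
])

print(results.pivot_table(index="creator", columns="product",
                          values="revenue_per_video"))
```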

YES. Oh my god, this is exactly what I’ve been trying to tell brands. They obsess over my reach and engagement, but they don’t understand that my audience is hyper-specific and converts like crazy. I know creators with 50K followers and 8% engagement, and brands trip over themselves to work with them. I have 15K followers with 12% engagement, but my audience actually buys things.

The thing about audience overlap is real too. Brands sometimes send me products that are completely outside my niche, and they’re shocked when the video doesn’t convert. Of course it doesn’t—my audience is fitness people, not fashion people. But if a brand actually asked me upfront, “Is this a fit for your audience?” instead of just shipping me the product, we’d save so much time.

I love the idea of the creator scorecard. That gives us common language. Right now, brands judge me based on completely arbitrary metrics, and it’s frustrating. Something like what you described—where we can both see how I’m being evaluated—would actually let me improve and iterate.

This is the kind of framework that actually helps me build better partnerships. Right now, when I’m introducing a creator to a brand, I’m flying blind on metrics. If brands had something like your scorecard, I could match creators to brands based on data, not just gut feel.

One thing I’d add: feedback loops with the creator matter as much as feedback loops with the data. If you’re giving creators performance insights, they can actually improve their creative. I’ve been trying to facilitate these conversations, but brands are often hesitant to share data. Have you found a way to communicate performance to creators without it feeling like criticism?

This is valuable for us because we’re trying to figure out if UGC is actually scalable for our product category. We’re a B2B SaaS tool, and UGC feels like a consumer play, but I’m wondering if we’re wrong. Your focus on repeatability is interesting—it suggests UGC has a learning curve. Do you think creators can improve over time with feedback, or are some creators just naturally better at converting audiences? Because if they can improve, that changes how we think about investing in relationships.

Very thorough breakdown. One more thing: have you measured the incrementality of different creator sizes? Sometimes a micro-creator drives better ROI per dollar spent than a macro-creator just because of audience relevance and authenticity. But if you’re not measuring cost per acquisition against creator fees, you could be over-investing in big names. What does your creator fee structure look like? Flat fee, commission, hybrid?
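
The micro-vs-macro question is easiest to see as ROI per fee dollar and cost per acquisition rather than raw revenue. All of the fees, revenue, and order counts below are invented just to show the shape of the comparison:

```python
# ROI per fee dollar (ROAS on the creator fee) and cost per attributed order.
# Fees, revenue, and order counts are made up for illustration.
creators = {
    "micro": {"fee": 300, "revenue": 1200, "orders": 40},
    "macro": {"fee": 2500, "revenue": 4000, "orders": 80},
}

for name, c in creators.items():
    roas = c["revenue"] / c["fee"]  # revenue per fee dollar
    cpa = c["fee"] / c["orders"]    # fee cost per attributed order
    print(f"{name}: ROAS {roas:.1f}x, CPA ${cpa:.2f}")
# micro: ROAS 4.0x, CPA $7.50
# macro: ROAS 1.6x, CPA $31.25
```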

Love the weight distribution in your scorecard (40/30/20/10). That’s clearly been thought through. Most agencies just look at engagement because it’s easy to measure, and they miss revenue entirely. The fact that you’re weighting revenue at 40% and engagement quality (not quantity) at 30% tells me you’ve learned some hard lessons.

Question: how do you handle creator relationships that score high on reliability but low on brand fit? Say a creator consistently delivers, but their audience isn’t quite your demographic. Do you optimize them out, or do you find products that better match their audience and test those?