We just wrapped a UGC campaign with 27 different creators across both the Russian and US markets, and I realized I have no coherent way to compare their output.
Each creator sent us videos. Some were high-production, some were raw. Some performed great on TikTok, some were better on Instagram. Some creators understood our brand immediately, others needed three iterations. And now I’m supposed to tell leadership: “So here’s what we learned.”
The problem is that when you’re managing creators from different regions, working in different styles, with different platforms and different audience segments, comparing them using a single metric is maddening. If I only look at view count, I’m missing engagement quality. If I only look at engagement rate, I’m ignoring reach. If I only look at conversion, I’m ignoring brand-building content that doesn’t convert immediately but builds awareness.
What I ended up doing was creating three layers of analysis:
Layer 1: Raw Performance — Views, likes, comments, shares. Just the numbers as they sit. This is what each creator actually achieved.
Layer 2: Normalized Performance — I calculated reach-adjusted engagement (engagement per 1K views) and normalized for platform differences. Now I can actually compare a TikTok creator to an Instagram creator without it turning into apples vs. oranges. (There's a rough sketch of the math right after this list.)
Layer 3: Strategic Contribution — Did this creator help us reach new audience segments? Did they bring brand understanding or authenticity that resonated? Did they exceed expectations for their audience size?
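For anyone curious about the Layer 2 math: it's nothing fancy. Below is a minimal sketch in Python of what I mean by reach-adjusted, platform-normalized engagement. The field names and the platform baseline numbers are made up for illustration (in practice I used our own campaign averages per platform as the baselines), so treat it as a sketch, not the exact spreadsheet formula.

```python
# A minimal sketch of the Layer 2 normalization.
# PLATFORM_BASELINE values are placeholders, not real benchmarks.

from dataclasses import dataclass

# Hypothetical baselines: typical engagements per 1K views on each platform.
PLATFORM_BASELINE = {"tiktok": 55.0, "instagram": 35.0}

@dataclass
class Post:
    creator: str
    platform: str   # "tiktok" or "instagram"
    views: int
    likes: int
    comments: int
    shares: int

def engagement_per_1k(post: Post) -> float:
    """Reach-adjusted engagement: total engagements per 1,000 views."""
    engagements = post.likes + post.comments + post.shares
    return 1000 * engagements / max(post.views, 1)  # guard against zero views

def normalized_score(post: Post) -> float:
    """Divide by the platform baseline so 1.0 means 'typical for that platform'."""
    return engagement_per_1k(post) / PLATFORM_BASELINE[post.platform]

# Example: a big TikTok post and a smaller Instagram post become comparable.
posts = [
    Post("creator_a", "tiktok", views=120_000, likes=5_400, comments=300, shares=900),
    Post("creator_b", "instagram", views=40_000, likes=1_200, comments=150, shares=80),
]
for p in posts:
    print(p.creator, round(engagement_per_1k(p), 1), round(normalized_score(p), 2))
```

The whole point of dividing by a per-platform baseline is that a score of 1.0 means "typical for that platform," so a creator who's 2x their platform's norm on TikTok and one who's 2x on Instagram finally land on the same scale.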
The hard part was Layer 3—it required actual judgment, not just plugging numbers into formulas. But that’s where the real insights lived.
I’m now building a playbook so that next time, I can brief creators better, set clearer expectations upfront, and analyze results faster. But I want to know: how do you all structure creator evaluation? Do you use one mega-metric, or do you break it down like I’m doing?