We’ve been running UGC campaigns across both markets for about two years, and I’ve learned the hard way that you can’t just copy-paste the same campaign playbook and expect the same results. Last year, we had a campaign that crushed it in the Russian market but completely flopped in the US with almost identical creative and messaging. I needed to figure out why.
My first instinct was to blame the creators. Maybe the US creators weren’t as good, or the audience just didn’t like the product. But when I actually parsed the data, I realised the problem was much simpler: I wasn’t measuring success the same way in both markets.
In Russia, we were tracking engagement—comments, likes, shares. Very visible. In the US, I was more focused on CTR and conversion-adjacent metrics. We were basically comparing two entirely different things and acting surprised when the numbers didn’t align.
So I spent about a month going back through every single UGC campaign we’d run—probably 40+ different ones across both markets. I documented everything: the brief, the creators involved, the content style, the metrics we used, and what actually happened business-wise.
Then I looked for patterns. And there they were.
First pattern: Russian audiences respond more to authenticity and relatability. The UGC creators who performed best weren’t necessarily the “perfect” ones—they were the ones who felt real, who acknowledged the product wasn’t perfect, who actually used it. US audiences seemed to care more about polish and credibility signaling.
Second pattern: content format totally mattered differently in each market. In Russia, longer-form video with more context performed better. US audiences wanted quick, punchy hooks. This isn’t rocket science, but when you’re trying to run unified campaigns, you miss these nuances.
Third pattern—and this was the kicker—the definition of “success” needed to be different. In Russia, I was satisfied with high engagement. But in the US market, even if engagement was decent, if the post didn’t drive traffic to the product page, I was calling it a failure. Two totally different success frameworks, and I was forcing them into one metric.
Now here’s what I changed: I built two separate but parallel frameworks for measuring UGC success. Not totally different playbooks, just adjusted metrics and evaluation criteria for each market. For Russia, I weight engagement and authenticity heavily. For the US, I weight conversion signals and audience trust indicators. But I’m comparing impact against the right benchmarks for each market, not trying to make them equal.
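To make the "parallel frameworks" idea concrete, here's a rough sketch of how market-specific weighted scoring could look. The metric names, weights, and the normalization assumption are all illustrative, not my actual numbers:

```python
# Hypothetical market-specific scoring weights. The metrics and the
# weight values are placeholders; the point is that each market gets
# its own weighting and its own benchmarks.
MARKET_WEIGHTS = {
    "RU": {"engagement_rate": 0.5, "authenticity_score": 0.3, "shares": 0.2},
    "US": {"ctr": 0.5, "conversion_rate": 0.3, "trust_signals": 0.2},
}

def score_campaign(market: str, metrics: dict) -> float:
    """Weighted score for one campaign, using that market's weights.

    Assumes each metric is already normalized to 0-1 against that
    market's own benchmark, so scores are only comparable within a
    market, never across markets.
    """
    weights = MARKET_WEIGHTS[market]
    return sum(w * metrics.get(name, 0.0) for name, w in weights.items())

ru_score = score_campaign(
    "RU", {"engagement_rate": 0.8, "authenticity_score": 0.9, "shares": 0.6}
)
us_score = score_campaign(
    "US", {"ctr": 0.7, "conversion_rate": 0.5, "trust_signals": 0.6}
)
```

The two scores deliberately live on separate scales: a 0.79 in Russia and a 0.62 in the US aren't "better" and "worse", they're each relative to their own market's benchmarks.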
The second thing I did was get way more structured about how I brief creators. Before sending a brief, I now include context about what success looks like in that specific market: “In the US, we’re looking for a 3-5% CTR, and the content should feel polished but still authentic” versus “In Russia, we’re looking for high engagement and real usage stories.”
Third, I stopped updating campaign strategy mid-flight. I was constantly tweaking based on early metrics, which meant I couldn’t actually compare anything coherently. Now I let campaigns run their course, document everything clearly, and then adjust for the next cycle.
I know this sounds obvious in retrospect, but I’m genuinely curious: how many of you are running into the same trap of trying to make two markets fit one framework? And have you come up with a better way to structure this?
Also: am I overthinking this, or do you legitimately have to operate two separate systems to make sense of cross-market UGC results?