ROI tracking in cross-market campaigns feels like comparing apples and… different apples. I’m working with creators in both Russian and US markets for the same brands, and the attribution problem is real.
Like, we ran a campaign with a Russian-language creator and a US-based creator for the same product. Both drove traffic and sales. But when I tried to figure out which creator actually drove more revenue relative to what we paid them, it got messy fast.
The issues:
1. Different metrics across markets – A Russian e-commerce platform tracks conversions differently than a US shopping platform. The conversion rates don’t compare directly. What’s a ‘good’ conversion rate in one market is a failure in another.
2. Time lag differences – US audiences convert faster. Russian audiences often have longer consideration periods. So the same campaign driving sales on different timelines makes attribution impossible.
3. Pricing differences – What we pay a creator in one market is totally different from another. So the ROI calculation doesn’t work the same way.
4. Market saturation – US is saturated. Russian market is less saturated. But does that mean Russian creators automatically perform better, or are we just seeing market dynamics?
5. Currency and exchange rates – Kind of a minor point, but when you’re converting between RUB and USD, the numbers shift constantly.
I’ve tried building attribution models, but honestly, I’m not confident they’re accurate. I’m basically using a mix of UTM parameters, pixel tracking, and best guesses.
Have you solved this? Do you have a framework for comparing ROI across creators in different markets? Is it even possible to do this accurately, or should I accept that I’ll never have a perfect attribution model?
This is the question I get asked most, and honestly, perfect attribution doesn’t exist. But you can get directional accuracy if you build your framework right.
Here’s what I do:
Step 1: Normalize your metrics
- Define what ‘success’ looks like in each market separately
- Don’t compare US conversion rates to Russian conversion rates directly
- Instead, compare each creator’s performance relative to the market average
- Example: If the market average conversion rate on US platforms is 3%, and your creator achieved 4.5%, they’re +50% above baseline
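Step 1 can be sketched in a few lines. The idea is a lift-over-baseline ratio; the conversion rates and market averages below are hypothetical illustrations, not real benchmarks.

```python
# Compare each creator to their own market's average, not to each other.
def relative_performance(creator_rate: float, market_avg_rate: float) -> float:
    """Return lift relative to the market baseline (+0.50 = 50% above)."""
    return creator_rate / market_avg_rate - 1.0

# US creator: 4.5% conversion vs a 3% market average (hypothetical numbers)
us_lift = relative_performance(0.045, 0.03)   # +50% above baseline

# Russian creator: 3.0% conversion vs a 2.0% market average
ru_lift = relative_performance(0.030, 0.02)   # also +50% -> comparable lift

print(f"US lift: {us_lift:+.0%}, RU lift: {ru_lift:+.0%}")
```

Two very different absolute conversion rates produce the same relative lift, which is exactly why the normalization makes cross-market comparison meaningful.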
Step 2: Track ROI by creator, not by market
- Revenue attributed to creator / Amount paid to creator = ROI
- Do this calculation individually for each creator
- Then compare ROIs across creators using the same formula
- A creator with 5:1 ROI in Russia can be compared to a creator with 3:1 ROI in the US using the same metric
Step 3: Build a multi-touch attribution model
- Don’t assume one creator drives all the revenue
- Instead, assign partial credit based on where the customer came from
- Example: A user clicks a Russian creator’s link, leaves, then comes back through a US creator’s link and converts. That could be a 50/50 split, or a weighted split (e.g., 30/70 favoring the last touch) if you want the converting click to carry more credit
- Tools like Mixpanel or Amplitude can model this
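A minimal version of the split-credit logic above can be written by hand. The equal and last-touch-weighted splits are assumptions you document, not ground truth; the weight is a parameter you pick.

```python
# Assign revenue credit across an ordered list of creator touchpoints.
def split_credit(touchpoints, revenue, last_touch_weight=None):
    """Equal split by default; optionally weight the last (converting) touch."""
    if last_touch_weight is None:
        share = revenue / len(touchpoints)
        return {tp: share for tp in touchpoints}
    rest = revenue * (1 - last_touch_weight) / (len(touchpoints) - 1)
    credit = {tp: rest for tp in touchpoints[:-1]}
    credit[touchpoints[-1]] = revenue * last_touch_weight
    return credit

# RU creator first touch, US creator converting touch, $100 order:
print(split_credit(["ru_creator", "us_creator"], 100.0))        # 50/50
print(split_credit(["ru_creator", "us_creator"], 100.0, 0.7))   # 30/70
```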
Step 4: Create benchmark case studies
- Every time you run a cross-market campaign, document:
  - Creator tiers and rates in each market
  - Traffic driven by each creator
  - Conversions and revenue by creator
  - Final ROI calculation
- Over time, you’ll build a database of benchmarks
- ‘Creators in tier X in Russia typically generate 4:1 ROI. This creator generated 3.2:1. That’s below benchmark.’
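Once you have that benchmark database, the check itself is trivial. The tier/market benchmarks below are hypothetical placeholders:

```python
# Benchmark lookup keyed by (market, tier); values are historical ROI ratios.
benchmarks = {("RU", "tier_1"): 4.0, ("US", "tier_1"): 3.5}

def vs_benchmark(market, tier, creator_roi):
    """Positive = above benchmark, negative = below."""
    return creator_roi - benchmarks[(market, tier)]

delta = vs_benchmark("RU", "tier_1", 3.2)
print(f"{delta:+.1f} vs benchmark")   # -0.8 -> below benchmark
```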
For your specific challenges:
Different conversion rates: Use relative performance vs. market average, not absolute rates.
Time lag differences: Extend your attribution window to 30-60 days depending on what makes sense. Some customers convert in 2 hours, others in 30 days.
Pricing differences: ROI formula handles this. If you pay creator A $2k for $10k revenue (5:1) and creator B $5k for $11k revenue (2.2:1), creator A was more efficient, even though they drove less total revenue.
Market saturation: This is environmental, not creator performance. Track separately from individual creator metrics.
Currency: Convert everything to a single currency (USD or EUR) at the time of measurement. Lock in the exchange rate so currency fluctuations don’t skew your analysis.
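The rate-locking idea looks like this in practice. The RUB/USD rate below is a made-up placeholder, not a real quote; the point is that it is captured once and frozen for the whole analysis.

```python
# Freeze one exchange rate at measurement time so later FX moves
# don't reshuffle the creator rankings.
LOCKED_RUB_PER_USD = 90.0   # hypothetical rate, fixed for this analysis

def to_usd(amount, currency):
    if currency == "USD":
        return amount
    if currency == "RUB":
        return amount / LOCKED_RUB_PER_USD
    raise ValueError(f"no locked rate for {currency}")

ru_revenue_usd = to_usd(900_000, "RUB")   # 10000.0 USD at the locked rate
us_revenue_usd = to_usd(10_000, "USD")
print(ru_revenue_usd, us_revenue_usd)
```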
The key: stop trying to compare absolute numbers across markets. Compare relative performance instead. That’s repeatable and defensible.
One more actionable thing—if you’re not already doing this, implement a UTM parameter system that captures market as a variable.
Every creator link should have: utm_source=influencer&utm_medium=creator&utm_campaign=[campaign_id]&utm_content=[creator_name]&utm_term=[market]
That last parameter (market) lets you filter and analyze performance by geography easily in Google Analytics or your analytics platform.
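If you generate creator links programmatically, the scheme above is a few lines with the standard library. The base URL, campaign ID, and creator name here are placeholders:

```python
from urllib.parse import urlencode

def creator_link(base_url, campaign_id, creator_name, market):
    """Build a tracked creator link using the UTM scheme described above."""
    params = {
        "utm_source": "influencer",
        "utm_medium": "creator",
        "utm_campaign": campaign_id,
        "utm_content": creator_name,
        "utm_term": market,    # market as a filterable dimension
    }
    return f"{base_url}?{urlencode(params)}"

link = creator_link("https://example.com/product", "spring24", "ivanova", "ru")
print(link)
```

`urlencode` also handles escaping, so creator names with spaces or non-ASCII characters won’t break the link.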
Also, if you’re running a cross-market campaign, consider running it in phases:
- Phase 1: US creators for 2 weeks
- Phase 2: Russian creators for 2 weeks
- Phase 3: Both simultaneously
This gives you a cleaner attribution picture because you can isolate the impact of each market/creator cohort.
When we scaled to Europe, we had the exact same problem. Different markets, different conversion rates, different buyer behavior.
What finally worked for us was separating efficiency metrics from absolute metrics.
Efficiency metrics:
- Cost per click
- Cost per conversion
- ROI (revenue / spend)
- These work across all markets because they’re ratios
Absolute metrics:
- Total clicks, total conversions, total revenue
- These are useful for planning inventory and budget allocation
- But they’re not useful for comparing creator performance across markets
What we started doing: we’d run a campaign for 2-4 weeks, collect data, and then calculate efficiency metrics for each creator. That’s what told us which creators actually performed. Not absolute numbers.
We also learned to set market-specific KPIs before the campaign started. Like, ‘In the US, we’re targeting a 3% conversion rate. In Russia, we’re targeting a 2.5% conversion rate (because the market converts differently).’ Then we measure each creator against the KPI for their market, not against each other.
This shifted our thinking from ‘which creator performed best’ to ‘did each creator meet the market-appropriate targets?’ Much more accurate.
You need four separate metrics frameworks, and I’ll be honest with you: implementing all four is necessary if you want defensible ROI attribution.
Framework 1: Efficiency Metrics (Comparable across markets)
- Cost per acquisition (CPA): Total spend / Total acquisitions
- Return on ad spend (ROAS): Revenue attributed / Total spend
- CAC payback period: How long until an acquired customer generates enough revenue to pay back the acquisition cost
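The three Framework 1 ratios, sketched together. The payback calculation assumes a steady monthly revenue per customer, which is a simplification; all figures are hypothetical.

```python
def cpa(total_spend, acquisitions):
    """Cost per acquisition."""
    return total_spend / acquisitions

def roas(revenue_attributed, total_spend):
    """Return on ad spend."""
    return revenue_attributed / total_spend

def cac_payback_months(cac_value, monthly_revenue_per_customer):
    """Months until an acquired customer pays back their acquisition cost."""
    return cac_value / monthly_revenue_per_customer

spend, customers, revenue = 5_000.0, 100, 12_500.0
print(cpa(spend, customers))                            # 50.0 per acquisition
print(roas(revenue, spend))                             # 2.5x
print(cac_payback_months(cpa(spend, customers), 25.0))  # 2.0 months
```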
Framework 2: Performance Benchmarks (Market-specific)
- Set benchmarks for each market based on historical data and industry averages
- Measure each creator’s performance against the benchmark for their market
- Example: ‘In Russia, our average CPM (cost per thousand impressions) is $2. Creator X achieved a $1.75 CPM, which is 12.5% below the market average. For CPM, lower is better, so that beats the benchmark.’
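Note the direction: for cost metrics like CPM, below the market average is the good side. A small sketch with the same hypothetical figures:

```python
def cpm_vs_benchmark(creator_cpm, market_cpm):
    """Negative = cheaper than market (better); positive = more expensive."""
    return creator_cpm / market_cpm - 1.0

delta = cpm_vs_benchmark(1.75, 2.00)
print(f"{delta:+.1%}")   # -12.5% -> 12.5% cheaper reach than market average
```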
Framework 3: Attribution Window & Conversion Modeling
- Define an attribution window (30 days, 60 days, etc.) based on your business cycle
- Use first-touch, last-touch, or multi-touch attribution depending on your sales model
- Document your assumption so findings are repeatable
Framework 4: Comparative Analysis (Within-market only)
- Rank creators by efficiency metrics within each market
- Ask: ‘Among Russian creators, who drove the best ROI?’ and ‘Among US creators, who drove the best ROI?’
- This comparison makes sense. Cross-market comparison doesn’t.
Time lag issue specifically:
Extend your measurement window to account for your sales cycle. If your average customer takes 45 days to convert, measure over 45+ days. Use a ‘conversion lag’ metric to track median time between click and conversion—this tells you which markets/creators convert faster.
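The ‘conversion lag’ metric is straightforward to compute from click-to-conversion records. The rows below are made-up data, just to show the shape:

```python
from statistics import median

# (market, days between click and conversion) -- hypothetical records
records = [
    ("US", 2), ("US", 5), ("US", 9),
    ("RU", 12), ("RU", 30), ("RU", 45),
]

def median_lag_by_market(rows):
    """Median click-to-conversion lag per market, in days."""
    lags = {}
    for market, days in rows:
        lags.setdefault(market, []).append(days)
    return {m: median(v) for m, v in lags.items()}

print(median_lag_by_market(records))   # {'US': 5, 'RU': 30}
```

Median is deliberately used instead of mean, so a handful of very slow converters doesn’t distort the picture.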
The hard truth: You probably won’t be able to say ‘Creator A in Russia is definitively better than Creator B in the US.’ You can say ‘Creator A achieved 4.2:1 ROI in Russia’ and ‘Creator B achieved 3.1:1 ROI in the US.’ Those are separate assessments.
If your leadership wants a single, cross-market ranking, that’s a strategic decision problem, not an attribution problem. You’re making a judgment call about how much to weight market dynamics vs. creator skill.
The simplest thing I do: tier creators by market, and measure them against peers in their own tier within their own market.
So I have:
- Tier 1 creators (500k+ followers) in Russia and Tier 1 creators in the US
- Tier 2 creators (100-500k) in Russia and Tier 2 creators in the US
- Etc.
When evaluating a creator, I compare them to other creators in the same tier in the same market. ‘Did this Tier 2 Russian creator outperform other Tier 2 Russian creators in similar campaigns?’ That’s a meaningful question.
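The peer-comparison rule above can be enforced mechanically: rank only within one (market, tier) group, never across. Creator entries here are hypothetical.

```python
creators = [
    {"name": "a", "market": "RU", "tier": 2, "roas": 4.1},
    {"name": "b", "market": "RU", "tier": 2, "roas": 3.3},
    {"name": "c", "market": "US", "tier": 2, "roas": 3.7},
]

def rank_within_group(rows, market, tier):
    """Rank creators by ROAS within a single market/tier peer group."""
    peers = [r for r in rows if r["market"] == market and r["tier"] == tier]
    return sorted(peers, key=lambda r: r["roas"], reverse=True)

ru_tier2 = rank_within_group(creators, "RU", 2)
print([r["name"] for r in ru_tier2])   # ['a', 'b'] -- 'c' excluded (US market)
```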
I track:
- CPM (cost per thousand impressions)
- CPC (cost per click)
- CPL (cost per lead, if applicable)
- ROAS (revenue / spend)
These metrics work in any market if you’re careful about what you’re measuring.
For cross-market campaigns, I report results separately by market and creator tier. ‘Tier 1 Russian creators achieved 4.1:1 ROAS. Tier 1 US creators achieved 3.7:1 ROAS.’ Those are separate findings, and I’m fine with that.
What I don’t do is create a ranking that puts Russian creators and US creators against each other. That’s not analytically sound, and it leads to bad decision-making.
I don’t track ROI on my side, but I pay close attention to engagement and what converts. Sometimes a post gets huge engagement but doesn’t drive sales. Sometimes a post gets moderate engagement but people actually buy.
What I’ve noticed: American audiences engage faster but are more skeptical. Russian audiences engage differently—they read more carefully, ask more questions in the comments, but when they’re convinced, they convert.
That’s probably useful data for you. When you’re evaluating a creator’s performance, don’t just look at engagement volume. Look at engagement type. Russian audiences might leave only 5% as many likes but twice as many substantive comments. That’s a different signal, and it may correlate better with conversions.
Also, I’ve noticed that when brands try to run the exact same campaign in both markets with the same messaging, it flops in one market. Different audiences need different angles. So maybe ROI is lower not because the creator is worse, but because the campaign messaging wasn’t adapted for the market.
Just a thought from the creator side.