How are you actually measuring ROI when currency, markets, and attribution models keep shifting?

I’m struggling with a fundamental problem: we’re running campaigns across Russia, the US, and LATAM, and our ROI measurement is a mess.

The issue isn’t just currency conversion (though that’s annoying). It’s that each market needs a different attribution model: in the US, we track directly via affiliate links and UTM parameters; in Russia, it’s a mix of direct sales and brand awareness (conversion data is less reliable); in LATAM, we’re honestly guessing.

Additionally, different creators deliver different value: one might drive immediate conversions, another builds longer-term brand awareness that shows up weeks later in organic search. How do I weight that in ROI?

Right now I’m doing a hybrid calculation: direct-attributed revenue + an estimate of brand lift from sentiment analysis + a guess at long-tail organic impact. It’s defensible but not precise. My CFO isn’t thrilled.

I’ve seen some teams use multi-touch attribution (Mixpanel, Segment), but those feel overengineered for what we’re doing. Others just track engagement and call it success, which feels too hand-wavy.

What’s your process? Are you tracking pure revenue ROI, or do you include brand metrics and weighted estimates? How do you present this to leadership without sounding like you’re making it up?

Multi-market ROI is hard because you’re comparing apples to oranges. Here’s our framework: (1) direct-response campaigns get pure revenue ROI tracking; (2) brand-building campaigns get a blended scorecard—revenue (30%), engagement quality (20%), sentiment shift (20%), audience growth (30%). We don’t try to force everything into a single number.
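The blended scorecard is just a weighted average. A minimal sketch, assuming each component has already been normalized to a 0-100 score (the normalization step is my assumption; the 30/20/20/30 weights are from the framework above):

```python
# Hypothetical blended scorecard for a brand-building campaign.
# Assumption: each component is pre-normalized to a 0-100 score.
# Weights follow the 30/20/20/30 split described above.

WEIGHTS = {
    "revenue": 0.30,
    "engagement_quality": 0.20,
    "sentiment_shift": 0.20,
    "audience_growth": 0.30,
}

def blended_score(scores: dict) -> float:
    """Weighted average of component scores (each 0-100)."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

campaign = {
    "revenue": 55,
    "engagement_quality": 80,
    "sentiment_shift": 70,
    "audience_growth": 60,
}
print(blended_score(campaign))  # 0.3*55 + 0.2*80 + 0.2*70 + 0.3*60 = 64.5
```

Keeping direct-response campaigns out of this scorecard entirely (pure revenue ROI instead) is what avoids forcing everything into one number.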

For cross-market comparison, we normalize by market size and e-commerce penetration. A campaign converting 2% in Russia might actually outperform a campaign converting 0.8% in the US because market conditions are different.
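One simple way to express that normalization: divide the raw conversion rate by the market's e-commerce penetration, so each campaign is judged against what's realistically reachable. The penetration figures below are illustrative placeholders, not real data:

```python
# Sketch of cross-market normalization (illustrative numbers, not real data).
# Idea: divide raw conversion rate by the market's e-commerce penetration
# so each campaign is judged against what is realistically reachable there.

markets = {
    # market: (raw conversion rate, assumed e-commerce penetration)
    "Russia": (0.020, 0.45),
    "US":     (0.008, 0.80),
}

def normalized_rate(raw: float, penetration: float) -> float:
    return raw / penetration

for name, (raw, pen) in markets.items():
    print(name, round(normalized_rate(raw, pen), 4))
# With these assumed inputs, the 2% Russia campaign outperforms
# the 0.8% US campaign once market conditions are factored in.
```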

This makes reporting more complex, but it’s actually more honest than trying to jam all markets into one metric. Leadership respects that we’re measuring what matters for each market, not just hitting one vanity number.

On attribution: we use basic UTM parameters for clarity, not a fancy platform. UTM reliably captures about 60-70% of influence—that’s our baseline. We then survey customers quarterly about what influenced their purchase. The survey data fills in the remaining 30-40%, including brand awareness effects. It’s not perfect, but it’s more accurate than guessing, and it’s cheap to implement.
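If UTM captures roughly 60-70% of influenced purchases, survey data can be used to gross up the tracked number. A sketch, assuming a single survey-derived coverage factor (the 0.65 midpoint and the revenue figure are hypothetical):

```python
# Sketch: scale UTM-attributed revenue by a survey-derived coverage factor.
# Assumption: surveys suggest UTM links capture ~65% of influenced purchases
# (midpoint of the 60-70% range above), so total influenced revenue can be
# estimated by dividing tracked revenue by that coverage.

def estimated_total_influence(utm_revenue: float, utm_coverage: float = 0.65) -> float:
    return utm_revenue / utm_coverage

print(estimated_total_influence(130_000))  # roughly 200,000
```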

I built a tiered ROI model: Tier 1 (direct conversions via UTM/affiliate link) = high-confidence measurement. Tier 2 (influenced conversions detected via IP matching and survey responses) = medium confidence. Tier 3 (brand awareness and long-tail organic lift) = lower confidence, reported separately.

For leadership, I present conservative ROI (Tier 1 only) and then show upside in Tiers 2 and 3. The conservative number is usually credible and defensible. Most of our campaigns show positive Tier 1 ROI; Tiers 2 and 3 add 20-40% on top.
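The tiered report itself is simple arithmetic. A sketch with hypothetical spend and per-tier attributed revenue, keeping Tiers 2 and 3 as upside rather than folding them into the headline number:

```python
# Sketch of the tiered ROI report (all figures hypothetical).
# Tier 1 revenue is high-confidence; Tiers 2 and 3 are shown as upside,
# never folded into the conservative headline number.

def roi(revenue: float, spend: float) -> float:
    return (revenue - spend) / spend

spend = 50_000
tiers = {"tier1": 80_000, "tier2": 15_000, "tier3": 10_000}  # attributed revenue

conservative = roi(tiers["tier1"], spend)
with_upside = roi(sum(tiers.values()), spend)

print(f"Conservative ROI (Tier 1 only): {conservative:.0%}")  # 60%
print(f"With Tier 2/3 upside: {with_upside:.0%}")             # 110%
```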

This approach worked because it’s honest about certainty levels and doesn’t overstate impact.

For cross-market comparison, we benchmark each market against historical baseline, not across markets. “This campaign in Russia did 25% better than our historical Russia influencer average” is more meaningful than “Russia outperformed LATAM.” Market conditions are too different.
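That "25% better than our historical average" framing is just relative lift over the market's own baseline. A minimal sketch with hypothetical ROI values:

```python
# Sketch: benchmark a campaign against the market's own historical baseline
# (hypothetical ROI figures), instead of comparing markets to each other.

def vs_baseline(campaign_roi: float, historical_avg_roi: float) -> float:
    """Relative lift over the market's historical influencer average."""
    return (campaign_roi - historical_avg_roi) / historical_avg_roi

print(f"{vs_baseline(0.50, 0.40):.0%} vs. historical Russia average")  # 25%
```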

When we scaled to the US market, I realized Russia ROI models don’t transfer. US customers use promo codes; Russian customers don’t. The US has better analytics infrastructure; Russia is messier. So I stopped trying to force one model globally.

Now: clear direct-response tracking for markets where it works (US via UTM/promo code). For Russia, I focus on brand metrics and unit economics (cost per engaged user). Report separately, don’t try to average them. Leadership understood this immediately—it’s actually clearer than forcing false equivalence across markets.
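For markets without reliable conversion data, the unit-economics metric mentioned above reduces to a simple ratio (numbers below are hypothetical):

```python
# Sketch: cost per engaged user, the unit-economics metric used where
# conversion tracking is unreliable (figures are hypothetical).

def cost_per_engaged_user(spend: float, engaged_users: int) -> float:
    return spend / engaged_users

# e.g. $10,000 spend reaching 50,000 engaged users
print(cost_per_engaged_user(10_000, 50_000))  # 0.2, i.e. $0.20 per engaged user
```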