I’m at a critical moment: my C-suite wants to fund a US market pilot with influencers, but they want proof it’ll work before they commit real money. The problem is, I don’t have strong benchmarks from our category in that market yet. I have good data from Russia, but translating that to the US feels like guessing.
I’ve been trying to build a forecast based on what I know: our Russian ROI metrics, some public case studies I found, conversations with US experts. But every time I present it, I can feel the skepticism. They’re not saying “no”—they’re saying “prove it,” and I don’t have clean data yet.
I’ve thought about positioning this as a test-and-learn phase rather than a fully guaranteed investment. Maybe that’s more honest anyway. But I need to structure it in a way that shows I’ve done my homework and that we’re not just throwing money at a new market.
Has anyone built a compelling ROI case when you were essentially entering uncharted territory? What data points or frameworks actually moved your C-suite from skeptical to supportive? I’m also curious about what you didn’t do that you’re glad about—like, what approaches seemed smart but fell flat?
This is the exact situation I was in about a year ago. Here’s what actually worked for my C-suite:
Instead of trying to predict ROI perfectly, I built a tiered structure with conservative, realistic, and optimistic scenarios. I showed them:
- Conservative: US performs at 60% of Russian ROI (i.e., a pessimistic view of how well results transfer)
- Realistic: US performs at 80% of Russian ROI (accounting for market differences and learning curve)
- Optimistic: US performs at 100%+ of Russian ROI (if we execute well and find better creators)
Then I showed them the investment required and payback for each scenario. The key insight I shared was: “We don’t know which scenario is real yet, but here’s what that looks like financially.”
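The three tiers above are easy to turn into a one-page financial view. Here is a minimal sketch of that scenario table in Python; every number (Russian ROAS, monthly spend, upfront costs) is an illustrative placeholder, not the poster's actual figures.

```python
import math

# All inputs are made-up placeholders for the sketch.
RUSSIAN_ROAS = 2.5        # $2.50 returned per $1 of influencer spend in RU
MONTHLY_SPEND = 20_000    # monthly pilot media spend
UPFRONT_COSTS = 30_000    # one-off setup: creators, creative, localization

SCENARIOS = {             # the three transfer-rate tiers from the post
    "conservative": 0.60,
    "realistic":    0.80,
    "optimistic":   1.00,
}

for name, transfer in SCENARIOS.items():
    us_roas = RUSSIAN_ROAS * transfer
    monthly_net = MONTHLY_SPEND * (us_roas - 1)  # profit per month of spend
    if monthly_net > 0:
        payback = math.ceil(UPFRONT_COSTS / monthly_net)
        print(f"{name:12s} ROAS {us_roas:.2f} -> payback in ~{payback} months")
    else:
        print(f"{name:12s} ROAS {us_roas:.2f} -> never pays back at this rate")
```

Even with fake numbers, a table like this makes the "here's what each scenario looks like financially" conversation concrete: the C-suite sees payback timing per tier instead of a single point forecast.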
My CFO’s response was to fund the “realistic” scenario as a pilot—not the full amount, but enough to gather real data. That bought me credibility.
Here’s the second part that mattered: I also built a stop-loss framework. I told them: “If X metric (let’s say engagement rate) falls below Y threshold in the first 45 days, we pause and reassess before scaling.” Suddenly the C-suite felt like we had guardrails, and they were more willing to fund the experiment.
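The stop-loss rule above is just "metric X below threshold Y inside the first 45 days means pause." A tiny sketch of that guardrail, with an assumed 2% engagement-rate floor standing in for the real X and Y:

```python
from datetime import date

# Illustrative guardrail values; substitute your own X metric and Y threshold.
ENGAGEMENT_FLOOR = 0.02      # 2% engagement rate (the "Y threshold")
REVIEW_WINDOW_DAYS = 45      # the first-45-days review window

def pilot_status(start: date, today: date, engagement_rate: float) -> str:
    """Return 'pause' if the guardrail trips inside the window, else 'continue'."""
    days_in = (today - start).days
    if days_in <= REVIEW_WINDOW_DAYS and engagement_rate < ENGAGEMENT_FLOOR:
        return "pause"       # guardrail tripped: reassess before scaling
    return "continue"

# e.g. pilot_status(date(2024, 1, 1), date(2024, 1, 30), 0.015) -> "pause"
```

The point isn't the code, it's that the rule is mechanical: anyone on the C-suite can check whether the pilot is inside or outside the guardrails without debating interpretation.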
The third thing I did was pull in expert opinions, even if they were just conversations. I talked to two US influencer agencies and asked them: “For a DTC brand entering your market, what ROI would you expect in the first 90 days?” I documented their answers in my presentation. It wasn’t formal benchmarking, but it was better than pure speculation.
What category are you in? That might change how conservative I’d suggest you be with the assumptions.
My approach when I was in your position: I didn’t try to predict ROI for the whole market. I picked the single thing my C-suite cared about most—in my case, it was customer acquisition cost—and I forecasted that.
Here’s what I presented: “Our Russian CAC is $X. Based on US influencer market rates, creative production costs, and platform CPMs, we project US CAC will be $Y.” (In your case $Y could land higher or lower, depending on your market.)
That was one number, backed by real research. Then I said: “We’ll invest $Z to test this assumption against 5-10 campaigns. After that, we’ll know if the CAC forecast was right or wrong, and we can decide whether to scale.”
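The bottom-up projection above can be sketched as a small model: build CAC from market-rate inputs (CPM, click-through, conversion) rather than copying the Russian figure across. All inputs here are illustrative placeholders, not real benchmarks.

```python
# Hypothetical bottom-up CAC model; every rate below is a made-up example.

def projected_cac(cpm: float, ctr: float, conv_rate: float,
                  creative_cost_per_1k: float = 0.0) -> float:
    """Cost to acquire one customer from paid influencer reach.

    cpm: media cost per 1,000 impressions
    ctr: click-through rate (clicks / impressions)
    conv_rate: purchase rate (purchases / clicks)
    creative_cost_per_1k: amortized creative cost per 1,000 impressions
    """
    cost_per_impression = (cpm + creative_cost_per_1k) / 1000
    customers_per_impression = ctr * conv_rate
    return cost_per_impression / customers_per_impression

# Sanity-check the transfer assumption against the home-market baseline:
ru_cac = projected_cac(cpm=4.0, ctr=0.015, conv_rate=0.030)   # cheap media
us_cac = projected_cac(cpm=12.0, ctr=0.010, conv_rate=0.025)  # pricier media
```

Running the two calls side by side shows why the US number can be several times the Russian one even with similar execution: higher CPMs compound with lower CTRs. That's the single testable number the pilot then validates.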
My board loved this because it was:
- Specific (one clear metric, not a giant dashboard)
- Testable (we could validate the assumption quickly)
- Low-commitment (pilot budget, not a massive bet)
- Honest (I wasn’t pretending to know things I didn’t)
What I didn’t do—and I’m really glad about this: I didn’t try to build a 3-year forecast. That would have been fiction. I focused on 90-day proof of concept instead.
I also didn’t oversell the expertise of my advisors. When I cited someone’s opinion, I was clear: “This is what one agency told me, not a systematic benchmark.” My CFO actually respected that honesty—it showed I understood the limits of the data.
One thing that helped: I showed them a couple of other brands’ US market expansions (even in different categories) and how long it actually took to prove ROI. That contextualized my pilot request.
I see a lot of teams make this mistake: they think they need perfect data to get buy-in. They don’t. They need clear thinking and a framework for learning.
Here’s what I’d present to the C-suite:
The Thesis: “We believe influencer marketing can acquire US customers at a profitable CAC within [timeframe].” (Make this specific.)
The Proof Points:
- It works in Russia at [metric]
- US influencer market operates at [basic fact], which suggests [implication]
- We’ve spoken to [expert] who sees [supporting opinion]
- Similar category brands have done [example]
The Test:
- Budget: [amount]
- Timeline: [specific end date, like 90 or 120 days]
- Success Metric: [one clear number]
- If [outcome], we invest $ more. If [different outcome], we reassess.
That’s it. Clean, simple, testable.
What I’ve found kills presentations to the C-suite:
- Too many metrics (they get lost)
- Speculation presented as fact (they distrust it)
- No clear decision framework (they don’t know how to say yes)
What works:
- One clear thesis
- Proof that I’ve done work (not that I have perfect knowledge)
- A small pilot with clear success criteria
Your competitive advantage is that you’re not running this alone—you have US expert input. Make that clear. “We’re piloting with guidance from [expert], which reduces our blind spots.”
How much pilot budget are you proposing? That number should signal confidence without being reckless.
From an agency perspective, I’d structure this as a partnership opportunity, not just a solo ROI case.
Here’s what I’d tell the C-suite: “We’re not making a $500k bet on the US market. We’re making a $50k partnership bet with a US agency that has actual benchmarks and expertise. If they can validate our assumptions, we scale. If they can’t, we learn and adjust without losing massive capital.”
Suddenly the conversation shifts from “Do you think this will work?” to “Who do you want to partner with to find out?”
I’d also pull in a small paid partnership with an agency or expert who has real US influencer data. Yes, it costs money upfront, but it gives you credible third-party validation instead of internal speculation. My C-suite always responds better to independent validation.
One other thing: I’d benchmark labor costs and production timelines against Russia. That’s usually a huge shock for teams expanding from RU to US—things take longer and cost more. If you quantify that upfront, the C-suite respects the realism.
Do you have a specific agency or expert in mind to partner with, or are you still evaluating options?
What I love about this phase is that you’re being thoughtful instead of reactive. That’s going to serve you well.
Here’s what I’d add from a relationships and partnerships angle: bring the C-suite into select conversations with US experts. Not a full meeting, but even a 20-minute call with a credible US influencer agency where they explain the landscape could shift perception entirely. When they hear it from an external expert—not just from you—it lands differently.
I’ve also found that the best ROI cases include a humility statement. Something like: “Here’s what we know, here’s what we don’t know, and here’s how we’ll fill the gaps through this pilot.” C-suite people respect that way more than false confidence.
One specific thing: I’d also ask your US expert contacts for feedback on your assumptions. Not formal consulting—just honest feedback via email. When you present to the C-suite, you can say: “I validated these assumptions with [expert], and they flagged these potential issues, which we’re designing the pilot to test.” That shows rigor without perfect certainty.
I’m genuinely curious: what’s your timeline? Are they pushing for a decision, or do you have time to build the case thoughtfully?