I spent six months last year convinced our influencer campaign strategy was broken. We launched simultaneously in Russia and the US, same positioning, similar creator tiers, comparable budgets. The campaign flopped in both regions—low engagement, minimal conversions, influencers seemed disengaged. My first instinct was, ‘This strategy doesn’t work. We need a total pivot.’
But before I nuked everything and started over, I decided to actually dig into what went wrong. I pulled the data from both regions, and I started seeing something interesting: The data wasn’t actually telling the same story.
In Russia, the problem was audience mismatch. We picked influencers who had big followings, but their audience didn’t align with our product positioning. We were reaching people, just… the wrong people. The engagement numbers looked bad because people were checking the content out and then leaving.
In the US, the problem was different. The audience alignment was actually decent—we picked the right creators for the right audience. But the influencers themselves seemed half-committed. They posted the content, but they didn’t really talk about it, didn’t create variation, didn’t show actual usage. It was transactional from day one.
So here’s what blew my mind: The strategy was actually fine. The positioning worked whenever we got the right creator-audience pair. The execution was broken in completely different ways in each region, and that’s exactly what made it look like the strategy itself was the problem.
I’m now basically running a post-campaign analysis where I’m trying to isolate: What was regional? What was universal? What did I control vs. what did I mess up vs. what was just bad luck?
Has anyone else had a campaign that failed in multiple markets and had to figure out whether to change the strategy or change the execution? How do you actually tell the difference?
This is the analysis that most people skip, and it’s why so many bad campaigns lead to overcorrections. You did the hard work here.
What you’re describing—same strategy, different failure modes by region—is actually really common, but most people don’t notice it because they’re looking at aggregate numbers. They see ‘campaign failed’ and make a strategic decision based on incomplete data.
Here’s the framework I use: Break the campaign down into discrete hypotheses.
1. Hypothesis about market: Does our target audience want this product? (Audience fit)
2. Hypothesis about creator: Can this creator sell to their audience? (Creator credibility)
3. Hypothesis about execution: Did the creator actually deliver what we briefed? (Delivery fit)
4. Hypothesis about timing: Was the market ready? (Market timing)
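If you track one metric per hypothesis per region, this checklist can be run as a quick scoring pass. A minimal sketch — every metric name, number, and threshold below is made up for illustration, not pulled from any real dashboard:

```python
# Hypothetical per-region campaign metrics (illustrative values only).
REGIONS = {
    "RU": {"audience_overlap": 0.15, "creator_avg_engagement": 0.045,
           "brief_compliance": 0.90, "category_search_trend": 1.1},
    "US": {"audience_overlap": 0.60, "creator_avg_engagement": 0.040,
           "brief_compliance": 0.40, "category_search_trend": 1.2},
}

# Each hypothesis maps to one metric and an assumed pass threshold.
CHECKS = [
    ("audience fit",        "audience_overlap",       0.40),
    ("creator credibility", "creator_avg_engagement", 0.02),
    ("delivery fit",        "brief_compliance",       0.80),
    ("market timing",       "category_search_trend",  1.00),
]

def diagnose(metrics):
    """Return the hypotheses whose metric fell below its threshold."""
    return [name for name, key, threshold in CHECKS
            if metrics[key] < threshold]

for region, metrics in REGIONS.items():
    print(region, "failed on:", diagnose(metrics))
```

The point isn’t the specific thresholds — it’s that each region gets a named failure mode instead of one aggregate “campaign failed” verdict.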
In your case, it sounds like Russia failed on #1 (audience fit) and the US failed on #3 (delivery fit). Those are two completely different fixes.
One thing I’d push back on: You said ‘the strategy was actually fine’—but was it? If the strategy was ‘influencer partnerships in both regions,’ and it failed in both regions for different reasons, maybe the strategy was too vague. Good strategies predict which specific failure modes you might hit.
How would you articulate the strategy more precisely so that it accounts for these regional differences?
Also—and this is important for your analysis—do you have a counterfactual? Like, did you run any A/B tests, or do you know what success would have looked like? Because if you don’t, you can’t actually tell if your fixes work next time. You just know ‘more engagement is better,’ which is obvious but not actionable.
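To make “did the fix work” concrete: if you have a holdout region, a control cohort, or even just round-one numbers as a baseline, you can test whether round two’s conversion rate is a real improvement rather than noise. A minimal sketch with made-up numbers, using a standard two-proportion z-test:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Made-up numbers: 120 conversions from 10,000 impressions in round one
# vs. 180 from 10,000 impressions after the fix.
z = two_proportion_z(120, 10_000, 180, 10_000)
print(round(z, 2))  # |z| > 1.96 is roughly significant at the 5% level
```

Without something like this, “more engagement” next round could just be seasonality or a bigger budget, and you still won’t know which fix did the work.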
This is really helpful because we’re sitting with a failed campaign right now, and honestly, I wasn’t even sure how to think about it.
Your breakdown is clear: Russia was a creator-selection fail, the US an execution/partnership fail. But how did you actually decide what to change for round two? Like, did you:
- Keep the same creators in the US but add more oversight?
- Find new creators with actual passion for the product?
- Change the brief entirely?
Because the different fixes might actually require admitting you got different things wrong, which is not easy to do when you’re the one who planned the campaign.
This is good diagnostic thinking. You’ve separated audience selection from creator partnership quality, which is the right distinction. But I want to push on something: You said the strategy was fine, but I’d argue the strategy was incomplete.
A strategy that doesn’t account for regional execution risk isn’t a complete strategy—it’s a hope. A strategy that says ‘influencers in both markets’ without specifying ‘we’ll select based on audience alignment in Russia and creator track record in the US’ is too abstract to actually guide decisions.
For your next campaign, I’d suggest: Go deeper on the strategy layer. Document what you’re betting on for each region. What’s the thesis for Russia? What’s the thesis for the US? If those theses are different, your strategy is two strategies, not one.
Also, I’m curious: Did you talk to the creators after the campaign about why they were disengaged in the US? That’s data you already have—what did they say?
Ouch, but also… this is such a valuable moment for you. You’re literally differentiating between ‘I picked the wrong people’ and ‘I picked the right people but didn’t manage the relationship well.’ Those are completely different problems, and most people just blame the influencers and move on.
The US situation especially—‘influencers seemed disengaged’—that’s often a sign that you didn’t invest in the relationship. Influencers work harder for brands they genuinely like or who treat them well. If they seemed transactional, they probably felt transactional on their end too.
I’m curious: For round two, are you planning to go deeper with fewer creators, or are you going to try to fix both issues at once?
Also, would you be open to sharing this case (or a version of it) through the hub as a ‘postmortem’ rather than a success story? I think a lot of people would learn more from ‘here’s what went wrong and how I figured that out’ than from typical success stories.
Real talk: As a creator, I can tell you exactly what makes me engage with a campaign. If a brand is strategic with the pitch, clear about deliverables, and actually cares about my audience—I’ll put in effort. If it feels like I’m one of fifty creators they spammed and they don’t really care, I’ll do the minimum.
Your US situation sounds like the second one. The influencers probably didn’t feel like you valued them, so they didn’t value the campaign.
For Russia, the issue was different—wrong audience fit means my audience would just scroll past, so engagement tanks regardless of my effort.
My question: When you do round two, are you planning to compensate differently for the two regions? Because US creators might need relationship investment (better communication, higher pay, genuine interest) while Russian creators might just need better audience matching. Those are different budget allocations.