Capturing learning from cross-market campaign failures so you don't repeat them

I’ve been thinking about knowledge management in marketing teams, specifically around campaign failures. We run campaigns in Russia and the US; some work brilliantly, others completely bomb. But here’s what frustrates me: every time something fails in one market, we risk repeating the exact same mistake in the other market or on the next campaign, because there’s no structured way to capture why it failed.

I tried building a simple failure log last quarter—just a doc where we’d note what went wrong, what we think caused it, and what we’d do differently. But it’s easy to fill with vague conclusions like ‘audience didn’t engage’ or ‘messaging didn’t resonate.’ That doesn’t actually help anyone.

I started thinking about this differently: what if you treated failed campaigns like case studies? Instead of just saying ‘this didn’t work,’ you’d actually map out the decision tree: What was the hypothesis? What did we assume about the audience? What did the data actually show? Where did the assumption break? What would we test differently?

For cross-market work especially, this matters even more. We’ve had campaigns fail in Russia for specific reasons, then watched the same mechanics play out in the US campaign. If we’d shared that learning properly, we could have avoided the second failure.

I’m wondering: how do you and your teams actually capture learning from failures? Is there a structure that works better than others? And how do you make sure that learning actually gets applied to future campaigns instead of just sitting in a forgotten doc?

This is exactly the kind of thinking that separates good marketing teams from great ones. I’ve been pushing our team toward failure analysis too, and I’ve found that structure is everything.

Here’s what we do now: For every campaign that underperforms, we ask four specific questions: (1) What was the input assumption? (2) What did the data show instead? (3) Why was our assumption wrong? (4) What would we change about the setup?

That fourth question is crucial because it moves you from ‘we were wrong’ to ‘here’s specifically what we’ll do differently.’ Over time, you start seeing patterns—like, we consistently underestimate how audience preferences change between Q1 and Q3, or we always overestimate how well messaging translates between markets.

Once you identify the pattern, you can actually build it into your planning process. It’s not just learning, it’s systematic improvement.

Do you have a specific taxonomy of failure types yet, or are you still at the ‘log everything’ stage?

One more thing: I’d recommend doing failure reviews quarterly with cross-functional team members—not just the person who ran the campaign. When the creative team, data team, and strategists all look at a failure together, the insights are so much richer. You catch assumptions from different angles.

For your cross-market issue specifically, I’d do joint reviews: Russia team + US team looking at both failures together. They’ll immediately spot if it’s a market-specific issue or if it’s a fundamental assumption problem that applies everywhere.

That collaborative angle probably matters as much as the documentation itself.

I love that you’re thinking about this structurally. In my experience, failure analysis works best when you bring in the people who were closest to the campaign—the creators, the community managers, the people actually interacting with audiences. They see patterns that analysts miss.

So we do something like this: We host a monthly ‘learning session’ where teams present a failed campaign, but instead of just listing what went wrong, they share what the audience actually told them. Like, creators will often give direct feedback about why content didn’t land. Community managers see comments that hint at audience sentiment. That’s the real gold.

Then we document that learning in a shared space where new team members can actually learn from it, not just read a generic postmortem.

How much are you engaging with the actual audience reaction in your failure analysis, or is it mostly internal metrics?

From a creator perspective, I can tell you that sometimes campaigns fail because the execution partners (us creators) see red flags early and none of the brand team is listening. I’ve had situations where I told a brand ‘this messaging isn’t going to land with my audience,’ and they pushed forward anyway because they’d already planned it. Then it flopped, and suddenly they want to do an analysis about why.

So, I’d say: when you’re capturing learning from failures, include what the creators flagged during the campaign planning phase. Sometimes the failure was predictable: you already had the insight, you just didn’t act on it.

That’s actually the most valuable learning: understanding why teams sometimes ignore red flags rather than just analyzing what went wrong.

Every agency deals with this. We’ve had campaigns bomb in one market, then we carry those learnings into the next market and suddenly things work better. But the reverse also happens—we miss signals we actually had.

Here’s the structure I’d recommend: For every campaign review, create a ‘decision snapshot’ that documents: (1) the core assumption, (2) what was supposed to happen, (3) what actually happened, (4) the hypothesis for why, (5) what we’d do next time. Then file that by market + campaign type.

Over time, you’ll have a reference library. When you’re planning a similar campaign in a different market, you can literally search ‘influencer campaign Q3 Russia’ and see what worked and what didn’t. It becomes searchable institutional knowledge.

That’s worth way more than vague postmortems.
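To make the ‘searchable library’ idea concrete, here’s a minimal sketch of what a decision snapshot could look like as a structured record with a simple filter by market and campaign type. The field names and example values are hypothetical, just an illustration of the filing scheme—the same shape works whether it lives in a spreadsheet, a shared database, or actual code:

```python
# Illustrative sketch only: field names and example values are made up,
# not a specific tool recommended in this thread.
from dataclasses import dataclass

@dataclass
class DecisionSnapshot:
    market: str            # e.g. "RU" or "US"
    campaign_type: str     # e.g. "influencer", "paid_social"
    quarter: str           # e.g. "2024-Q3"
    core_assumption: str   # (1) what we believed going in
    expected: str          # (2) what was supposed to happen
    actual: str            # (3) what actually happened
    why_hypothesis: str    # (4) our best guess at the cause
    next_time: str         # (5) what we'd change

def find_snapshots(library, market=None, campaign_type=None):
    """Filter the library, e.g. 'influencer campaigns in Russia'."""
    return [
        s for s in library
        if (market is None or s.market == market)
        and (campaign_type is None or s.campaign_type == campaign_type)
    ]

# Usage: before planning a US influencer push, pull every prior influencer snapshot.
library = [
    DecisionSnapshot("RU", "influencer", "2024-Q3",
                     "Audience trusts long-form reviews",
                     "High save/share rate",
                     "Engagement dropped after day 2",
                     "Format fatigue; creators flagged it in planning",
                     "Test short-form teaser before committing budget"),
]
print(find_snapshots(library, campaign_type="influencer"))
```

The exact tooling matters less than keeping the fields consistent, so the library stays filterable as it grows.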

Also, I’d create a standing monthly ‘learning review’ call with your cross-market teams. Not to blame anyone, but to literally extract patterns. After you’ve done this for 3-4 months, you’ll have enough data to start building decision rules. Like: ‘When audience demographic is X, we know that message type Y won’t work, so we should do Z instead.’

Those decision rules become your competitive advantage. They stop you from repeating failures and let you move faster on what actually works.
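If it helps to picture it, here’s a rough sketch (the rule fields and example values are made up) of how those decision rules could be kept as explicit, checkable entries rather than tribal knowledge:

```python
# Illustrative sketch only: conditions and recommendations are hypothetical examples.
from dataclasses import dataclass

@dataclass
class DecisionRule:
    when: dict       # conditions, e.g. {"market": "US", "audience": "18-24"}
    avoid: str       # message type Y that historically fails under these conditions
    do_instead: str  # alternative Z

def applicable_rules(rules, campaign_context):
    """Return every rule whose conditions all match the planned campaign."""
    return [
        r for r in rules
        if all(campaign_context.get(k) == v for k, v in r.when.items())
    ]

rules = [
    DecisionRule({"market": "US", "audience": "18-24"},
                 avoid="long-form product explainers",
                 do_instead="creator-led short-form with a single claim"),
]

planned = {"market": "US", "audience": "18-24", "campaign_type": "paid_social"}
for rule in applicable_rules(rules, planned):
    print(f"Avoid: {rule.avoid} -> Instead: {rule.do_instead}")
```

The value is in making the pattern explicit enough that anyone planning a campaign can check against it, not in the automation itself.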

One tactical suggestion: When you document a failure, also document the ‘counter-evidence’: what we saw at the time that should have told us this wouldn’t work. Often there were data points or signals we ignored. Understanding why we ignored them is actually more valuable than understanding why the campaign failed.

Then you can work on the decision-making process, not just the campaign strategy. That’s where real improvement happens.

Also, I’d make this learning visible to the whole team, not just the people who worked on that specific campaign. Put it in a monthly ‘learning digest’ or something. The more people understand why campaigns fail, the better decisions everyone makes across the board.