Learning from both wins and losses—how do you structure analysis when campaigns fail across markets?

I think we talk too much about successful campaigns and not enough about the ones that completely flopped. I’ve noticed that the failures actually teach me more about how markets work, but I’m terrible at documenting them in a way that’s useful later.

We had a UGC campaign that crushed it in Russia—huge volume, strong audience response, great metrics. We tried to scale it to the US and it just died. The content didn’t resonate, engagement tanked, we pulled it early. For weeks after, I was frustrated because I didn’t really understand why it failed. Was it targeting? Creative? Timing? Cultural mismatch? Audience expectations?

Eventually I sat down and actually compared the content that worked in Russia against what we ran in the US. Turned out the Russian content was very community-focused and conversational. The US audience wanted more polish and authority. But I only figured that out after the fact, and we’d already wasted budget.

I’ve also seen campaigns that succeeded for partly the wrong reasons. Like, a campaign that was supposed to drive conversions but actually succeeded because of viral reach in an unexpected demographic. If we only look at the headline metrics (it succeeded!), we miss the insight (but not for the reason we planned).

What I want to set up is a structured postmortem process for both wins and losses. Not blaming, just learning. And I want to document those learnings in a way that’s actually accessible when someone runs a similar campaign in six months.

Has anyone built a postmortem process that actually works across markets and languages? And how do you capture the stuff that’s harder to quantify—like cultural fit, audience expectations, content resonance—in a way that’s useful for future campaigns?

What would actually make you change your strategy based on a past failure or success?

Postmortem culture is everything, and most teams get it wrong because they make it about assigning blame instead of learning.

Here’s the structure I use:

Part 1—Quantitative: Lay out the plan vs. actual. What KPI did we target? What did we actually get? Graph it. Make it clear.

Part 2—Hypothesis: Form a hypothesis about why. Don’t guess wildly—look at the data. If conversion came in at 25% against a higher target, what changed? Targeting? Audience? Offer? External factors (seasonality, competition, news)?

Part 3—Qualitative: This is where the learning lives. Pull qualitative data: what did the audience actually respond to (comments, sentiment analysis, creator feedback)? What was the tone/expectation mismatch between what we created and what landed?

Part 4—Comparative: If the campaign ran in both markets, directly compare the success factors. Why did the same tactic work here but fail there? Document the differences.

Part 5—Actionable insight: Don’t end with “we learned something.” End with: “Next time a [specific situation] campaign runs, we will [specific action] because [data + hypothesis].”

For the harder stuff (cultural fit, resonance)—tag that explicitly. “This campaign failed due to cultural resonance mismatch (qualitative), not targeting.” Then when someone reviews it, they know what type of insight it is.

Store all postmortems in a searchable database by: outcome (win/loss), tactic (UGC, influencer, etc.), market, insight type (quantitative/qualitative). Then people can actually find relevant cases.
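If the “database” is anything more structured than a folder of docs, those axes map directly onto a record type. A minimal sketch in Python, purely illustrative (the field names, enum values, and example tags are my assumptions, not a prescribed schema):

```python
from dataclasses import dataclass, field
from enum import Enum

class Outcome(Enum):
    WIN = "win"
    LOSS = "loss"

@dataclass
class Postmortem:
    campaign: str      # campaign name or ID
    outcome: Outcome   # win/loss
    tactic: str        # e.g. "UGC", "influencer"
    market: str        # e.g. "RU", "US"
    insight_type: str  # "quantitative" or "qualitative"
    tags: list[str] = field(default_factory=list)  # e.g. "cultural resonance mismatch"
    insight: str = ""  # the "next time we will..." statement
```

The point is that outcome, tactic, market, and insight type become fields you can filter on instead of prose buried in a doc.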

The highest-value postmortems are the surprising ones: when a campaign succeeded for different reasons than planned, or failed despite good execution. Those are where real learning lives.

You’re describing a learning system, and it’s harder to build than people think because it requires rigor.

Almost every team I work with does postmortems informally (people talk, then forget). What works is making it structured, mandatory, and quick.

The framework:

  1. Timing: Postmortem within 48-72 hours of campaign completion. Memory is fresh. Include the person who executed it.

  2. Template (fill-in-the-blank):

    • Objective: What were we trying to achieve?
    • Execution: What did we actually do?
    • Metrics: Target vs. actual (show variance)
    • Hypothesis: Most likely explanations for variance
    • Evidence: What data supports the hypothesis?
    • Insight: For next time, here’s what we’ll do differently
    • Confidence: How confident are we in this learning? (High/Medium/Low)

  3. Cross-market comparison: If the campaign ran in more than one market, the template forces a side-by-side. “Why did this work here but not there?”

  4. Qualitative capture: Use specific tags: cultural resonance, messaging clarity, audience targeting, external factors, creative quality, platform fit, etc. These are searchable.

  5. Aggregation: Monthly, pull all postmortems and look for patterns. Do UGC campaigns fail more often on creative mismatch than on targeting? That’s a strategic insight.

The magic: when someone is planning a similar campaign 6 months later, you can pull “all UGC campaigns that failed due to creative mismatch” and they see 3-4 examples. That informs the planning.
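As a rough sketch of what that pull could look like if the postmortems live in tagged, structured records (the records, keys, and tag strings here are made up for illustration):

```python
# Illustrative records only; in practice these come from your postmortem store.
postmortems = [
    {"campaign": "spring-ugc-us", "outcome": "loss", "tactic": "UGC",
     "market": "US", "tags": ["creative mismatch"]},
    {"campaign": "winter-influencer-ru", "outcome": "win", "tactic": "influencer",
     "market": "RU", "tags": ["community-driven tone"]},
]

def find_similar_failures(records, tactic, tag):
    """Pull past losses that match a tactic and a qualitative tag."""
    return [r for r in records
            if r["outcome"] == "loss"
            and r["tactic"] == tactic
            and tag in r["tags"]]

# Planning a new UGC campaign? Surface the prior misses first.
print(find_similar_failures(postmortems, "UGC", "creative mismatch"))
```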

For the “campaign succeeded for wrong reasons” scenario: we track this separately. Every quarter, review wins and ask: “Did we succeed because of our strategy or despite it?” That’s where you find the actual signal.

We started doing postmortems because we were making the same mistakes repeatedly across markets.

What I found works: have the person who executed the campaign present it to someone who didn’t work on it. The outsider asks better questions. “Why did you choose that influencer?” Forces the presenter to either explain their logic or realize there wasn’t any.

For learning across markets: we compare explicitly. “Here’s what worked in Russia, here’s what we tried in the US, here’s what we think the difference was.” Sometimes it’s just market maturity (the US audience is more ad-savvy, the Russian audience more community-driven). Sometimes it’s the creator or the platform. Naming it matters.

One thing that changed everything: we DON’T actually compare wins and losses the same way. For wins, we ask: “Why did this work?” For losses, we ask: “What did we assume that was wrong?” Different questions lead to different insights.

We also started tracking the confidence level of our hypothesis. “We think this failed because of audience targeting” might be high confidence (we have data), while “we think the creator didn’t connect” is mostly a guess. That confidence level tells us how much weight to put on the learning next time.

The hardest part is making it safe to talk about failures. If people think they’ll be blamed, they hide stuff. We made it explicit: postmortems are about learning, not performance review. Changed the conversation entirely.

From a partnership perspective, I think postmortems are valuable to share with creators too—not the blame part, but the learning.

When a campaign fails and we figure out why, I tell the creator: “Here’s what we learned. This is useful for you too.” Creators actually want that feedback. It helps them improve.

I’ve also found that getting creator feedback in a postmortem is valuable. Like, “your content got these reactions, audience said this in comments, here’s what might have resonated better.” That’s intel the analytics team might miss.

Also, when a campaign succeeds unexpectedly, looping back to the creator and saying “here’s what really resonated” helps them calibrate for next time.

Maybe worth building postmortem templates that are creator-friendly too? Not overly technical, but enough structure that insights are captured.

Client postmortems are where we actually lock clients in long-term. Not because we hide failures, but because we show we learn from them.

What we present to clients:

  1. What we planned (objective, strategy, tactics)
  2. What happened (results, variance, market context)
  3. Why (hypothesis, supported by data + qualitative)
  4. What we’re changing (concrete adjustments for next phase)

That last part is critical. Clients don’t care if you failed. They care if you learned and are fixing it.

For multi-market campaigns, we always do a side-by-side comparison postmortem. “Here’s how the Russia campaign performed, here’s the US one, here are the key differences, and here’s what that tells us about each market.”

We also do this thing where we present both the obvious narrative and the alternate explanation. Like, “Most likely, targeting was off [hypothesis 1]. But it could also be that the platform algorithm changed [hypothesis 2]. Here’s how we’d test each.” That shows rigor and honesty.

One practice: we archive all postmortems in a client-accessible format (not super technical, but detailed). Clients actually reference these months later. “Remember when that campaign failed? This new one has a similar structure; did we learn from that?” Keeps us honest.

I really appreciate when brands do postmortems with me instead of just ghosting after a failed campaign.

I had a UGC campaign that underperformed, and instead of just disappearing, the brand reached out and said: “Here’s what we’re seeing, here’s our hypothesis about why, want to try differently next time?” We did round two with adjustments we designed together, and it actually worked.

From my perspective, the things that matter are: engagement patterns in the comments, the sentiment of the feedback, whether the audience is even interested in the product category. Sometimes the metrics are just fine but the vibe is off.

When brands can articulate what the audience said (not just what they did), it helps me understand the mismatch. Like, “audience loved the creator but didn’t care about the product” is different from “audience wasn’t interested because the messaging was off.”

Also, failed campaigns teach me what content doesn’t work, which is sometimes as valuable as knowing what does. Would love to see more brands share that learning.