Structuring post-mortems across markets: how to document what actually happened (success or failure)

Last year, I implemented something that I’m convinced should be standard practice but that almost nobody does: a structured post-mortem template for every single campaign, regardless of whether it succeeded or failed. And I’ve learned that the template itself is less important than being ruthlessly honest about what you’re documenting.

Here’s the problem I was trying to solve: we’d run campaigns across Russia and the US, and when we’d get together to discuss results, everyone had different narratives about what happened. Someone would say “the influencer was great,” someone else would say “the timing was bad,” another person would say “we didn’t invest enough in the brief.” We were pattern-matching to our own biases instead of actually understanding causation.

So I created a template. But templates aren’t magic—the actual process of filling them out, and doing it together across time zones and language barriers, is where the value lives.

The Template We Use:

Section 1: Setup (The Planning)

  • Campaign name and dates
  • Overall goal (awareness, consideration, conversion, retention)
  • Budget allocated
  • KPIs we committed to pre-campaign
  • Key assumptions we were testing (“We assume Russian audiences engage at a higher rate than US audiences,” or “We assume this creator’s audience has 40% overlap with our target demographic”)

Section 2: Execution (What We Actually Did)

  • Creators selected and why
  • Brief provided and key messaging points
  • Content types (video, carousel, stories, etc.)
  • Timing and cadence of posts
  • Any mid-campaign adjustments we made

Section 3: Outcomes (The Numbers)

  • Actual vs. planned KPIs (side-by-side comparison)
  • Engagement metrics (impressions, engagement rate, reach)
  • Conversion metrics (clicks, conversions, CAC)
  • Revenue impact (if applicable)
  • Time-to-result (how long it took before we saw movement on the KPI)

Section 4: Analysis (Why It Happened)

  • For each KPI, did we hit the target? If yes, why? If not, why not?
  • Was it a planning issue, execution issue, or market issue?
  • What surprised us?
  • What did creators tell us about audience response?
  • What did customer data reveal? (comments, repeat purchase, customer support mentions)

Section 5: Learnings (What We’ll Do Differently)

  • One thing we’d definitely do again
  • One thing we’d do differently
  • One assumption that was wrong
  • One insight about the market we gained

Section 6: Library Entry (For Future Campaigns)

  • A 2-3 sentence summary of the insight that future campaigns should know
  • What creator profile actually works for this goal
  • What content type actually converts
  • Any guardrails (“Never do X,” or “Always include Y”)
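
If you keep these entries somewhere searchable instead of in a doc, the template maps naturally onto structured data. Here’s a rough sketch in Python; the field names are illustrative, not a real schema we use:

```python
# A rough sketch of the post-mortem template as structured data, so entries
# can live in a shared repo and be filtered later. Field names are illustrative.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class KPI:
    name: str                      # e.g. "conversion_rate"
    planned: float                 # the target we committed to pre-campaign (Section 1)
    actual: float | None = None    # filled in after the campaign (Section 3)

@dataclass
class PostMortem:
    # Section 1: Setup
    campaign: str
    market: str                    # e.g. "RU" or "US"
    dates: tuple[str, str]
    goal: str                      # awareness / consideration / conversion / retention
    budget: float
    kpis: list[KPI] = field(default_factory=list)
    assumptions: list[str] = field(default_factory=list)
    # Section 2: Execution
    creators: list[str] = field(default_factory=list)
    content_types: list[str] = field(default_factory=list)
    adjustments: list[str] = field(default_factory=list)
    # Section 3: Outcomes live in KPI.actual plus whatever extra metrics you track
    # Section 4: Analysis
    analysis: list[str] = field(default_factory=list)
    # Section 5: Learnings
    do_again: str = ""
    do_differently: str = ""
    wrong_assumption: str = ""
    market_insight: str = ""
    # Section 6: Library entry
    library_summary: str = ""
    guardrails: list[str] = field(default_factory=list)

    def kpi_gaps(self) -> dict[str, float]:
        """Section 3's side-by-side comparison: actual minus planned, per KPI."""
        return {k.name: k.actual - k.planned for k in self.kpis if k.actual is not None}
```

The payoff of structuring it this way is that the Section 6 library stops being a folder of docs and becomes something you can filter by goal and market before writing the next brief.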

Why This Actually Matters:

I thought the template was the innovation. Turns out, it’s the discipline of filling it out that changes behavior.

Here’s an example. We ran a campaign promoting a new product category in both Russia and the US. The US campaign crushed it (2.1% conversion rate). The Russia campaign flopped (0.3% conversion rate). My first instinct: “The product doesn’t work in Russia, or Russian audiences aren’t ready for it.”

But when I actually filled out the post-mortem structure and asked hard questions:

  • Different creators? (Yes, different tier)
  • Different messaging? (Yes, we localized more heavily in Russia)
  • Different audience targeting? (Yes, Russia was broader, US was tighter)
  • Different timing? (Yes, the US ran during a product launch window; Russia’s timing was arbitrary)

Turns out it wasn’t the product or the market. It was that we hadn’t committed to the same level of precision in Russia. We’d done a one-off test instead of a structured campaign. When we re-ran the Russia campaign with the same structure, we got 1.8% conversion.

Without the disciplined post-mortem, I would’ve concluded the product doesn’t work in Russia. Instead, I learned that inconsistent execution kills results more than anything else.

How We Actually Do This:

  1. We start the post-mortem immediately after the campaign ends. Not weeks later, when memory fades. On day 2 or 3, someone (usually the campaign lead) starts filling out the template with fresh data.

  2. We involve multiple stakeholders in the analysis section. The person who executed wants to explain decisions. The partner from the other market asks questions they might not know the answer to. The analyst looks at data patterns. This cross-market perspective is critical—it stops you from making assumptions.

  3. We separate “execution feedback” from “learnings.” Execution feedback is about process: “The brief could have been clearer.” Learnings are about market insight: “US audiences engage more with video than carousel.” They’re different, and we document them separately.

  4. We actually use the library entries for future campaigns. This is where discipline breaks down at most companies. You do all this work, document it, and then the next campaign team ignores the library because they’re on a deadline. We’ve built it into the brief template: “Have you reviewed related post-mortems in the library?” It’s a gate, not a suggestion (there’s a sketch of what that gate could look like after this list).

  5. We share the template and learnings across the team, even for campaigns that bombed. Especially for campaigns that bombed. A failed campaign documented well is worth more than a successful campaign documented poorly. A success that just gets chalked up to luck teaches you nothing.
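
On point 4, if your briefs live in a tool rather than a doc, the gate can be an actual check rather than a checkbox. Here’s a rough sketch of what that could look like, reusing the PostMortem sketch above (the function names and brief fields are hypothetical):

```python
# Rough illustration of the "gate, not a suggestion" idea from point 4, reusing
# the PostMortem sketch above. Names are hypothetical; the real check can live
# in whatever tool holds your briefs.
def related_entries(library: list[PostMortem], goal: str, market: str) -> list[PostMortem]:
    """Prior post-mortems with the same goal in the same market."""
    return [pm for pm in library if pm.goal == goal and pm.market == market]

def can_approve_brief(brief: dict, library: list[PostMortem]) -> bool:
    """Block approval until every related library entry is listed as reviewed."""
    required = {pm.campaign for pm in related_entries(library, brief["goal"], brief["market"])}
    reviewed = set(brief.get("post_mortems_reviewed", []))
    missing = required - reviewed
    if missing:
        print("Blocked: review these post-mortems first:", sorted(missing))
        return False
    return True
```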

What Changed:

  • Campaign repeatability went up. When we run similar campaigns now, we’re not starting from zero. We’re building on documented insight.
  • Faster decision-making. Instead of debating assumptions, we reference what we learned last time.
  • Better partnerships. When my US partner sees that I’m actually serious about documenting what happened, they invest more time in analysis instead of just executing.
  • Less blame, more learning. When failures are documented without judgment, people stop defending their decisions and start investigating root causes.

The Template Itself Isn’t Special:

Honestly, the structure of the template isn’t the magic. You could use a different one. What matters is:

  • You document what you planned to do vs. what you actually did
  • You investigate why there’s a gap
  • You extract learnings that future teams can use
  • You do this for successes and failures equally

My question for the community: How many of you actually do formal post-mortems, and more importantly, do you use them to inform future campaigns? Or do you run campaign after campaign without building institutional knowledge? And for those of you in cross-market teams, how do you handle post-mortems when people aren’t in the same time zone or even the same language?

This is the kind of infrastructure that separates mature marketing teams from ones that are just running campaigns. I love that you’ve built post-mortems into the process, not as an add-on. It makes me think about how community members could benefit from shared templates like this. What if the platform had a post-mortem template that everyone could use? Then people could contribute their insights, and others could learn without reinventing the process every time.

The part about involving multiple stakeholders in the analysis is crucial. That’s how you catch blind spots. When someone from the other market asks a question, it exposes assumptions you didn’t even know you were making. I try to facilitate these conversations when I’m introducing partners, but it usually happens randomly. Having a structured template that requires cross-market perspective would actually change behavior.

Perfect breakdown of root cause analysis. The section where you separated “execution feedback” from “learnings” is exactly right. Too many post-mortems are just venting sessions (“the brief was unclear”) without actual market insight. Your example about Russia vs. US conversion rates illustrates this perfectly: without structured investigation, you draw the wrong conclusion.

One thing I’d add to your Section 4: you should quantify the relative impact of different variables. When multiple things went wrong (different creators, messaging, targeting, timing), how do you weight which had the biggest impact? We sometimes do post-hoc A/B analysis to try to isolate variables, but it’s complex. How are you handling that?

This is exactly what we’re trying to build. We’ve been running campaigns somewhat haphazardly, learning as we go, but losing knowledge because nothing’s documented. The idea of a library of learnings that compounds over time is compelling. But I’m wondering: how do you handle disagreement in post-mortems? If someone thinks the campaign underperformed because of X and someone else thinks it’s Y, how do you navigate that in a collaborative setting?

The point about doing this across time zones is real for us. We have people in Moscow and people in Europe and the US. Doing a synchronous post-mortem meeting is painful. Have you moved to async post-mortems where people document their perspective and then read each other’s thoughts? Or do you force synchronous discussion because the context-sharing is that valuable?

This is mature marketing practice. You’re essentially building institutional memory, which is what separates companies that scale from ones that don’t. One thing I’d emphasize: make sure your post-mortems have teeth. At most companies, the post-mortem is a feel-good exercise where you document learnings that don’t actually change behavior. You mentioned that future campaign teams check the library—how do you enforce that? Is it a gate in your approval process, or just a suggestion?

Your Section 4 analysis is where most post-mortems fall apart. Everyone has a theory about why something happened, but without controlled testing or deep data analysis, it’s all speculation. You mentioned the Russia vs. US campaign example, but how did you actually prove that inconsistent execution was the problem? Did you control for other variables, or are you inferring based on the similarity of the second campaign structure?

We do something similar at the agency level, and I’ve found that post-mortems are actually a recruiting and retention tool. When people see that failures are documented without blame, and that learnings are actually used, they feel like their work compounds. They’re not just executing one campaign; they’re building something. The fact that you do this across markets makes it even more powerful because it legitimizes the cross-market practice.

I wish brands did this, honestly. Most of the time I deliver content and I never hear what actually happened after. Did it convert? Did people engage? Was the audience the right fit? If creators actually got a post-campaign debrief (not criticism, just data), I could learn and improve for the next campaign. Right now it’s like throwing content into a black hole and hoping for the best.

The willingness to document failures equally with successes is huge. That would actually let me understand what works for different audiences and products, instead of just guessing based on my gut.