Last year, I implemented something that I’m convinced should be standard practice but almost nobody does: a structured post-mortem template for every single campaign, regardless of whether it succeeded or failed. And I’ve learned that the template itself is less important than being ruthlessly honest about what you’re documenting.
Here’s the problem I was trying to solve: we’d run campaigns across Russia and the US, and when we’d get together to discuss results, everyone had different narratives about what happened. Someone would say “the influencer was great,” someone else would say “the timing was bad,” another person would say “we didn’t invest enough in the brief.” We were pattern-matching to our own biases instead of actually understanding causation.
So I created a template. But templates aren’t magic—the actual process of filling them out, and doing it together across time zones and language barriers, is where the value lives.
The Template We Use:
Section 1: Setup (The Planning)
- Campaign name and dates
- Overall goal (awareness, consideration, conversion, retention)
- Budget allocated
- KPIs we committed to pre-campaign
- Key assumptions we were testing (“We assume Russian audiences engage higher than US audiences,” or “We assume this creator’s audience has 40% overlap with our target demographic”)
Section 2: Execution (What We Actually Did)
- Creators selected and why
- Brief provided and key messaging points
- Content types (video, carousel, stories, etc.)
- Timing and cadence of posts
- Any mid-campaign adjustments we made
Section 3: Outcomes (The Numbers)
- Actual vs. planned KPIs (side-by-side comparison)
- Engagement metrics (impressions, engagement rate, reach)
- Conversion metrics (clicks, conversions, CAC)
- Revenue impact (if applicable)
- Time-to-result (how long until we saw the KPI)
Section 4: Analysis (Why It Happened)
- For each KPI, did we hit target? If yes, why? If not, why not?
- Was it a planning issue, execution issue, or market issue?
- What surprised us?
- What did creators tell us about audience response?
- What did customer data reveal? (comments, repeat purchase, customer support mentions)
Section 5: Learnings (What We’ll Do Differently)
- One thing we’d definitely do again
- One thing we’d do differently
- One assumption that was wrong
- One insight about the market we gained
Section 6: Library Entry (For Future Campaigns)
- A 2-3 sentence summary of the insight that future campaigns should know
- What creator profile actually works for this goal
- What content type actually converts
- Any guardrails (“Never do X,” or “Always include Y”)
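The six sections above are easy to standardize as structured data so entries stay comparable across campaigns. Here is a minimal sketch in Python; the field names, the 2.0% planned target, and the 10% tolerance are illustrative assumptions, not the author's actual tooling:

```python
from dataclasses import dataclass, field


@dataclass
class KPI:
    """One KPI: the target committed pre-campaign and the actual result."""
    name: str
    planned: float
    actual: float

    def gap_pct(self) -> float:
        """Actual vs. planned, as a signed percentage of plan."""
        return (self.actual - self.planned) / self.planned * 100


@dataclass
class PostMortem:
    """The template's machine-readable parts (Sections 1, 3, 5, 6)."""
    campaign: str
    goal: str                # awareness / consideration / conversion / retention
    assumptions: list[str]   # Section 1: key assumptions we were testing
    kpis: list[KPI]          # Section 3: side-by-side comparison
    learnings: dict[str, str] = field(default_factory=dict)  # Section 5
    library_entry: str = ""  # Section 6: 2-3 sentence summary

    def misses(self, tolerance_pct: float = 10.0) -> list[KPI]:
        """KPIs more than `tolerance_pct` under plan -- Section 4 starts here."""
        return [k for k in self.kpis if k.gap_pct() < -tolerance_pct]


# Example loosely based on the Russia campaign discussed below;
# the planned rate is a made-up target for illustration.
pm = PostMortem(
    campaign="New category launch (RU)",
    goal="conversion",
    assumptions=["Russian audiences engage higher than US audiences"],
    kpis=[KPI("conversion_rate", planned=2.0, actual=0.3)],
)
print([k.name for k in pm.misses()])  # → ['conversion_rate']
```

The point of the `misses()` helper is that Section 4 (analysis) should be triggered by the data, not by whoever has the loudest narrative in the meeting.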
Why This Actually Matters:
I thought the template was the innovation. Turns out, it’s the discipline of filling it out that changes behavior.
Here’s an example. We ran a campaign promoting a new product category in both Russia and the US. The US campaign crushed it (2.1% conversion rate). The Russia campaign flopped (0.3% conversion rate). My first instinct: “The product doesn’t work in Russia, or Russian audiences aren’t ready for it.”
But when I actually filled out the post-mortem structure and asked hard questions:
- Different creators? (Yes, different tier)
- Different messaging? (Yes, we localized more heavily in Russia)
- Different audience targeting? (Yes, Russia was broader, US was tighter)
- Different timing? (Yes, US was during a product launch window, Russia was random)
Turns out it wasn’t the product or the market. It was that we hadn’t committed to the same level of precision in Russia. We’d done a one-off test instead of a structured campaign. When we re-ran the Russia campaign with the same structure, we got 1.8% conversion.
Without the disciplined post-mortem, I would’ve concluded the product doesn’t work in Russia. Instead, I learned that inconsistent execution kills results more than anything else.
How We Actually Do This:
- We start the post-mortem immediately after the campaign ends. Not weeks later, when memory fades. By day 2 or 3, someone (usually the campaign lead) starts filling out the template with fresh data.
- We involve multiple stakeholders in the analysis section. The person who executed wants to explain decisions. The partner from the other market asks questions they might not know the answer to. The analyst looks at data patterns. This cross-market perspective is critical—it stops you from making assumptions.
- We separate “execution feedback” from “learnings.” Execution feedback is about process: “The brief could have been clearer.” Learnings are about market insight: “US audiences engage more with video than carousel.” They’re different, and we document them separately.
- We actually use the library entries for future campaigns. This is where discipline breaks down at most companies. You do all this work, document it, and then the next campaign team ignores the library because they’re on a deadline. We’ve built it into the brief template: “Have you reviewed related post-mortems in the library?” It’s a gate, not a suggestion.
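That gate can even be enforced mechanically rather than on the honor system. A hypothetical sketch, assuming post-mortem library entries are tagged and briefs cite entries by ID (the tag scheme, entry IDs, and `library` structure are all invented for illustration):

```python
# Hypothetical brief gate: a new brief passes only if it cites every
# post-mortem in the library that shares a tag with the campaign.

library = {
    "pm-ru-launch": {"tags": {"russia", "conversion", "new-category"}},
    "pm-us-launch": {"tags": {"us", "conversion", "new-category"}},
}


def related_entries(library: dict, brief_tags: set[str]) -> list[str]:
    """Library entries sharing at least one tag with the new brief."""
    return [eid for eid, entry in library.items() if entry["tags"] & brief_tags]


def gate_brief(library: dict, brief_tags: set[str], cited: set[str]) -> bool:
    """Pass only if every related post-mortem is cited. A gate, not a suggestion."""
    return set(related_entries(library, brief_tags)) <= cited


# A new Russia conversion brief that cites nothing is blocked:
print(gate_brief(library, {"russia", "conversion"}, cited=set()))  # False
```

Whether you wire this into a brief tool or just run it as a checklist, the design choice is the same: reviewing the library becomes a precondition for approval, not a nice-to-have.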
- We share the template and learnings across the team, even campaigns that bombed. Especially campaigns that bombed. A failed campaign documented well is worth more than a successful campaign documented poorly. A success that just gets attributed to luck teaches you nothing.
What Changed:
- Campaign repeatability went up. When we run similar campaigns now, we’re not starting from zero. We’re building on documented insight.
- Faster decision-making. Instead of debating assumptions, we reference what we learned last time.
- Better partnerships. When my US partner sees that I’m actually serious about documenting what happened, they invest more time in analysis instead of just executing.
- Less blame, more learning. When failures are documented without judgment, people stop defending their decisions and start investigating root causes.
The Template Itself Isn’t Special:
Honestly, the structure of the template isn’t the magic. You could use a different one. What matters is:
- You document what you planned to do vs. what you actually did
- You investigate why there’s a gap
- You extract learnings that future teams can use
- You do this for successes and failures equally
My question for the community: How many of you actually do formal post-mortems, and more importantly, do you use them to inform future campaigns? Or do you run campaign after campaign without building institutional knowledge? And for those of you in cross-market teams, how do you handle post-mortems when people aren’t in the same time zone or even the same language?