How do you build a postmortem when a campaign didn't hit targets?

We just wrapped a campaign that didn’t deliver. ROI was 40% below our Q1 average, engagement was weak, and I know there were execution gaps, but I’m struggling to articulate why it underperformed and what we actually learned.

I’ve got raw data—spend, impressions, clicks, conversions—but no structure for turning that into insight. I could blame it on “the creators didn’t engage the audience” or “the timing was off,” but that’s surface-level stuff. I need to dig deeper and extract actual lessons.

What I’m realizing is that after most campaigns I’ve run, I just move on to the next one without a real postmortem. If they hit targets, I celebrate. If they miss, I usually just adjust something for the next round without really understanding what broke.

This time, I want to do it right. I want to:

  1. Identify what specific decisions led to underperformance
  2. Distinguish between “we made a bad call” vs. “we executed poorly” vs. “our assumptions were wrong”
  3. Actually document it so the team can learn
  4. Build a checklist to avoid similar issues next time

I know other marketers must have frameworks for this. Do you run structured postmortems? What does your template look like, and how do you avoid the postmortem turning into “we’ll just do better next time”?

I love this question because it speaks to real team growth. We’ve built postmortems into every campaign at this point, and they’re actually valuable now (they used to be demoralizing).

Here’s our structure, and it’s helped create psychological safety so people actually admit what went wrong:

1. Setup (20 mins)

  • What was the campaign goal? (specific number)
  • What did we project we’d hit?
  • What actually happened?

2. Root Cause Analysis (40 mins)

  • List every decision made before launch
  • For each decision, ask: “Given what we knew at the time, was this the right call?”
  • Separately, ask: “Did we execute this decision correctly?”

That distinction is crucial. You might have made the right strategy call but executed it poorly. Or vice versa.

3. Learning (20 mins)

  • What surprised us?
  • What would we do differently?
  • What couldn’t we have known until we ran it?

4. Action (15 mins)

  • What specific change are we making next time?
  • Who owns it?
  • When do we try it?

The magic is separating decision quality from execution quality. It makes people way less defensive and actually able to learn.

I’d also recommend having someone neutral (not the campaign owner) facilitate the first postmortem. It helps.

Did you involve the creators in any debrief? Sometimes they see exactly what went wrong and can tell you.

As an analyst, I love a good postmortem because it forces rigor. Here’s my template, stripped down:

Performance Analysis (objective)

  • Target: X
  • Actual: Y
  • Delta: Z
  • Which component caused the delta? (Reach? Engagement? Conversion rate?)

Root Cause Investigation (diagnostic)

  • If reach was low: Was it creator reach? Paid media underperformance? Platform algorithm changes?
  • If engagement was low: Was it creative underperformance? Audience mismatch? Brief misalignment?
  • If conversion was low: Was it traffic quality? Product/market fit? Landing page?

Here’s the key: track each metric separately. Don’t just look at overall ROI.

Example: Your ROI was down 40%. Was it because:

  • Reach fell short (you got fewer impressions for the spend)?
  • Reach held but engagement dropped (the creative failed)?
  • Traffic held but conversion fell (audience quality dropped)?

Each answer tells a different story and demands a different fix.
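If it helps, the “which component drove it” question can be made mechanical. Here’s a rough Python sketch with placeholder numbers (not real campaign data), assuming output factors as reach × engagement rate × conversion rate:

```python
import math

# Placeholder funnel metrics for one campaign vs. your baseline; swap in real data.
baseline = {"reach": 50_000, "engagement_rate": 0.04, "conversion_rate": 0.02}
campaign = {"reach": 45_000, "engagement_rate": 0.02, "conversion_rate": 0.02}

# Overall ratio of actual output to baseline output (conversions).
total_ratio = 1.0
for stage in baseline:
    total_ratio *= campaign[stage] / baseline[stage]

# Because the funnel is multiplicative, log-ratios add up: each stage's
# log-ratio is its share of the overall shortfall. (A stage that improved
# while the total fell will show a negative share.)
print(f"overall output at {total_ratio:.0%} of baseline")
for stage in baseline:
    share = math.log(campaign[stage] / baseline[stage]) / math.log(total_ratio)
    print(f"  {stage}: {share:.0%} of the gap")
```

With these made-up numbers, engagement accounts for the large majority of the gap and reach for the rest, which tells you where to dig first.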

Learnings (synthesis)

  • What assumption was proven wrong?
  • What variable matters more than we thought?
  • What do we test next?

What I’d do: Pull the performance data, break it into component pieces (reach, engagement, conversion), and identify which piece was the deviation. That’s your diagnosis. Then ask why.

Can you share the actual numbers? Without seeing whether your issue was reach, engagement, or conversion, I can’t tell you what went wrong.

I run structured postmortems at a higher level. Let me map this to strategy:

Strategic Postmortem Template:

1. Decision Clarity

  • Why did we choose these creators?
  • Why this brief?
  • Why this timing?
  • Why this budget allocation?

For each decision, rate: High confidence / Medium confidence / Low confidence.

Low-confidence decisions that underperform? That’s where your learning lives.

2. Performance Measurement

  • What metric did we predict?
  • What actually happened?
  • Which component drove the variance?
    • Execution variance (we did what we planned, but results differed)
    • Strategy variance (we deviated from the plan, so results differed)
    • Environmental variance (external factors changed the game)

3. Strategic Lesson

  • Was our core assumption wrong? (e.g., “creators at this tier can drive conversion” — but they couldn’t)
  • Was our execution wrong? (e.g., we had the right strategy but briefed creators poorly)
  • Was our timing/environment wrong? (e.g., market shifted unexpectedly)

Only the first category should change your strategy going forward. The other two just need better execution or adaptation.

4. Next Test

  • Based on what we learned, what’s our hypothesis for the next campaign?
  • How will we test it?
  • What data will validate or disprove it?

The postmortem isn’t about blame. It’s about building a compounding knowledge base.

I’ve found teams that do this quarterly build institutional knowledge 3-4x faster than teams that just bounce from campaign to campaign.

Want to walk through your specific campaign breakdown? I can help you map it to this framework.

For a startup, postmortems are survival. We do them for every campaign, and they’re the difference between scaling what works and hemorrhaging money on what doesn’t.

My rough template:

What we expected: Our hypothesis before launch (specific)
What happened: Actual data (specific)
Gap analysis: Which piece was wrong—reach? engagement? conversion?
Why the gap? Root cause (be brutally honest)
What we’ll do differently: Specific next action

Example from a campaign that tanked:

  • Expected: 10K reach, 5% engagement, 2% conversion = 10 sales
  • Actual: 8K reach, 2.1% engagement, 3.2% conversion = 5 sales
  • Gap: Both reach AND engagement were weak
  • Why: The creators we chose were the right size but the wrong audience fit
  • Next: Vet creator audience match first before booking
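For what it’s worth, the arithmetic in that example can be sanity-checked in a couple of lines of Python (using the numbers above):

```python
# Sales = reach x engagement rate x conversion rate
def sales(reach, engagement_rate, conversion_rate):
    return reach * engagement_rate * conversion_rate

expected = sales(10_000, 0.05, 0.02)   # 10 sales
actual = sales(8_000, 0.021, 0.032)    # ~5.4, reported as 5

# Stage-by-stage: reach at 80% of plan, engagement at 42%, conversion at 160%.
print(expected, round(actual, 1))
```

Note that conversion actually beat plan here; the whole shortfall came from reach and engagement, which is exactly why the gap analysis row matters.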

Without this, you just bleed money. With it, you build a playbook.

I’d also recommend keeping every postmortem in one document. After 20 campaigns, you’ll see patterns you never would otherwise.

How many campaigns have you run in total? And do you have historical postmortems, or is this your first real attempt at one?

From a creator’s perspective, I LOVE when brands run postmortems, because the feedback sometimes helps me improve.

But honestly? A lot of brands don’t even ask creators what went wrong. They just look at the numbers and move on.

Here’s what would help you: after the campaign ends, hop on a 20-minute call with 2-3 creators and ask:

  • What feedback did you get from your audience?
  • What felt off about the brief?
  • If you had to do it again, what would you change?
  • Do you know why engagement was lower than expected?

Creators often have qualitative insight that numbers don’t show. Like, I might know my audience thought the product presentation felt inauthentic, but that wouldn’t show in engagement metrics alone.

Then when you’re doing your internal postmortem, you have both quantitative data AND creator insight. That combo is super powerful.

Does your team debrief with creators, or do you usually just ghost them after reporting final metrics?