I’ve been thinking about this for weeks now, and I realized we’ve been approaching ROI prediction backwards. We evaluate creators, they perform, and then we’re always surprised by the results, good or bad. But we rarely connect what we saw at the beginning to what happened at the end.
So I started building what I’m calling a ‘creator scoring rubric’ for our campaigns. Not just engagement rates and follower counts, but actually trying to predict which partnerships will generate real business value.
Here’s what I’ve included so far:
The obvious stuff:
- Engagement rate (but weighted by comment quality, not just likes)
- Follower authenticity (I check for bot followers using a few tools)
- Audience composition (do their followers match our target demo?)
The less obvious stuff:
- Content consistency: How often do they post? Is the quality stable or all over the place?
- Audience loyalty: How many repeat commenters do they have? Are the comments thoughtful or just emojis?
- Brand compatibility track record: Have they worked with similar brands? How did those collabs perform (if public)?
- Conversion signal: Do they ever mention product links, discount codes, or specific CTAs in their content? Or is it all abstract brand vibes?
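To keep myself honest about the weights, I sketched the composite score as code. This is only a rough sketch: the signal names and weights below are my placeholder guesses (the "scoring by feel" problem I get into next), with each signal normalized to a 0-1 scale.

```python
# Rough sketch of a weighted creator composite score.
# Signal names and weights are placeholder guesses, not validated values.

CRITERIA_WEIGHTS = {
    "engagement_rate": 0.20,        # weighted by comment quality, not raw likes
    "follower_authenticity": 0.15,  # share of followers that pass bot checks
    "audience_match": 0.15,         # overlap with our target demo
    "content_consistency": 0.10,    # posting cadence and quality stability
    "audience_loyalty": 0.15,       # repeat commenters, thoughtful comments
    "brand_track_record": 0.10,     # past collabs with similar brands
    "conversion_signal": 0.15,      # links, codes, explicit CTAs in content
}

def score_creator(signals: dict[str, float]) -> float:
    """Each signal is pre-normalized to 0-1; returns a 0-100 composite."""
    total = sum(CRITERIA_WEIGHTS[name] * signals.get(name, 0.0)
                for name in CRITERIA_WEIGHTS)
    return round(100 * total, 1)

# Hypothetical creator profile, just to show the shape of the input
print(score_creator({
    "engagement_rate": 0.7, "follower_authenticity": 0.9,
    "audience_match": 0.6, "content_consistency": 0.8,
    "audience_loyalty": 0.5, "brand_track_record": 0.4,
    "conversion_signal": 0.3,
}))
```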
The tricky part:
I don’t have historical data yet on how these signals correlate to actual ROI. I’m essentially guessing which weights matter. I’ve been talking to other marketing folks, and it sounds like most people are doing the same thing—scoring by feel rather than data.
But here’s what I’m wondering: if I could look at case studies or benchmarks from creators who actually crushed campaigns vs. creators who flopped, I could start building a real predictive model. Like, “creators with X engagement authenticity + Y audience loyalty + Z conversion signal tendency historically deliver M% ROI.”
I know Holy Marketing has case studies and real campaign data—but I haven’t seen a public breakdown of what actually separates high-ROI partnerships from mediocre ones.
Has anyone here built something similar? Or do you have a different approach? What metrics are you actually using to decide whether a creator partnership is worth the investment?
This is an interesting question! I organize a lot of partnerships, and I often see brands and creators talking about completely different things when they discuss success.
For me the main thing isn't metrics, it's real resonance. I look at: does the creator's audience talk about the product after the post? Are there organic conversations? Do people in the comments actually click through the links?
Your 'repeat commenters' idea is great, because it shows the audience genuinely engages with the creator's content instead of just scrolling past.
Maybe what we need is a shared database of successful case studies, where brands and creators get matched through the community? I'd love to see real examples.
Great data request. I work on ROI for influencer campaigns, and I can tell you: most brands don't measure it correctly.
The problem: they look at vanity metrics (likes, comments) but not at bottom-line results (sales, cost per acquisition, lifetime value).
Here's what I recommend tracking:
- Discount code redemption rate: if you gave the creator a promo code, how many people used it? This shows real impact.
- Click-through rate: how many people clicked the link? (This should be at least 0.5-1% for a good creator.)
- Conversion rate: of the people who clicked, how many bought? (Usually 2-5% for a UGC creator.)
- Cost per acquisition: did the creator's fee pay for itself in profit? (CPA should be lower than through your other channels.)
- Customer lifetime value: do the customers acquired through this campaign keep buying, or was it one-time?
These are the metrics that show real ROI. But you need solid data tracking to get them.
Your scoring rubric is a good start, but I'd recommend focusing on these conversion metrics rather than engagement rates, because high engagement doesn't always equal high sales.
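To make those concrete, here's a rough sketch of how I'd compute them from campaign data. The function, field names, and example numbers are purely illustrative:

```python
# Rough sketch of the bottom-line metrics described above.
# Field names and the example numbers are illustrative only.

def campaign_roi(creator_fee: float, impressions: int, clicks: int,
                 orders: int, code_uses: int, avg_order_value: float) -> dict:
    ctr = clicks / impressions                            # aim for roughly 0.5-1%+
    conversion_rate = orders / clicks if clicks else 0.0  # often ~2-5% for UGC
    redemption_rate = code_uses / impressions             # pick whatever denominator you track consistently
    revenue = orders * avg_order_value
    cpa = creator_fee / orders if orders else float("inf")
    roi = (revenue - creator_fee) / creator_fee
    # lifetime value isn't computed here; it needs repeat-purchase data over time
    return {"ctr": ctr, "conversion_rate": conversion_rate,
            "code_redemption_per_impression": redemption_rate,
            "cpa": cpa, "roi": roi}

print(campaign_roi(creator_fee=2000, impressions=150_000, clicks=1200,
                   orders=45, code_uses=90, avg_order_value=60))
```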
One more important point: when you collect case studies, make sure you're comparing like with like.
One creator might shine in awareness campaigns (lots of reach, but low conversion).
Another creator is better at performance marketing (less reach, but a higher conversion rate).
A third creator is good at brand building (a long-term effect).
So your scoring rubric should differ depending on the campaign goal. Until it does, the comparison won't be fair.
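A rough sketch of what I mean by goal-specific rubrics; the goals, signal names, and weights are all placeholders you'd tune yourself:

```python
# Sketch: separate weight profiles per campaign goal, so a creator is scored
# against the objective they're actually being hired for. Weights are illustrative.

GOAL_WEIGHTS = {
    "awareness":      {"reach": 0.40, "engagement_quality": 0.30, "audience_match": 0.30},
    "performance":    {"conversion_signal": 0.45, "audience_match": 0.30, "engagement_quality": 0.25},
    "brand_building": {"audience_loyalty": 0.40, "content_consistency": 0.30, "brand_fit": 0.30},
}

def score_for_goal(signals: dict[str, float], goal: str) -> float:
    """Signals are 0-1; returns a 0-100 score under the chosen goal's weights."""
    weights = GOAL_WEIGHTS[goal]
    return round(100 * sum(w * signals.get(name, 0.0) for name, w in weights.items()), 1)
```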
Second question: do you segment creators into types? As in, some creators are good for brand awareness, some for conversions, some for community building?
That feels critical to me, but I don't know how to systematize it. How do you do it?
Okay, I want to give you the creator perspective because I think brands often miss this.
When you’re building your scoring rubric, remember: everything you measure, we feel. If you’re scoring us on ‘conversion signal tendency’, that means you’re looking at whether we aggressively push products. Some of us refuse to do that because our audience trusts us.
So your rubric might penalize creators who are actually higher quality—the ones who integrate products naturally instead of pushing them hard.
I think your approach is smart, but I’d be careful not to optimize for the wrong thing. Don’t end up only working with creators who are basically just sales channels. Those exist, but their audiences often trust them less.
Maybe have two scoring tiers? One for high-push performance campaigns, one for brand-building collaborations? They require different creator profiles.
Also—and I hesitate to say this because I want to help you—but be careful asking creators directly about their past performance. A lot of creators will exaggerate or straight-up lie about what previous campaigns did. It’s not malicious, it’s just that not everyone tracks numbers well.
Better approach: ask for a portfolio of past work, and then independently verify. Look at the actual posts, check comments, see if there are discount codes still active. Do your own research instead of taking their word for it.
And honestly? I respect brands way more when they do this. It shows they’re serious.
This is the right direction. Let me give you the advanced version:
What you need is a multi-stage scoring model. Not just one rubric, but a sequential qualification process:
Stage 1 (Primary Filters - automated or tools-based):
- Audience authenticity (bot detection, demographics match)
- Engagement quality (sentiment analysis on comments, not just count)
- Content consistency (posting frequency, quality variance)
Stage 2 (Secondary Signals - manual review):
- Brand compatibility history
- Audience overlap with your customer base
- Content style fit
Stage 3 (Predictive Scoring - based on historical data):
- Apply weighting based on your internal case study analysis
- Calculate predicted performance range
- Decide on go/no-go threshold
Stage 4 (Micro-test if needed):
- Small pilot campaign to validate predictions
- Update scoring model based on actual results
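Here's a rough sketch of that sequential flow in code; the thresholds and field names are placeholders, not recommendations:

```python
# Rough sketch of the four-stage qualification flow.
# Thresholds and field names are placeholders, not recommended values.

def qualify(creator: dict) -> str:
    # Stage 1: hard filters, automated or tools-based
    if creator["bot_share"] > 0.25 or creator["audience_match"] < 0.4:
        return "reject at stage 1"
    # Stage 2: manual review outcomes recorded as booleans
    if not (creator["brand_fit_ok"] and creator["style_fit_ok"]):
        return "reject at stage 2"
    # Stage 3: predicted performance from whatever weights you currently trust
    if creator["predicted_roi"] < 0.2:   # go/no-go threshold (assumed)
        return "reject at stage 3"
    # Stage 4: small paid pilot before committing the full budget
    return "run micro-test, scale up if pilot ROI confirms the prediction"
```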
The key: you build the model iteratively. Start with weights based on reasonable assumptions. Run campaigns. Measure actual ROI. Adjust weights. Repeat.
After 20-30 campaigns, your model should be significantly more predictive than random guessing.
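A minimal sketch of the "adjust weights" step, assuming you log each campaign's pre-campaign signal scores and its realized ROI (the numbers below are toy data, not benchmarks):

```python
# Sketch of the "measure, then adjust weights" loop: regress realized ROI
# on pre-campaign signal scores once you have enough finished campaigns.
import numpy as np

# rows = past campaigns, columns = pre-campaign signal scores (0-1)
X = np.array([[0.7, 0.9, 0.3],
              [0.5, 0.6, 0.8],
              [0.8, 0.4, 0.6],
              [0.3, 0.7, 0.7],
              [0.9, 0.5, 0.2]])                # toy data, not real benchmarks
y = np.array([0.35, 0.80, 0.50, 0.60, 0.20])   # realized ROI per campaign

# Least-squares fit with an intercept. With 20-30 campaigns this starts to
# beat hand-set weights; with only a handful of rows it's mostly noise.
design = np.column_stack([np.ones(len(y)), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
print("intercept:", coef[0], "fitted signal weights:", coef[1:])
```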
Have you thought about what your minimum sample size needs to be to have statistical confidence in any weights you derive?
One more technical point: make sure you’re accounting for selection bias in your analysis.
Example: if you only work with creators who score above a certain threshold, you won’t have data on what happens with lower-scoring creators. So you can’t know if low score = low ROI, or just = no data.
You need some controlled experiments. Occasionally work with a creator that scores slightly lower than your threshold, just to test. Otherwise your model gets increasingly confident but increasingly wrong.
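A tiny sketch of that idea, which is essentially epsilon-greedy exploration; the threshold and the 10% exploration rate are arbitrary placeholders:

```python
# Sketch of "occasionally work below the threshold", i.e. epsilon-greedy
# exploration. The threshold and 10% exploration rate are arbitrary.
import random

THRESHOLD = 70       # composite score normally required to book a creator
EXPLORE_RATE = 0.10  # share of slots reserved for below-threshold tests

def select(creator_score: float) -> bool:
    if creator_score >= THRESHOLD:
        return True
    # sometimes book a lower-scoring creator so the model gets data there too
    return random.random() < EXPLORE_RATE
```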
This is how most recommendation systems break down—they’re optimized on historical success data, but they haven’t tested their assumptions.