We’ve been working with a few US creators and agencies over the past 3-4 months, and here’s what’s frustrating: everyone talks about metrics, but I’m not sure I’m even tracking the right ones.
Right now, we’re looking at: impressions, engagement rate, clicks to our landing page, and cost per lead. But when I compare performance across partners, something feels off. One partner has lower engagement rate but higher-quality leads. Another has higher volume but lower conversion.
I know vanity metrics are a trap, but I’m drowning in data. My team is sending me dashboards with 30+ metrics, and I’m no closer to knowing if a partner is actually worth continuing to work with.
The other challenge: we’re not just measuring campaign performance—we’re also measuring partnership quality. Like, does this person understand our business? Are they reliable? Can we scale with them? But how do I quantify that in a way that actually informs my decision?
We’re also bootstrapped, so I need to know quickly—within 60-90 days—whether to deepen the partnership or move on. I don’t have months to wait for perfect data.
Which metrics do you actually use to decide if a partner is worth investing in? And how do you know the difference between someone who had an off month vs. someone who just isn’t the right fit?
That's a great question, because I see a lot of brands confuse "good numbers" with "a good partnership."
Here's what I look at when deciding whether to keep working with someone:
Hard metrics:
- CAC (customer acquisition cost) vs. LTV (lifetime value)
- Conversion rate from click to lead (not impressions, the actual conversion)
- Repeat partnership willingness: is the partner ready to work with me again next quarter? If yes, that's a good sign
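To make the hard metrics concrete, here's a minimal sketch of the math with invented numbers (yours will differ):

```python
# Toy numbers for one partner over a quarter -- replace with your own.
spend = 4_000             # total partner cost, USD
clicks = 1_200            # clicks to the landing page
leads = 60                # leads captured
customers = 12            # leads that became paying customers
ltv = 900                 # average lifetime value per customer, USD

cac = spend / customers             # customer acquisition cost
click_to_lead = leads / clicks      # conversion rate, click -> lead
print(f"CAC ${cac:.0f}, LTV:CAC {ltv / cac:.1f}x, click->lead {click_to_lead:.1%}")
```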
Soft metrics (often more important):
- Response speed: how quickly do they answer questions? 48+ hours is a red flag
- Quality of ideas: do they bring 2-3 ideas of their own at the start, or just wait for you to tell them what to do?
- Honesty about results: if something didn't work, do they admit it and propose solutions?
Over 60-90 days, here's what I watch:
Months 1-2: this can be a learning period; they don't have to peak yet
Month 3: this is where the trend shows. If the metrics are improving or stable, you can scale. If they're falling, it's time for a hard conversation.
One more thing: I always ask other partners about this person. If they work well with a competitor or another brand, that says something about their reliability.
I'd recommend focusing on three metrics rather than thirty:
1. Efficiency Metric: Cost per Outcome (CpO)
This isn't CAC; it's the cost per unit of whatever actually matters to you. For SaaS, that might be:
- Cost per qualified lead (not just any lead, but one that passes your qualification criteria)
- Cost per trial signup
- Cost per user still active after 30 days
This is the one metric that's actually meaningful to compare across partners.
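As a sketch (partner names and numbers are invented), the comparison is just spend over the outcome you chose:

```python
# Cost per Outcome: spend divided by the outcome you actually care about.
# Swap "qualified_leads" for trial signups or 30-day active users as needed.
partners = {
    "Partner A": {"spend": 3_000, "qualified_leads": 25},
    "Partner B": {"spend": 2_400, "qualified_leads": 15},
}

for name, p in partners.items():
    print(f"{name}: ${p['spend'] / p['qualified_leads']:.0f} per qualified lead")
```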
2. Consistency Metric: Coefficient of Variation (CoV)
Even if the average is slightly worse, a partner who delivers stable results is often better than a partner with high volatility.
Put simply: track results week over week. If Partner A delivers 8-10 leads per week (consistently) while Partner B delivers 6 one week and 15 the next, Partner A is better for planning.
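The CoV itself is just the standard deviation divided by the mean; a quick sketch with invented weekly counts:

```python
from statistics import mean, stdev

# Eight weeks of lead counts per partner (invented numbers).
weekly_leads = {
    "Partner A": [8, 9, 10, 8, 9, 10, 9, 8],
    "Partner B": [6, 15, 5, 14, 7, 16, 6, 13],
}

for name, weeks in weekly_leads.items():
    cov = stdev(weeks) / mean(weeks)  # coefficient of variation
    print(f"{name}: {mean(weeks):.1f} leads/week on average, CoV {cov:.2f}")
# Lower CoV means a more predictable partner, even at a similar average.
```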
3. Scalability Signal: Engagement Quality Score
It's a combination of:
- Repeat clicks (how many people come from this partner more than once a week?)
- Dwell time on your site
- Likelihood of a second interaction (did they come back?)
This shows not just whether a partner drives traffic, but whether they drive quality traffic.
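There's no standard formula for a score like this, so treat the following as a sketch with weights I picked arbitrarily; the point is collapsing the three signals into one number you can compare across partners:

```python
# A rough Engagement Quality Score on a 0-1 scale.
# The 0.4/0.3/0.3 weights are arbitrary -- tune them to your funnel.
def engagement_quality_score(repeat_click_rate, avg_dwell_seconds, return_rate):
    dwell_norm = min(avg_dwell_seconds / 60, 1.0)  # treat 60s+ as "good"
    return 0.4 * repeat_click_rate + 0.3 * dwell_norm + 0.3 * return_rate

# Example: 20% repeat clicks, 45s average dwell, 10% returned later.
print(f"{engagement_quality_score(0.20, 45, 0.10):.2f}")  # higher = better traffic
```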
Over 60-90 days:
Month 1: collect baseline data
Month 2: watch the trend
Month 3: make the decision
And here's what's critical: control for external factors. If month 2 had a seasonality effect or a viral moment, factor that into the analysis. Otherwise you'll misjudge the partner.
Want to see my spreadsheet? I can share the template I use for my own evaluations.
Alright, so I help clients evaluate partners all the time. Here’s the reality: you should cut this down to 5 metrics maximum, and two of them should be qualitative.
The 3 quantitative metrics that actually matter:
1. Blended CAC (all partners combined) vs. LTV
- This is your north star. Everything else is noise.
- If your blended CAC is dropping or stable, and LTV is stable or rising, you’re winning—regardless of which partner is “best.”
2. Cost per Qualified Opportunity (CpQO)
- Not cost per click. Cost per actual opportunity.
- This is where most teams fail. They lose track of whether the traffic is the right traffic.
3. Attribution-adjusted ROAS (Return on Ad Spend, adjusted for attribution model)
- I use a 40-30-30 model: 40% credit to first touch, 30% to last touch, 30% distributed across middle touches.
- Partners who generate top-of-funnel awareness look better under first-touch attribution, while last touch flatters whoever closed the deal; a single-touch model will bias your read either way.
- Use a multi-touch model, and you’ll make better decisions.
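Here's a minimal sketch of that 40-30-30 split, with invented journeys and spend; the `touch_credits` helper and all names are mine, not a standard library:

```python
# Position-based attribution: 40% first touch, 30% last, 30% split over the middle.
def touch_credits(touches):
    if len(touches) == 1:
        return {touches[0]: 1.0}
    credits = dict.fromkeys(touches, 0.0)
    credits[touches[0]] += 0.4
    credits[touches[-1]] += 0.3
    middle = touches[1:-1]
    if middle:
        for t in middle:
            credits[t] += 0.3 / len(middle)
    else:
        credits[touches[-1]] += 0.3  # two-touch journey: fold middle credit into last
    return credits

# Invented customer journeys: (ordered touchpoints, revenue in USD).
journeys = [(["Partner A", "Partner B", "Partner C"], 500),
            (["Partner B", "Partner C"], 300)]
spend = {"Partner A": 200, "Partner B": 250, "Partner C": 150}

revenue = dict.fromkeys(spend, 0.0)
for touches, value in journeys:
    for partner, credit in touch_credits(touches).items():
        revenue[partner] += credit * value

for partner, cost in spend.items():
    print(f"{partner}: attribution-adjusted ROAS {revenue[partner] / cost:.2f}")
```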
The 2 qualitative metrics:
4. Communication Quality
- Does this person/team understand the ask within 1-2 conversations?
- Do they proactively flag issues, or do you only hear from them when things break?
- Can you have a “no” conversation with them, or do they get defensive?
5. Strategic Alignment
- Do they care about your business goals, or just their performance metrics?
- In contract negotiations, do they ask questions about your product, your ideal customer, your market?
- Or do they just want to know budget?
My 60-90 day decision framework:
- Days 1-30: let them learn. Metrics might not be stellar yet. But check if they’re communicating and asking smart questions.
- Days 30-60: you should see them hitting 70-80% of agreed targets. If they’re at 40%, something’s wrong.
- Days 60-90: you should see growth or consistency. If it’s declining, they’re not the fit.
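If you want the gates explicit, here's a sketch of that check (the thresholds are the ones above; tune them to your own risk tolerance):

```python
# Day-by-day gate from the framework above -- thresholds are judgment calls.
def partner_gate(day, pct_of_target):
    if day <= 30:
        return "learning: judge communication and questions, not numbers"
    if day <= 60:
        return "on track" if pct_of_target >= 0.70 else "something's wrong"
    return "scale or keep" if pct_of_target >= 0.70 else "not the fit"

print(partner_gate(45, 0.40))  # -> something's wrong
```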
Pro tip: do this evaluation in writing at day 30. Don’t wait until day 90. If you see red flags at day 30, you can course-correct immediately instead of wasting 60 more days.
Okay, so from my side—I hate being judged on engagement rate because it’s so fake. Like, I could post something that gets tons of engagement but doesn’t actually mean anything for a brand.
Here’s what I think you should actually care about:
1. Click-through rate to your site. Not engagement, not impressions. How many people actually clicked? That’s real.
2. The quality of the people who clicked. This is hard to measure, but you can approximate it: did they stay on your site for more than 30 seconds? Did they scroll? Did any of them come back the next week? Those are signals that it was real traffic, not bots or accidental clicks.
3. If I actually convert a follower into a customer for you. Like, literally—did anyone buy because of my content? You can probably track this with UTM codes.
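A trivial sketch of that tagging, if it helps (the handle and campaign name are placeholders):

```python
from urllib.parse import urlencode

# Give each creator their own tagged link so conversions trace back to them.
def creator_link(base_url, handle, campaign):
    return base_url + "?" + urlencode({
        "utm_source": handle,
        "utm_medium": "creator",
        "utm_campaign": campaign,
    })

print(creator_link("https://example.com/landing", "janedoe", "q2_launch"))
```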
Honestly, the metrics I’d cut: impressions, likes, comments. None of that matters if nobody actually takes action.
One thing that helps: ask creators directly what they think is actually happening with your audience. Like, I can tell if my followers are interested or just scrolling. If engagement is low, I feel it. But if people are saving the post or asking me questions about the product, that's real interest.
So maybe ask your creators: “Do you feel like this is resonating?” They’ll give you signals that dashboards won’t.
This is a portfolio optimization problem, not just a metrics problem.
Here’s my framework for evaluating partner performance in 60-90 days:
Week 1-2: Establish the baseline model
Define your attribution model (I compare first-touch, last-touch, and a position-based 40-20-40 split: 40% to first touch, 20% across the middle, 40% to last touch). This is critical because different attribution models will rank partners very differently.
Week 3-6: Collect clean data
You need:
- Gross conversions (lead form, trial signup, whatever your conversion event is)
- Cost per conversion, attributed properly
- Conversion quality (which of these converts to paying customer? Which doesn’t churn?)
- CAC payback period (how many months until a customer's cumulative margin covers their CAC; as a benchmark, you want LTV > 3x CAC for customers from this partner)
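As a sketch with invented numbers, payback is just CAC divided by what a customer contributes per month:

```python
# CAC payback in months: how long a customer's margin takes to repay their CAC.
def payback_months(cac, monthly_revenue, gross_margin):
    return cac / (monthly_revenue * gross_margin)

# E.g. $120 CAC, $50/month revenue at 80% gross margin (invented numbers).
print(f"{payback_months(120, 50, 0.80):.1f} months")  # 3.0 months
```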
Week 7-12: Perform variance analysis
The question isn’t “Is Partner A or B better?” It’s “What is the confidence interval around each partner’s performance, and is the difference statistically significant?”
For example:
- Partner A: $120 CAC, ±$30 confidence interval
- Partner B: $100 CAC, ±$50 confidence interval
Partner B’s lower CAC might be statistical noise. Partner A might actually be more reliable.
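A back-of-the-envelope version of that check, using a normal approximation (crude at small sample sizes; the per-conversion cost samples are invented, with means matching the example above):

```python
from math import sqrt
from statistics import mean, stdev

# Observed cost per conversion over the test window (invented samples).
cac_samples = {
    "Partner A": [110, 125, 118, 130, 115, 122],
    "Partner B": [60, 150, 85, 140, 70, 95],
}

for name, costs in cac_samples.items():
    m = mean(costs)
    half_width = 1.96 * stdev(costs) / sqrt(len(costs))  # ~95% CI, normal approx.
    print(f"{name}: ${m:.0f} CAC +/- ${half_width:.0f}")
# Heavily overlapping intervals mean the CAC gap may just be noise.
```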
Decision framework:
- Scale: If efficiency is improving and variance is shrinking → scale aggressively
- Maintain: If efficiency is flat but reliable → keep as baseline
- Test: If you haven’t hit statistical significance yet → extend runway by 30 days, but set a clear go/no-go date
- Cut: If efficiency is declining after the learning curve → move to exit
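Spelled out as a rule of thumb (the trend inputs are judgment calls, not computed here):

```python
# The four-way call above as an explicit rule.
def portfolio_decision(efficiency_trend, variance_shrinking, significant):
    if not significant:
        return "test: extend 30 days with a hard go/no-go date"
    if efficiency_trend == "improving" and variance_shrinking:
        return "scale aggressively"
    if efficiency_trend == "flat":
        return "maintain as baseline"
    if efficiency_trend == "declining":
        return "cut: move to exit"
    return "keep watching"

print(portfolio_decision("declining", False, True))  # -> cut: move to exit
```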
The qualitative factors (these are tiebreakers, not primary signals):
- Responsiveness to strategic feedback
- Proactive optimization suggestions
- Alignment with your product vision
Reality check: if after 90 days you’re still uncertain, the partner isn’t performing well enough. Good partners create certainty through clear results.