Our CFO’s question was simple: “Why are we paying this much for influencer work when we don’t have benchmark data?”
Fair point. I had regional data that looked good, but when you’re expanding globally, you’re suddenly operating without a reference frame. What’s normal spend for a brand our size in a new market? What ROAS should we expect? How does an influencer fee in one market compare to another?
I realized I was flying blind, presenting projections that felt educated but weren’t really grounded in anything beyond “we think this will work.”
So I started digging. Reached out to people who had actually navigated multi-market campaigns. Found resources and networks that had compiled genuine benchmark data—not guesses, but actual campaign results from comparable brands.
When I had real data, everything changed. Not because the numbers proved I was right, but because I could explain why we were allocating budget the way we were. I could show that our CPM assumptions matched industry standards. That our creator fee structures aligned with what other brands in that market were paying. That our expected ROAS was realistic, not optimistic.
Equally important: the benchmarks showed me where we were potentially overspending and where we were underspending. Some channels we thought were expensive actually delivered better ROI than cheaper alternatives. Some creator tiers we had avoided turned out to be undervalued opportunities for volume.
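If it helps to picture the mechanics, here is a minimal sketch of how that plan-versus-benchmark comparison can be laid out. The markets, metrics, and ranges below are illustrative placeholders, not our actual benchmark data:

```python
# Illustrative only: placeholder markets, metrics, and ranges, not real benchmark data.
# Flags where a planned figure falls below, within, or above an assumed benchmark range.

benchmarks = {
    # market: {metric: (low, high)}
    "Market A": {"cpm_usd": (8.0, 14.0), "creator_fee_usd": (1500, 4000), "roas": (2.5, 4.5)},
    "Market B": {"cpm_usd": (3.0, 7.0),  "creator_fee_usd": (600, 2000),  "roas": (3.0, 6.0)},
}

plan = {
    "Market A": {"cpm_usd": 12.0, "creator_fee_usd": 3500, "roas": 3.0},
    "Market B": {"cpm_usd": 9.0,  "creator_fee_usd": 1800, "roas": 5.0},
}

for market, metrics in plan.items():
    for metric, value in metrics.items():
        low, high = benchmarks[market][metric]
        if value < low:
            status = "below benchmark range (possible underspend)"
        elif value > high:
            status = "above benchmark range (possible overspend)"
        else:
            status = "within benchmark range"
        print(f"{market} {metric}: planned {value} -> {status} ({low}-{high})")
```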
The conversation with the CFO shifted from “convince me this is a good idea” to “here’s how our strategy compares to the market, and here’s where we’re taking smart risks versus where we’re playing it safe.”
Budget got approved significantly faster.
Has anyone else found that having actual competitive and market benchmarks changes not just your credibility with leadership, but also your actual strategy? What was your biggest surprise when you actually had real data?
This is a great example of how data transparency builds trust. When you can show the CFO numbers instead of rhetoric, the conversation becomes a completely different one.
And you know what's interesting? In the hub, we see that brands with access to international benchmarks make decisions much faster. They hesitate less because they know their assumptions are grounded.
A question: when you got those benchmarks, was there a big gap between what you had originally planned and what the data recommended? What conclusions did that push you toward?
A great demonstration of why evidence-based decision making matters. I've noticed that many marketers rely on assumptions instead of data and then wonder why their plan didn't get approved.
I'm curious about the specific metrics: which benchmarks exactly did you use? CPM by creator? Cost per content piece? ROAS by category? And, most importantly, how often do you need to refresh those benchmarks to keep them relevant?
Also: did you revisit your assumptions AFTER the campaign? In other words, did the actual results match the benchmarks?
Very relevant for us. We've just run into the fact that our budgets in Russia and in the new market are structured completely differently, and we don't know whether that's normal or not.
A question: where did you get these benchmarks? Did you go to consultants? Find public reports? Or plug into some network where data is shared?
And it's important to know: when you discovered you were overpaying in a particular channel, how did you turn that into action? Did you simply cut the budget, or did you rethink the strategy for that channel?
This is exactly the conversation I’m having with my team right now. The moment you have benchmarks, you stop being a supplicant asking for budget and become a strategist allocating resources.
What you did tactically, shifting the narrative from approval-seeking to strategy comparison, is sophisticated positioning.
Two things I’d dig into:
First, how did you source the benchmarks? Because quality matters enormously. There are industry reports that are outdated, skewed by a specific subset of brands, or just frankly inaccurate. The source credibility determines whether the CFO takes it seriously.
Second: did the benchmarks reveal any competitive advantages on your side? Or opportunities? Because the best use of benchmark data isn’t just defensive (“we’re in range”) but offensive (“here’s where we’re beating the market”).
Also—did you communicate the benchmarks to your influencer partners? Because if they know they’re in the 70th percentile for creator fees in their market, that changes how you negotiate and partner.
This is interesting from the creator side because it directly impacts how brands negotiate with us. When a brand comes in with actual market data, they tend to be more fair about pricing and more realistic about timelines.
One thing though: when you were looking at “creator fees” as benchmarks, did you account for the difference in content quality and experience level? Because there’s a huge range between a 50k-follower creator and a 500k-follower creator, even in the same category.
Also—did the benchmarks show anything about turnaround time or content deliverables? That’s something I think gets lost in pure dollar benchmarks but actually matters a lot for the actual value exchange.
Have you shared this benchmark framework with your creator community? Might help build more trust if creators understand how you’re pricing their work.
Strong play shifting from anecdotal justification to data-driven allocation. That’s what separates good marketers from great ones—the ability to triangulate between intuition and evidence.
Some nuanced questions:
First, how did you control for brand maturity and category in those benchmarks? Because a DTC beverage brand’s influencer ROI looks different than a SaaS company’s, which looks different than a luxury brand’s. If your benchmarks were too broad, they might be directionally right but specifically misleading.
Second: when you identified areas where the market was underspending, what was your hypothesis about why? Did competitors in that space just not understand the opportunity, or were there structural reasons (like category-specific audience preferences) that explained the gap?
Third—and this is critical for future scaling—did you build a feedback mechanism to continuously update your benchmarks? Because if you’re expanding globally, your benchmark data becomes stale fast. How are you institutionalizing the learning so you can repeat this for the next market?
One more angle: did the benchmark analysis reveal anything about creative format preferences by market? Because allocation efficiency isn’t just about spend—it’s about matching spend to the formats that perform best in each region.