I’ve been thinking a lot about this, and honestly, I keep coming back to the same conclusion: AI predictions alone aren’t trustworthy, but neither is pure human instinct anymore. The real insight seems to come from combining both.
Here’s the situation: we have access to increasingly powerful AI tools that can analyze massive amounts of cross-market data and predict campaign performance. But these models are good at spotting patterns, not at understanding context. A model might predict that a particular influencer will perform well with a specific audience based purely on historical engagement data. But it won’t catch that the influencer is currently going through a public controversy, or that audience sentiment has shifted in recent weeks, or that an approaching cultural moment makes the whole approach tone-deaf.
So I started experimenting with a hybrid approach: use AI to surface patterns and generate predictions, then run those predictions by human experts who understand the specific markets.
For our US market work, I have a colleague with 8 years of influencer marketing experience who knows the landscape intimately. For the Russian market, I partner with someone based there who understands cultural nuances I’ll never catch as an outsider. I feed the AI predictions to them and ask: ‘Does this make sense given what you know about the market right now?’
It sounds like it would slow things down, but in practice, it doesn’t. The AI does the heavy lifting of analyzing thousands of data points. The humans validate, add context, and flag concerns. It’s faster than doing either thing alone because we’re not wasting time on predictions that don’t make real-world sense.
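To make the division of labor concrete, here's a minimal sketch of that loop. All the names here (`Prediction`, `ExpertReview`, `triage`, the threshold value) are hypothetical; the point is only that the model scores everything, a cheap filter decides what's worth an expert's time, and the expert's verdict and flags travel with the prediction rather than replacing it.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the hybrid review loop described above.
# Names, fields, and the threshold are illustrative assumptions.

@dataclass
class Prediction:
    influencer: str
    market: str
    predicted_engagement: float  # model's score, e.g. expected engagement rate
    signals: list                # which signals the model leaned on

@dataclass
class ExpertReview:
    prediction: Prediction
    approved: bool
    flags: list = field(default_factory=list)  # context the model missed

def triage(predictions, threshold=0.05):
    """Send only predictions above a score threshold to human review,
    so experts aren't buried in low-value candidates."""
    return [p for p in predictions if p.predicted_engagement >= threshold]

preds = [
    Prediction("influencer_a", "US", 0.08, ["historical_engagement"]),
    Prediction("influencer_b", "US", 0.02, ["follower_growth"]),
]
queue = triage(preds)
# Only influencer_a reaches the expert; the expert attaches context
# the model had no way to see:
review = ExpertReview(queue[0], approved=False,
                      flags=["ongoing public controversy"])
```

The design choice worth noting: the expert doesn't overwrite the model's score, they annotate it, so you keep a record of where the two disagreed.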
What’s emerged from this process: the best insights come from questioning why the AI prediction exists. Like, the model predicts Influencer A will outperform Influencer B. The human expert asks: ‘But why? What signals is the model using?’ And sometimes that conversation surfaces something neither the model nor the human would have caught alone.
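The "what signals is the model using?" question can be made mechanical. As a toy illustration (the weights and features below are invented, and real models need real attribution tooling), a linear score decomposes cleanly into per-signal contributions an expert can sanity-check:

```python
# Toy illustration of "why does this prediction exist?" for a linear
# scoring model. Weights and feature values are made up for the example.

weights = {"historical_engagement": 0.6,
           "follower_growth": 0.3,
           "posting_frequency": 0.1}

def explain(features):
    """Break a score into per-signal contributions so a human expert
    can see which signals actually drive the prediction."""
    contributions = {k: weights[k] * v for k, v in features.items()}
    return sum(contributions.values()), contributions

influencer_a = {"historical_engagement": 0.9,
                "follower_growth": 0.2,
                "posting_frequency": 0.5}
score, why = explain(influencer_a)          # score = 0.54 + 0.06 + 0.05 = 0.65
top_signal = max(why, key=why.get)
# top_signal is "historical_engagement": the expert now knows the
# prediction rests mostly on past engagement, which is exactly the
# signal a current controversy would invalidate.
```

That's the conversation in code form: the model says "A over B," the decomposition says "because of past engagement," and the human says "past engagement is stale this month."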
I think this hybrid model is where things are heading. Pure AI can’t make judgment calls about culture and context. Pure humans can’t process the volume of data. But humans + AI that know how to work together? That’s where the competitive advantage is.
But I’m still figuring out the operational side: how to structure these reviews so they don’t become bottlenecks, how to scale expert input when you’re working across multiple markets, and how much weight to give expert opinion when it conflicts with the AI prediction.
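On that last open question, one possible (and very much hypothetical) starting point is a confidence-weighted blend: defer to the model by default, and shift toward the expert in proportion to how sure they are. The weighting scheme below is an assumption for illustration, not a recommendation.

```python
# Hypothetical way to resolve model/expert conflicts: blend the model
# score with the expert's score, weighted by expert confidence.

def blended_score(model_score, expert_score, expert_confidence):
    """expert_confidence in [0, 1]: 0 defers fully to the model,
    1 defers fully to the expert."""
    w = max(0.0, min(1.0, expert_confidence))
    return (1 - w) * model_score + w * expert_score

# Model is bullish (0.8); the market expert is not (0.3) and is fairly
# confident (0.7). The blend leans toward the expert:
s = blended_score(0.8, 0.3, 0.7)  # 0.3*0.8 + 0.7*0.3 = 0.45
```

Even a crude scheme like this forces you to make the trade-off explicit instead of deciding it ad hoc per campaign.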
How are the rest of you thinking about bringing human expertise into AI-driven processes? Are you finding ways to make it work at scale, or does it only work for high-value decisions?