Why I ditched cold emails and started vetting partners through portfolio cases on the hub, and what I missed the first three times

I spent the better part of my first two years trying to figure out whether an agency partner was actually legit or just had great marketing copy. Cold outreach gave me almost nothing to go on: a LinkedIn message, a portfolio link, maybe a call. By the time I realized they didn't deliver what they promised, I'd already handed over work or clients.

Then I started looking at how agencies were actually presenting their case studies on the hub. Not the polished PDFs they send to clients, but the real breakdowns in the community. Who they worked with, what the actual problems were, what didn’t work.

That changed everything for me. Instead of trying to vet someone based on a conversation, I started looking at their footprint here. How do they talk about challenges? Do they own up to mistakes, or do they spin everything as a win? How do other agencies in the community respond to their work?

The first time I used this approach, I connected with a DTC-focused agency in Moscow. They'd posted a case about scaling UGC production across multiple brands, and in the comments I could see how they handled pushback and clarifying questions. That told me more than any sales call ever could.

What I learned the hard way: just because someone is active on the hub doesn’t mean they’re a fit. And just because their case looks great doesn’t mean their processes are actually scalable for partnership work. The third time I misjudged a partner, it was because I got caught up in the impressive numbers and didn’t dig into how they actually managed timelines and handoffs.

Now I have a specific checklist. Case studies alone aren’t enough. I’m looking at how they respond to critique, how they talk about failures, and whether they can articulate the specific gaps they partner with other agencies to fill.

How many of you are still making partner decisions based on initial conversations, versus actually doing homework by reviewing their real work and how they present it to the community?

Your point about reading how they respond to pushback in the comments—that’s the real tell. I’ve seen agencies with incredible case studies get absolutely roasted in the comments for poor communication or mismanagement, and that’s worth more than any reference call.

What I started doing is actually asking potential partners about challenges they’ve faced in cross-border campaigns, and then searching to see if they’ve posted about those specific issues on the hub. If they have, I can actually see their thought process and how they troubleshoot. If they haven’t, or if they only post wins, that’s a flag.

The third-time failure you mentioned—what specifically went wrong with those agency dynamics? Was it timeline misalignment, quality drops, communication breakdown, or something else? I want to make sure I’m adding the right things to my vetting checklist.

I do something similar when I’m evaluating which brands or agencies to work with. I look at how they interact with creators in open discussions, not just in their polished case studies. If an agency is defensive when someone questions their strategy or dismissive of creator feedback, I don’t want to work with them, no matter how impressive their numbers look.

The case studies are marketing. The comments and discussions are reality.

Do you ever ask potential partners directly about their failures or challenges when you’re vetting them? I’m curious if they’re usually honest about it or if they try to spin it.

This is solid due diligence thinking. From a data perspective, what you’re doing is pattern-matching against actual behavior rather than self-reported metrics. That’s significantly more predictive.

One thing worth considering: are you also looking at velocity and consistency of their posts and engagement? Agencies that are actively learning and willing to iterate tend to have a different cadence than ones that just post wins and disappear. That pattern alone can signal whether they’re actually building partnerships or just chasing short-term projects.
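If you wanted to make that cadence check concrete rather than eyeballing it, here's a minimal sketch. Everything in it is hypothetical (the function name, the example timestamps); it just assumes you can collect the dates of an agency's posts and summarizes how frequently and how consistently they show up:

```python
from datetime import date, timedelta
from statistics import mean, stdev

def cadence_profile(post_dates):
    """Summarize posting velocity and consistency from a list of post dates."""
    ordered = sorted(post_dates)
    gaps = [(b - a).days for a, b in zip(ordered, ordered[1:])]
    return {
        "posts": len(ordered),
        "avg_gap_days": mean(gaps),
        # Low spread = steady engagement; high spread = burst-then-vanish
        "gap_stdev_days": stdev(gaps) if len(gaps) > 1 else 0.0,
    }

# Hypothetical examples: a steady biweekly poster vs. a burst-then-vanish one
steady = [date(2024, 1, 1) + timedelta(days=14 * i) for i in range(10)]
burst = [date(2024, 1, 1) + timedelta(days=i) for i in range(5)] + [date(2024, 12, 1)]

print(cadence_profile(steady))  # tight, regular gaps
print(cadence_profile(burst))   # a flurry of wins, then silence
```

A steady profile with moderate gaps and low spread looks like an agency that's actually engaged; a handful of posts in one week followed by months of silence is the "post wins and disappear" pattern you're describing.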

When you reviewed those third-time partner failures afterward, did you go back and look at what you missed in their hub presence? Sometimes the red flags are there; we just don't know what to look for yet.

Have you picked up a partner from the hub using this method yet, or are you still in the evaluation phase with a few candidates?

This is a much more rigorous vetting process than what I see most agencies doing. But I want to push on something: are you weighting the portfolio work equally with the commentary? Because sometimes a case study is genuinely impressive even if the agency had one rough conversation in the comments section.

What metrics are you using to weight portfolio strength against communication patterns?

Also curious—when you say the third partnership failed, did you measure that failure against the signals you now recognize? Or did the failure itself teach you what those signals should be?

I’m still in the cold email phase for finding marketing partners, so this is really helpful. The idea of actually evaluating someone based on how they engage with the community instead of just their pitch is making me realize I’ve been doing this backward.

When you developed your vetting checklist, did you write it down, or is it more of an intuitive pattern recognition at this point?