I run real campaigns through every tool we review. If a vendor claims 99% deliverability and I see 78% in my Gmail seedlist, I write that down. If a sequence editor crashes when you paste a 12-step flow, that ends up in the cons.
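To show what that claim-versus-seedlist check looks like in practice, here is a minimal sketch in Python. The seed counts are illustrative placeholders, not results from any specific vendor.

```python
# Illustrative sketch: comparing a vendor's claimed inbox rate against
# observed seedlist placement. Counts below are made-up placeholders.
CLAIMED_RATE = 0.99

seed_results = {  # per-provider inbox counts from one seedlist campaign
    "gmail":   {"inbox": 39, "total": 50},
    "outlook": {"inbox": 41, "total": 50},
}

for provider, r in seed_results.items():
    observed = r["inbox"] / r["total"]
    gap = CLAIMED_RATE - observed
    print(f"{provider}: {observed:.0%} inbox "
          f"(claimed {CLAIMED_RATE:.0%}, gap {gap:.0%})")
# gmail: 78% inbox (claimed 99%, gap 21%)
```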
How I test each tool
This is the protocol every tool comparison on this blog goes through before I write a single word. Same protocol, same seedlist, same campaign content - so the comparison is apples to apples.
- Day 0 - Sign up like a real customer. No vendor handshake, no demo call, no insider account. Pay the same price a normal SDR would pay. Document the onboarding flow as it actually is.
- Day 1-3 - Configure deliverability stack. SPF / DKIM / DMARC (a DNS spot-check sketch follows after this list). Connect inboxes. Run the vendor's warmup if available. Note any setup friction (custom tracking domains, reputation checks).
- Day 4-7 - Build a sequence the way an operator would. 5-step email + LinkedIn cadence. Real prospects from a fresh ICP. Test personalization tokens, conditional branches, fallback values.
- Day 8-14 - Send 1,000+ messages and track everything. Open rate, reply rate, bounce rate. Inbox vs spam placement across Gmail, Outlook, GMX, Free.fr. Bounce typology (hard, soft, catch-all).
- Day 14 - Stress test the edges. Pricing math at 3 seats and 10 seats. Support response time (open a ticket, count hours). EU data residency claims (read the DPA, do not trust the marketing page).
- Day 14+ - Verdict only after the receipts are in. Pros, cons, who it is for, who it is not for. Signed by Vincenzo. Reviewed by Nicolas and Nathalie before publishing.
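For the Day 1-3 authentication step, here is a minimal sketch of the DNS spot-check referenced above, assuming the dnspython library (pip install dnspython). The domain and DKIM selector are placeholders, not any vendor's real configuration.

```python
import dns.resolver

def txt_records(name: str) -> list[str]:
    """Return all TXT strings for a DNS name, [] if the name has none."""
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(rdata.strings).decode() for rdata in answers]

def check_auth(domain: str, dkim_selector: str) -> dict[str, list[str]]:
    """Spot-check SPF, DKIM, and DMARC records before any sending starts."""
    return {
        "spf":   [r for r in txt_records(domain) if r.startswith("v=spf1")],
        "dkim":  txt_records(f"{dkim_selector}._domainkey.{domain}"),
        "dmarc": [r for r in txt_records(f"_dmarc.{domain}")
                  if r.startswith("v=DMARC1")],
    }

print(check_auth("example.com", dkim_selector="google"))
```

An empty list in any of the three slots is exactly the kind of setup friction the Day 1-3 pass is meant to surface before a single prospect email goes out.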
Recent comparisons I tested
A sample of tool comparisons where I ran the deliverability and sequence tests. See our full methodology for the protocol.
What I look for in a deliverability test
Deliverability is the dimension most "top X tools" articles fake the hardest. A vendor will quote a 99% inbox rate and a reviewer will copy-paste the number. I do not. The deliverability test I run on every tool covers the following:
- Seedlist coverage across providers. Gmail consumer + Workspace, Outlook consumer + Microsoft 365, GMX, Free.fr, Libero, Tiscali, plus a long-tail mix of corporate domains. A tool that aces Gmail and flunks Outlook is a tool I cannot recommend to an enterprise buyer.
- Warmup quality, not warmup presence. Most platforms now ship "warmup". Few do it well. I watch the warmup behaviour for a week before sending, then for two weeks during the campaign. Reputation drift is what kills SDR teams in month two - that is what the test catches.
- Bounce typology. Hard vs soft, syntactic vs SMTP vs greylisting, catch-all behaviour. I segment the bounce data because a 5% bounce rate on hard rejections says little about the tool, while a 5% rate on greylisting tells you the sends are damaging your IP reputation.
- Sending IP behaviour. Shared vs dedicated, rotation logic, IP reputation across major blocklists (Spamhaus, SURBL, Barracuda) - a lookup sketch follows after this list. I check before, during, and after the test.
- SPF / DKIM / DMARC handling. Some tools quietly bypass SPF alignment. Some misconfigure DKIM rotation when you connect a Workspace tenant. These are the bugs that wreck deliverability silently.
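The blocklist lookup mentioned above boils down to one DNS query per zone. A minimal sketch, again assuming dnspython; the IP is a documentation address, and SURBL lists domains found in message bodies rather than sending IPs, so it needs a separate domain-based query. Note that Spamhaus may return no useful answer when queried through large public resolvers.

```python
import dns.resolver

DNSBL_ZONES = ["zen.spamhaus.org", "b.barracudacentral.org"]

def blocklist_hits(ip: str) -> dict[str, bool]:
    """Check one IPv4 address against each DNSBL zone. A listed IP
    resolves inside the zone (typically to 127.0.0.x); a clean IP
    returns NXDOMAIN."""
    reversed_ip = ".".join(reversed(ip.split(".")))  # 203.0.113.7 -> 7.113.0.203
    hits = {}
    for zone in DNSBL_ZONES:
        try:
            dns.resolver.resolve(f"{reversed_ip}.{zone}", "A")
            hits[zone] = True   # resolved: the IP is listed
        except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
            hits[zone] = False  # no record: clean on this zone
    return hits

print(blocklist_hits("203.0.113.7"))  # documentation IP, for illustration
```

Running this before, during, and after the two-week campaign is what separates a tool whose IPs start clean from a tool whose IPs stay clean.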
How I work with Nicolas and Nathalie
I am one third of the review pipeline on this blog. The other two:
- Nicolas Finet - CEO of Sortlist + Overloop. Brings the data lens (600K monthly Sortlist requests, 1.2M Overloop sequences), pricing math at scale, and EU regulatory posture.
- Nathalie Saikali - Head of Sales at Overloop. Operator pass. The "would I actually run this on my pipeline next Monday" sanity check that keeps theoretical wins from becoming published wins.
The comparison piece does not ship until all three of us sign off. See the full testing methodology for the protocol every tool goes through.
Get in touch
Tool you want me to test? Pricing change you spotted in our comparisons? LinkedIn DM works. For factual corrections, email corrections@overloop.com with the URL and a source.