Got a bit of data for you.

Good marketers test. You know this. They’ll try out lots of different approaches to see what works.

For e-mail marketing, this means splitting your list into random, equal-ish segments, sending a slightly different e-mail to each, and tracking the click data.
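If you want to see the mechanics, here’s a minimal sketch of that random split in Python. The list size and addresses are placeholders I’ve made up for illustration.

```python
import random

# Hypothetical mailing list; addresses are placeholders.
subscribers = [f"person{i}@example.com" for i in range(1000)]

random.shuffle(subscribers)      # randomise the order in place
mid = len(subscribers) // 2
segment_a = subscribers[:mid]    # gets subject line A
segment_b = subscribers[mid:]    # gets subject line B
```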

From the results, you can tell which variation works best. There’s a neat little statistical tool called a t-test you can use to judge the significance of the result. Roughly speaking, if you run a t-test on the data and it comes out significant at the 10% level, it means that if the two e-mails were actually equally good, there’d only be a 10% chance of seeing a difference as big as the one you saw through random chance alone.

I won’t go into details here. Maybe in another post, if I’m feeling particularly mathsy. But that’s the basic idea. Worth noting: a result normally needs to be significant at the 5% level before you should feel comfortable saying one variation is actually better than the other.
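For the curious, here’s roughly what running that test looks like in Python with SciPy. The click numbers below are invented, not from my campaign. Strictly speaking, a two-proportion z-test is the textbook tool for click-through rates, but a t-test on the 0/1 click indicators gives you essentially the same answer on lists this size.

```python
import numpy as np
from scipy import stats

# Hypothetical results: variation A got 60 clicks from 1,000 sends,
# variation B got 85 clicks from 1,000 sends.
clicks_a = np.concatenate([np.ones(60), np.zeros(940)])
clicks_b = np.concatenate([np.ones(85), np.zeros(915)])

# Welch's t-test: doesn't assume the two groups have equal variance.
t_stat, p_value = stats.ttest_ind(clicks_a, clicks_b, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# p below 0.05 -> comfortable calling a winner; p around 0.10 -> suggestive only.
```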

Anyhow, that’s the background. Here’s the data.

Not long ago, I ran a two-e-mail campaign.

On the first e-mail, I split-tested two different subject lines, neither of which ended up performing noticeably better than the other.

On the second e-mail, I split each of those two groups down further. One half of each group had a subject line focused on urgency (because the offer was closing), and the other half had ‘Re: [the first subject line]’.
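In code terms, the second-round split just halves each first-round segment again, giving four cells. Same caveats as the earlier sketch: the segments and addresses here are placeholders.

```python
import random

# Placeholder stand-ins for the real segment_a and segment_b from round one.
segment_a = [f"a{i}@example.com" for i in range(500)]
segment_b = [f"b{i}@example.com" for i in range(500)]

def halve(segment):
    """Randomly split a segment into two equal-ish halves."""
    random.shuffle(segment)
    mid = len(segment) // 2
    return segment[:mid], segment[mid:]

a_urgency, a_reply = halve(segment_a)  # urgency subject vs 'Re: [first subject]'
b_urgency, b_reply = halve(segment_b)
```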

There’s a lot of anecdotal evidence that Re:-style subject lines pull a lot of clicks, but in this test the Re: line got destroyed. In both groups, the urgency-focused subject outpulled it, significant at the 0.5% level.

I’ve yet to test it against more general curiosity-based subject lines, but the data here is pretty clear: when you can use urgency, use urgency. It works like gangbusters.