Email Testing Inspiration: Tips and a Recent Example

Testing… Sure, you’ve heard about the value of testing in your email marketing campaigns.

You’re familiar with some of the big gains that tests can bring you. Perhaps you’ve even looked over some of the examples of things you can split test and considered doing them.

But there’s a pretty good chance you’re not testing as much as you could, or even as much as you might like to. Most marketers aren’t.

Getting in the habit of testing requires us to think not only about what we want to test in a broad sense, but also what tests to conduct (and how to conduct them) so that the results are useful to us.

Recently, I found a couple of articles that I thought would help you get more out of your existing email testing – or simply start conducting more tests to optimize your campaigns.

Email Testing Tips

Linda shares a presentation she gave on email testing.

While at first glance this presentation seems targeted to retailers and similar businesses, the key points are useful to anyone using email marketing to increase sales.

You’ll note that in the presentation, Linda argues against testing minor things (like the design of your call-to-action button) in favor of testing major changes (like the overall layout of your email).

While I think there is value in testing minor changes (we’ve even tested button design ourselves), making larger changes is a better place to start for most people:

  • The larger swings in your results that you’re likely to see when testing major changes instead of minor ones can help inspire you to continue testing
  • It’s easier to start with broad changes and drill down to smaller ones in future tests than it is to test something minor and try to expand logically to larger tests. (Example: if you test multiple offers in one email vs. one offer, you can later test offer order and location; if all you test is whether adding an exclamation point lifts sales, where do you go from there – 2 exclamation points? 3?)

She also discusses “smart” landing pages. This is a great point that some people miss when testing:

If you test a different offer or content to different groups, make sure you’re not just dumping everybody on the same page.

You can almost certainly gain conversions by making each version of the email you’re testing and the landing page it links to more relevant to each other.
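
One lightweight way to do that is to give each version of the email its own landing URL, tagged so the page (or your analytics) can tell the versions apart. Here’s a minimal sketch – not something from Linda’s presentation; the domain and parameter name are made up:

    from urllib.parse import urlencode

    # Each email version links to the same offer page, but with a
    # "variant" query parameter the page (or your analytics) can use
    # to tailor copy and attribute conversions per version.
    BASE_URL = "https://example.com/offer"

    def landing_url(variant):
        return f"{BASE_URL}?{urlencode({'variant': variant})}"

    print(landing_url("a"))  # https://example.com/offer?variant=a
    print(landing_url("b"))  # https://example.com/offer?variant=b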

How to Test and Track Web Forms

Darren discusses how he uses multiple forms to test what gets people to subscribe to his newsletter.

Not only does he give us an example of how you can track where subscribers are coming from, he also points out a few other uses of web form tracking that you can use to test the effectiveness of form location, web page copy and other aspects of online list-building. Useful stuff, especially if you haven’t done much web form testing.
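
If you want to try something similar, the core of it is just tagging each signup with the form it came from and counting by tag. Here’s a rough sketch in Python, assuming a subscriber export with a hypothetical “source” column – the file and column names are placeholders, not Darren’s actual setup:

    import csv
    from collections import Counter

    # Tally signups by the form (or page) that produced them.
    # Assumes an export like: email,signup_date,source
    # where "source" is a tag you attach to each web form,
    # e.g. "sidebar", "footer", "about-page".
    def signups_by_source(path):
        counts = Counter()
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                counts[row.get("source") or "unknown"] += 1
        return counts

    for source, n in signups_by_source("subscribers.csv").most_common():
        print(f"{source}: {n} signups")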

Of course, there are certainly other things to consider when creating a signup form 🙂

Any Other Email Testing Inspiration?

Have you come across an example of a useful or thought-provoking email test, or tips on testing? Share them below!

10 Comments

  1. Tak

    4/7/2009 5:19 pm

    Justin,

    What do you think about having multiple landing pages, each offering a different item? I saw a site offering an audio, a PDF, and a video, all from the same site, with the author driving different articles back to each one.

    That should make the overall conversion rate better, because the traffic is so laser-targeted.

    Any data on something like that?

  2. Nick Stamoulis

    4/16/2009 2:11 pm

    Testing and trying new things with email marketing is always very important. You don’t want to get too comfortable with one approach.

  3. Jacob

    4/16/2009 3:34 pm

    My big question is this:

    What’s the ‘margin of error’ in these tests – the standard deviation?

    How do we know a test is valid?

    I ran a split test on a list of 30,000+ emails. The results differed by 15%. But the two messages were identical.

    This makes me very skeptical of ANY split test that I do.

    How can we know the results are valid?

  4. Justin Premick

    4/17/2009 9:48 am

    Hi Jacob,

    Good question. Here’s how I look at it:

    Statistical significance does matter. However, in an A/B split test you’re working with 2 data points. The standard deviation is going to be whatever the difference is between those 2 data points. There’s really nothing to calculate. In those single-test cases, I like to make a judgment call – did whatever I tested work well enough to justify doing it more in the future – in other words, to test it again?

    Once you start running a test repeatedly (like we did with our buttons vs. text links test), I do think it’s a good idea to keep track of your data points and check for significance. Here’s a handy site for doing so.
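
    If you’d rather run the numbers yourself, here’s roughly the kind of calculation those significance calculators perform – a standard two-proportion z-test. The counts below are made up to mirror Jacob’s 15% difference, not real data:

        import math

        # Two-proportion z-test: are two response rates significantly
        # different, given how many emails each version was sent to?
        def ab_significance(conversions_a, sent_a, conversions_b, sent_b):
            p_a = conversions_a / sent_a
            p_b = conversions_b / sent_b
            pooled = (conversions_a + conversions_b) / (sent_a + sent_b)
            se = math.sqrt(pooled * (1 - pooled) * (1 / sent_a + 1 / sent_b))
            z = (p_a - p_b) / se
            # Two-sided p-value from the normal distribution.
            p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
            return z, p_value

        # Hypothetical: 15,000 emails per side, 300 vs. 345 responses
        # (a 15% relative difference). p < 0.05 is the usual cutoff.
        z, p = ab_significance(300, 15000, 345, 15000)
        print(f"z = {z:.2f}, p = {p:.3f}")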

    Just out of curiosity: why did you run a split test where the messages were identical in the first place?

  5. David

    4/19/2009 8:30 am

    If your messages are *identical* and only the landing pages differ, you could use Google’s Website Optimizer to run an A/B test and it will do the necessary calculations for you.

  6. Justin Premick

    4/20/2009 9:16 am

    Tak,

    I don’t have any data on that, but in general, the more closely you can connect your offer to your subscribers’ wants, the better your conversion rate will be.

  7. Is Your Thank You Page Awful? Here Are 2 Ways You Could Make It Better - Inbox Ideas: Email Marketing Tips by AWeber

    4/22/2009 11:35 am

    […] noted in a recent post an example of how one of our users tracks the effectiveness of his signup […]

  8. Lori Titus

    5/2/2009 6:56 am

    Justin,

    Actually, running a split test with identical messages sounds like an interesting test of the natural variance within your email list. I think Jacob has a good point.

    If he replicates that experiment on, say, 4-6 emails, and the split test always shows that much variance, then that baseline needs to be taken into account in a real split test: a 15% variation might be ignored, while a 30% one would deserve attention.

    If he replicates that experiment on 4-6 emails, and the split test typically has a much smaller variance, then his original test is just an outlier, and statistically should be ignored.

    I’m sure there is a statistical term for this, although I’m tired enough right now it is not coming to me. Similar to the reason for testing a placebo in a drug study.

    Next time I run my newsletter, I think I’ll give it a try and see what happens. Just out of curiosity.

  9. Justin Premick

    5/5/2009 9:49 am

    Lori,

    That makes a lot of sense – but you definitely need to do it a number of times (as you suggest) before drawing a conclusion. Thanks!
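
    For anyone who wants a feel for that natural variance before mailing anything, you can simulate repeated A/A splits – the same message to two random halves – and see how far apart the counts land by chance alone. The response rate and list size below are placeholders; plug in your own:

        import random

        # Simulate A/A splits: each "subscriber" responds with the same
        # probability on both sides, so any difference is pure chance.
        def aa_relative_diffs(n_per_side=15000, rate=0.02, trials=200):
            diffs = []
            for _ in range(trials):
                a = sum(random.random() < rate for _ in range(n_per_side))
                b = sum(random.random() < rate for _ in range(n_per_side))
                diffs.append(abs(a - b) / ((a + b) / 2 or 1))
            return sorted(diffs)

        diffs = aa_relative_diffs()
        print(f"median chance difference:   {diffs[len(diffs) // 2]:.1%}")
        print(f"95th-percentile difference: {diffs[int(len(diffs) * 0.95)]:.1%}")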

  10. Chuck Rosseel

    10/4/2009 5:04 am

    Thanks for this eye-opening article. It hit home when you wrote, “there’s a pretty good chance you’re not testing as much as you could, or even as much as you might like to”. I have to admit this is true. Thank you for the excellent reminder.