Split Testing: Interpreting An Example

We talked about split testing a while back. However, I didn’t have a sample split test to refer you to at the time.

So, I went back and found one. Let’s take a look at a split test, what was varied, and what we might infer from our results.

The following statistics are from a message we sent to our AWeber Test Drive subscribers to let them know about a new article on our website. The open rate for each variant appears in the right-hand column.

[Image: split test statistics, with the open rate for each variant in the right-hand column]

The complete subject lines were:

  • Learn How to Get More Customers from Free Downloads
  • {!firstname_fix} Learns How to Convert Free Downloads to Customers
  • Converting More Free Downloads to Paid Customers
  • Conversion Secrets for Free Downloads to Paid Customers

Looking at the open rate statistics, we see that the message with the subject “Conversion Secrets for Free Downloads to Paid Customers” garnered the best open rate, at 20.6%.

So What Do We Learn From This?

First of all, all four messages were sent at the same time, so differences in send date and time did not contribute to the differences in open rates. The content of the messages was also identical, so any effects from content filtering would stem from the subject line alone, which is exactly what we’re testing.
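Before attributing the gap entirely to the wording, it also helps to check that the difference is bigger than what random chance would produce. Below is a minimal sketch of a two-proportion z-test in Python; the per-variant send counts and the runner-up’s open rate aren’t shown in the stats above, so the numbers in the example are hypothetical.

    from math import sqrt, erf

    def two_proportion_z_test(opens_a, sends_a, opens_b, sends_b):
        """Pooled two-proportion z-test: is variant A's open rate really higher than B's?"""
        rate_a, rate_b = opens_a / sends_a, opens_b / sends_b
        pooled = (opens_a + opens_b) / (sends_a + sends_b)
        std_err = sqrt(pooled * (1 - pooled) * (1 / sends_a + 1 / sends_b))
        z = (rate_a - rate_b) / std_err
        # One-sided p-value for "A beats B", via the standard normal CDF.
        p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))
        return z, p_value

    # Hypothetical figures: 1,000 sends per variant, 20.6% vs. 17.0% opens.
    z, p = two_proportion_z_test(opens_a=206, sends_a=1000, opens_b=170, sends_b=1000)
    print(f"z = {z:.2f}, one-sided p = {p:.3f}")  # roughly z = 2.06, p = 0.020

With numbers on that scale, the winner’s lead would be unlikely to be noise; with a much smaller list, the same percentage gap might not be.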

The use of the word “Secrets” may have contributed to a greater open rate by implying that the information in the message is not widely known, and is valuable due to that scarcity.

I attribute the success of the next-best subject to personalization.

Including the recipient’s first name didn’t get us as high an open rate as using the word “Secrets,” but it did earn a better open rate than the subjects that used neither “Secrets” nor personalization.
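As an aside for readers unfamiliar with the {!firstname_fix} tag in the second subject: it’s AWeber’s merge tag for inserting each subscriber’s first name at send time. The snippet below is only a rough illustration of how such a merge tag could be expanded, not AWeber’s actual implementation; the personalize function and the “Friend” fallback are made up for the example.

    def personalize(subject_template, first_name, fallback="Friend"):
        """Illustrative merge-tag expansion (not AWeber's actual code):
        swap {!firstname_fix} for a cleaned-up first name, or a fallback
        when no name is on file."""
        name = (first_name or "").strip().title() or fallback
        return subject_template.replace("{!firstname_fix}", name)

    subject = "{!firstname_fix} Learns How to Convert Free Downloads to Customers"
    print(personalize(subject, "maria"))  # Maria Learns How to Convert Free Downloads to Customers
    print(personalize(subject, None))     # Friend Learns How to Convert Free Downloads to Customers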

A future message might use a subject that combines personalization with a psychological trigger such as the word “Secrets” to maximize open rates.