Using Split Testing to Improve Your Advertising Effectiveness

Mike Armour

In the previous issue I outlined the power of using split testing in your marketing. If you've not read that article, I encourage you to do so. It provides the conceptual backdrop for our topic today.

Fundamentally, split testing is a technique that lets you determine how small changes in your advertising materials affect your response rate. In my earlier article I cited some amazing statistics from a recent Facebook campaign, where merely swapping out two very similar pictures made a ten-fold difference in the number of clicks on the ad.

The structure of Facebook advertising is very conducive to split testing. But you can do it with almost any type of marketing that aims at generating customer responses. In today's article I'll show you some ways to do this. But first, let me lay out some important guidelines.

1. When doing a split test, avoid varying more than one element from one test piece to another. The purpose of split testing is to determine how much difference a single variation makes in the response that you receive. If your test pieces differ from one another in three or four significant ways, your outcome won't tell you which change made the difference.

In my Facebook split test, for example, I tested seven different photos, all with exactly the same copy. The ads were displayed to the same Facebook demographics and were all run in the same time frames. This allowed me to isolate the impact of the photo choice itself. I can now run another test using my best-response photo in every ad. This time, while keeping the photo constant, I will vary the ad copy, trying out several variations of the ad's text.
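
To make this one-change-at-a-time discipline concrete, here is a minimal sketch in Python (the ad fields and values are hypothetical, not from my actual campaign) that builds test variants from a base ad, changing exactly one element per variant:

    # A minimal sketch of one-variable-at-a-time test design.
    # All field names and values below are invented examples.

    base_ad = {
        "photo": "photo_A.jpg",
        "headline": "Grow Your Encore Business",
        "body": "Join other encore entrepreneurs who...",
    }

    def make_variant(base, field, new_value):
        """Return a copy of the base ad with exactly one field changed."""
        variant = dict(base)
        variant[field] = new_value
        return variant

    # Round 1: hold the copy constant, vary only the photo.
    photo_test = [make_variant(base_ad, "photo", p)
                  for p in ["photo_A.jpg", "photo_B.jpg", "photo_C.jpg"]]

    # Round 2: hold the winning photo constant, vary only the body copy.
    copy_test = [make_variant(photo_test[0], "body", text)
                 for text in ["Copy variant one...", "Copy variant two..."]]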

2. Conduct split tests over a reasonably compact timeframe. That is, don't distribute one test piece this month, another next month. If you do, you won't know how much timing itself affects the response rate.

As I write this article, most of the country is shivering under an Arctic blast that has persisted for days. Even UPS is sending out notices that they can't meet delivery schedules due to the cold, ice, and snow. All business is suffering.

Now imagine that you had sent out your first test piece last month, then your second one last week. Your test would give you little by way of worthwhile feedback, since you can't account for how much the blizzard affected your response rate.

Response rates can also be skewed by other calendar events: holiday and vacation seasons, for instance, or the final weeks before a major election, when mailboxes and the airwaves become so overloaded with political advertising that people start tuning out advertising pieces in general. So you want to remove timing as a variable in your split tests whenever possible.

However, there are occasions when what you want to split test is the role that timing itself plays in responses to your ad. If you send out an electronic newsletter, what day (or what time of day) gives you the greatest open rate? What times of the day or what days of the week give you the most success with a radio ad? Are there particular days when your newspaper ad gets a higher response rate? Split tests like this may in fact require several weeks of testing before you have sufficient data to answer your question.

3. Split tests offer little benefit unless you tenaciously capture, track, and evaluate metrics. Don't start split testing until you have the mechanisms in place to let you make informed decisions based on the feedback that your test provides.

The beauty of web-based businesses and internet advertising is that companies like Facebook and LinkedIn can provide many of the critical metrics for you. And if the purpose of an ad or email is to lead people to visit a particular web page, running Google Analytics on that page (a free service) will allow you to monitor how many people see that page each day of your advertising campaign.
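
If you're capturing responses yourself rather than relying on a platform's dashboard, the bookkeeping can be as simple as a tally of pieces sent and responses received per variant. A minimal Python sketch, with invented numbers:

    # A minimal sketch of per-variant response tracking.
    # The variant names and counts are invented for illustration.

    results = {
        "variant_A": {"sent": 500, "responses": 12},
        "variant_B": {"sent": 500, "responses": 31},
    }

    for name, r in results.items():
        rate = r["responses"] / r["sent"]
        print(f"{name}: {r['responses']}/{r['sent']} = {rate:.1%} response rate")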

What Should You Split Test?

There's no need to split test every element of a marketing piece. Overwhelming data, however, demonstrates that changes in certain elements can have a profound impact on customer response rates. These include:

  • The headline in a brochure or on a web page
  • The selection and positioning of photos and graphics
  • The background color
  • The weight of the stock on which the piece is printed
  • The style and color of envelope used in a direct mail campaign
  • The size of a brochure or standalone mailer
  • The style and size of fonts
  • The overall color palette of the piece
  • The way you word your offer to the customer
  • What appears in the upper right-hand corner of a brochure or web page
  • The lead sentence of your advertising copy
  • The opening seconds of a radio or TV ad
  • The way an ad, email promotion, or marketing letter closes
  • The timing and frequency of the ad

Conducting a Split Test

The key with any split test is to be able to determine which variant of your marketing piece had the greatest response rate. This means that you must have some feedback method by which you can know the particular variant to which a customer is responding.

So imagine that you own a small restaurant and you are mailing a promotional piece to 1,000 households in your vicinity. And let's assume that you want to split test the color of the paper stock on which you print your mailing. This leads you to send 500 pieces on one color of stock and the balance on another.

Your promotional mailing might include a coupon for a 30% discount on a meal over the next 30 days. When the coupon is presented, the color of the underlying stock tells you immediately which mailing this patron received.

Or if your stock color is the same for an entire mailing (because you are split testing some other element of your promotional piece), you can print tiny offer codes inconspicuously on the coupon, with each variant of your mailing having a unique offer code.
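
One simple way to manage those codes is a lookup table that maps each offer code back to its variant, so every redeemed coupon can be credited to the mailing it came from. A Python sketch, with hypothetical codes and variant descriptions:

    # Hypothetical offer codes, one per mailing variant.
    OFFER_CODES = {
        "EE101": "blue stock, headline A",
        "EE102": "blue stock, headline B",
    }

    # Tally redemptions as coupons come in.
    redemptions = {code: 0 for code in OFFER_CODES}

    def record_coupon(code):
        """Credit a redeemed coupon to the variant its offer code identifies."""
        if code in redemptions:
            redemptions[code] += 1
        else:
            print(f"Unknown offer code: {code}")

    record_coupon("EE102")
    print({OFFER_CODES[c]: n for c, n in redemptions.items()})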

But what if both colors yield similar results? If the overall response rate is acceptable, you might forgo further testing of stock color and continue to use one or both of the test colors in future mailings.

On the other hand, if you're not satisfied with the response rate, your next mailing might substitute a third color of paper stock for one of the two in your first test. Keep pairing different color combinations like this until you find the one that works best for you.
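
When two variants come in close together, the underlying question is whether the gap is a real difference or just random noise. One standard check (a general statistical technique, not something from this article) is a two-proportion z-test. Here is a self-contained Python sketch using invented counts:

    from math import sqrt, erf

    def two_proportion_z(hits_a, n_a, hits_b, n_b):
        """Two-sided two-proportion z-test: returns (z statistic, p-value)."""
        p_a, p_b = hits_a / n_a, hits_b / n_b
        pooled = (hits_a + hits_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_a - p_b) / se
        # Convert |z| to a two-sided p-value via the standard normal CDF.
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return z, p_value

    # Invented counts: 28 of 500 coupons redeemed vs. 41 of 500.
    z, p = two_proportion_z(28, 500, 41, 500)
    print(f"z = {z:.2f}, p-value = {p:.3f}")
    # A common rule of thumb: a p-value below 0.05 suggests the
    # difference is probably real rather than random noise.

With mailings of only a few hundred pieces, a gap of a point or two in response rate often turns out to be noise, which is one reason a larger mailing or a repeated test may be needed before you declare a winner.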

Web-Based Split Tests

For email advertising campaigns, the most important thing to split test is your subject line, followed by your headline. Your open rate will depend primarily on the impact of your subject line. Once the email is opened, the power of your headline will primarily determine whether people read the rest of the email.

Fortunately, most automated email services make split testing reasonably easy and straightforward. These services will also report how many of your emails were opened.

And if you want to know which headlines in your email campaign work best, set up two otherwise identical web pages to which people will be sent from your email. In emails with the first headline, link to one web page; in emails with the other headline, link to the second page. Then use Google Analytics to see which page receives the most visits.
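
Behind the scenes, a test like that starts with a random split of your mailing list, with each half receiving a different headline and a different landing-page link. A minimal Python sketch, with hypothetical addresses and URLs:

    import random

    # Hypothetical subscriber list and landing pages.
    subscribers = [f"reader{i}@example.com" for i in range(1000)]

    random.shuffle(subscribers)  # randomize before splitting
    half = len(subscribers) // 2
    group_a, group_b = subscribers[:half], subscribers[half:]

    variants = {
        "headline A": (group_a, "https://example.com/landing-a"),
        "headline B": (group_b, "https://example.com/landing-b"),
    }

    for name, (group, url) in variants.items():
        print(f"{name}: {len(group)} recipients -> {url}")

Randomizing before splitting matters: if you simply cut an alphabetized or sign-up-order list in half, the two groups may differ in ways that skew the results.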

As you can see, split testing takes time: time to monitor the metrics, and time to test various combinations over an extended period. I should also mention that the most successful marketers NEVER stop split testing. They do it all the time. Some have staff members who do nothing but design and monitor split tests.

Which is another way of saying that split testing takes effort and time. But the payoff is often tremendous.


This article first appeared in Encore Entrepreneur inbox magazine on February 6, 2014.

