How Your Advertising Can Double as Market Research

Mike Armour

Startups usually have limited advertising dollars. Every dime they spend on marketing must stretch as far as possible.

So why not let your advertising do double duty for you? Why not let your advertising promote your product or service and at the same time perform market research?

Of course, you should never undertake an advertising campaign until you've already done market research. Your advertising, however, can carry that research forward and refine it.

The Magic of Split Testing

One way to do this is with a technique called "split testing." Done consistently, split testing gives you vital insights into the decision-making of your potential customers — insights which you can gain in no other way.

The idea of split testing is to monitor the response rate to your advertising and to identify how the response rate changes when you make small variations in an ad or a marketing piece.

This means that you do not mail copies of the same brochure to 5,000 nearby homes. Instead, you send five different mailings of 1,000 each, all of the brochures substantively the same, but with slight differences from one to another. Then you monitor which variation draws the strongest customer response. (In the next issue I'll show you simple ways to do this.)
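
To make the mechanics concrete, here's a quick Python sketch of how such a split might be assigned and tallied. Everything in it is hypothetical: the addresses are placeholders, the response counts are made up, and the per-brochure coupon code mentioned in the comments is just one common way of attributing replies.

    import random

    # Hypothetical mailing list of 5,000 nearby homes (placeholder strings).
    addresses = [f"address_{i}" for i in range(5000)]

    # Shuffle first so each variant reaches a comparable cross-section of
    # homes, then deal the list into five groups of 1,000.
    random.shuffle(addresses)
    variants = {f"Brochure {v + 1}": addresses[v * 1000:(v + 1) * 1000]
                for v in range(5)}

    # Made-up response counts. In practice each brochure would carry its own
    # coupon code, phone extension, or URL so replies can be attributed.
    responses = {"Brochure 1": 23, "Brochure 2": 41, "Brochure 3": 18,
                 "Brochure 4": 29, "Brochure 5": 35}

    for name, group in variants.items():
        rate = responses[name] / len(group) * 100
        print(f"{name}: {responses[name]} replies, a {rate:.1f}% response rate")

The point of shuffling first is random assignment: each variant reaches a comparable cross-section of homes, so a difference in response rate can be credited to the brochure rather than to the neighborhood.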

A Split Test Case Study

Even though I just illustrated the concept of split testing with an example from direct mail advertising, the technique is applicable to virtually any type of advertising that you do. Let me give you an example from one of my own recent advertising campaigns on Facebook.

Facebook ads are very compact, limiting you to one small picture and a message of only a dozen words or so. Yet, within this limited footprint, Facebook facilitates split testing to determine which picture works best for a given ad.

You simply identify the images that you want to test. Then Facebook puts these photos in a rotation, changing the picture in the ad from user to user.

Recently I launched a Facebook campaign to drive more traffic to one of my websites. First I crafted the wording of the ad. Then, for the associated image, I uploaded seven different pictures for the Facebook rotation.

Metrics from My Split Test

One of the beauties of Facebook advertising is that you have quick, easy access to detailed reports about how well your ad performs. (Other online marketing platforms provide comparable data.) For instance, you can know how many people saw the ad on any given day and how many of them clicked on it.

In addition, when you use multiple photos as I did, your report is broken down by picture. You can see the number of clicks that occurred with each picture in the rotation.

Here were the results, with the click-throughs converted into percentages:

  • Image 1 — 19.6%
  • Image 2 — 24%
  • Image 3 — 6.5%
  • Image 4 — 13%
  • Image 5 — 17.4%
  • Image 6 — 17.4%
  • Image 7 — 2.2%
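
If you'd like to reproduce this kind of breakdown from your own reports, the arithmetic is simply each image's clicks divided by the total. A quick Python sketch, using made-up click counts rather than my actual campaign figures:

    # Made-up per-image click counts, standing in for the real report data.
    clicks = {"Image 1": 9, "Image 2": 11, "Image 3": 3, "Image 4": 6,
              "Image 5": 8, "Image 6": 8, "Image 7": 1}

    total = sum(clicks.values())
    for image, count in clicks.items():
        print(f"{image} - {count / total * 100:.1f}% of all click-throughs")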

Now the interesting thing about these images is that they closely resembled one another. They were all headshots of attractive middle-aged men and women, appearing in similar poses, with the photos cropped to display their faces at near identical sizes. Two images were of a man alone, three of a woman alone, and two depicted a male-female duo.

Yet, notice the difference in how viewers responded to the various pictures. Even though the images were substantively alike, the choice of photo made a distinct difference in the likelihood of someone clicking on the ad. Indeed, click-throughs were more than ten times as likely with Image 2 as with Image 7.

What makes this particularly interesting is that Image 2 and Image 7 were both solo shots of gray-haired men, apparently in their 50s, both with similar hair styles and both presenting a successful, professional appearance.

The most significant difference was that Image 2 showed its subject against a plain white background, while the subject in Image 7 was pictured in a retail setting. Nevertheless, this one difference, however insignificant it might seem on the surface, led to strikingly different outcomes.

The Customer's Preference Is What Counts

You should also know that I did not originally intend to include Image 2 in the mix at all. I threw it in at the last moment on a whim, mostly to have two images of men alone in my test. I really expected it to perform poorly. As things turned out, it was the top performer.

This demonstrates why split testing is so critical. I could have used only one picture in my ad, choosing the one which I found most appealing. Or I could have circulated the pictures among several friends, to see which one they thought best. But what I might like or what my friends might prefer does not necessarily coincide with how customers will react.

Had I gone with my instincts, without validating them through customer feedback, I would have probably picked Image 3 and used it for every ad. My instincts, however, would have settled on a picture which ranked next-to-last in terms of click-throughs.

And there's another piece of intriguing feedback from Image 3. It's cropped from the same photo as Image 6. And it's the same woman's face. The difference is that her face is centered in Image 3, whereas Image 6 is unbalanced, with her face in the left half of the picture.

But notice the impact from this seemingly minor change. With the same woman portrayed against the same background, viewers were almost three times as likely to click on the unbalanced image. This was one of only two photos in the set with an offset like this. The other was Image 5. They each garnered 17.4% of the clicks, combining for nearly 35% of the total.

The fact that the two of them performed so well tells me that I need to experiment more extensively with offset images in this ad.

Other helpful information also came out of the split test. Notably, the photos with two people (Images 1 and 5) were disproportionately successful. Even though only two of the seven pictures depicted two figures, they drew 37% of the click-throughs.

The Next Iteration of the Test

So what do I do with all of this information? I will run this same ad again for several days, eliminating Images 3, 4, and 7 from the rotation. With fewer images in the comparison, I will have a significantly larger sample size for each photo.
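
One caution as I trim the rotation: with click counts this small, the gap between two images can easily be statistical noise. A standard two-proportion z-test gives a quick read on that. The sketch below is purely illustrative: the impression and click counts are hypothetical, and this is not a figure the Facebook report computes for you.

    from math import sqrt
    from statistics import NormalDist

    def two_proportion_z(clicks_a, shown_a, clicks_b, shown_b):
        """Two-sided z-test: is the gap between two click rates real?"""
        p_a, p_b = clicks_a / shown_a, clicks_b / shown_b
        pooled = (clicks_a + clicks_b) / (shown_a + shown_b)
        se = sqrt(pooled * (1 - pooled) * (1 / shown_a + 1 / shown_b))
        z = (p_a - p_b) / se
        p_value = 2 * (1 - NormalDist().cdf(abs(z)))
        return z, p_value

    # Hypothetical figures: Image 2 shown 700 times for 11 clicks,
    # Image 6 shown 700 times for 8 clicks.
    z, p = two_proportion_z(11, 700, 8, 700)
    print(f"z = {z:.2f}, p = {p:.3f}")  # a large p-value means: keep testing

A large p-value like that says the two images are statistically indistinguishable at this sample size, which is exactly why collecting a bigger sample for each photo is the right next step.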

And given what I've learned about the impact of unbalanced pictures, coupled with the success of Image 2, I will probably experiment with that image a bit more. I will match its centered version from the first test against an off-center crop, to see which performs better.

By continuing to test along these lines, I can eventually reduce the photos in my rotation to the one or two which have proven most effective. In this way I should be able to steadily improve my click-through rate.

Timing Makes a Difference

I should also mention that the metrics from this brief ad campaign revealed another noteworthy piece of information. Click-throughs were markedly higher on Sundays and Mondays than on other days of the week. Here's the day-by-day breakdown, again stated as percentages:

  • Sunday — 21.3%
  • Monday — 27.7%
  • Tuesday — 10.6%
  • Wednesday — 10.6%
  • Thursday — 12.8%
  • Friday — 8.5%
  • Saturday — 8.5%

These numbers suggest that in the future it would be far more cost-effective to run this ad in a series of Sunday-Monday "mini-campaigns" rather than in campaigns that stretch over one or more weeks.
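
If your platform hands you time-stamped clicks rather than a ready-made weekday summary, the same breakdown is easy to compute yourself. A minimal Python sketch, with a hypothetical click log standing in for the real export:

    from collections import Counter
    from datetime import date

    # Hypothetical click log: the date of each recorded click-through.
    click_dates = [date(2014, 1, 26), date(2014, 1, 27), date(2014, 1, 27),
                   date(2014, 1, 28), date(2014, 2, 2), date(2014, 2, 3)]

    by_day = Counter(d.strftime("%A") for d in click_dates)
    total = len(click_dates)

    for day in ["Sunday", "Monday", "Tuesday", "Wednesday",
                "Thursday", "Friday", "Saturday"]:
        share = by_day.get(day, 0) / total * 100
        print(f"{day} - {share:.1f}%")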

While Facebook's reports make it easy to track vital metrics about your ad, with minimal work you can achieve the same quality of feedback from your advertising efforts in any medium. In the next issue, I'll step you through some ways to use split testing, whatever your advertising medium.


This article first appeared in Encore Entrepreneur inbox magazine on February 3, 2014.

