Tuesday, September 25, 2007

Testing, Testing, 1, 2, 3

By Ben Delaney © 2007

The importance of testing your ideas and delivery, and how to do it.


How do you know what’s best in your marketing and communications? You test, test, test.

MarCom testing is the research that makes MarCom a science. You can test messaging, demographic selections, imagery, different media, and different options within a type of media. You do this testing by setting up small, controlled experiments and evaluating the results.

You can test almost every part of your marketing. Where to place your advertising can be tested by running the same ad in several publications and gauging response. The content of the ad can be tested by running different versions, with different response tracking, in the same publication. Website ideas can be tested by alternating web pages to see which one works better. New product ideas can be tested with focus groups. Pricing can be tested by varying prices to see if one elicits more sales. Almost any marketing idea can, and should, be tested.

Direct response is one of the easiest media to test, so let’s use that as an example. Direct response marketing means that you send an offer directly to your prospect and attempt to get a response. That response could be a purchase, signing up for a newsletter, a donation, or buying tickets to an event. Direct response can be sent by email, postal mail, or even a telegram.

Running a Test

Let me give you an example of a very simple test of a direct response campaign. Keep in mind that real-life testing can be much more complex than this, testing each part of a campaign to optimize your results. For important campaigns, I test the list, the message, the presentation, what is in the envelope, pricing, incentives, and even the color of the envelope. In this example, we are testing the quality of our mailing list, delivery methods, and the impact of our message. The same ideas and techniques can be applied to every aspect of your effort.

Let’s assume that you are tasked with raising money for a children’s vaccination campaign in Tracy, California. You need to test your mailing list and your message.

Let’s assume that you have available three lists of about 6,000 people each. One is high-value donors to health campaigns in the Bay Area. Another is parents of kids in school in Tracy. The third is doctors in the Tracy area. Each list has both postal and email addresses.

We take the three lists and do what’s called a random Nth name selection to cut each into four groups with approximately the same number of names in each. This gives us 12 lists of 1,500 names each. Each is coded so we know which list each name came from. (I’m assuming there are no duplicates.) We call these lists A1, A2, A3, A4, B1, B2, B3, B4, and C1, C2, C3, C4.
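
If you are curious how that split might look in code, here is a rough sketch. The record names and counts are placeholders, not the real lists; a random Nth-name selection is simply a shuffle followed by taking every Nth name.

    import random

    def nth_name_split(names, groups=4):
        """Random Nth-name selection: shuffle the list, then take every
        Nth name, giving cells of nearly identical size."""
        shuffled = list(names)          # copy so the source list is untouched
        random.shuffle(shuffled)
        return [shuffled[i::groups] for i in range(groups)]

    # Placeholder source lists; in practice each holds about 6,000 real records.
    source_lists = {
        "A": ["donor_%d" % n for n in range(6000)],    # Bay Area health donors
        "B": ["parent_%d" % n for n in range(6000)],   # Tracy school parents
        "C": ["doctor_%d" % n for n in range(6000)],   # Tracy-area doctors
    }

    # Code each cell by source list and group number: A1..A4, B1..B4, C1..C4.
    test_cells = {}
    for list_code, names in source_lists.items():
        for i, cell in enumerate(nth_name_split(names), start=1):
            test_cells[list_code + str(i)] = cell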

Now we create two message/image combinations. For example, one mailer has a picture of a sick child and the headline: “Don’t let this happen to the kids in your neighborhood.” Number two shows a group of mixed-race children playing together. Its headline reads, “Illness doesn’t recognize income, race, or gender.” We create a printed and an email version of each. We set up a website with a landing page for our test group.

The test runs like this. We take lists A1, B1, and C1 and email message one. To lists A2, B2, and C2, we postal mail message one. Lists A3, B3, and C3 get message two in email, and the last group gets message two as postal mail. What we have done is send statistically identical groups one of four possible message/media combinations. The return mailers for the postal efforts are each coded so that we know which list that person’s name was on and which mailing they got. The email versions carry a similar code that we ask respondents to enter on the web page we direct them to.
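
To make the assignment concrete, here is a minimal sketch of the treatment plan and the response codes, continuing the cell codes from the split above. The message and medium labels are placeholders.

    # Treatment plan: group number -> (message, medium), mirroring the plan above.
    TREATMENTS = {
        "1": ("message one", "email"),
        "2": ("message one", "postal mail"),
        "3": ("message two", "email"),
        "4": ("message two", "postal mail"),
    }

    def treatment_for(cell_code):
        """Look up the message/medium pair for a coded cell such as 'A1' or 'C4'."""
        return TREATMENTS[cell_code[-1]]

    # The code printed on the reply mailer (or asked for on the landing page)
    # identifies both the source list and the treatment.
    for cell in ("A1", "B2", "C3", "A4"):
        message, medium = treatment_for(cell)
        print("cell %s: send %s by %s, response code = %s" % (cell, message, medium, cell))
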
We expect email response to be faster, so we send the postal mail a week before the email goes out. Now we wait. As the results come in, each coded so that we know which list it came from and which message/media combination was received, we count them. We look for which lists performed best, both in terms of response and amount of donation. We wait a predetermined time, typically 2-4 weeks from the first response. And then we tabulate our results.

What we’re looking for is this:

  • When did the response come in? Response rates typically follow a bell curve, so this will tell us when to expect the bulk of the responses for the full effort.
  • How many responded to each test variant? This tells us which message, list, and delivery style worked best.
  • Who responded to each test? This will show us if people in different demographic groups or geographic locations responded differently.
  • What was the value of the response from each group? Specifically, if you are soliciting donations or selling something, this will tell you which variant produced the most valuable response (see the tally sketch after this list).
  • Anything else in those numbers? Looking closely at your results may yield more information. If you tested two web pages, did one perform better? Did more women than men respond? Did particular zip codes exceed expectations? Did people seem confused or respond in unexpected ways? There’s gold in them numbers. Mine it.
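
As a rough illustration of that tally (the response records and dollar amounts below are invented for the example), counting responses and donation value per cell takes only a few lines of code:

    from collections import defaultdict

    # Invented response records; in real life these come from the coded reply
    # mailers and the landing-page form.
    responses = [
        {"cell": "A1", "donation": 50.00},
        {"cell": "A1", "donation": 25.00},
        {"cell": "B2", "donation": 100.00},
        {"cell": "C3", "donation": 10.00},
    ]

    CELL_SIZE = 1500   # names mailed per cell, per the split above

    totals = defaultdict(lambda: {"count": 0, "value": 0.0})
    for r in responses:
        totals[r["cell"]]["count"] += 1
        totals[r["cell"]]["value"] += r["donation"]

    for cell, t in sorted(totals.items()):
        rate = 100.0 * t["count"] / CELL_SIZE
        print("%s: %d responses (%.2f%%), $%.2f total, $%.2f average gift"
              % (cell, t["count"], rate, t["value"], t["value"] / t["count"]))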

When testing is done this way, it shows you which list is good, which message is good, to whom you are appealing, whether a particular message was more effective in postal mail or by email, and other results that you can tease out of the statistics.

And don’t consider any result a failure. Testing is designed to show you what doesn’t work, as well as what does. If a test gives you unexpected results, you’ve learned a lot, saved a lot of money, and gained new ideas to work with.

Some campaigns are so important you may want to retest to see if your results are consistent. At the end of your testing, you should have a pretty good idea of how to best communicate with your donors. Then you do your big mailing and bank your success.
