
Tuesday, July 22, 2014

What's in a Name? A lot!

Just came across this interesting post on the Advanced Marketing Institute website. Thought you would find it interesting.


Unique Marketing Test Reveals Likely Election Winner.
The Secret? One Name Captures the Emotions of Voters.

How do you reach inside the mind of voters to determine who they will vote for, even before they may know themselves?

That is the particularly intriguing question faced by hundreds, even thousands, of pollsters, not to mention politicians. As the public is exposed to the daily, even hourly, barrage of skittishly jumping poll results, there seems to be no way to predict what will happen, except to say it is "virtually a dead heat."

Hakim Chishti, Executive Director of the research firm Advanced Marketing Institute (AMI), believes his firm has picked the likely winner. And it will not be anywhere near a "too close to call" election. In fact, according to Chishti, "one of the candidates is three times more likely to be chosen in this election."

“It’s All in the Sound of Your Voice”

According to Chishti, "Going all the way back to Plato, it has been known that our emotions are affected by the actual sound of words; the tones within words 'mean' something to us, regardless of which language is being spoken, and regardless of whether we even know the language."

Chishti, who is fluent in several Near Eastern languages and a US Government Fulbright Research Scholar, says, "I became interested in the harmonics of languages when, in my travels, I found that people had emotional reactions to languages they did not know."

Phonetic Symbolism: the Key to the Emotional ‘Meaning’ of Words

In linguistics this phenomenon is known as "phonetic symbolism." Marketers and researchers have used this awareness for decades to develop brand names and evaluate marketing communications. Russian researchers discovered that these sounds affect a child while it is still in its mother's womb.

While the effects of spoken words on our emotions can be profound, understanding the specific mechanics of how sounds produce specific emotional reactions has been an elusive goal for researchers.

200,000 Words Analyzed for Emotional Impact

In the 1990s, Dr. Chishti led his team of researchers at the Advanced Marketing Institute in developing special algorithms. Building on research at Bell Labs, Chishti's computer experts analyzed the tonal qualities of more than 200,000 words in the English language, and the specific centers in the mind and body activated by the waveforms made by each sound.

The Advanced Marketing Institute offers a free online tool that evaluates headlines. Site visitors run more than 30,000 headlines through the tool each month to improve the emotional connection of their marketing slogans with potential customers.

The analysis results provide a breakdown of words into three categories: those affecting the emotional, intellectual, and spiritual centers of a person. Based upon these criteria, Chishti's firm provides special computer analysis to Fortune 100 clients and others. The results are stated as an "Index" for each component of the emotional value of a particular series of words.

“Inside the Mind of the Marketplace” (And Voters)

According to Chishti, this type of research provides very deep insight into how customers interact with products, services, and other people. "We call this analysis 'Inside the Mind of the Marketplace,'" Chishti said.

"It was possible for us to evaluate, as just one example," said Chishti, "political candidates' speeches, to discern how emotional, intellectual, or spiritual their communication is."

Chishti also said, "If you also evaluate the blog posts on a candidate's website, you can more fully match the communication style of prospective voters. That is a considerable advantage."

So after an evaluation of all the candidates' speeches and all of the campaigning across millions of miles and thousands of hours of stale dinners and limp shrimp, which candidate does Chishti predict will be the winner?

“It’s all in the name …”

"Of course many factors influence an election," he said. "The area we thought most relevant was the name of the candidate himself. Since this is the most obvious and often-repeated aspect of everyone's connection to a candidate, we wanted to get to the core perception for each candidate. We felt the name provided just such a focus."

And Chishti revealed to us that, purely in terms of the harmonics of the names, one candidate is a clear and overwhelming winner. And that is Senator Barack Obama.

"At least according to the science of linguistics and our computer analysis of how people respond emotionally, Sen. Barack Obama's name has an overall emotional content index value of 150%, whereas John McCain's rating is only 50%."

Candidate’s name “off the charts in terms of emotional appeal…”

"To put that in perspective, even the best copywriters attain an index rating of around 30%. So while McCain's name is not necessarily weak, the harmonic strength of Obama's name is essentially off the charts. You practically could not have invented a more emotionally connective name for a political candidate," said Chishti.

"Without getting too technical about it all," Chishti said, "simply in terms of the emotional, heartfelt connection, common people have three times the 'emotional' connection with Sen. Obama."

Though considered ‘intellectual’, people ‘feel’ him as the more emotional and empathetic candidate

Even more interesting, said Chishti, is that the analysis can further break emotions down into heartfelt qualities, intellectual values, and spiritual values. "Interestingly, even though Obama is considered the 'intellectual' of the candidates, his name conveys only emotional, heartfelt values to people."

"Perhaps that explains to some extent the rising tide of veneration enjoyed by the Obama campaign, and the sense of commonality of purpose and community exhibited by the huge crowds he draws," Chishti said.



This report may be forwarded or republished on any website with attribution to www.aminstitute.com

Published by Advanced Marketing Institute
Carlsbad, CA 92008
Media Contact: press@aminstitute.com
© 2008 Advanced Marketing Institute. World rights reserved.

Please send all comments, questions, and concerns to info@aminstitute.com.

THIS DOCUMENT IS PROVIDED FOR INFORMATIONAL PURPOSES ONLY. INFORMATION PROVIDED IN THIS DOCUMENT IS PROVIDED 'AS IS' WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE. The user assumes the entire risk as to the accuracy and the use of this document.

Tuesday, October 8, 2013

What to Measure, Why and When

Research, Assessment, and Evaluation
by Ben Delaney, © 2013

How many meetings have you attended in which the term “data-driven” was tossed about, its utter essentiality stressed?

And how many times did you have the feeling that no one had a clue what “data-driven” really means?

As one who has made a living doing market research, I learned to like data. Now I just love data. One of my greatest thrills at work in the last year was finally discovering a seasonal cycle in the sales of our social enterprise store. This information helped us plan a sale at the right time that doubled the store receipts that month. That's an example of data-driven decision making.
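The kind of seasonal analysis described above can be sketched in a few lines. The monthly receipts below are invented for illustration; the idea is simply to average each calendar month across years and look for the peak:

```python
from collections import defaultdict

# (year, month, receipts) -- two years of hypothetical monthly store totals
sales = [
    (2011, m, r) for m, r in enumerate(
        [8200, 7900, 8400, 9100, 9800, 10200,
         9700, 9300, 9900, 11500, 13800, 15200], start=1)
] + [
    (2012, m, r) for m, r in enumerate(
        [8500, 8100, 8700, 9400, 10100, 10600,
         10000, 9600, 10300, 12000, 14300, 15900], start=1)
]

# Average receipts for each calendar month across all years
by_month = defaultdict(list)
for year, month, receipts in sales:
    by_month[month].append(receipts)

averages = {m: sum(v) / len(v) for m, v in by_month.items()}
peak = max(averages, key=averages.get)
print(f"Peak month: {peak}, average receipts: {averages[peak]:.0f}")
```

With real receipts in place of the made-up numbers, the peak month tells you when a well-timed sale is likely to land on already-strong demand.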

I can't provide you with a course on statistics. Even if I were qualified to do so, we don't have the space for that level of detail. What I do hope to provide is a framework for thinking about data and evaluation that will make your work a bit easier.

Data doesn't just measure results

I think it is very important to use data to shape programs, both in initial planning and through iterative, ongoing analysis. Changes are driven by the findings, and often the answer to one question raises other questions.

Data-driven programs work this way:
  • Program planning is based on research, with measurement points built in, plus
  • Ongoing, iterative analysis of the collected data is used to refine the program and deepen understanding, then
  • Programs are changed as new knowledge emerges from the data.
That is what a true data-driven organization does.

It takes rigor and discipline to work this way, but the resulting improvements in programs designed like this are worth the effort. That's why all major retailers use a similar model, as pioneered by Wal-Mart.

Planning for data

The very first thing to consider when planning assessment is what you want to know, and why. Having a clear picture of how the information you collect will positively impact your organization makes the process easier and enables good decisions as you design your research or evaluation protocol.

The key to determining what you want to know is evaluating your questions with regard to their impact on your program and your ability to collect meaningful data.

For example, a hypothetical nonprofit, which we'll call Kids.org, is planning a new child nutrition program. Their questions include: What is the dietary value of the average child’s meals? Does smoking in the home affect a child's appetite? What foods are both nutritious and appealing to kids? If these are significant issues, what Kids.org programs will impact them positively?

Five key concerns must be addressed when creating the Kids.org assessment plan, or yours. Let's address each in turn.

1:   What do we want to know, and why?
Assuming that good nutrition promotes good health and better learning, Kids.org wants to know the following about the kids it serves:
  • What is the dietary value of the average child's meals? Are they getting enough of what they need? Are any key nutrients missing from their diet?
  • Does smoking in the home affect a child's appetite? If so, is there a correlation with illness or learning/behavioral issues?
  • What foods are nutritious, inexpensive, and appealing to kids? What can we afford to provide that the kids will like and is good for them?
2:   What information will tell us what we need to know?
  • What the kids eat over some period of time. A detailed diary may be required.
  • A census of smokers in the children's homes.
  • A list of affordable, nutritious foods, taste tested with the kids.
3:   Has anyone already answered this question?
  • There are probably studies available to provide dietary information that is good enough. It will be hard to have enough diaries completed to gather significant data.
  • Kids.org will probably have to identify the smokers' households itself, through a questionnaire or personal contact with the parents.
  • Nutritional information for the foods Kids.org can afford can probably be easily obtained. Taste testing can take place by evaluating orders for food, or servings eaten, and by asking questions.
4:   How do we collect the data we need?
  • Research in online sources, including government, universities, journals, and general web searches.
  • Online, written, phone, or personal surveys.
  • Measuring and tracking food ordered over time. Frequently interacting with clients to ask what they think of the food, your organization, how you do business, and more.
5:   How will we analyze the data to inform our future actions?
  • Someone on staff knows enough to collect and analyze the data. Offices that use programs like SalesForce and QuickBooks can output reports into Excel for analysis. Many CRM/accounting systems offer advanced and customizable reporting to provide much of the data you need.
  • A local college or business school can provide interns who understand how to manipulate data to find the information you need. It's important to have these interns carefully document their methods and cross train staff to take over when the intern leaves.
  • Reporting experts can be hired on contract to periodically provide the information you need from your data.
Once you have the information you sought, you can modify your programs to be even more effective. Kids.org finds that there are many reports on average child diets in various locations, including one from a nearby city with very similar demographics. A quick check with a few clients indicated that they were eating pretty much what the study reports. The study showed that kids ate too much sugar and salt, and not enough fruits and veggies. Kids.org starts an education campaign while also finding particular foods that provide the needed nutrients.

Finding that smoking in the house had harmful effects on the kids, Kids.org started providing information to help parents understand the importance of a smoke-free environment for their kids. They continue to measure smoking vs. achievement to determine the impact of the smoke-free program, and modify it until it has the desired effect.

Kids.org also changed their food offerings and started requesting different foods from their donors. They discovered that small variations in sourcing can make significant improvements in child nutrition.

Added benefits

Not only does Kids.org have a better understanding of its clients, it also has better impact data, and is able to make some changes based on what was found in the data. They can collect data continuously, and evaluate it at any time to assess their work. They can also provide greater insights and impact, which will please their funders. 

All of this applies equally well to marketing. You can, and should, design all of your marketing campaigns with measuring points built in. You can count clicks, calls, and customers. With opportunities like Google's AdWords, Twitter hashtags, specialized landing pages and other tools, you can evaluate the success of online campaigns. Online marketing can change by the minute as new data arrive. Print ads, direct response, press releases, even the Yellow Pages (yes, still good for some businesses), can be measured and adjusted. Obviously, donations provide their own inherent measurement systems, but even in fund raising you can measure other variables that enable you to better craft your message and delivery to improve giving.

The bottom line is the bottom line. No matter how you measure your success, be it families helped, revenue from a social venture, kilowatts saved, jobs created, or new money raised, you can determine significant measuring points. By taking frequent readings, and acting on the data you collect, you can make any organization work better and have greater impact.


Tuesday, September 25, 2007

Testing, Testing, 1, 2, 3

By Ben Delaney © 2007

The importance of testing your ideas and delivery, and how to do it.


How do you know what’s best in your marketing and communications? You test, test, test.

MarCom testing is the research that makes MarCom a science. You can test message, demographic selections, imagery, different media, and different options within a type of media. You do this testing by setting up small, controlled experiments, and evaluating the results.

You can test almost every part of your marketing. Where to place your advertising can be tested by running the same ad in several publications and gauging response. The content of the ad can be tested by running different versions, with different response tracking, in the same publication. Website ideas can be tested by alternating web pages to see which one works better. New product ideas can be tested with focus groups. Pricing can be tested by varying prices to see if one elicits more sales. Almost any marketing idea can, and should, be tested.

Direct response is one of the easiest media to test, so let’s use that as an example. Direct response marketing means that you send an offer directly to your prospect, and attempt to get a response. That response could be a purchase, signing up for a newsletter, a donation, or buying tickets to an event. Direct response can be sent by email, postal mail, even a telegram.

Running a test


Let me give you an example of a very simple test of a direct response campaign. Keep in mind that real-life testing can be much more complex than this, testing each part of a campaign to optimize your results. For important campaigns, I test the list, the message, the presentation, what is in the envelope, pricing, incentives, and even the color of the envelope. In this example, we are testing the quality of our mailing list, delivery methods, and the impact of our message. The same ideas and techniques can be applied to every aspect of your effort.

Let’s assume that you are tasked with raising money for a children’s vaccination campaign in Tracy, California. You need to test your mailing list and your message.

Let’s assume that you have three lists of about 6,000 people each available. One is high-value donors to health campaigns in the Bay Area. Another is parents of kids in school in Tracy. The third is doctors in the Tracy area. Each list has both postal and email addresses.

We take the three lists and do what’s called a random Nth-name selection to cut each into four groups with approximately the same number of names in each. This gives us 12 lists of 1,500 names each. Each list is coded so we know which original list each name came from. (I’m assuming there are no duplicates.) We call these lists A1, A2, A3, A4; B1, B2, B3, B4; and C1, C2, C3, C4.
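A random Nth-name split like this can be sketched in a few lines. The donor names and the fixed shuffle seed below are stand-ins for illustration; the point is that shuffling once and then dealing every Nth name into the same group yields statistically equivalent subgroups:

```python
import random

def nth_name_split(names, groups=4, seed=42):
    """Shuffle the list once, then assign every Nth name to the same group."""
    shuffled = list(names)
    random.Random(seed).shuffle(shuffled)   # fixed seed so the split is repeatable
    return [shuffled[i::groups] for i in range(groups)]

# Hypothetical mailing list of 6,000 coded names
donors = [f"donor_{i:04d}" for i in range(6000)]
a1, a2, a3, a4 = nth_name_split(donors)

print(len(a1), len(a2), len(a3), len(a4))   # four groups of 1,500 names each
```

Because the shuffle happens before the split, each group is a random sample of the original list, which is what makes the later response comparisons fair.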

Now we create two message/image combinations. For example, one mailer has a picture of a sick child and the headline: “Don’t let this happen to the kids in your neighborhood.” Number two shows a group of mixed race children playing together. Its headline reads, “Illness doesn’t recognize income, race, or gender.” We create a printed and email version of each. We set up a website with a landing page for our test group.

The test runs like this. We take lists A1, B1, and C1 and email message one. To lists A2, B2, and C2, we postal mail message one. Lists A3, B3, and C3 get message two by email, and the last group gets message two as postal mail. What we have done is send statistically identical groups one of four possible message/media combinations. The return mailers for the postal efforts are each coded so that we know which list that person’s name was on and which mailing they got. The email versions carry a similar code that we ask respondents to enter on the web page we direct them to.

We expect email response to be faster, so we send the postal mail a week before the email goes out. Now we wait. As the results come in, coded so that we know which list and which message/media combination each response came from, we count. We look for which lists performed best, both in terms of response rate and amount of donation. We wait a predetermined time, typically 2-4 weeks from the first response, and then we tabulate our results.

What we’re looking for is this:

  • When did the response come in? Response rates typically follow a bell curve, so this will tell us when to expect the bulk of the responses for the full effort.
  • How many responded to each test variant? This tells us which message, list, and delivery style worked best.
  • Who responded to each test? This will show us if people in different demographic groups or geographic locations responded differently.
  • What was the value of the response from each group? Specifically, if you are soliciting donations or selling something, this will tell you which variant provided the most valuable response.
  • Anything else in those numbers? Looking closely at your results may yield more information. If you tested two web pages, did one perform better? Did more women than men respond? Did particular zip codes exceed expectations? Did people seem confused or respond in unexpected ways? There’s gold in them numbers. Mine it.
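Tabulating coded responses against these questions can be sketched as follows. The cell codes and donation amounts below are invented; in practice they would come off the coded return mailers or the landing-page form:

```python
from collections import defaultdict

mailed_per_cell = 1500                       # names mailed in each test cell

# (cell_code, donation) -- hypothetical returns, coded by list + message/media
responses = [
    ("A1-msg1-email", 50), ("A1-msg1-email", 25), ("A1-msg1-email", 100),
    ("B2-msg1-post", 75),  ("B2-msg1-post", 40),
    ("C3-msg2-email", 250),
]

# Count responses and sum donations for each test cell
totals = defaultdict(lambda: {"count": 0, "dollars": 0})
for cell, donation in responses:
    totals[cell]["count"] += 1
    totals[cell]["dollars"] += donation

# Response rate and average gift per cell answer the first two questions above
for cell, t in sorted(totals.items()):
    rate = t["count"] / mailed_per_cell * 100
    avg = t["dollars"] / t["count"]
    print(f"{cell}: {t['count']} responses ({rate:.2f}%), avg gift ${avg:.2f}")
```

Comparing the rate and average gift across cells shows at a glance which list, message, and delivery combination earned both the most responses and the most valuable ones.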

When testing is done this way, it shows you which list is good, which message is good, to whom you are appealing, whether a particular message was more effective in postal mail or email, and other results that you can tease out of the statistics.

And don’t consider any result a failure. Testing is designed to show you what doesn’t work, as well as what does. If a test gives you unexpected results, you’ve learned a lot, saved a lot of money, and have new ideas to work with.

Some campaigns are so important you may want to retest to see if your results are consistent. At the end of your testing, you should have a pretty good idea of how to best communicate with your donors. Then you do your big mailing and bank your success.