PR research is like buying low and selling high: almost nobody does it.
I started in the PR business when the height of communication technology was the IBM Selectric typewriter with, omigosh, correction tape built right in!
I remember gathering with my colleagues around a different mechanical behemoth, agog that we were now able to send documents to clients electronically, over the telephone! It required encasing a single page in a plastic sleeve and clamping it to a rotating drum. Then, in only 30 minutes or so, that whole page would somehow appear magically on a matching machine anywhere in the world.
My first cell phone had a 15-pound shoulder-mounted battery pack. So when the Philadelphia Chapter of the Public Relations Society of America asked me in October to join several other "seasoned" practitioners on a panel about the future of PR, I felt better prepared to discuss the past. Still, I had to give some thought to my assigned topic: the future of PR research. In my blog this month, I reprise my conversation with a roomful of my peers on a subject of intense concern among public relations practitioners. Summary: the past is prologue to the future.
This issue of my Update also includes links to a couple of other authors' thoughts on the subject. In 1983, the year after I was Accredited in Public Relations, James Grunig complained about the lack of research in PR practice:
"Although considerable lip service is paid to the importance for program evaluation in public relations, the rhetorical line is much more enthusiastic than actual utilization. I have begun to feel more and more like a fundamentalist minister railing against sin; the difference being that I have railed for evaluation in public relations practice. Just as everyone is against sin, so most public relations people I talk to are for evaluation. People keep on sinning … and PR people continue not to do evaluation research."
A study conducted by Judy Van Slyke at Syracuse University compared public relations to a certain "model of an immature and ineffective science" and concluded that "public relations fits the model."
Dr. Walt Lindemann, with whom I worked at Ketchum PR, conducted a landmark study in 1988, which concluded that "most public relations research was casual and informal rather than scientific and precise" and that "most public relations research is done by individuals trained in public relations rather than by individuals trained as researchers."

Under any other circumstances, I wouldn't even consider pointing to comments and studies that are 20 or more years old. But in this case, it's perfectly safe, because nothing much has changed. And I don't need a study to say that with confidence; I see it every day. Everyone agrees that research for planning, monitoring and evaluating the success of PR programs is very important. But in practice, it's like buying low and selling high in the stock market: hardly anyone does it. These are the reasons I hear most frequently:
They don't know that they're supposed to.
They don't have the time.
They don't have the budget.
They don't know how.
The truth is that even if most practitioners had the time and money, they wouldn't know how to do either formative or evaluative research. Here's a fair description of what passes for research in public relations in the vast majority of situations:
Formative research is the fact-finding in which we seek to understand the dynamics of our problem, how people relate to it, what motivates them to take the kind of action we want them to take and so forth. In the real world, it usually consists of some combination of unsubstantiated assertions about the way things are, preceded by some form of the phrase: "The boss wants this."
Objectives are set in terms that can't be measured. These are common: "Educate the public," "generate enthusiasm" and "get the word out." Strategies are skipped or given lip service in phrases such as: "Position the client as a leading provider of end-to-end solutions in the (insert name of industry here) space." Don't get me started on the use of "position" as a verb.
Tactics consist of whatever they always do and have the capabilities to produce.
Results are measured by trumping up whatever random evidence turns up and tying it to the aforementioned objectives, such as generating enthusiasm and educating the public. Or they just default to how many clips they got and what the coverage would have cost as paid advertising.
So what's in my crystal ball about the future of PR research? Well, I don't mean to be a curmudgeon or a pessimist — this turned out to be a big laugh line in my presentation to the PRSA meeting — but in more than 30 years of practice, in all sorts of economic conditions, I haven't seen much change.
However, for those who do get this message and put it into practice, there's very good news on the horizon: the tools for conducting research at every stage of a program are better, cheaper or even free, and becoming more plentiful and sophisticated every day.
It's not just the capability of the Internet to supercharge secondary literature research; you can do very sophisticated primary research through polls on forums and Web sites and by using cheap or free survey tools like SurveyMonkey. You can discover trends, test messaging and monitor results with tools we couldn't even have imagined 30 years ago: TweetDeck and HootSuite, which let you analyze the popularity of tweets; Google Analytics for Web sites; Google Alerts; and on and on.
This will never change: It will always be true that those practitioners who use research properly to design, monitor and evaluate their programs will sit at the table with the big boys and girls. Those who don't, won't, and they will forever be doomed to "getting the word out there."