
You, Inc.

You Don't Want Me As Your Judge, Unless ... Part 5

By Richard B. Barger, ABC, APR




What's the most glaring weakness of entries in communications competitions? It seems to be the section on measurement and evaluation. But this crucial facet of organizational communications doesn't have to be a stumbling block, says Richard B. Barger, ABC, APR, two-time Blue Ribbon Panelist for the International Association of Business Communicators' Gold Quill competition and two-term member of IABC's Accreditation Board. Rich, who also has served five terms as a Division Coordinator for the Gold Quill competition, judges accreditation portfolios and scores of contest entries each year. Today, Rich shows us how to build feedback into communications efforts, and how to demonstrate positive results, in the penultimate article of his series, "You don't want me as your judge, UNLESS ..."

Well, you're about finished with your contest entry or portfolio submission. You've described the problem you set about solving, your marvelous planning process, the terrific rationale. You even highlighted the objectives, so the judge could find them!

But now, you've come to that last part of the Call for Entries ... the part that asks, "How did you measure your project's success in meeting its objectives?"

You may think you've been saved by a corollary question, "If no evaluation mechanism was built in, why not?" But that doesn't let any good communicator "off the hook." Anyway, several of the other entrants in your category will have built-in evaluation. And that will make them look really good in comparison to you.

So, what do you do?

What Are Your Measurements?

Well, first, and easiest, is to write evaluable objectives, then build some form of measurement into the project at the outset. [For more tips, see Part 4 of this series.] This is how you should operate on the job. Even when the boss won't spring for the cost of a big research project, you can -- and should -- include all the forms of evaluation you can manage. Base the project on whatever planning research you have available; use some form of tracking research while the project is under way; measure the results.

Build evaluation right into your project. Otherwise, you won't know whether you hit your target, or why you missed it. If you've thought about project objectives in the right way, the measurement is almost automatic.

I'm Late

But you're preparing the entry at 4:30 p.m. the afternoon it must be overnighted to the competition coordinator. Other than submitting earlier next year, what should you do today? The answer:

Not every evaluation has to be a multivariate, chi-square, triple-regression, beta-curve, Six Sigma monstrosity with a 99.4 percent accuracy rating and a 0.0004 chance of a Type II error with a half twist in the pike position.

There IS such a thing as qualitative evaluation. While formal research is best, informal evaluation is far better than no measurement at all. And in most competitions, good informal evaluation will not only be acceptable, it will rank you ahead of all the entrants who didn't include any.

These poor folks will be of the quantitative school: If the data aren't from a formal, statistically valid survey, they aren't worth reporting. Wrong! Since so many of the entries in any competition will have no evaluation mechanism at all, any evaluation that you can demonstrate immediately moves you toward the front of the line.

Of course, you'll get far more style points if you built some sort of evaluation into the project at the beginning. And if you designed your project around "planning" research, you go to the head of the class. But even informal, qualitative measures are better than nothing, particularly if they were planned in advance.

Comments From Strangers

Were you expecting -- and did you receive -- any unsolicited feedback from the project? Perhaps some letters or phone calls? Or people stopping you in the hall, saying, "Nice job!"? Maybe the boss said, "Now that's exactly what we needed; the Board of Directors felt it was right on target."

Let's face it, this isn't great. But it is somewhat better than absolutely no results, particularly those unsolicited letters or emails. Use some creativity: Perhaps a call to the receptionist or the order department or your marketing unit will reveal that inquiries or sales or registrations were up, or will turn up other anecdotal evidence that your work had a positive impact -- especially if you have a field sales force.

Evidence of Performance

And anecdotal information is fine. Oh, it's not what the lawyers would call "best evidence," but it's far better than NO evidence! If it was part of your evaluation plan, so much the better.

You can also help the process along a bit. Just because the company won't budget for a survey doesn't mean you can't do a little informal research -- for planning, tracking, and results measurement as well. Maybe 10 quick phone calls will give you some feedback about the piece. As you walk up and down the halls, listen to what people are saying about your work ... or ask them!

Ride 'er, Baby!

Piggyback on something else: If the company is doing market research, see if you can include a question or two about your communications efforts. If you're producing a publication, include a plea for comments, or a clip-out feedback form. Throw a reader survey right into the publication itself. The results obviously won't be as predictively accurate as those from a random survey, but they're better than nothing, and they may give you some very good ideas.

The point is, don't feel that soft soundings, informal feedback mechanisms, unsolicited comments, focus groups, or other qualitative sources of information are useless.

I Know My Limitations

They may not have statistical validity, but they are useful forms of evaluation, so long as you understand their limitations. And when you include them as part of the planned evaluation process for your contest work, you show a sophistication about evaluation that many other entrants will lack.

Now, when you write these up, be sure the evaluation specifically supports the project objectives listed earlier in your entry. Otherwise, you've just wasted your effort.
One Look, and I'm Not Impressed

If you tell me about results that were nice but had nothing to do with your project's stated objectives, can you understand why I'm not very impressed?

Be sure to tell the judge why the project had no formal measurement: company policy, budget limitations, a recalcitrant boss, whatever. If you planned the informal measures, or did any sort of planning or tracking research, make that clear. And next time, prepare self-evaluating, quantifiable objectives, and build in some means of determining whether and to what degree you met them -- or, if not, why not.

But don't for a minute think that, just because you don't have hard data, you don't have useful information. Just figure out what it is, and tell the judges. It may be enough.

The wrap-up article of this series will include a few more tidbits on entering competitions, a quick summary, a Q&A, and some final thoughts. If you have questions, email me.
