The last time I went on a short cruise the service was very good, and you couldn’t help but notice how hard the staff was working. But – just in case you didn’t – they spent a lot of time teaching us how to answer the company’s customer satisfaction “survey.”
They explained that the results affect their compensation and their future with the company, and they walked us through the question topics and the rating scale in detail (one to ten, with ten being best). Although the questionnaire said ten was for the best service and one for the worst, the way it really works, they explained, is that ten is best and nine is "worst"; in other words, nothing but a ten counts.
This situation isn’t limited to the travel sector, of course. If you’ve bought a car lately, you may have been coached through a customer satisfaction “survey” by the salesperson, or invited to fill one out only if you were happy with the service. One such “survey” I saw recently said, “If for any reason you feel you cannot answer ‘Very Satisfied’ and ‘Definitely Recommend,’ then please call our Manager… so that we may resolve your concerns immediately.”
Another said “How are we doing?… An 8 is great but, 9 or 10 means you’ll visit us again!!”
It’s not surprising that staff might take matters into their own hands and be proactive when they believe these scores are critically important to their jobs and compensation. Many of these “surveys” may be used within the sponsoring companies to drive service improvements; if so, the open-ended follow-up questions are of more interest to them than the numerical scores. These companies may also run a separate research program to get a true measure of customer satisfaction.
The practice of coaching may or may not be a problem for the sponsoring companies. But we in the market research industry should be worried about the effect their “survey” execution has on our ability to accurately measure opinions. After all, what message does this send to research participants in general? We tend to be very impatient with people who don’t perform to our expectations in online surveys, but when consumers are coached on how to answer a survey, does that breed cynicism among the public about the whole process? Why should people answer our surveys honestly and thoughtfully when they’ve experienced for themselves how much coaching goes on, and therefore how unreliable survey results must be?
The people I was travelling with weren’t bothered by this; they were pretty cynical about research in general. “Don’t people come to you with a research project and tell you what answers they want you to deliver?” I was asked. Well no, they don’t; and if they did, we wouldn’t oblige. But perhaps it’s not surprising that this could be a common assumption, given some of the “surveys” people are exposed to.
Should there be some industry standard to address this? Should surveys where coaching is part of the process be labelled as such? “Employees had the opportunity to discuss this survey with respondents before they took it” for example? Or, do we just distance ourselves from this, viewing these “surveys” as a completely different type of research from the “real” research we do, and not buying the argument that it calls into question every type of research in the public’s mind?
Yes, the service was great on my trip – but this traveler was a non-responder when asked to rate it on a scale of one to ten.