How Should I Put Together a Mixed-Mode Questionnaire?

By Pete Cape, Director, Global Knowledge

Mixed mode means different things to different people, but in this instance we are thinking about studies where the respondent can choose which mode to respond in (or is approached in one mode or another). These types of studies are often employed to reduce non-coverage errors, or possibly to reduce data collection costs.

The case of mixing telephone and online interviews is one where both objectives can be met. If you undertake a telephone contact study (assuming you include a mobile frame as well as a landline frame) you will get close to 100% coverage. In the US, and in many other developed countries, you will find that over 85% of those you contact have access to the internet.

At this point you have a choice. You can either hang up, avoiding what would be an expensive telephone interview, and replace that person with a “demographic twin” from an online access panel, or invite the person on the line to complete the survey online and send them a link to it.

The second of these, of course, is less efficient since many will not actually complete the online study, so the first option is usually preferred. However, this approach introduces measurement differences, since nominally identical people (the demographic twins) will answer the same question differently depending on the mode employed, even when they hold the same opinion. This measurement error may be further exacerbated when the choice of data collection mode is systematically related to the survey topic, i.e. is itself biased.

Don Dillman comments in his textbook on the subject*: “Thus, this type of mixed-mode survey raises the difficult question of whether the gains in coverage and response rates will offset any measurement differences that will occur.”

Modal differences in the data collected – that is, differences attributable entirely to the mode of interviewing – are due to three factors:

  1. The presence or absence of an interviewer. Social desirability is the big issue here. We all want to give a good account of ourselves, and few want to admit to holding undesirable opinions, especially to a stranger who is writing them down! We are all generally agreeable and therefore tend to say “yes” even when the truthful answer is “no” – this is acquiescence bias, and it is probably more prevalent in interviewed modes. Then there is control. In an interview the interviewer is, generally, in control of the pace of the survey and of when the next question is asked – which may come before the participant has finished giving all their answers to the previous question.
  2. The difference between aural and visual communication. In a telephone survey the participant is more likely to recall the last thing said to them; in a self-administered survey the first things on a list may be given more weight. This is the issue of primacy vs recency. The limits of human memory make some question types – picking one from many, or ranking many items, for example – very difficult to do on the telephone, whilst they are relatively easy to do online.
  3. Differences in how people respond to scalar questions. In interviewed modes the participant is much more likely to use the extremes of a scale, whilst in self-administered surveys they will be more neutral in their responses. Part of this is due to wanting to look good – to have an opinion where in fact there is none. Another part is due to the mechanics of asking the question. To make life easy, telephone interviewers may ask the participant if they agree or disagree with the item being measured, without offering the “neither/nor” option, and then go on to press for the extent of that agreement or disagreement – thus pushing the responses out to the extremes. In a self-administered survey the “neither/nor” option is right there on the page. The (correct) alternative for a telephone interviewer is to ask “do you agree strongly, agree slightly, neither agree nor disagree, disagree slightly or disagree strongly that X…?” This is not only a mouthful to repeat for many grid items but also potentially brings recency effects with it.
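
As a concrete illustration of this last point, here is a minimal Python sketch of how the two telephone asking strategies could be generated from a single five-point agreement scale. The item wording, function names and labels are invented for illustration, not taken from any real survey system:

```python
# Minimal sketch of the two telephone scripts for the same five-point item.
# The item text and scale labels below are invented for illustration.

ITEM = "advertising can be trusted"
SCALE = [
    "agree strongly",
    "agree slightly",
    "neither agree nor disagree",
    "disagree slightly",
    "disagree strongly",
]


def branched_script(item: str) -> list[str]:
    """Two-step script: direction first, then strength. The midpoint is
    never read out, which pushes answers towards the extremes."""
    return [
        f"Do you agree or disagree that {item}?",
        "And is that strongly or slightly?",
    ]


def fully_labelled_script(item: str, scale: list[str]) -> str:
    """Single read-out of all five labels - the telephone equivalent of
    showing the full scale on screen, but a mouthful to repeat for many
    grid items and prone to recency effects."""
    options = ", ".join(scale[:-1]) + f" or {scale[-1]}"
    return f"Do you {options} that {item}?"


if __name__ == "__main__":
    for line in branched_script(ITEM):
        print(line)
    print(fully_labelled_script(ITEM, SCALE))
```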

Dillman* outlines three approaches to reducing modal differences. In each, the aim is to present a common stimulus that yields equivalent answers:

  1. Unimode (or unified) design requires the same question to be presented in the same way across all modes. This can often lead to each data collector having to do something in a way that is “not standard” for them.
  2. Mode-specific design allows different question styles per mode, as long as the stimulus and the data outcome are the same. This may require piloting to check the surveys, and it is the most demanding approach to undertake.
  3. Mode-enhancement makes the best use of the features of each mode – data quality within each mode is maximized, but data equivalency across modes may be compromised. Usually the mode that supplies the majority of the data is the one optimized (i.e. makes no compromises).
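
To make the difference between the first two approaches concrete, here is a minimal sketch, again in Python with an entirely invented question format, of one item rendered under a unimode design (one wording everywhere) versus a mode-specific design (different presentation per mode, same intended data outcome):

```python
# Sketch of unimode vs mode-specific rendering of one scale item.
# The question spec and renderers are hypothetical illustrations.

QUESTION = {
    "text": "Overall, how satisfied are you with your bank?",
    "labels": [
        "very satisfied",
        "fairly satisfied",
        "neither satisfied nor dissatisfied",
        "fairly dissatisfied",
        "very dissatisfied",
    ],
}


def render_unimode(q: dict) -> str:
    """Unimode: identical wording in every mode, even where that is
    'not standard' for the data collector."""
    opts = ", ".join(q["labels"][:-1]) + f" or {q['labels'][-1]}"
    return f"{q['text']} Would you say {opts}?"


def render_mode_specific(q: dict, mode: str) -> str:
    """Mode-specific: presentation differs by mode, but both versions
    target the same five-point data outcome."""
    if mode == "online":
        # On screen, the labels appear as a visible list of options.
        return q["text"] + "\n" + "\n".join(f"( ) {l}" for l in q["labels"])
    # On the phone, the interviewer reads the full labels aloud.
    return render_unimode(q)


if __name__ == "__main__":
    print(render_unimode(QUESTION))  # identical text in every mode
    print(render_mode_specific(QUESTION, "online"))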

In designing a questionnaire for mixed-mode studies that require the data outcomes to be as equivalent as possible, the approach taken should be either ‘unimode’ or ‘mode-specific’, rather than ‘mode-enhancement’. That is to say, the questions must be deliberately designed to produce the same data outcome, even though they may look or sound very different between the two modes.

The telephone is usually the more limited mode in terms of presentation, hence much of the change involves adapting the “standard” way questions are displayed online to be more like a telephone survey:

  • Grids, for example, must be replaced by individual questions to replicate the telephone approach
  • Agree-disagree statements should be replaced by construct-specific scales in both modes
  • Yes/No questions, normally avoided in online research, may need to be employed
  • Long-answer lists should be avoided
  • Multi-coded questions should be presented as a series of single questions (each with a Yes/No response) to replicate the telephone approach – see the sketch after this list
  • Ranking questions must be avoided, or extensive ‘card shuffle’ exercises designed for the telephone
  • Telephone interviewers will often be forced to read out options that would normally not be read out – “don’t know”, for example
  • The exact form of each questionnaire will depend heavily on the topics and question types required to meet the objectives
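
As an example of the multi-code point above, here is a minimal sketch, assuming a simple dictionary-based question format of my own invention, of how an online multi-select question can be unfolded into the series of Yes/No questions a telephone interviewer would ask:

```python
# Sketch: unfold an online multi-select question into the per-option
# Yes/No series used on the telephone. The spec format is hypothetical.

multi_code = {
    "id": "Q5",
    "text": "Which of these brands have you bought in the last month?",
    "options": ["Brand A", "Brand B", "Brand C"],
}


def to_yes_no_series(question: dict) -> list[dict]:
    """One Yes/No question per option, so both modes yield the same
    per-option binary data rather than a single multi-coded variable."""
    return [
        {
            "id": f"{question['id']}_{i + 1}",
            "text": f"In the last month, have you bought {opt}? (Yes/No)",
        }
        for i, opt in enumerate(question["options"])
    ]


if __name__ == "__main__":
    for q in to_yes_no_series(multi_code):
        print(q["id"], "-", q["text"])
```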

Mixed mode is not easy to execute successfully. Ignoring the mode differences and hoping that the data will be compatible enough to be simply added together to make a valid total may well prove unsuccessful. It is worth mentioning that it took a 20-strong task force almost a year to decide on the guidelines that would make the US Census a unimode design (and the US Census is a six-question survey!). The guidelines themselves are well worth a read.
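
Before pooling data from the two modes, it is worth at least checking for mode effects. The article does not prescribe a method, but as one simple illustration, a chi-square test can compare the distribution of a scale item between modes; the counts below are invented for demonstration, and scipy is assumed to be available:

```python
# Sketch: a simple pre-pooling check for mode effects on a five-point item.
# The counts are invented for demonstration purposes only.
from scipy.stats import chi2_contingency

# Rows: mode (telephone, online); columns: scale points 1..5.
counts = [
    [120, 80, 40, 70, 90],    # telephone: heavier use of the extremes
    [70, 110, 120, 100, 60],  # online: more neutral responding
]

chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.4f}")
if p < 0.05:
    print("Response distributions differ by mode - do not simply pool.")
```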

* Don A. Dillman et al., Internet, Mail and Mixed-Mode Surveys: The Tailored Design Method.