Session 22

Survey Data Quality in Interviewer-Administered Surveys in LMIC Contexts

In low- and middle-income countries (LMICs) both within and outside Africa, interviewer-administered, face-to-face (F2F) surveys are, and will remain, the main data collection tool for the foreseeable future. Indeed, previous research has shown that 85% of surveys in LMICs published in renowned social science journals between January 2010 and October 2015 were conducted face-to-face. While high smartphone penetration and internet usage facilitate the transition to often less costly self-administered mobile and web surveys in high-income countries, their absence underlies the continued reliance on face-to-face surveys in countries with fewer resources. Comparatively few people in these countries, particularly in rural areas, are connected to the internet or have reliable telephone connections. Varying levels of literacy in such contexts also present a challenge to collecting data in modes other than F2F. Telephone and web surveys of the general population are therefore difficult to implement. In this setting, in which both survey implementers and respondents depend heavily on interviewers, the role of the interviewer, and the assessment of data quality vulnerable to interviewer-introduced error, is particularly critical. However, there has been limited research on translating relevant quality assessment approaches to LMIC contexts.

The objective of this session is to explore approaches to quality assurance and quality control in F2F surveys in LMICs. Papers will focus on how survey researchers and practitioners, particularly across Africa, consider quality in those components of the total survey error framework where interviewers are likely to introduce error. This session welcomes research on practical approaches to minimizing and assessing interviewer-related error in these contexts, as well as on the implementation of innovative approaches to collecting high-quality data in interviewer-administered surveys.




1. The Anchoring Method: Estimation of Interviewer Effects in the Absence of Interpenetrated Sample Assignment

Brady West  (Institute for Social Research, University of Michigan-Ann Arbor, USA)

Michael R. Elliott  (University of Michigan)

Stephanie Coffey  (U.S. Census Bureau)

Xinyu Zhang  (University of Michigan)

Methodological studies of the effects that human interviewers can have on the quality of survey data have long been limited by a critical assumption: that interviewers in a given survey are assigned completely random subsets of the larger overall sample that is being measured (also known as interpenetrated assignment). In the absence of this type of study design, estimates of interviewer effects on survey measures of interest may simply reflect differences between interviewers in the characteristics of their assigned sample members, rather than recruitment or measurement effects specifically introduced by the interviewers. We introduce a new Bayesian approach for overcoming this lack of interpenetrated assignment when estimating interviewer effects. This approach, which we refer to as the “anchoring” method, leverages correlations between observed variables that are unlikely to be affected by interviewers (“anchors”) and variables that may be prone to interviewer effects (e.g., sensitive or complex factual questions) to statistically remove components of within-interviewer correlations that a lack of interpenetrated assignment may introduce. The improved estimates of interviewer effects on survey measures will enable survey managers to manage data collection more effectively in real time and intervene when particular interviewers are producing survey outcomes that vary substantially from expectations. We evaluate this new methodology empirically using a simulation study, and then illustrate its application using real survey data from the Behavioral Risk Factor Surveillance System (BRFSS), where interviewer IDs are systematically provided on public-use data files.
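The confounding problem that motivates anchoring can be illustrated with a deliberately simplified sketch. This is not the authors' Bayesian method; it only shows, with simulated data and hypothetical variable names, how non-interpenetrated assignment inflates a naive between-interviewer variance when the outcome depends on an anchor variable, and how regression adjustment for that anchor shrinks the spurious "interviewer effect".

```python
import numpy as np

rng = np.random.default_rng(0)
n_int, n_per = 20, 100  # 20 interviewers, 100 respondents each (illustrative)

# Non-interpenetrated assignment: each interviewer works one area, and
# areas differ in the mean of an "anchor" variable (e.g., respondent age).
area_effect = rng.normal(0, 1, n_int)
anchor = area_effect.repeat(n_per) + rng.normal(0, 1, n_int * n_per)

# The outcome depends on the anchor but has NO true interviewer effect.
y = 0.8 * anchor + rng.normal(0, 1, n_int * n_per)
interviewer = np.arange(n_int).repeat(n_per)

def between_interviewer_variance(values, groups):
    """Variance of per-interviewer means -- a crude interviewer-effect proxy."""
    means = np.array([values[groups == g].mean() for g in np.unique(groups)])
    return means.var(ddof=1)

naive = between_interviewer_variance(y, interviewer)

# Regress out the anchor, then re-estimate: the apparent "interviewer
# effect" shrinks, because it was really an assignment-area difference.
slope, intercept = np.polyfit(anchor, y, 1)
residual = y - (slope * anchor + intercept)
adjusted = between_interviewer_variance(residual, interviewer)

print(f"naive: {naive:.3f}  adjusted: {adjusted:.3f}")
```

The anchoring method goes well beyond this sketch (it is Bayesian and handles anchors and prone variables jointly), but the direction of the correction is the same: variance attributable to sample composition is removed before interviewer effects are judged.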


2. Do SMS messages improve response rates in a longitudinal phone survey? Evidence from an electricity sector survey in Benin

Sarah Hughes  (Mathematica, United States)

Gabriel Lawin  (Laval University, Canada)

Serge Wongla  (Mathematica, Benin)

Cullen Seaton  (Mathematica, United States)

This paper presents experimental evidence on the effect of SMS pre-notification on telephone survey response rates and attrition in a longitudinal survey of electricity consumers in Benin. The survey began with an in-person baseline survey in mid-2019 and includes 10 waves of telephone follow-up surveys over a 5-year period. The sample includes three distinct cohorts: households, small businesses, and medium to large businesses. In the second wave of the telephone survey, half of each cohort was randomly selected to receive SMS pre-notification prior to the first survey contact attempt. No pre-notification messages were sent in the subsequent wave. We compared the survey response rates for pre-notification and control groups in the initial SMS-treated wave to understand the immediate effect of these messages. We then reviewed results for the subsequent wave to ascertain whether the messages affected future attrition. We found that SMS pre-notification led to a small but significant increase (7 percentage points) in the response rate for households in the initial wave but no change in the response rate for the small business or medium/large business cohorts. In the subsequent wave, which did not include an SMS pre-notification, we found that businesses which had received pre-notification in the initial wave were significantly less likely to respond in the subsequent wave. Response rates were 7 percentage points lower for the pre-notification group in the small business cohort and 14 percentage points lower in the medium/large business group. We conclude that SMS pre-notification has mixed results across sample types for this longitudinal phone survey and may even increase attrition in certain cases.


3. Data Quality Control approaches in the LeCellPHIA phone-based COVID-19 Surveillance Survey

Gerald Mwima  (ICAP at Columbia University, Lesotho)

Background: The global COVID-19 pandemic swiftly shifted interactions from physical to virtual and accelerated the use of cell phone surveys in low- and middle-income countries. In May 2020, ICAP in Lesotho established a phone-based community surveillance system to collect data on influenza-like illness (ILI) symptoms from a subset of respondents who had participated in the Lesotho Population-based HIV Impact Assessment (PHIA). Interviewers administer a weekly questionnaire to 1,700 respondents, uploading data daily to a central server.

Objective: The objective of this abstract is to describe the approach to quality assurance and quality control for a cell phone surveillance system in Lesotho.

Methodology: Any interviewer-administered mode is prone to interviewer error. Practical approaches to identifying, minimizing, and addressing interviewer-related error in these contexts are therefore critical. This paper describes our experience assuring quality in the telephone interviews. For the Lesotho Cell phone-based Population HIV Impact Assessment survey (LeCellPHIA), survey calls were recorded and reviewed by supervisors to assess interviewer performance and identify measurement errors. To assess interviewer influence on nonresponse error, response rates for all interviewers were tracked weekly. Interviewers with response rates below 75% had performance discussions with their supervisors to explore the challenges they were facing and devise ways to improve.
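The weekly 75% response-rate check described above can be sketched as a simple flagging routine. The interviewer IDs, call counts, and field names below are hypothetical; only the 75% threshold comes from the text.

```python
# Hypothetical weekly call outcomes per interviewer (illustrative numbers).
weekly_outcomes = {
    "INT-01": {"completed": 40, "assigned": 50},
    "INT-02": {"completed": 30, "assigned": 45},
    "INT-03": {"completed": 48, "assigned": 50},
}

def flag_low_response(outcomes, threshold=0.75):
    """Return interviewers whose weekly response rate falls below threshold."""
    flagged = {}
    for interviewer, counts in outcomes.items():
        rate = counts["completed"] / counts["assigned"]
        if rate < threshold:
            flagged[interviewer] = round(rate, 2)
    return flagged

print(flag_low_response(weekly_outcomes))  # only INT-02 (0.67) is below 75%
```

In LeCellPHIA, interviewers surfaced by a check like this were not sanctioned automatically; the flag triggered a performance discussion with a supervisor.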

Results: The survey recruited 1,700 index participants who received weekly call interviews asking whether they or any of their household members had experienced Influenza-Like Illness (ILI) symptoms, as a proxy for COVID-19 symptoms, in the past week. Interviewers asked about fever, dry cough, and shortness of breath for participants as well as household members. We experienced a challenge with playback of recorded interviews, as only the respondent side was captured and not the interviewer side. However, based on the participant’s responses, supervisors were able to judge the quality of an interview as sufficient or as requiring engagement with the interviewer. Response rates were generally good at 75% on average, with only occasional weeks in which the response rate fell below 75% and interviewers needed to be engaged about their performance. However, male interviewers were more likely than their female counterparts to have lower response rates where female respondents reported that their spouses were uncomfortable with a man regularly calling them. In these rare cases the participant was reallocated to an interviewer of the same gender, but overall, interviewers and respondents were not gender matched. Data from the weekly interviews were uploaded and analyzed, and weekly ILI estimates were generated and shared with the CDC and the Ministry of Health (MoH).

Discussion: We found that interviewers established rapport and connections with participants, fostering trusted relationships and high week-on-week response rates. In some cases concerns arose from non-gender-matched pairs, particularly female respondents who had to be switched from male to female interviewers. Although the surveillance system did not directly evaluate the incidence of COVID-19 symptoms, trends from this ILI system mirrored national prevalence data collected by a completely different laboratory reporting system, indicating acceptable data quality from the telephone survey.

Conclusion: We find that telephone interviews are a valuable method of collecting data, especially in the context of COVID-19.


4. Introducing real-time quality control systems for face-to-face interviewing

Alexandra Cronberg (Kantar, Kenya)

Godfrey Akisa  (Kantar, Kenya)

Introduction: The role of interviewers is critical in face-to-face surveys, and it is essential that this type of interviewing applies rigorous quality control measures to minimize interviewer-introduced errors. The more of these checks that can be done in real time, the more efficient the fieldwork. To address the challenge of real-time monitoring and quality control checks, Kantar has developed an online tool for computer-assisted personal interviewing based on the CAPI software Nfield (the “Kantar Quality Control System”, KQCS) that enables monitoring and quality control on the go. Kantar also deploys Microsoft Forms for instantaneous reporting of in-field back-checks, and the Nfield audit trail tool for monitoring question duration. Methods: The KQCS has been developed to allow for the following capabilities:


  • View real-time interview counts (live, test, dropped out, rejected, screened out, successful)
  • View the survey / project overview (quota, quota framework, GPS, maps, variable reporting)

Quality control:

  • View all survey data live
  • View all surveys / records with important survey-specific metrics flagged, e.g., length of interview, duration before securing the next interview, working hours, and limits on straightlining
  • Access the details of the records / surveys that have breached the survey specific metrics
  • Edit and re-run survey-specific metrics
  • Accept or reject records flagged

Kantar uses Microsoft Forms to collect reports generated by in-field personnel who conduct back-checks and accompaniments. The reports provide real-time statistics for each interviewer, allowing quick action to address potential quality concerns. The Nfield audit trail provides a breakdown of the time spent administering each question or question block, allowing for scrutiny of each interviewer, and is pivotal in flagging incorrect questionnaire administration. Discussion and conclusions: The above-mentioned measures have resulted in reduced interviewer error rates during fieldwork due to early diagnosis of problems and immediate remedial action. The KQCS is, however, currently limited in the number of variables that can be included, and Kantar is working to expand its capabilities.
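One of the KQCS-style checks named above, a limit on straightlining, can be sketched as follows. The grid data, the 50% limit, and the function names are illustrative assumptions, not Kantar's implementation.

```python
def straightline_share(grid_responses):
    """Share of grid batteries where every item received the identical answer."""
    straight = sum(1 for battery in grid_responses if len(set(battery)) == 1)
    return straight / len(grid_responses)

# Hypothetical interview: three 5-item rating batteries.
interview = [
    [3, 3, 3, 3, 3],   # straightlined battery
    [4, 2, 5, 3, 4],
    [1, 1, 2, 1, 1],
]

MAX_STRAIGHTLINE = 0.5  # illustrative limit, analogous to a KQCS metric
flagged = straightline_share(interview) > MAX_STRAIGHTLINE
print(flagged)  # one of three batteries is straightlined, so not flagged
```

A record breaching such a metric would then surface in the KQCS workflow described above, where a reviewer can accept or reject it.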


5. Monitoring Interviewers in Real Time: A Data-Driven Tool

Zeina Mneimneh  (Survey Research Center, University of Michigan)

Interviewers have the potential to contribute to error in face-to-face surveys. To monitor interviewer behaviour and reduce this error source, survey practitioners have used different quality control procedures including call-backs, audio recordings, and in-person evaluation. These procedures have been recently supplemented by the increased use of data-driven tools to achieve greater efficiency and improve data quality. Such tools allow for closer and real-time interviewer monitoring especially in international work with geographically distant field operations.

This presentation will focus on the development of a real-time tool for monitoring interviewers using multiple sources of data, including contact attempt history, audit trail data, and key survey measures. Two types of measures are implemented: 1) non-comparative measures, where interviewers are flagged based on a specified metric; and 2) comparative measures, where interviewers are flagged based on their performance relative to other interviewers. Non-comparative measures include very short administration of a single question (less than 3 seconds), visiting a single question more than three times, and short duration between interviews. Comparative measures include interview length and the proportion of key survey measures. The presentation will also highlight two key features of the tool: 1) the ability to drill down to a single question within the tool when a question is flagged, and 2) the ability to graph comparative measures over time.
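The two flagging styles described above can be sketched as follows. The under-3-seconds and more-than-three-visits thresholds come from the text; the audit-trail data, interview lengths, and the z-score cut-off for the comparative check are illustrative assumptions, not the tool's actual implementation.

```python
import statistics

def flag_noncomparative(question_secs, question_visits,
                        min_secs=3, max_visits=3):
    """Flag questions administered in under min_secs or revisited too often."""
    flags = []
    for q, secs in question_secs.items():
        if secs < min_secs:
            flags.append((q, "under_3s"))
    for q, visits in question_visits.items():
        if visits > max_visits:
            flags.append((q, "revisits"))
    return flags

def flag_comparative(interview_lengths, z_cut=2.0):
    """Flag interviewers whose mean interview length is a relative outlier."""
    mean = statistics.mean(interview_lengths.values())
    sd = statistics.stdev(interview_lengths.values())
    return [i for i, length in interview_lengths.items()
            if abs(length - mean) / sd > z_cut]

# Hypothetical audit-trail extract for one interview.
secs = {"Q1": 12.0, "Q2": 1.4, "Q3": 8.0}
visits = {"Q1": 1, "Q2": 4, "Q3": 1}
print(flag_noncomparative(secs, visits))  # Q2 breaches both metrics

# Hypothetical mean interview lengths (minutes) by interviewer.
lengths = {"A": 31, "B": 29, "C": 30, "D": 33,
           "E": 32, "F": 28, "G": 30, "H": 12}
print(flag_comparative(lengths))  # H is far shorter than the group
```

The drill-down feature the presentation highlights corresponds to inspecting the per-question tuples returned by the non-comparative check, while the comparative check is the kind of measure that would be graphed over time.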