Publication date: 03 Dec 2022 Page range: 939 - 961
Abstract
Concerns about the burden that surveys place on respondents have a long history in the survey field. This article reviews existing conceptualizations and measurements of response burden in the survey literature. Instead of conceptualizing response burden as a one-time overall outcome, we expand the conceptual framework of response burden by positing response burden as reflecting a continuous evaluation of the requirements imposed on respondents throughout the survey process. We specifically distinguish response burden at three time points: initial burden at the time of the survey request, cumulative burden that respondents experience after starting the interview, and continuous burden for those asked to participate in a later round of interviews in a longitudinal setting. At each time point, survey and question features affect response burden. In addition, respondent characteristics can affect response burden directly, or they can moderate or mediate the relationship between survey and question characteristics and the end perception of burden. Our conceptual framework reflects the dynamic and complex interactive nature of response burden at different time points over the course of a survey. We show how this framework can be used to explain conflicting empirical findings and guide methodological research.
Publication date: 03 Dec 2022 Page range: 963 - 986
Abstract
We test a planned missing design to reduce respondent burden in Web and SMS administrations of the CAHPS Clinician and Group Survey (CG-CAHPS), a survey of patient experiences widely used by health care providers. Members of an online nonprobability panel were randomly assigned to one of three invitation and data collection mode protocols: email invitation to a Web survey, SMS invitation to a Web survey, or SMS invitation to an SMS survey. Within these three mode protocols, respondents were randomly assigned to a planned missing design, which shortened the survey by about 40%, or to a control group that received the survey in its entirety. We compare survey duration, breakoff and completion rates, and five key patient experience measures across conditions to assess the effect of the planned missing design across the three modes. We found that a planned missing design worked well with our Web survey, reducing survey duration and breakoff without changing estimates relative to the full-survey control condition. However, mixed findings in the SMS survey suggest that even shortened, 15-item surveys may be too long to substantially reduce respondent burden. We conclude with recommendations for future research.
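To make the design concrete, here is a minimal sketch of a planned missing (matrix sampling) assignment. The item names, counts, and core/rotating split are illustrative assumptions, not the actual CG-CAHPS instrument or the authors' allocation scheme.

```python
# A minimal sketch of a planned missing (matrix sampling) assignment.
# Item names and counts are illustrative, not the actual CG-CAHPS design.
import random

CORE_ITEMS = [f"q{i}" for i in range(1, 7)]        # asked of every respondent
ROTATING_ITEMS = [f"q{i}" for i in range(7, 16)]   # eligible for planned omission
TARGET_FRACTION = 0.6                              # ~40% shorter questionnaire


def assign_items(rng: random.Random) -> list[str]:
    """Item list for one respondent: all core items plus a random subset of
    rotating items, so the form is about 40% shorter than the full survey."""
    n_total = len(CORE_ITEMS) + len(ROTATING_ITEMS)
    n_keep = round(TARGET_FRACTION * n_total) - len(CORE_ITEMS)
    sampled = set(rng.sample(ROTATING_ITEMS, n_keep))
    # Preserve questionnaire order so question context stays comparable.
    return CORE_ITEMS + [q for q in ROTATING_ITEMS if q in sampled]


rng = random.Random(42)
for respondent in range(3):
    print(respondent, assign_items(rng))
```

Because each rotating item is omitted at random, the omissions are missing completely at random by design, which is what permits unbiased estimation of the patient experience measures from the shortened forms.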
Publication date: 03 Dec 2022 Page range: 987 - 1017
Abstract
Survey respondents can complete web surveys using different Internet-enabled devices (PCs versus mobile phones and tablets) and using different software (web browser versus a mobile software application, “app”). Previous research has found that completing questionnaires via a browser on mobile devices can lead to higher breakoff rates and reduced measurement quality compared to using PCs, especially where questionnaires have not been adapted for mobile administration. A key explanation is that using a mobile browser is more burdensome and less enjoyable for respondents. There are reasons to assume apps should perform better than browsers, but so far, there have been few attempts to assess this empirically. In this study, we investigate variation in experienced burden across device and software in wave 1 of a three-wave panel study, comparing an app with a browser-based survey, in which sample members were encouraged to use a mobile device. We also assess device/software effects on participation at wave 2. We find that, compared to mobile browser respondents, app respondents were less likely to drop out of the study after the first wave, and that this effect was mediated by the subjective burden experienced during wave 1.
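The mediation claim can be illustrated with a simplified Baron-Kenny-style check on simulated data: compare the device effect on wave-2 dropout before and after adjusting for wave-1 subjective burden. All variable names, coefficients, and data below are invented for illustration and do not reproduce the authors' model.

```python
# A simplified Baron-Kenny-style mediation check: does wave-1 subjective
# burden mediate the effect of software (app vs. mobile browser) on
# wave-2 dropout? Everything here is simulated for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({"app": rng.integers(0, 2, n)})           # 1 = app, 0 = browser
df["burden"] = 2.5 - 0.8 * df["app"] + rng.normal(0, 1, n)  # app lowers burden
logit_p = -1.0 + 0.9 * df["burden"]
df["dropout"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# Total effect of software on dropout.
m_total = sm.Logit(df["dropout"], sm.add_constant(df[["app"]])).fit(disp=False)
# Direct effect after adjusting for the candidate mediator.
m_direct = sm.Logit(df["dropout"],
                    sm.add_constant(df[["app", "burden"]])).fit(disp=False)

print("total app effect: ", m_total.params["app"].round(2))
print("direct app effect:", m_direct.params["app"].round(2))
# Mediation is suggested when the direct effect shrinks toward zero
# once subjective burden is included in the model.
```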
Publication date: 03 Dec 2022 Page range: 1019 - 1050
Abstract
In interviewer-administered omnibus surveys, burdensome questions asked early in a survey may result in lower quality responses to questions asked later in the survey. Two examples of these burdensome questions are social network questions, wherein respondents are asked about members of their personal network, and knowledge questions, wherein respondents are asked to provide a factually correct response to a question. In this study, we explore how the presence of potentially burdensome questions is associated with item nonresponse and acquiescence rates on subsequent survey questions, and whether this effect differs by respondent age and education. We use data from the 2010 General Social Survey (AAPOR RR5 = 70.3%; AAPOR 2016), which experimentally varied the location of a social network module and the presence of a knowledge question module. Those who received knowledge questions had higher item nonresponse rates on subsequent questions than those who did not receive knowledge questions, but the quality of responses did not differ by the presence of social network questions. Further, respondents with different characteristics were not differentially burdened by the knowledge questions or the social network questions. We conclude that knowledge questions may be better asked near the end of omnibus surveys to preserve the response quality for subsequent questions.
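As a concrete illustration of the outcome measure, the sketch below computes per-respondent item nonresponse rates on later questions and compares them across the experimental knowledge-module conditions. Column names and codes are hypothetical, not the GSS file layout.

```python
# A minimal sketch of comparing item nonresponse on follow-up questions by
# experimental condition (knowledge module present vs. absent). Columns and
# values are hypothetical stand-ins for the GSS data.
import pandas as pd

df = pd.DataFrame({
    "knowledge_module": [1, 1, 0, 0, 1, 0],
    "q_later_1": ["yes", None, "no", "yes", None, "no"],
    "q_later_2": [None, "agree", "agree", None, None, "disagree"],
})

later_items = ["q_later_1", "q_later_2"]
# Share of missing answers across the later items, per respondent.
df["inr_rate"] = df[later_items].isna().mean(axis=1)
print(df.groupby("knowledge_module")["inr_rate"].mean())
```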
Publication date: 03 Dec 2022 Page range: 1051 - 1067
Abstract
We conducted an idiographic analysis to examine the effect of survey burden, measured by the length of the most recent questionnaire and by the number of survey invitations (survey frequency) in the one-year period preceding a new survey, on the probability of responding to that survey in a probability-based Internet panel. The individual response process was modeled by a latent Markov chain with questionnaire length and survey frequency as explanatory variables. The individual estimates were obtained using a Monte Carlo-based method and then pooled to derive estimates of the overall relationships and to identify specific subgroups whose responses were more likely to be impacted by questionnaire length or survey frequency. The results show an overall positive relationship between questionnaire length and response probability, and no significant relationship between survey frequency and response probability. Further analysis showed that longer questionnaires were more likely to be associated with decreased response rates among racial/ethnic minorities and introverted participants. Frequent surveys were more likely to be associated with decreased response rates among participants with large households. We discuss the implications for panel management and advocate targeted interventions for the small subgroups whose response probability may be negatively impacted by longer questionnaires or frequent surveys.
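The latent Markov structure can be sketched with a forward simulation: a two-state latent engagement process whose transitions depend on questionnaire length and invitation frequency through a logistic link, with state-dependent response probabilities. The coefficients are invented for illustration; the article estimates such models per panelist with a Monte Carlo-based method rather than simulating them.

```python
# A forward simulation sketching a two-state latent Markov response model.
# Coefficients are invented for illustration only.
import numpy as np

rng = np.random.default_rng(1)


def sigmoid(x):
    return 1 / (1 + np.exp(-x))


def simulate_responses(lengths, freqs, n_occasions=12):
    state = 1                          # 1 = engaged, 0 = disengaged
    responses = []
    for t in range(n_occasions):
        # Transition: the probability of being engaged shrinks with longer
        # questionnaires and more frequent invitations (illustrative signs).
        p_engaged = sigmoid(1.5 - 0.02 * lengths[t] - 0.1 * freqs[t] + 1.0 * state)
        state = rng.binomial(1, p_engaged)
        # Emission: engaged panelists respond far more often.
        p_respond = 0.9 if state == 1 else 0.2
        responses.append(rng.binomial(1, p_respond))
    return responses


lengths = rng.integers(10, 60, 12)     # minutes of the most recent questionnaire
freqs = rng.integers(1, 6, 12)         # invitations in the preceding year
print(simulate_responses(lengths, freqs))
```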
Publication date: 03 Dec 2022 Page range: 1069 - 1095
Abstract
Collecting life history data is highly demanding and therefore prone to error, since respondents must retrieve and provide extensive, complex information. Research has shown that response burden is an important factor influencing data quality. We examine whether increases in different measures of response burden in a (mixed-device) online survey lead to adverse effects on data quality and whether these effects vary by the type of device used (mobile versus non-mobile).
We conducted an experimental study in an online mixed-device survey, for which we developed a questionnaire on the educational and occupational trajectories of secondary-school graduates, undergraduates, and university graduates. To address our research question, we randomly assigned different levels of response burden to the participants and compared different measures of data quality and response.
We found mixed evidence for unfavourable effects of response burden on the examined outcomes. While some of our results were expected, they were not consistent across all subgroups. Most interestingly, the effects of response burden on outcomes seemed to differ based on the device used. Hence, we conclude that further research is needed to optimise the collection of complex data from different groups of participants.
Publication date: 03 Dec 2022 Page range: 1097 - 1123
Abstract
Providing an exact answer to open-ended numeric questions can be a burdensome task for respondents. Researchers often assume that adding an invitation to estimate (e.g., “Your best estimate is fine”) to these questions reduces cognitive burden, and in turn, reduces rates of undesirable response behaviors like item nonresponse, nonsubstantive answers, and answers that must be processed into a final response (e.g., qualified answers like “about 12” and ranges). Yet there is little research investigating this claim. Additionally, explicitly inviting estimation may lead respondents to round their answers, which may affect survey estimates. In this study, we investigate the effect of adding an invitation to estimate to 22 open-ended numeric questions in a mail survey and three questions in a separate telephone survey. Generally, we find that explicitly inviting estimation does not significantly change rates of item nonresponse, rounding, or qualified/range answers in either mode, though it does slightly reduce nonsubstantive answers for mail respondents. In the telephone survey, an invitation to estimate results in fewer conversational turns and shorter response times. Our results indicate that an invitation to estimate may simplify the interaction between interviewers and respondents in telephone surveys, and neither hurts nor helps data quality in mail surveys.
Publication date: 03 Dec 2022 Page range: 1125 - 1144
Abstract
Higher levels of perceived burden can lead to ambiguous responses to a questionnaire, item nonresponse, or refusals to continue participation in the survey, all of which can introduce bias and degrade the quality of the data. Therefore, it is important to understand what might influence respondents' perceptions of burden. In this article, we demonstrate, using U.S. Consumer Expenditure Survey data, how regression tree models can be used to analyze the associations between perceived burden and objective burden measures while conditioning on household demographics and other explanatory variables. The structure of the tree models allows these associations to be explored easily.
Our analysis shows a relationship between perceived burden and some of the objective measures after conditioning on different demographic and household variables, and that these relationships vary considerably with respondent characteristics and the mode of the survey. Since the tree models were constructed using an algorithm that accounts for the sample design, inferences from the analysis can be made about the population. Any insights could therefore be used to guide future decisions about survey design and data collection aimed at reducing respondent burden.
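As a simplified stand-in for the design-aware trees described above, the sketch below fits an ordinary CART to simulated data with the survey weights passed as sample weights; the authors' algorithm accounts for the sample design more fully, and all variables here are hypothetical.

```python
# A simplified stand-in for design-aware regression trees: an ordinary CART
# fit with survey weights as sample weights. Variables and data are
# hypothetical, not the Consumer Expenditure Survey.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeRegressor, export_text

rng = np.random.default_rng(7)
n = 500
df = pd.DataFrame({
    "hh_size": rng.integers(1, 7, n),
    "interview_minutes": rng.normal(45, 12, n),  # an objective burden measure
    "n_contacts": rng.integers(1, 8, n),
    "weight": rng.uniform(0.5, 3.0, n),          # survey design weight
})
# Hypothetical perceived burden score (1-7) rising with objective burden.
df["perceived_burden"] = np.clip(
    1 + 0.05 * df["interview_minutes"] + 0.3 * df["n_contacts"]
    + rng.normal(0, 1, n), 1, 7)

X = df[["hh_size", "interview_minutes", "n_contacts"]]
tree = DecisionTreeRegressor(max_depth=3, min_samples_leaf=30)
tree.fit(X, df["perceived_burden"], sample_weight=df["weight"])
print(export_text(tree, feature_names=list(X.columns)))
```

The printed tree makes the conditional associations directly readable: each split shows which respondent characteristic moderates the link between an objective measure and perceived burden.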
Publication date: 03 Dec 2022 Page range: 1145 - 1175
Abstract
Respondent burden has important implications for survey outcomes, including response rates and attrition in panel surveys. Despite this, respondent burden remains an understudied topic in the field of survey methodology, with few researchers systematically measuring objective and subjective burden factors in surveys used to produce official statistics. This research assesses the impact of proxy measures of respondent burden, both objective (survey length and frequency) and subjective (effort, saliency, and sensitivity), on response rates over time in the Current Population Survey (CPS). Exploratory Factor Analysis confirmed that the burden proxy measures were interrelated and formed five distinct factors. Regression tree models further indicated that both objective and subjective proxy burden factors were predictive of future CPS response rates. Additionally, respondent characteristics, including employment and marital status, interacted with these burden factors to further help predict response rates over time. We discuss the implications of these findings, including the importance of measuring both objective and subjective burden factors in production surveys. Our findings support a growing body of research suggesting that subjective burden and individual respondent characteristics should be incorporated into conceptual definitions of respondent burden, and they have implications for adaptive design.
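The factor-extraction step can be sketched as follows: standardize a set of burden proxies and extract five rotated factors. The proxy variables and data are invented placeholders, not the CPS measures or the authors' exact EFA procedure, and the rotation argument assumes a recent scikit-learn release.

```python
# A minimal sketch of the exploratory factor analysis step: extract five
# varimax-rotated factors from standardized burden proxies. The proxies and
# data are invented placeholders, not the CPS measures.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 800
proxies = pd.DataFrame({
    "interview_length": rng.normal(30, 8, n),
    "prior_contacts": rng.integers(0, 10, n),
    "item_effort": rng.normal(0, 1, n),
    "topic_saliency": rng.normal(0, 1, n),
    "item_sensitivity": rng.normal(0, 1, n),
    "n_waves_completed": rng.integers(1, 9, n),
})

Z = StandardScaler().fit_transform(proxies)
fa = FactorAnalysis(n_components=5, rotation="varimax").fit(Z)
loadings = pd.DataFrame(fa.components_.T, index=proxies.columns)
print(loadings.round(2))   # inspect which proxies load on which factor
```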
Publication date: 03 Dec 2022 Page range: 1177 - 1203
Abstract
Minimizing respondent survey burden may help decrease nonresponse and increase data quality, but the measurement of burden has varied widely. Recent efforts have paid more attention to respondents’ subjective perceptions of burden, measured through the addition of questions to a survey. Despite reliance on these questions as key measures, little qualitative research has been conducted for household surveys. This study used focus groups to examine respondents’ reactions to possible sources of burden in the American Community Survey (ACS) such as survey length, sensitivity, and contact strategy; respondents’ knowledge, attitudes, and beliefs about burden; and overall perceptions of burden. Feedback was used to guide subsequent selection and cognitive testing of questions on subjective perceptions of burden. Generally, respondents did not find the ACS to be burdensome. When deciding whether it was burdensome, respondents thought about the process of responding to the questionnaire, the value of the data, that response is mandatory, and to a lesser extent, the contacts they received, suggesting these constructs are key components of burden in the ACS. There were some differences by response mode and household characteristics. Findings reinforce the importance of conducting qualitative research to ensure questions capture important respondent burden perceptions for a particular survey.
Publication date: 03 Dec 2022 Page range: 1205 - 1234
Abstract
Statistical offices frequently use cutoff sampling to determine which businesses in a population should be surveyed. Examples include business surveys about international trade, production, innovation, and ICT usage. Cutoff thresholds are typically set in terms of key variables of interest and aim to satisfy a minimum coverage ratio, that is, the share of the aggregate value accounted for by the reporting units. In this article, we propose a simple cost-benefit approach to determining the sampling cutoff that takes response burden into account. In line with existing practice, we use the coverage ratio as our measure of accuracy and provide either analytical or numerical solutions for cutoff determination. Using a business survey on the response burden of reporting trade flows within the EU (Intrastat), we present an application that illustrates our approach. An important practical implication is the possibility of setting industry-contingent cutoffs.
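The coverage-ratio side of the calculation is easy to illustrate numerically: sort units by the key variable and take the smallest head of the distribution that meets the minimum coverage ratio; the value of the last included unit is the cutoff. The cost-benefit weighting of response burden from the article is omitted here, and the data are simulated.

```python
# A numerical sketch of coverage-based cutoff determination on simulated,
# skewed business data. The article's burden cost-benefit term is omitted.
import numpy as np

rng = np.random.default_rng(5)
trade_values = rng.lognormal(mean=10, sigma=1.5, size=2000)


def coverage_cutoff(values, min_coverage=0.95):
    """Smallest reporting threshold such that units at or above it account
    for at least `min_coverage` of the aggregate value."""
    v = np.sort(values)[::-1]                  # largest businesses first
    share = np.cumsum(v) / v.sum()             # running coverage ratio
    k = int(np.searchsorted(share, min_coverage)) + 1  # units needed
    return v[k - 1], k


cutoff, n_units = coverage_cutoff(trade_values)
print(f"cutoff = {cutoff:,.0f}; {n_units} of {len(trade_values)} units surveyed")
```

Industry-contingent cutoffs, as suggested in the abstract, follow by applying the same routine separately within each industry stratum.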
Publication date: 03 Dec 2022 Page range: 1235 - 1251
Abstract
Large-scale, nationally representative surveys serve many vital functions, but these surveys can be long and burdensome for respondents. Cutting survey length can help to reduce respondent burden and may improve data quality, but removing items from these surveys is not a trivial matter. We propose a method to empirically assess item importance and associated burden in national surveys and to guide this decision-making process using different research products derived from such surveys. This method is demonstrated using the Survey of Doctorate Recipients (SDR), a biennial survey administered to individuals with a science, engineering, or health doctorate. We used three main sources of information on the SDR variables: a bibliography of documents using the SDR data as a measure of item use and importance, SDR data table download statistics from the Scientists and Engineers Statistical Data System as an additional measure of item use, and web timing paradata and break-off rates as a measure of burden. Putting this information together, we identified 35 unused items (17% of the survey) and found that the most burdensome items were highly important. We conclude with general recommendations for those hoping to employ similar methodologies in the future.
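A minimal sketch of the scoring logic: combine item-use indicators (bibliography mentions, table downloads) with burden indicators (timing paradata, breakoff rates) to flag candidates for removal. All thresholds, counts, and item names below are hypothetical stand-ins for the SDR sources described above.

```python
# A hedged sketch of flagging unused, low-burden items for removal.
# All values and thresholds are hypothetical stand-ins for the SDR sources.
import pandas as pd

items = pd.DataFrame({
    "item": ["salary", "employer_sector", "second_job", "union_member"],
    "bibliography_mentions": [120, 35, 0, 0],   # uses in published documents
    "table_downloads": [5400, 900, 12, 0],      # data table download counts
    "median_seconds": [8.0, 6.5, 22.0, 4.0],    # web timing paradata
    "breakoff_rate": [0.001, 0.002, 0.015, 0.001],
})

# Unused: never cited and (almost) never downloaded.
items["unused"] = ((items["bibliography_mentions"] == 0)
                   & (items["table_downloads"] < 50))
# Burdensome: slow to answer or a common breakoff point.
items["high_burden"] = ((items["median_seconds"] > 15)
                        | (items["breakoff_rate"] > 0.01))

# Unused items are removal candidates; unused *and* high-burden ones first.
print(items[items["unused"]][["item", "high_burden"]])
```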