Knowing how long a survey takes to complete matters to respondents, researchers, and survey practitioners alike. It matters to respondents because their time is a valuable and limited resource, and to researchers and survey practitioners because instrument duration has been shown to affect response rates and respondent burden (Edwards et al., 2009; Eslick & Howell, 2001) and is a significant contributor to the final costs of survey fieldwork. Predicting survey duration before fieldwork, however, remains a challenge.
Traditionally, survey duration is estimated through pre-testing, which is both time-consuming and costly. Research shows that item characteristics such as length (Couper & Kreuter, 2013), type (open vs. closed), number of response options (Yan & Tourangeau, 2008), and position in the questionnaire (DeCastellarnau, 2018; Olson et al., 2020) influence completion time. These characteristics can be retrieved from the metadata infrastructure of the National Educational Panel Study (NEPS), which is, however, not yet DDI-compliant.
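To illustrate how such item metadata could feed a duration estimate, the sketch below sums per-item reading and response times derived from item characteristics. The field names, coefficients, and the simple additive model are illustrative assumptions for exposition, not the NEPS metadata schema or our actual estimation model.

```python
from dataclasses import dataclass

# Illustrative per-item timing model (all coefficients are assumptions,
# not estimates from NEPS data): reading time grows with question length,
# response time with the number of options; open items get a flat surcharge.
READ_SECONDS_PER_WORD = 0.3   # assumed average reading speed
SECONDS_PER_OPTION = 1.5      # assumed scanning cost per response option
OPEN_ITEM_SURCHARGE = 20.0    # assumed extra time for typing a free answer

@dataclass
class Item:
    n_words: int     # length of the question text
    n_options: int   # number of closed response options (0 if open)
    is_open: bool    # open vs. closed response format

def estimate_item_seconds(item: Item) -> float:
    """Predicted completion time for one item, in seconds."""
    t = item.n_words * READ_SECONDS_PER_WORD
    t += OPEN_ITEM_SURCHARGE if item.is_open else item.n_options * SECONDS_PER_OPTION
    return t

def estimate_survey_seconds(items: list[Item]) -> float:
    """Additive duration estimate for a whole questionnaire."""
    return sum(estimate_item_seconds(i) for i in items)

questionnaire = [Item(25, 5, False), Item(40, 0, True), Item(15, 7, False)]
print(f"Estimated duration: {estimate_survey_seconds(questionnaire):.0f} s")
```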
We initially focus on self-administered surveys, comparing our metadata-based duration estimates with the actual processing times recorded in past fieldwork. We present first results, assess the accuracy of these estimates, and discuss extending the approach to interviewer-administered survey modes, with the aim of providing a tool for metadata-based estimation of questionnaire duration.
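A comparison of metadata-based estimates against observed field times could, for instance, be summarized with simple error metrics; the sketch below uses placeholder values (not NEPS results) to compute mean absolute error and mean absolute percentage error.

```python
# Hypothetical paired data: metadata-based estimate vs. observed field time
# (minutes per questionnaire); the values are placeholders, not NEPS results.
estimated = [18.0, 22.5, 30.0, 12.0]
observed = [20.1, 21.0, 34.2, 11.3]

n = len(estimated)
mae = sum(abs(e - o) for e, o in zip(estimated, observed)) / n
mape = 100 * sum(abs(e - o) / o for e, o in zip(estimated, observed)) / n
print(f"MAE: {mae:.1f} min, MAPE: {mape:.1f}%")
```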