
Impact of sample frame on survey response rates in repeat-contact mail surveys

By Chris Neher and K. S. Neher

An integral part of many studies conducted for the National Park Service (NPS) by either outside contracted professionals or by NPS staff is survey research. These surveys range from small focus groups or cognitive interviews to large-scale national household surveys. They differ in content, targeted population, length, and survey methodology. However, when surveys involve contacts with 10 or more members of the public, they all have one thing in common: the need to secure Office of Management and Budget (OMB) approval of the survey instrument and study design prior to conducting the research. As part of preparing an information collection request (ICR) package for submission to OMB, researchers are asked to state the sample sizes and estimated response rates for their survey. Survey research faces an increasing challenge because of generally declining survey response rates (Keeter et al. 2017; AAPOR 2016). In this climate, it can be a bit like tossing darts to estimate response rates for surveys not yet completed, especially if the target survey population is not one that is frequently surveyed.

From December 2015 to January 2016, researchers from the University of Montana in Missoula conducted a wide range of mail surveys for research projects sponsored by the National Park Service and the US Geological Survey. Due to a fluke of gaining OMB approval for three different information collection programs within two months of each other in the fall of 2015, we were able to conduct mail surveys of six different sample populations nearly simultaneously. As the studies had all been designed by the same research team, the surveys were conducted using identical Dillman (2007) repeat-contact mail protocols along with identical mail packaging and postage applications. Survey contents, length, and complexity varied somewhat; however, because so many aspects of these surveys were parallel, we had the special opportunity to compare response rates across very different sample populations while controlling for many factors that might affect those rates. Rather than providing a statistical analysis of factors influencing mail survey response rates, our analysis is limited to a side-by-side comparison of response rates, made possible by the coincidence of six mail surveys being conducted simultaneously. The goal of this article is to provide recent benchmarks for researchers designing mail surveys to inform their a priori or presumptive judgment of likely response rates based on general factors of the population being surveyed.

Recent trends in survey response rates

The Pew Research Center reported that response rates to its telephone surveys dropped from 36% in 1997 to 9% in 2012, and in more recent years the rate has stabilized at about 9% (Keeter et al. 2017). Although the Pew study collected data by telephone rather than by the mail-back surveys described in this article, it is indicative of a wider trend. The American Association for Public Opinion Research (AAPOR) notes that “[l]argely due to increasing refusals, response rates across all modes of survey administration have declined, in some cases precipitously” (2016).

For a given survey, the response rate depends on a wide range of interrelated factors, the relative importance of which changes from individual to individual (Heberlein and Baumgartner 1978). The leverage-saliency theory of survey participation proposes that no single method universally increases response rates, because no influencing factor holds constant in the magnitude of its effect across survey populations, and further that “the effect of one factor may be altered in the presence of another” (Groves et al. 2000). There is thus no magic bullet for low or declining response rates, but certain methods have been consistently, if not universally, shown to be effective in increasing participation. One such method is the Dillman protocol employed by the studies discussed in this article (Chidlow et al. 2015).

The most substantial threat posed by lower rates of participation is nonresponse bias, which occurs when segments of the population whose answers would have differed systematically from those collected respond at lower rates, leaving the data unrepresentative of the population surveyed. A common example is age bias: older individuals are generally more likely to respond to a survey, so younger people can be underrepresented in the data. In the past, a high response rate was considered the most important safeguard against nonresponse bias, and surveys with low rates of participation were thought to be necessarily unreliable. Recent studies, however, have shown that lower response rates are not inherently correlated with a higher incidence of nonresponse bias (Keeter et al. 2000; AAPOR 2016; Keeter et al. 2017). Furthermore, bias that is known to exist in a study can be corrected through monitoring and weighting of key demographic factors among the respondents. This suggests that the decline in survey participation has not undermined the reliability of surveys as a method of statistical prediction; rather, it has demonstrated the effectiveness of statistical research best practices (Keeter et al. 2000).
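To make the weighting correction concrete, the sketch below applies post-stratification weighting to a single demographic factor in Python. All categories and shares are hypothetical, and the article does not report the weighting schemes (if any) used in the studies discussed here; this is a minimal illustration of the general technique, not a description of the authors' methods.

```python
# A minimal sketch of post-stratification weighting on one demographic
# factor (age group). All categories and shares are hypothetical; they
# do not come from the studies discussed in this article.

# Target population shares, e.g., from census benchmarks.
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# Observed shares among survey respondents, skewed older as is common.
respondent_shares = {"18-34": 0.15, "35-54": 0.30, "55+": 0.55}

# Each respondent receives weight = population share / respondent share,
# so underrepresented groups count more and overrepresented groups less.
weights = {
    group: population_shares[group] / respondent_shares[group]
    for group in population_shares
}

for group, weight in weights.items():
    print(f"{group}: weight = {weight:.2f}")
# 18-34: weight = 2.00
# 35-54: weight = 1.17
# 55+: weight = 0.64
```

In practice, weights are typically computed over the joint distribution of several key demographics (or raked to marginal totals when only those are available) and trimmed to limit variance inflation.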

The survey explored issues specific to the Colorado River ecosystem and the operation of Glen Canyon Dam with sampling of the rafting public and river anglers.

NPS/MARK LELLOUCH

Survey and sample frame characteristics

In September and October 2015, OMB approved three ICR packages that had been submitted by University of Montana researchers for studies either entirely or partially funded by the National Park Service. These studies included survey designs for six different sample populations (table 1). The Glen Canyon Total Valuation Survey (Duffield et al. 2016a) was designed to sample two populations: a random address-based sample (ABS) of national households and a household sample of eight counties contiguous with the Colorado River from Lake Powell to Lake Mead (also a random ABS). A second study was designed to survey holders of National Parks and Federal Recreational Lands Passes (Neher 2016a, 2016b). Specifically, we sampled pass purchasers who bought their passes through the USGS website portal. Again, this involved two randomly sampled populations: purchasers of the Annual Pass and purchasers of the Senior Pass. A final study of direct recreational users of the Colorado River corridor from Glen Canyon Dam downstream to Lake Mead (Duffield et al. 2016b) was funded in phases by both the NPS and the USGS. This study surveyed two groups of river users: private whitewater floaters traveling through Grand Canyon and anglers fishing the river stretch from just below Lee’s Ferry upstream to Glen Canyon Dam. Researchers randomly sampled Grand Canyon floaters from a listing of all private-party river floaters in the previous 12 months and recruited anglers on-site.

Table 1. Mail survey populations, sample frames, and response rates

Population | Sample frame | Response rate
Private Grand Canyon whitewater floaters | Random sample of all floaters in calendar year 2015 | 64.3%
National Parks and Federal Recreational Lands Pass Senior Pass holders | Random sample of most recent 12 months of online purchasers | 63.3%
Glen Canyon anglers | On-site recruitment of anglers near Lee’s Ferry | 56.9%
National Parks and Federal Recreational Lands Pass Annual Pass holders | Random sample of most recent 12 months of online purchasers | 43.5%
Households in eight counties surrounding the Colorado River | Random address-based sample (ABS) of households in eight-county Colorado River area | 17.6%
National households | Random ABS of US households | 11.7%

The survey protocol for all samples followed the Dillman repeat-contact method of (1) an advance notice postcard, (2) a full survey package with postage-paid return envelope, (3) a reminder postcard, and (4) a final full survey package sent to nonrespondents.
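For readers planning a similar design, the sketch below lays out the four-contact schedule in Python. The day offsets between mailings are illustrative assumptions; the article does not state the spacing used in these studies.

```python
from datetime import date, timedelta

# A sketch of the four-contact Dillman mailing schedule described above.
# The day offsets are assumptions for illustration, not the spacing
# actually used in these studies.
CONTACTS = [
    ("advance notice postcard", 0),
    ("full survey package with postage-paid return envelope", 7),
    ("reminder postcard", 14),
    ("final full survey package (nonrespondents only)", 28),
]

def mailing_schedule(start: date) -> list[tuple[date, str]]:
    """Pair each contact with its planned mail date."""
    return [(start + timedelta(days=offset), name) for name, offset in CONTACTS]

for mail_date, contact in mailing_schedule(date(2015, 12, 1)):
    print(f"{mail_date:%Y-%m-%d}: {contact}")
```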

As is clear from table 1, the response rates for the six sample frames differ dramatically, ranging from a low of 11.7% for the national household sample to a high of 64.3% for the Grand Canyon whitewater floater sample. Considering the key characteristics of each sample population helps identify factors that may drive response rates.
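For reference, a response rate here is completed returns as a share of the deliverable sample; the sketch below, using made-up counts, shows the arithmetic. The article does not state which formal response-rate definition (e.g., an AAPOR standard) these studies used, so treat this as the simplest common form.

```python
def response_rate(completed: int, mailed: int, undeliverable: int = 0) -> float:
    """Completed returns as a share of the deliverable sample."""
    deliverable = mailed - undeliverable
    if deliverable <= 0:
        raise ValueError("deliverable sample must be positive")
    return completed / deliverable

# Hypothetical counts: 643 completed returns out of 1,020 mailed packages
# with 20 undeliverable addresses would reproduce the floater sample's rate.
print(f"{response_rate(643, 1020, undeliverable=20):.1%}")  # 64.3%
```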

Results and discussion

Age matters

The two surveys of America the Beautiful–National Parks and Federal Recreational Lands Pass holders were identical in most ways, including survey questions, source of sample, and methodology. The one key way these samples differed was age. Annual Pass survey returns had an average reported pass-holder age of 46 years. The Senior Pass, by contrast, requires purchasers to be at least 62 years old, and its respondents averaged 66 years of age. Given the similarity of the two sample frames and of the survey methods and contents, we were surprised at the size of the increase (20 percentage points) in the response rate for the Senior Pass survey as compared with the Annual Pass survey. Survey researchers have recognized that older sample populations are often associated with higher response rates than younger ones (see, for example, Gigliotti and Dietsch 2014), and these surveys underline how large those differences can be.

For household samples, issue familiarity matters

The household surveys associated with the Glen Canyon Total Value study included two sample frames: a random sample of national households and a sample of households in the eight counties contiguous to the Colorado River from Lake Powell to Lake Mead. The survey itself dealt with issues specific to the Colorado River ecosystem and the operation of Glen Canyon Dam. Survey responses indicated that while only 11.5% of respondents to the national survey reported having visited Glen Canyon Dam, 57.5% of the regional sample respondents had visited the dam. We reason that the higher level of familiarity of the regional respondents with the subject of the survey largely explains the higher response rate for the regional sample (17.6%) versus the national sample (11.7%).

Response from targeted user groups differs greatly from household sample response

Survey researchers have long been aware that surveys asking people about their chosen activities achieve substantially higher response rates than general population surveys on issues many or most people may have little familiarity with or interest in (Heberlein and Baumgartner 1978). This pattern is also clear in the final response rates of our surveys: the higher of our two household-sample response rates (17.6%) is still less than half the lowest rate among our four targeted user samples (43.5%).

Among recreational users the uniqueness of the activity being surveyed matters

The surveys of Glen Canyon anglers and Grand Canyon whitewater floaters used similar survey instruments asking the same groups of questions. However, the response rate for the whitewater sample was 7.4 percentage points higher than that for the angler sample (64.3% versus 56.9%). This was true even though we had contacted the anglers on-site and they had agreed to participate in the survey, whereas we sampled Grand Canyon floaters from among the entire population of floaters. Additionally, the average angler respondent was substantially older (56 years) than the average floater respondent (48 years).

One explanation for the higher response rates among floaters might be the nature of the experience being surveyed. Floating the Grand Canyon is a once-in-a-lifetime experience for many, and at most a once-per-year experience for all. In contrast, the respondents to the Glen Canyon angler survey had been fishing the site on average for 11 years, and made an average of 3.7 trips to Glen Canyon to fish each year. In short, the Grand Canyon whitewater experience, which generally lasts two weeks or longer, is likely a much more special and memorable experience than an average Glen Canyon fishing trip, which might last only one or two days. Additionally, based on the preponderance of survey response comments, a much greater share of Grand Canyon whitewater floaters than Glen Canyon anglers were excited about their recent Colorado River visit and were eager to tell survey researchers about that experience. That eagerness appears to have translated into a bolstered survey response rate.

Take-home lessons for survey researchers

The repeat-contact mail surveys that we have described provide researchers with a starting point for estimating what response rates they might expect in their similarly structured surveys. Additional methods could be applied to future similar studies to boost response rates:

  • A web-based alternative response option
  • Follow-up telephone contacts urging nonrespondents to participate[1]

With regard to the straightforward Dillman mail protocol used in these surveys, researchers should be mindful primarily of the level of attachment and interest their sample population has in the subject of the survey, and secondarily of the age of the respondent pool.

[1] A 2016 household survey (Haefele et al. 2016) used these additions to the repeat-contact protocol: a web-based response option and reminder telephone calls. The study additionally used a $2 incentive payment. The response rate for this national survey conducted by Colorado State University and Harvard University was 18%.

References

AAPOR (American Association for Public Opinion Research). 2016. Response rates—An overview. AAPOR, Oakbrook Terrace, Illinois, USA. Accessed 3 March 2016. http://www.aapor.org/Education-Resources/For-Researchers/Poll-Survey-FAQ/Response-Rates-An-Overview.aspx.

Chidlow, A. P., P. N. Ghauri, S. Yeniyurt, and S. T. Cavusgil. 2015. Establishing rigor in mail-survey procedures in international business research. Journal of World Business 50:26–35. doi:10.1016/j.jwb.2014.01.004.

Dillman, D. 2007. Mail and Internet surveys: The tailored design method. Second edition. Wiley and Sons, New York, New York, USA.

Duffield, J., C. Neher, and D. Patterson. 2016a. Colorado River Total Value Study. Final report prepared for National Park Service. University of Montana, Department of Mathematical Sciences, Missoula, Montana, USA. Accessed 19 December 2017. http://ltempeis.anl.gov/documents/docs/Colorado_River_Value_Study.pdf.

———. 2016b. Economic analysis of Glen Canyon angler and Grand Canyon whitewater visitor surveys. USGS Grand Canyon Research and Monitoring Center, Flagstaff, Arizona, USA.

Gigliotti, L., and A. Dietsch. 2014. Does age matter? The influence of age on response rates in a mixed-mode survey. Human Dimensions of Wildlife 19(3):280–287. doi:10.1080/10871209.2014.880137.

Groves, R. M., E. Singer, and A. Corning. 2000. Leverage-saliency theory of survey participation: Description and an illustration. Public Opinion Quarterly 64(3):299–308. doi:10.1086/317990.

Haefele, M., J. Loomis, and L. J. Bilmes. 2016. Total economic valuation of the National Park Service lands and programs: Results of a survey of the American public. Discussion Paper 16-71. Harvard Environmental Economics Program, Cambridge, Massachusetts, USA. Available at https://heep.hks.harvard.edu/files/heep/files/dp71_haefele-loomis-bilmes.pdf.

Heberlein, T., and R. Baumgartner. 1978. Factors affecting response rates to mailed questionnaires: A quantitative analysis of the published literature. American Sociological Review 43(4):447–462.

Keeter, S., C. Miller, A. Kohut, R. M. Groves, and S. Presser. 2000. Consequences of reducing nonresponse in a national telephone survey. Public Opinion Quarterly 64(2):125–148.

Keeter, S., N. Hatley, C. Kennedy, and A. Lau. 15 May 2017. What low response rates mean for telephone surveys. Pew Research Center, Washington, DC, USA. http://www.pewresearch.org/2017/05/15/what-low-response-rates-mean-for-telephone-surveys/.

Neher, C. 2016a. The National Parks and Federal Recreational Lands Annual Pass survey. National Park Service Interagency Pass Program, Washington, DC, USA.

———. 2016b. The National Parks and Federal Recreational Lands Senior Pass survey. National Park Service Interagency Pass Program, Washington, DC, USA.

About the authors

Chris Neher is a research specialist in the Department of Mathematical Sciences, University of Montana, Missoula. He can be reached at (406) 721-2265 and neher@montana.com. K. S. Neher is a research associate with Bioeconomics, Inc., in Missoula, Montana.
