Download:
pdf |
pdfNational Household Education Survey
2019 (NHES:2019)
Full-scale Data Collection
OMB# 1850-0768 v.14
Appendix 6 – Web Test Report
June 2018
1
NHES:2017 Web Test Report
Rebecca Medway
Harmoni Noel
Nicole Guarino
OCTOBER 2017
NHES:2017 Web Test Report
October 2017
Rebecca Medway
Harmoni Noel
Nicole Guarino
Joshua Sennett, Rachel Hanson, Carol Wan,
Bruno Silva, and Genevieve Johnson also
made significant contributions to this report.
1000 Thomas Jefferson Street NW
Washington, DC 20007-3835
202.403.5000
www.air.org
Copyright © 2017 American Institutes for Research. All rights reserved.
Contents
Page
Chapter 1: Introduction ................................................................................................................... 1
Chapter 2: Screener Mailing Experiments ...................................................................................... 4
2.1: Screener Incentive Experiment ............................................................................................ 4
2.2: Envelope Size Experiment ................................................................................................... 8
2.3: FedEx/First Class Experiment ........................................................................................... 10
Chapter 3: Screener Split-Sample Experiment ............................................................................. 12
3.1: Response Rate .................................................................................................................... 13
3.2: Response Quality ............................................................................................................... 13
3.3: Response Burden ............................................................................................................... 18
3.4: Respondent Characteristics ................................................................................................ 18
3.5: Screener Responses ........................................................................................................... 19
3.6: Key Takeaways From the Screener Experiment ............................................................... 22
Chapter 4: Dual-Topical Experiment ............................................................................................ 24
4.1: Response Rate .................................................................................................................... 24
4.2: Response Quality ............................................................................................................... 33
4.3: Respondent Burden ............................................................................................................ 35
4.4: Respondent Characteristics ................................................................................................ 36
4.5: Key Takeaways From the Dual-Topical Experiment ........................................................ 37
Chapter 5: ATES Split-Panel Experiments ................................................................................... 38
5.1: ATES Certification Provider Item Wording Experiment .................................................. 38
5.2: ATES Perceived Usefulness Items Response Option Order Experiment .......................... 41
Chapter 6: The Effectiveness of NHES Contact Attempts Across Administrations .................... 45
6.1: Effectiveness of Screener Contact Attempts ..................................................................... 45
6.2: Effectiveness of Topical Contact Attempts ....................................................................... 56
6.3: E-Mail Outcomes ............................................................................................................... 66
Chapter 7: Summary and Conclusions .......................................................................................... 70
7.1: Screener Mailing Experiments .......................................................................................... 70
7.2: Screener Split-Panel Experiment ....................................................................................... 71
7.3: Dual-Topical Experiment .................................................................................................. 73
7.4: ATES Item-Level Experiments ......................................................................................... 73
7.5: Effectiveness of NHES Contact Attempts ......................................................................... 74
References ..................................................................................................................................... 77
Appendix A. Tables .................................................................................................................... A-1
Appendix B. Additional Figures ................................................................................................. B-1
Topical Response Rate by Week ............................................................................................ B-2
Topical Response Rate by Day After Each Contact Attempt ................................................. B-5
Appendix C: Screener Experiment Results among TQA Respondents ...................................... C-1
Appendix D: Topical Survey Eligibility Decision Rules from NHES:2017 Sampling Plan ...... D-1
Appendix E. Envelopes Used in the 2017 Web Test ...................................................................E-1
Chapter 1: Introduction
The 2017 National Household Education Survey (NHES:2017) web test was the first time NHES
responses were collected almost entirely online.1 Sampled households were sent contact
materials that included information about how to access the NHES web instrument; they did not
have the option to complete a paper questionnaire. The intent of this test was to determine the
feasibility of moving forward using web as a primary mode of data collection in the next fullscale NHES collection in 2019. The web test experimented with:
strategies for contacting sample members;
alternate presentation of the household screener to maximize the accuracy of screener
responses and the overall usability of the screener instrument;
asking respondents to complete two topical surveys instead of one; and
alternate presentation of key Adult Training and Education Survey (ATES) items to
maximize the quality of the responses received for these items.
This report presents the results of several methodological experiments embedded in the
NHES:2017 web test (see exhibit 1.1 on the next page for more information about each
experiment). It also includes a discussion of the effectiveness of the contact attempts included in
both this administration and other recent NHES administrations. The overarching goal of this
report is to determine which aspects of the NHES:2017 design worked well and which ones did
not. In particular, the report addresses the following research questions, with a chapter of the
report dedicated to each:
1. Chapter 2: What is the impact of using lower priced screener mailing strategies on the
screener and topical response rates? Is there an effect on response timeliness or
representativeness?
2. Chapter 3: What is the ideal way to administer the household screener online? Is there
any benefit to using a redesigned screener more similar to the one the Census Bureau has
developed for other household surveys in terms of response rate, respondent burden,
response quality, or representativeness?
3. Chapter 4: Are sampled households willing to respond to two topical questionnaires
online? Does asking households to do this have any negative impact on response rates,
response quality, or representativeness?
4. Chapter 5: For the ATES topical questionnaire, is there a better way to ask the credential
provider item that is used to differentiate between certifications and licenses? Are
response order effects a concern for the “usefulness” items?
1
Sample members who called into the Telephone Questionnaire Assistance (TQA) and completed the screener over
the phone are the exception.
1
5. Chapter 6: Overall, how effective were the NHES:2017 contact attempts—particularly
the newly piloted approaches (pressure-sealed envelopes and e-mail reminders)? How
does the effectiveness of NHES:2017 contact attempts compare to other recent mailbased NHES administrations? Should any changes be made to the mailing schedule?
Each chapter includes an overview of the methods used in the experiment or for the survey
contact efforts being analyzed, a discussion of the results of any analyses that were conducted,
and a list of key takeaways.2 The report concludes with a final Chapter 7, which summarizes the
most important results from the earlier chapters and provides recommendations for the
application of these findings to NHES:2019.
2
Unless noted otherwise, all analyses in this report were conducted using base weights.
2
Exhibit 1.1: Experiments included in NHES:2017
Experiment
Screener
split-sample
Screener
incentive
Envelope
size
FedEx /
First Class
Dual topical
ATES
certification
provider
item
ATES
perceived
usefulness
items
Survey
stage
Description
Screener Half of sampled households received the screener used in NHES:2016,
which asks respondents to first indicate the number of people living in the
household and then provide more detailed information person-by-person
(e.g., all items for Person 1, all items for Person 2, and so on). The other half
of the sampled households received a redesigned screener, which asks
respondents to first list the names of all the individuals living in the
household and then provide more detailed information item-by-item (e.g.,
date of birth for Person 1, date of birth for Person 2, sex for Person 1, sex for
Person 2, and so on).
Screener Fifteen percent of the sample received a $2 prepaid incentive with the first
screener mailing, while the remaining 85 percent received the standard $5
prepaid incentive.
Screener Ninety-seven percent of the sample was sent their first and second screener
mailings in a full-size (BC-1776) envelope (standard NHES approach),
while the other 3 percent was sent theirs in a small, letter-sized envelope
(BC-1325).
Screener Half of the sample was assigned to receive the third screener mailing in a
FedEx envelope (standard NHES approach), while the other half was
assigned to be sent the mailing using First Class mail in a cardboard priority
mail envelope. Households with a PO box address could not be sent a FedEx
mailing and thus were sent this reminder using First Class mail regardless of
their experimental assignment.
Topical Two-thirds of the sample was assigned to the standard single-topical
condition. The other third was assigned to a dual-topical condition in which
households that were eligible for two or more topical surveys were asked to
complete two topical instruments (either a child and adult questionnaire or
two child questionnaires).
Topical: Half of the ATES respondents received the question wording used in
ATES
NHES:2016 (version A), which asked respondents, “Is your certification or
license required by a federal, state, or local government agency (such as a
state board) in order to do that kind of work?” The other half received an
alternate version B, which asked, “Is your certification or license required by
a government agency (such as a state licensing board) in order to do that
kind of work?”
Topical: Half of the ATES respondents received the version used in NHES:2016
ATES
(version A), in which the response options were listed as, “Not useful,
useful, very useful, too soon to tell.” The other half received an alternate
version B, in which the response options were listed as, “Very useful,
somewhat useful, not useful, too soon to tell.”
3
Chapter 2: Screener Mailing Experiments
This chapter presents the results of three screener experiments that tested the effectiveness of
alternate, less costly screener mailing strategies: (1) the incentive experiment, (2) the envelope
size experiment, and (3) the FedEx/First Class experiment. Each section of the chapter begins
with a description of the experiment and then presents the effect of the experiment on key
outcomes.
2.1: Screener Incentive Experiment
This experiment randomly assigned 15 percent of the sample members to receive a $2 prepaid
cash incentive with the first screener mailing instead of the standard $5 prepaid cash incentive.
Using a $2 incentive would present a potential large cost savings for future administrations.
However, using a smaller incentive could have a negative effect on the response rate (e.g., Singer
and Ye 2013; Mercer et al. 2015). This experiment also provides data that could be useful for
conducting analyses of incentive sensitivity when a web option is offered (as prior incentive
experiments have only been conducted among paper-only cases). This section of the chapter
includes an analysis of the effect of the incentive value on the response rate, response timeliness,
and respondent characteristics
Response rate and response timeliness
The first analysis in this section examines the screener response rate by incentive value, which is
defined as the percentage of eligible households in each condition that returned the questionnaire
(American Association for Public Opinion Research Response Rate 1 (AAPOR RR1)).3 T-tests
are used to identify statistically significant differences between response rates.4
As shown in figure 2.1 on the next page, the screener response rate for the $2 incentive group
was significantly lower than the screener response rate for the $5 incentive group (41 percent
versus 44 percent).
We also compared the response rate for each topical survey in each condition to determine if the
screener incentive had a carryover effect on the topical response rate (this seems especially likely
when the NHES is administered online because sample members often experience the screener
and topical phases in a single sitting).5 We also looked at the response rate for each topical
3
Typically, for production, the unit response rate is calculated using AAPOR Response Rate 3 (RR3), which
estimates the percentage of addresses of unknown eligibility that are eligible. However, due to the difficulty of
making statistical comparisons between response rates calculated using RR3, all unit response rates presented in this
report are calculated using AAPOR RR1, which assumes that all addresses of unknown eligibility status are, in fact,
eligible. Therefore, they represent the estimated response rate under the most conservative eligibility assumption and
can be interpreted as the proportion of sampled cases (excluding cases known to be ineligible) that returned a
completed questionnaire.
4
T-tests are used to identify statistically significant differences between experimental conditions in all tables
presented in this report unless indicated otherwise. We do not make adjustments for multiple comparisons (e.g.,
Bonferroni correction).
5
All topical response rate analyses in this chapter are restricted to households where the screener was completed
online because TQA screener respondents that were sampled for a topical were asked to complete the first topical
item but were not asked to complete a full topical questionnaire.
4
separately for the single-topical condition and the dual-topical condition. In addition, we looked
at the ATES results separately for households where the screener respondent was sampled for
ATES and those where a different household member was sampled for ATES; when a different
household member was sampled for ATES: (1) there was more of a separation between the
screener and topical response requests and (2) an additional $5 topical incentive was mailed to
the ATES sample member (see table 2.1 in appendix A for the full set of results).
There were no significant differences in the topical response rates by incentive condition.
This was true regardless of whether the household was in the single- or dual-topical
condition and regardless of whether the screener respondent was the household member
sampled for ATES.
Although there was a notable decline in the PFI-H response rate when the $2 incentive
was used (68 percent versus 77 percent), this is not a significant difference (likely due to
the small number of cases sampled for PFI-H).
Figure 2.1: Response rate, by questionnaire and incentive condition: 2017
100%
89% 90%
88% 87%
Response rate
80%
60%
68%
77%
73% 73%
41% 44%*
40%
20%
0%
Screener
ECPP
PFI-E
PFI-H
ATES
Incentive
$2 incentive
$5 incentive
* p < 0.05.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of sampled households
(excluding undeliverable and out-of-scope addresses) that were respondents to the questionnaire. Topical response rates exclude
cases that did the screener on the TQA because these cases were not asked to complete the entire topical questionnaire. For the $2
condition, the unweighted eligible sample size was 13,400 for the screener, 390 for the Early Childhood Program Participation
survey (ECPP), 890 for the Parent and Family Involvement-Enrolled survey (PFI-E), 30 for the Parent and Family InvolvementHomeschool survey (PFI-H), and 3,180 for the Adult Training and Education Survey (ATES). For the $5 condition, the
unweighted eligible sample size was 76,090 for the screener, 2,560 for ECPP, 5,270 for PFI-E, 190 for PFI-H, and 19,180 for
ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
We next calculated the percentage point gain in the response rate after each screener mailing in
each incentive condition to determine whether the incentive conditions differed in terms of how
early in the field period sample members completed the screener by (see figure 2.2 on the next
5
page and table 2.2 in appendix A).6 Earlier screener responses lead to cost savings because they
allow fewer follow-up mailings to be sent.
The $5 incentive appeared to have the greatest positive effect over the $2 incentive early
in the administration. It resulted in a significantly larger increase in the response rate than
the $2 incentive after each of the first two screener mailings (initial mailing and pressuresealed envelope).
However, incentive value did not have a significant effect on the magnitude of the gain in
the response rate after the second screener mailing. The $5 incentive also resulted in a
significantly smaller gain in the response rate after the third screener mailing than did the
$2 incentive.
Still, as noted previously, the final response rate in the $5 incentive condition was
significantly higher than the final response rate in the $2 condition.
Figure 2.2: Percentage point gain in screener response rate after each mailing, by incentive
condition: 2017
100%
Response rate
80%
60%
40%
20%
11%
14%*
12%
13%*
8%
8%
10%
9%*
0%
Initial screener mailing
Pressure-sealed
Second screener
envelope
mailing
Mailing
$2 incentive
Third screener mailing
(FedEx/First Class)
$5 incentive
* p < 0.05.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible sampled households
that had completed the screener after the specified mailing. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less than three days after the next mailing was sent. Unweighted sample size
(excluding ineligible addresses) was equal to 13,400 for the $2 incentive condition and 76,090 for the $5 incentive condition.
Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Respondent characteristics
Finally, we examined whether respondent characteristics differed by incentive condition (see
table 2.3 in appendix A). This analysis used frame variables to determine whether the smaller
incentive was less successful at getting hard-to-reach populations, such as younger, minority,
lower-education, and lower-income individuals (and those missing frame data), to respond to the
6
Response is attributed to a mailing if the response was received three or more days after that mailing was sent (to
allow time for the mailing to reach the household) and less than three days after the next mailing was sent.
6
survey. We also present the percentage of screener respondent households in each condition that
reported having at least one household member who was eligible for each of the four topical
surveys to determine whether the smaller incentive was less effective at getting households with
eligible individuals to respond to the screener.
There were significant (although small) differences between the $2 and $5 incentive groups for
only 4 of the 32 comparisons that were made, suggesting that incentive value did not have much
of an effect on sample composition.
The $2 incentive group had a higher percentage of households with a White head of
household (58 percent) compared to the $5 incentive group (56 percent).
The $2 incentive group had a higher percentage of households with an annual income of
$85,001–$120,000 (18 percent) compared to the $5 incentive group (17 percent).
The $2 incentive group had a lower percentage of households with missing information
on race/ethnicity of the head of household (22 percent versus 24 percent), education of
the head of household (22 percent versus 24 percent), and age of the head of household
(19 percent versus 21 percent). But it did not have a significant effect on the percentage
of screener respondent households that had annual income or a phone number available
on the frame.
The incentive did not have a significant impact on the percentage of screener respondent
households that reported at least one household member eligible for each of the four
topicals.
Takeaways for the screener incentive experiment
Using a $2 screener incentive resulted in a significant (although relatively small)
reduction in the screener response rate but had minimal effect on the topical response
rate.
The positive effect of the $5 incentive over the $2 incentive was greatest at the beginning
of the administration; it was particularly effective at getting sample members to respond
to one of the first two screener contacts (initial mailing and pressure-sealed envelope).
There were very few significant differences in the characteristics of screener respondent
households in the two screener incentive conditions, and those that did exist were small
in magnitude.
For a few variables, some evidence indicates that using a $2 incentive reduced the
prevalence of households that are missing frame data—a group that has been found in the
past to be less likely to respond. But the incentive did not have a measurable impact on
the likelihood that respondent households reported topical-eligible household members
on the screener.
7
2.2: Envelope Size Experiment
Three percent of sample members were randomly assigned to receive a smaller, letter-sized
envelope for the initial screener mailing and first screener reminder mailing instead of the larger,
full-size envelope that is traditionally used in the NHES. Images of the envelopes are included in
appendix E. Envelope size may influence people’s perception of the NHES, whether they notice
the mailing, whether or not they think it is official mail, and their likelihood of responding.
However, postage on the smaller envelope is roughly half that of the full-size envelope,
presenting a potential cost-saving opportunity if the smaller envelope does not have a negative
impact on the response rate. This section of the chapter includes an analysis of the effect of the
envelope size on the response rate, response timeliness, and respondent characteristics.
Response rate and response timeliness
Figure 2.3 shows the response rates for the screener and each topical survey by envelope
condition (also see table 2.4 in appendix A).
Envelope size did not have a significant effect on the screener response rate (43 percent
in both conditions).
It also did not have a significant effect on the topical response rates. Except for PFI-H,
the differences between the response rates in the two conditions ranged from 1 to 2
percentage points. Although there was a notable decline in the PFI-H response rate when
the letter-size envelope was used, the estimates are not stable enough to warrant making a
statistical comparison between the two PFI-H response rates.
Figure 2.3: Response rate, by questionnaire and envelope size condition: 2017
100%
90% 91%
87% 85%
77%
Response rate
80%
60%
73% 74%
53%!
43% 43%
40%
20%
0%
Screener
ECPP
PFI-E
Questionnaire
Full size
PFI-H
ATES
Letter size
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or
greater.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of sampled households
(excluding undeliverable and out-of-scope addresses) that were respondents to the questionnaire. Topical response rates exclude
cases that did the screener on the TQA because these cases were not asked to complete the entire topical questionnaire. For the
full-size envelope condition, the unweighted eligible sample size was 85,010 for the screener, 2,820 for ECPP, 5,830 for PFI-E,
210 for PFI-H, and 21,290 for ATES. For the letter-size envelope condition, the unweighted eligible sample size was 4,480 for
the screener, 130 for ECPP, 333 for PFI-E, 10 for PFI-H, and 1,070 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
8
We next examined the effect of envelope size on screener response timeliness by comparing the
gain in the response rate after each of the screener mailings in the two envelope conditions. As
shown in figure 2.4 below, the gain in the response rate did not differ significantly by envelope
condition after any of the four mailings (see also table 2.5 in appendix A).
Figure 2.4: Percentage point gain in screener response rate after each mailing, by envelope size
100%
Response rate
80%
60%
40%
20%
13%
13%
13%
13%
8%
7%
9%
9%
0%
Initial screener mailing
Pressure-sealed
envelope
Second screener
mailing
Third screener mailing
(FedEx/First Class)
Mailing
Full size
Letter size
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible sampled households
that completed the screener after the specified mailing. Response is attributed to a mailing if the response was received three or
more days after that mailing was sent and less than three days after the next mailing was sent. The unweighted sample size
(excluding undeliverable addresses) was equal to 85,010 for the full-size envelope condition and 4,480 for the letter-size
envelope condition. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Respondent characteristics
We also compared screener respondent households in the two conditions on the same household
characteristics used in the screener incentive analysis and found almost no significant differences
between the full- and letter-size envelope respondent households (see table 2.6 in appendix A).
The only significant difference was for households with a head of household age 18–24, which
were more prevalent in the full-size envelope condition (just over 1 percent) than in the lettersize envelope condition (about half a percent); however, the magnitude of this difference is very
small, and, as noted in the table, the latter estimate should be interpreted with caution.
Takeaways for the envelope size experiment
Using a letter-size envelope instead of a full-size envelope for two of the screener
mailings did not have a significant effect on the final screener response rate, or on the
response rate after any specific screener mailing.
It also did not have a significant effect on the topical response rate.
Finally, it did not have a significant effect on the characteristics of the responding
households or on their likelihood of reporting topical-eligible individuals on the screener.
9
2.3: FedEx/First Class Experiment
As part of this final screener mailing experiment, sample members were randomly assigned to be
sent the third screener mailing using FedEx (as has typically been done in recent NHES
administrations) or by First Class mail in a cardboard priority mail envelope, which is less
expensive than FedEx.7 An image of the FedEx envelope is included in appendix E.8 Again, this
analysis assesses whether the mailing method had an impact on the likelihood of response or the
characteristics of those who responded.
Response rate
First, we compared the screener response rates in the FedEx and First Class conditions. The
response rate was significantly lower in the First Class condition than in the FedEx condition (42
percent versus 45 percent, see figure 2.5 below and table 2.7 in appendix A). We also looked
specifically at the gain in the response rate in each condition following the FedEx / First Class
mailing and found that the gain was significantly larger in the FedEx condition than it was in the
First Class condition (10 percent versus 7 percent; not shown in table 2.7). As seen for the other
screener mailing experiments, there was not a significant difference in the topical response rates
by condition.
Figure 2.5: Response rate, by questionnaire and FedEx/First Class condition: 2017
100%
90% 90%
87% 87%
76% 75%
Response rate
80%
60%
73% 73%
45% 42%*
40%
20%
0%
Screener
ECPP
PFI-E
PFI-H
ATES
Questionnaire
FedEx
First Class
* p < 0.05.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of sampled households
(excluding undeliverable and out-of-scope addresses) that were respondents to the questionnaire. Households with PO box
addresses are excluded because they cannot receive FedEx mailings. Topical response rates exclude cases that did the screener on
the TQA because these cases were not asked to complete the entire topical questionnaire. For the FedEx condition, the
unweighted eligible sample size was 44,720 for the screener, 1,530 for ECPP, 3,230 for PFI-E, 130 for PFI-H, and 11,520 for
ATES. For the First Class condition, the unweighted eligible sample size was 44,130 for the screener, 1,410 for ECPP, 2,900 for
PFI-E, 90 for PFI-H, and 10,720 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
7
PO box addresses could not receive FedEx mailings; as a result, they were sent First Class mail regardless of their
experimental assignment.
8
AIR does not currently have a copy of the First Class envelope, but this is something that could likely be requested
from Census in the future.
10
Respondent characteristics
Finally, we compared the characteristics of the screener respondent households in the FedEx and
First Class conditions (see table 2.8 in appendix A). Again, the respondent characteristics were
almost identical in the two conditions, with only two significant (but small) differences (out of
more than 30 subgroups tested). The FedEx condition had a lower percentage of households with
an annual income of $85,001–$120,000 (17 percent) compared to the First Class condition (18
percent); it also had a higher percentage of households that were missing annual household
income information (11 percent versus 10 percent).
Takeaways for the FedEx/First Class experiment
Sending the final screener mailing using First Class instead of by FedEx resulted in a
significant (although relatively small) reduction in the screener response rate. It did not
have a significant effect on the topical response rates.
It also had very little effect on the characteristics of screener respondent households and
did not have a significant effect on the likelihood that they reported a topical-eligible
household member on the screener.
11
Chapter 3: Screener Split-Sample Experiment
This chapter presents the results of the screener split-sample experiment, in which sample
members were randomly assigned to receive either of the following:
the NHES:2016 screener, which is a rather literal translation of the NHES paper screener
and uses a person-by-person format, asking all of the questions about a single household
member before turning to the next household member; or
a redesigned screener that is more similar to screeners that the Census Bureau has
developed for web administration of other household surveys, such as the American
Community Survey, which uses a characteristic-by-characteristic format, asking about a
single characteristic for all household members before turning to the next characteristic.
A few other differences between the two versions include the following: (1) the 2016 version
starts by asking how many people live in the household, while the redesigned version starts by
asking for the screener respondent’s name; (2) the 2016 version uses the number provided in the
first question (about how many people live in the household) to decide how many individuals to
ask detailed questions about, while the redesigned version asks for all household members’
names and uses that to determine how many individuals to ask detailed questions about; (3) the
redesigned version only lets households report six names at first and then asks them if anyone
else lives in the household (and, if so, allows them to report up to four additional names), while
the 2016 version allows respondents to report up to 10 household members without including a
question of this type; and (4) in the redesigned version, the characteristic-by-characteristic
format allows the redesigned screener to identify earlier those households where no one is ageeligible for NHES (all household members over age 65) and thus permits skipping the enrollment
and current grade items for these households.
The goal of conducting this experiment with the redesigned screener was to see if it would be
easier than the 2016 screener for respondents to complete (for example, is it easier for
respondents to list all of the household members’ names instead of providing a number of
household members?), and whether it seemed to lead to more accurate screener responses. The
key outcomes of interest were the response rate, response quality, response burden, respondent
characteristics, and screener item responses (e.g., number of household members reported).
When applicable, all analyses in this chapter were conducted twice—first for web respondents
and then for the 8 percent of screener respondents who completed the screener over the phone by
calling into the TQA—to determine if the ideal screener format is different for interviewer
administration than it is for self-administration.9 The web screener results are shown in this
chapter, while the TQA results are shown in appendix C.
9
TQA respondents completed the same screener version over the phone to which they would have been assigned on
the web.
12
3.1: Response Rate
We began by comparing the screener and topical response rates by screener condition. If one
screener version resulted in much lower response rates, it might be preferable to avoid using that
version in future administrations. However, as shown in figure 3.1, the screener response rates in
the two versions were not measurably different from one another (43 percent in both conditions).
In addition, there were no significant differences in the topical response rates by screener version
(also see table 3.1 in appendix A).10 Although there was a noticeable decrease in the PFI-H
response rate when the 2016 screener was used, this difference was not significant, likely due to
small sample sizes for PFI-H (about 230 households were sampled for PFI-H).
Figure 3.1: Response rate, by questionnaire and screener version: 2017
Response rate
100%
90% 89%
89% 86%
72%
80%
60%
79%
73% 73%
43% 43%
40%
20%
0%
Screener
ECPP
PFI-E
PFI-H
ATES
Questionnaire
2016 version
Redesigned version
NOTE: In the 2016 version, questions were asked in a person-by-person format. In the redesigned version, questions were asked
in a characteristic-by-characteristic format. Response rates were calculated using AAPOR RR1. Percentages represent the
proportion of sampled households (excluding undeliverable and out-of-scope addresses) that were respondents to the
questionnaire. Topical response rates exclude cases that did the screener on the TQA because these cases were not asked to
complete the entire topical questionnaire. For the 2016 version, the unweighted eligible sample size was 44,780 for the screener,
1,420 for ECPP, 3,020 for PFI-E, 120 for PFI-H, and 11,200 for ATES. For the redesigned version, the unweighted eligible
sample size was 44,710 for the screener, 1,530 for ECPP, 3,140 for PFI-E, 101 for PFI-H, and 11,160 for ATES. Sample sizes
have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
3.2: Response Quality
It is also important to know if the screener version had an impact on the quality of the data that
was collected – is it easier for respondents to report information in a person-by-person format or
a characteristics-by-characteristic format? Is it easier for them to report the number of people
living in the household by reporting a number or by listing out everyone’s names? Response
quality was measured in terms of the screener breakoff rate, item missingness, and the
prevalence of inconsistent responses.
10
As also noted in chapter 2, all topical response rate calculations are limited households that completed the
screener on the web because TQA screener respondents were not asked to complete a full topical survey.
13
Screener breakoff rate
We first compared the breakoff rate in each screener condition—the percentage of households
that accessed the screener instrument but did not complete it. Among those who accessed the
web tool, there was not a significant difference between the two conditions in terms of the
screener breakoff rate (3 percent in both conditions; see table 3.2 in appendix A).
We also conducted subgroup analyses using frame variables to assess the impact of screener
format on subgroups that were expected to be particularly affected by burden:
by the educational attainment of the head of household, with the hypothesis that lower
education households might be more likely than higher education households to have
trouble completing the screener (and that this would be more obvious in whichever
screener was more burdensome);
by the number of adults in the household, with the hypothesis that larger households
might be more likely than smaller households to have trouble completing the screener;
and
by whether or not the household was flagged as having children, with the hypothesis that
having child household members also might mean the household size is larger. In
addition, it might make the questionnaire more burdensome to complete because it
requires the household to answer additional questions that only apply to individuals who
are enrolled in school.
However, for each of these subgroups, we found that there was not a measurable difference in
the breakoff rate by screener version. We did see slightly higher breakoff rates in both conditions
for households where the head of household had educational attainment of high school or less (as
opposed to some college or more), but we did not see the expected pattern for breakoffs in terms
of the number of adults in the household or whether the household had children in it. This may
suggest these variables are not sufficiently related to burden to prompt breakoffs—or it may be
that the frame variables are not actually accurate measures of these household characteristics (for
example, the frame may say there are children living in the household when really there are not
any present). We did, however, find that those who were missing data on the frame variables
used for the subgroup analyses tended to be the most likely to breakoff, which makes sense given
that these households also have been found to be less likely to participate in the NHES at all
(Jackson and Medway 2017).
Item missingness
We next explored the extent of item missingness for each of the household member characteristic
screener items: (1) name, (2) date of birth/age,11 (3) sex, (4) school enrollment status, and (5)
11
Both screeners started by asking for month and year of birth and then asked for age if the screener respondent
declined to provide month and year of birth. To be counted as missing for date of birth/age, a household member
needed to be missing data for both of these questions.
14
current grade. For each of these items, we first looked at the percentage of screener respondent
households where at least one household member was missing a response to the question.12
As shown in figure 3.2, households that responded online using the 2016 version were
significantly less likely than those that responded using the redesigned version to have at least
one household member missing a response for:
name (0.5 versus 1.1 percent);
sex (1.0 versus 1.6 percent);
enrollment status (0.9 percent versus 1.2 percent); and
grade (0.4 percent versus 0.8 percent) (see table 3.3a in appendix A).
Although the magnitude of these differences was quite small, the direction of the relationship
was consistent across the four items (there was not a significant difference in the extent of
missing data for date of birth/age by screener version). It also should be noted that, overall, the
percentage of households with at least one person missing a response to key questions was quite
low in both versions.
Percentage
Figure 3.2: Percentage of web screener respondent households with at least one household
member missing a response to the screener item, by screener item and screener
version: 2017
10%
8%
6%
4%
2%
0.5%
1.1%*
1.0%
0.6% 0.8%
1.6%*
0.9% 1.2%*
0.4% 0.8%*
School enrollment
status
Current grade
or equivalent
0%
Name
Date of birth/age
Sex
Screener item
2016 version
Redesigned version
* p < 0.05
NOTE: In the 2016 version, questions were asked in a person-by-person format. In the redesigned version, questions were asked
in a characteristic-by-characteristic format. Percentages represent the proportion of web screener respondent households with at
least one household member missing a response to that screener item. Households that responded to the screener on the TQA are
excluded from this analysis. The unweighted sample size was equal to 17,160 for the 2016 version and 17,040 for the redesigned
version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
12
There appear to be some irregularities in how the data from the redesigned version is output into a data file (for
example, the name for a given household member might be listed under P3 while his or her other information is
listed under P4). We are still researching this, and it may have at least small implications for any item-response
results reported in this chapter.
15
We conducted the same subgroup analyses discussed in the previous section using frame
variables (educational attainment of the head of household, the number of adults in the
household, whether the household is flagged as having children).
For name and sex, the difference between the two versions remained significant
(although small) for almost all subgroups, with the exceptions tending to be the smaller
subgroups where there may not have been sufficient power to detect small differences.
For enrollment and grade, the difference between the two versions was not significant for
most subgroups.
For all subgroups for all of the variables, the direction of the relationship was
consistent—a higher percentage of households had at least one person missing a response
to key questions in the redesigned version, regardless of statistical significance.
Next, we looked at the percentage of web screener respondent households where the sampled
household member was missing a response to the question. The rates were even lower for this
analysis (all less than 1 percent; see figure 3.3 below and table 3.3a in appendix A), in part
because excessive missing data would stop a household member from being sampled in the first
place. The results for this outcome were less consistent than when we looked at whether any one
household member was missing data. For example, households in the redesigned version were
significantly more likely than those in the 2016 version to be missing a name for the sampled
household member but less likely to be missing a date of birth/age; there was not a significant
difference for sex, enrollment, or grade. Overall, screener version had little impact on item
missing data for the sampled household member among web screener respondent households.
Figure 3.3: Percentage of web screener respondent households where the sampled household
member was missing a response to the screener item, by screener item and screener
Percentage
10%
8%
6%
4%
2%
0.2% 0.4%*
0.2% 0.1%!*
0.4% 0.4%
0.3% 0.4%
0.2% 0.3%
Sex
School enrollment
status
Current grade
or equivalent
0%
Name
Date of birth/age
Screener item
2016 version
Redesigned version
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or
greater.
* p < 0.05
NOTE: In the 2016 version, questions were asked in a person-by-person format. In the redesigned version, questions were asked
in a characteristic-by-characteristic format. Percentages represent the proportion of web screener respondent households with the
sampled household member missing a response to that screener item. Households that responded to the screener on the TQA are
excluded from this analysis. The unweighted sample size was equal to 17,160 for the 2016 version and 17,040 for the redesigned
version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
16
Inconsistent responses
Next, we calculated the percentage of screener respondent households that provided inconsistent
screener responses for at least one household member in each condition. Inconsistent responses
were defined as:
reporting an inconsistent pair of age and grade responses (for example, reporting that a
five year old is in twelfth grade);
reporting that the household member is currently homeschooled or in private school,
public school, or preschool for the enrollment question and then reporting that the
household member is in college, university, or vocational school for the current grade
question; or
reporting either that the household member is enrolled in school and is too old to
realistically be in school (for example, reporting that a 25 year old is in public or private
school) or that the household member is not enrolled and is too young to be finished with
his or her schooling (for example, reporting that a 10 year old is not currently in
school).13
The percentage of web screener respondent households that provided an inconsistent pair of
responses for at least one household member was very low in both conditions (see table 3.4a in
appendix A). However, respondents in the redesigned version were significantly more likely than
those in the 2016 version to do so (3 percent versus 2 percent). We conducted the same subgroup
analyses as discussed in earlier sections and found that the significant difference was limited to
the following subgroups:
households where the head of household had completed high school or less (3 percent in
the redesigned versus 2 percent in the 2016 version);
households with only 1 or 2 adults in them (rounds to 3 percent in both conditions); and
households that were flagged as having children (5 percent in the redesigned version
versus 4 percent in the 2016 version).
Unknown eligibility sampling status
Finally, we compared the percentage of screener respondent households in each condition where
at least one household member received an “unknown eligibility” sampling status. This status
was assigned when there was insufficient information to determine whether the household
member was eligible for any of the topical surveys because (1) there was too much item
nonresponse or (2) there were inconsistent screener responses.14 Household members that receive
this flag are not eligible for topical sampling. A higher prevalence of households with unknown
13
The web tool was programmed to prohibit some potential inconsistent responses from occurring: (1) household
members under the age of 10 could not have reported enrollment of “college, university, or vocational school,” and
(2) household members over age 25 could not have reported enrollment of “homeschool” or “public or private
school or preschool” (and thus could not be asked their current grade or equivalent).
14
Appendix D includes a table with the sampling decision rules based on all of the possible combinations of age,
enrollment, and grade responses to the screener—including the conditions under which an unknown eligibility
sampling decision would be made.
17
eligibility would suggest that respondents have more difficulty completing that version of the
screener.
Web respondents to the redesigned screener were significantly more likely to have at least one
household member end up with an unknown eligibility sampling status, although the magnitude
of this difference was quite small (2 percent of households in the redesigned version and 1
percent in the 2016 version) (see table 3.5a in appendix A).
The same subgroup analyses as discussed in previous sections also were conducted here. The
difference between the two versions remained significant for almost all subgroups—and the
direction of the relationship was also consistent for nearly all subgroups (more households
received an unknown eligibility status using the redesigned screener compared to the 2016
version).
3.3: Response Burden
The next section of this chapter explores the effect of screener version on response burden; it is
possible that one ordering of the items or the other was more difficult for respondents to process
and was thus more burdensome to complete. As a measure of burden, we calculated the mean
number of minutes screener respondents spent answering the screener questions in each
condition.15 Among web screener respondents, we found that there was a small but significant
increase in the mean amount of time needed to complete the redesigned screener as compared to
the 2016 screener (4.4 minutes versus 3.9 minutes; see table 3.6a in appendix A).
We again conducted the same subgroup analyses as described in earlier sections. We found that
the significant differences by screener condition remained for almost all subgroups—and that the
pattern of the relationship remained the same for all subgroups (the redesigned version took
longer than the 2016 version). We also found that, as expected, the screener took longer to
complete in both conditions when there were more people reported living in the household and
when the household was flagged as having children (but it did not take longer when the head of
household had lower educational attainment).
3.4: Respondent Characteristics
We next compared the following characteristics of the responding households in each condition
using variables available on the frame to see if the two versions of the screener resulted in
different types of households responding to the screener:
whether there was a phone number available on the frame;
the race/ethnicity, age, and education of the head of household; and
annual household income.
15
Cases that completed the screener over multiple days, took more than 6 hours to complete it, or spent more than 15 minutes on
a page without taking any actions are excluded from this analysis. One case was missing from the paradata file and is also
excluded from this analysis.
18
There were no significant or notable differences in screener respondent households on these
variables among web screener respondents (see table 3.7a in appendix A).
3.5: Screener Responses
Finally, we compared the responses received for the two versions of the screeners in terms of
two screener response outcomes: (1) the number of people reported to be living in the household
and (2) whether or not at least one household member was reported as being eligible for a topical
survey.
Number of household members reported
Among the households that completed the screener online, there was a small but significant
difference in the mean number of household members reported in each condition (2.5 in the 2016
version and 2.6 in the redesigned version). There were some significant differences in the
percentage distribution number of household members reported by screener condition that drive
the difference between these means; however, most of these differences also were quite small in
magnitude (see figure 3.4 below and table 3.8a in appendix A):
There was more likely to be only one household member reported in the 2016 version
than in the redesigned version (25 percent versus 21 percent).
Conversely, respondents in the 2016 version were less likely than those in the redesigned
version to report three household members (15 percent versus 16 percent), five household
members (rounds to 6 percent in each condition), or six household members (2 percent
versus 3 percent).
Figure 3.4: Percentage distribution of the number of household members reported in the web
Redesigned
version
21%
2016 version
37%
25%
0%
16%
37%
20%
40%
1
2
15%
60%
Percentage of
screener respondent
households
3
4
5
6
15%
14%
80%
6% 3% 1%
6% 2% 1%
100%
7 or more
NOTE: In the 2016 version, questions were asked in a person-by-person format. In the redesigned version, questions were asked
in a characteristic-by-characteristic format. Percentages represent the proportion of web screener respondent households within
each condition that reported that number of household members. Households that responded to the screener on the TQA are
excluded from this analysis. The unweighted sample size was 17,160 for the 2016 version and 17,040 for the redesigned version.
Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
19
After giving respondents the opportunity to report the names of the first six household members,
the redesigned version of the screener included a question asking if any additional individuals
lived in the household (conversely the 2016 version allows respondents to report up to 10
household members without including a question of this type; results in this section not shown in
appendix tables).16
Overall, 4 percent of web redesigned screener respondent households said “yes” to this
question.
We also looked at the response to this question by whether or not the screener respondent
had already reported six household members in response to the initial question; for those
respondents, this question may simply provide an opportunity to report additional
household members who had not been listed previously due to a lack of space—while for
those who had previously reported fewer than six household members, this question
could provide an additional opportunity to report household members that the screener
respondent may have initially forgotten to list. Thirty-five percent of those screener
respondents who had already listed six household members replied ”yes” to this question,
while only 3 percent of those who had listed fewer than six household members did so.
After providing an affirmative response to that question, respondents were given the
opportunity to list up to four additional names. On average, web screener respondents
listed 1.2 additional names, with those who had already listed six people adding 1.6 more
names and those who had listed fewer than six added 1.0 names on average.17
Somewhat surprisingly, 26 percent of web respondents who had said “yes” to the
question about additional household members did not provide any additional names when
given the opportunity to do so. This was especially common for screener respondents
who had previously reported fewer than six household members (36 percent versus only
6 percent of households that had already listed six household members).
When web screener respondents did provide names for the additional household
members, they almost always also provided ages for these people (ages were reported for
96 percent of the people for whom names were provided). It was somewhat more likely
for respondents who had initially listed fewer than six household members not to provide
ages for the added household members (7 percent of added names were missing ages
compared to only 1 percent among respondents who had previously listed six household
members).
Looking at the ages for those names that were added by web respondents, there was a
large range in the reported ages, all the way from less than one year old up to 91 years
The exact wording of this question was: “Other than the people listed below, does ANYONE ELSE live in this
household? For example, anyone who usually lives here who is temporarily away from home or living in a dorm at
school, any babies or small children, roommates, foster children.”
17
Comparisons between households that initially listed six or more household members and those that listed fewer
than six are based on general patterns, not statistical significance, due to relatively small sample sizes.
16
20
old. About half of the added individuals were adults (age 19 or older), about a quarter
were children ages 6 to 18, and the final quarter were children ages 0 to 5.
o Among respondents who had initially listed six household members, the added
household members were especially likely to be children (35 percent were ages 0
to 5 and 37 percent were ages 6 to 18).
o However, among respondents who had initially listed fewer than six household
members, the added household members were much more likely to be age 19 or
older (65 percent of added household members, while 15 percent were ages 0 to 5
and 20 percent were ages 6 to 18). Ultimately, about 150 children in the eligible
age range for ECPP and PFI (age 0 to 18) were listed on the screener who would
not have been if the screener had only offered places to list six names and had not
included a question about potential additional household members for those
screener respondents who initially provided fewer than six names.
Reporting at least one household member eligible for a topical survey
Finally, we compared the percentage of screener respondent households in each condition that
reported at least one household member who was eligible for a topical survey. This percentage
did not differ significantly by screener version among web screener respondents (82 percent in
the 2016 version and 83 percent in the redesigned version; see figure 3.5 on the next page and
table 3.9a in appendix A).
We also looked at the percentage of web screener respondent households reporting at least one
household member who was eligible for each of the specific topical surveys. Although there was
not a significant difference between the two screener versions for ATES or PFI-H, there was a
small but significant increase in the percentage of web screener respondents completing the
redesigned version who reported at least one household member eligible for ECPP (11 percent
versus 10 percent) or PFI-E (23 percent versus 22 percent).
21
Figure 3.5: Percentage of web screener respondent households that reported at least one
household member eligible for each topical survey, by screener version: 2017
100%
82% 83%
82% 83%
Percentage
80%
60%
40%
22% 23%
10% 11%
20%
1%
1%
0%
Overall
ECPP
PFI-E
PFI-H
ATES
Topical questionnaire
2016 version
Redesigned version
NOTE: In the 2016 version, questions were asked in a person-by-person format. In the redesigned version, questions were asked
in a characteristic-by-characteristic format. Percentages represent the proportion of web screener respondent households for
which at least one reported household member was eligible for a topical survey. Screener respondent households may have been
eligible for more than one topical; as a result the topical-specific results do not sum to the overall result. Households that
responded to the screener on the TQA are excluded from this analysis. The unweighted sample size was 17,160 for the 2016
version and 17,040 for the redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
3.6: Key Takeaways From the Screener Experiment
Screener version did not have a significant impact on the screener response rate or
breakoff rate. It also did not have a significant effect on the topical response rates.
Screener version did not have a significant effect on the characteristics of web screener
respondent households (as measured by frame variables, such as having a phone number
available or the education or age of the head of household). But households in the 2016
version reported fewer household members on average and were significantly more likely
than those in the 2016 version to only report one household member (and less likely to
report three, five, or six household members); however, the magnitude of these
differences is quite small and may not be of practical concern.
Screener version did not have a significant impact on the percentage of web screener
respondent households that reported at least one household member eligible for at least
one topical survey, but households in the redesigned version were significantly more
likely to report at least one household member eligible for ECPP or PFI-E, although the
magnitude of these differences is again quite small and may not be of practical concern.
For several other outcomes, the redesigned screener performed worse than the 2016
version among web screener respondents; however, again, the magnitude of most of these
differences is very small and may not be of practical concern. Respondent households in
the redesigned version:
22
o were more likely to have missing data for at least one household member for most
screener items;
o were more likely to provide an inconsistent response for at least one household
member;
o were more likely to have at least one household member receive an “unknown
eligibility” sampling status; and
o took longer on average to complete the screener.
Among TQA screener respondents, there were very few significant or notable differences
between the two versions in terms of key outcomes (see appendix C for more details).
23
Chapter 4: Dual-Topical Experiment
This chapter of the report presents the results for the dual topical experiment. In this experiment,
one-third of the sampled households were randomly selected to receive two topical surveys
instead of one.18 The intent of this experiment was to determine if households that were eligible
to complete two or more topical surveys would be willing to provide more data as part of an
online survey (building on the 2014 paper-only version of this experiment), so that future NHES
collections could potentially sample a smaller number of households and still end up with a
similar number of topical respondents.
The analyses presented in this chapter examine the effect of the dual topical request on the
following outcomes: the response rate, response quality, respondent burden, and respondent
characteristics. All analyses conducted in this chapter are limited to households that reported on
the screener that they had household members that were eligible for at least two of the NHES
topical surveys (ECPP, at least one of the versions of PFI, or ATES) because households that did
not meet this criteria in the dual-topical condition would only have been sampled for one topical.
All analyses in this chapter also are restricted to households where the screener was completed
online because TQA screener respondents who were sampled for a topical were asked to
complete the first topical item but were not asked to complete a full topical questionnaire. The
results of these analyses provide insight into the feasibility of requesting that households
complete multiple topical questionnaires in future NHES administrations that include a web
administration component.
4.1: Response Rate
The first section of this chapter assesses the impact of the dual-topical condition on the topical
response rate; because the dual-topical condition can be more burdensome for households, it is
possible it could result in lower topical response rates or that some households would decline to
complete the second topical questionnaire.
We started by comparing the topical response rate for each topical in the single and dual topical
households (see figure 4.1 on the next page and table 4.1 in appendix A).
For ECPP, PFI-E, and ATES there was a significant decrease in the topical response rate
in the dual-topical condition as compared to the single-topical condition. There was not a
significant or notable difference between the two conditions for PFI-H.
For ECPP and PFI-E, the magnitude of the difference was 4 to 5 percentage points.
The difference was larger for ATES, with an 8 percentage point decrease in the topical
response rate in the dual-topical condition. There was a similarly sized decrease in the
18
In the dual household condition, all households with individuals eligible for at least two topicals were meant to be
asked to complete two topicals. Each household could only get a particular topical one time (for example, ATES
was only presented once even if the household was solely made up of adults who were eligible for ATES). In
addition, dual topical households could only receive either PFI-E or PFI-H, not both. In conducting the analyses in
this chapter, we found that 25 households flagged for the dual household condition that had members eligible for
two or more topicals were not sampled for two topicals; this was likely a sampling error in the web instrument.
24
response rate in the dual-topical condition regardless of whether the screener respondent
or a different household member was sampled for ATES.
In addition, in both conditions, the response rate was much lower when another
household member was sampled than it was when the screener respondent was sampled
(more than 40 percentage points lower, likely due to the reduced topical mailing protocol
used in 2017).19
Figure 4.1: Topical response rate among households eligible for two or more topical
Figure 4.1: questionnaires, by topical questionnaire and dual-topical condition: 2017
100%
89% 86%*
92%
87%*
77%
Response rate
80%
76%
68%
60%*
60%
40%
20%
0%
ECPP
PFI-E
PFI-H
ATES
Topical questionnaire
Single-topical condition
Dual-topical condition
* p < 0.05.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible households with two
or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that were respondents to the topical questionnaire.
The analysis excludes cases that did the screener on the TQA because these cases were not asked to complete the entire topical
questionnaire. In the single-topical condition, the unweighted eligible sample size was 1,700 for ECPP, 3,590 for PFI-E, 120 for
PFI-H and 1,400 for ATES. In the dual-topical condition, the unweighted eligible sample size was 1,230 for ECPP, 2,520 for
PFI-E, 100 for PFI-H, and 2,860 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
We also looked at the response rate separately for each of the possible combinations of the dual
questionnaires, to determine if the response rate was impacted by the particular combination of
questionnaires that the household received (see table 4.1 in appendix A).
We began by looking at the child topical response rates for all possible child-child topical
pairings and compared those to the child topical response rates in the single-topical condition
(see figure 4.2 on the next page).
For both ECPP and PFI-E, the response rate was significantly lower when they were
paired together in the dual-topical condition than it was when they were administered
individually in the single-topical condition.
19
The 2017 topical protocol only included two mailings (and two e-mails, when following up with screener
respondents who had provided their e-mail address), while earlier NHES mail administrations included five topical
mailings.
25
When ECPP and PFI-H were paired together, there was a slight pattern of the ECPP
response rate being lower than in the single-topical condition—and the PFI-H response
rate being higher compared to the single-topical condition; however, the paired estimates
were not reliable enough to make statistical comparisons with the single-topical
condition.20
Figure 4.2: Child topical response rate among households eligible for two or more topical
questionnaires, by topical questionnaire, dual-topical condition, and dual topical
pairing: 2017
100%
89%
85%* 83%!
92%
87%*
77% 81%!
Respnose rate
80%
60%
40%
20%
†
†
†
†
†
0%
ECPP
PFI-E
Topical questionnaire
PFI-H
Single-topical condition
Dual-topical condition: paired with ECPP
Dual-topical condition: paired with PFI-E
Dual-topical condition: paired with PFI-H
† Not applicable.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or
greater.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible households with two
or more individuals eligible for ECPP and PFI (E or H) that were respondents to the topical questionnaire. The analysis excludes
cases that did the screener on the TQA because these cases were not asked to complete the entire topical questionnaire. In the
single-topical condition, the unweighted eligible sample size was 1,700 for ECPP, 3,590 for PFI-E, 120 for PFI-H and 1,400 for
ATES. In the dual-topical condition, the unweighted eligible sample size was 1,230 for ECPP, 2,520 for PFI-E, 100 for PFI-H,
and 2,860 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
We next looked at the ATES response rate in all possible adult-child dual topical pairings and
compared it to the ATES single-topical condition response rate (see figure 4.3 on the next page
and table 4.1 in appendix A). We looked separately at households where the screener respondent
was sampled for ATES and at those where someone else was sampled because sampling a
20
In order to provide NCES with as much information as possible for decision-making purposes, no estimates have
been suppressed in this report. However, we have generally refrained from conducting statistical tests whenever at
least one of the estimates would have been suppressed (due to the coefficient of variation being 50 percent or
greater, (2) the numerator being less than 3 (other than for estimates that round to 0), or (3) the denominator being
greater than 30); these t-tests have been replaced with daggers in all tables in appendix A. In addition, throughout
this report, we have flagged estimates as unreliable/needing to be interpreted with caution if any of the following is
true: (1) the coefficient of variation is 30 percent or greater, (2) the numerator is less than 3 (other than for estimates
that round to 0), or (3) the denominator is greater than 30; these estimates have an exclamation point displayed next
to them in all figures and tables.
26
different household member for ATES takes away the main benefit of dual topical sampling in a
web administration (that the screener respondent could do both topicals right after completing
the screener).
When ATES was paired with ECPP or PFI-E in the dual condition, the ATES response
rate was significantly lower than it was in the single-topical condition.
There was also a notable decrease in the ATES response rate when it was paired with
PFI-H (as compared to the ATES single-topical condition response rate); however, this
difference was not statistically significant given the small sample size. These patterns
were even observed when a different household member was sampled for ATES, though
it is not immediately clear why the dual-topical condition would decrease the response
rate among this group.
In addition, regardless of dual-topical condition or topical pairing, the ATES response
rate was lower when a different household member was sampled for ATES than it was
when the screener respondent was sampled; this is likely due to the break in the response
process when a different household member is sampled for ATES.
Figure 4.3: ATES response rate among households eligible for two or more topical questionnaires,
by ATES respondent, dual-topical condition, and dual topical pairing: 2017
91%
100%
Response rate
80%
60%
68%
81%* 83%* 81%
58%* 60%* 60%
47%
35%* 38%* 35%
40%
20%
0%
Overall
Same respondent as screener
ATES respondent
Different respondent than
screener
Single-topical condition
Dual-topical condition: paired with ECPP
Dual-topical condition: paired with PFI-E
Dual-topical condition: paired with PFI-H
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible households with two
or more individuals eligible for ATES and either ECPP or PFI (E or H) that were respondents to the topical questionnaire. The
analysis excludes cases that did the screener on the TQA because these cases were not asked to complete the entire topical
questionnaire. In the single-topical condition, the unweighted eligible sample size for ATES is 1,400. In the dual-topical
condition, the unweighted eligible sample size was 2,860 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Finally, since the order of the topicals was randomized for households in the dual-topical
condition that were asked to complete two topicals, we assessed whether the order that the
topicals were presented had an impact on response rates in the dual-topical condition (see figure
4.4 on the next page and table 4.1 in appendix A). This analysis—and all subsequent analyses of
topical order within the dual-topical condition—excludes households where someone other than
27
the screener respondent was sampled for ATES because that sampling scenario tended to result
in two different people taking the topicals at two different times (making topical order
irrelevant).
For ECPP, PFI-E, and ATES (when the screener respondent was sampled), the topical
response rate was significantly lower when a topical was presented second than when it
was the first topical presented.
For ECPP and PFI-E, the magnitude of the difference was 7 to 10 percentage points,
while for ATES it was about 13 percentage points.
These differences persisted regardless of which topical the surveys were paired with.
The same pattern was observed for PFI-H, but the difference was not significant (likely
due to small sample sizes for PFI-H).
Figure 4.4: Topical response rates in dual-topical condition among households eligible for two or
more topical questionnaires, by topical questionnaire and topical order: 2017
100%
88%
90%
81%*
80%*
75%
80%
Percentage
89%
84%
76%*
60%
40%
20%
0%
ECPP
PFI-E
PFI-H
ATES (same
respondent)
Topical questionnaire
First topical
Second topical
* p < 0.05.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible households in the
dual-topical condition with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that were
respondents to the topical questionnaire. The analysis excludes cases that did the screener on the TQA because these cases were
not asked to complete the entire topical questionnaire. Households where someone other than the screener respondent was
sampled for ATES are also excluded from the analysis. Unweighted eligible sample size was 870 for ECPP, 1,450 for PFI-E, 70
for PFI-H, and 1,430 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
We next determined whether each household completed all of the topicals that they were asked
to complete, classifying all households in both conditions that had household members eligible
for two or more topicals into one of three groups: (1) respondent to all sampled topicals (1 in the
single-topical condition and 2 in the dual-topical condition), (2) respondent to one of two
topicals (possible in the dual-topical condition only), and (3) nonrespondents to all sampled
topicals.
28
We found that households in the dual-topical condition were significantly less likely to
complete all of the topicals they were sampled for than were those in the single-topical
condition (60 percent versus 86 percent, see figure 4.5 below and table 4.2 in appendix
A).
Households in the dual-topical condition also were significantly less likely than those in
the single-topical condition to end up as nonrespondents to all sampled topicals (10
percent versus 14 percent).
Figure 4.5: Topical unit response status among households with household members eligible for
two or more topical questionnaires, by dual-topical condition,and order of topicals:
2017
Percentage
100%
86%
80%
61%
60%*
59%
60%
40%
20%
30%
14%
10%*
†
31%
29%
10%
10%
0%
Single topical condition
Overall
Alphabetical topical
order
Reverse topical order
________________________________________________
Dual-topical condition
Topical condition
Respondent to all sampled topicals
Respondent to 1 of 2 topicals
Nonrespondent
† Not applicable.
* p < 0.05 (compared to single-topical condition).
NOTE: Percentages represent the proportion of households with two or more individuals eligible for at least two of ECPP, PFI (E
or H), and ATES that completed that number of topicals (all, one of two, none). The analysis excludes cases that did the screener
on the TQA because these cases were not asked to complete the entire topical questionnaire. In the single-topical condition, the
unweighted eligible sample size was 1,700 for ECPP, 3,590 for PFI-E, 120 for PFI-H and 1,400 for ATES. In the dual-topical
condition, the unweighted eligible sample size was 1,230 for ECPP, 2,520 for PFI-E, 100 for PFI-H, and 2,860 for ATES.
Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
We also looked at the results separately for each possible topical pairing in the dual-topical
condition, to see if specific pairings (e.g., ECPP paired with PFI-E, ATES paired with PF-H, and
so on) had an effect on how likely dual topical households were to complete all of the topicals
for which they were sampled (see figure 4.6 to follow and table 4.2 in appendix A).
In general, we found topical pairing not to be much of a factor for pairings where the
screener respondent was asked to complete both topicals.
However, when a different household member was sampled for ATES, households were
much less likely to complete both topicals than they were when the screener respondent
was able to do both topicals. This is likely due to the break in the response process
29
associated with sending a separate response request to a different household member (as
well as due to the reduced topical mailing protocol used in 2017).
Households were particularly likely not to complete either topical when the pairing was a
different ATES respondent with PFI-H; however, this may be due to the lower response
rate associated with each of these topical scenarios individually.
Finally, we looked at whether the order in which the topicals were presented in had an impact on
whether dual-topical households completed both topicals (see table 4.2 in appendix A). We again
found this not to be the case—with one execption: when ECPP was paired with ATES (and the
screener respondent was the person sampled for ATES), households were significantly more
likely to complete both topicals when ATES was presented first than they were when ECPP was
presented first (81 percent versus 71 percent).
Topical yield
We also conducted an analysis of the topical yield in each condition—the number of screener
cases needed to achieve a single topical complete. Even though the topical response rates were
lower in the dual-topical condition, it is possible that getting two topicals from a sufficient
number of households could cancel this out. If fewer screeners could be sent out in the dualtopical condition while still maintaining the same topical yield, this would present a cost savings
opportunity for future NHES administrations. We also estimated the number of screener cases
that would need to be sampled to end up with the same topical yield as 2016 (approximately
67,660 topical completes) in both the single and dual-topical conditions; this provides an
estimate of whether using a dual topical design would allow for a smaller starting sample size in
2019. This analysis was conducted unweighted and includes all cases sampled for the NHES.21
In the single-topical condition, a topical complete was achieved for every 4.3 screeners
that were sent out. In the dual-topical condition, a topical complete was achieved for
every 3.3 screeners that were sent out.
To achieve 67,600 topical completes as was done in 2016, 291,206 screeners would need
to be sent in the single-topical condition and 224,803 screeners would need to be sent in
the dual-topical condition. This is a reduction of about 66,400 screeners in the dualtopical condition to end up with the same number of topical completes, suggesting that
the dual-topical condition is still more efficient even though the topical response rates
were lower in the dual-topical condition.
It is important to keep in mind that the 2017 web test had a much lower screener response rate
than other NHES administrations, mostly due to only offering a web option. In addition, the
topical yield is lower in 2017 because cases that completed the screener on the TQA were not
asked to complete the entire topical survey. Therefore, although this analysis is useful for making
comparisons between the two experimental conditions, it is not a good estimate of the exact
number of screeners that would need to be sent in the next official NHES administration.
21
These results are only shown in the text, not in appendix tables.
30
Figure 4.6: Topical unit response status among dual-topical condition households with household members eligible for two or more
topical questionnaires, by dual topical pairing: 2017
100%
81%
79%!
Percentage
80%
79%
76%
73%
57%
60%
56%
42%
40%
20%
35%
33%
9% 10%
15%!
7%!
12%12%
10%11%
ECPP/PFI-E
ECPP/PFI-H
ECPP/ATES
same
PFI-E/ATES
same
14%! 14%!
10%
31%
27%
9%
0%
________________________
Child-child pairing
PFI-H/ATES
same
___________________________________
Child-same ATES pairing
ECPP/ATES
different
PFI-E/ATES
different
PFI-H/ATES
different
_____________________________________
Child-different ATES pairing
Topical pairing
Respondent to all sampled topicals
Respondent to 1 of 2 topicals
Nonrespondent
NOTE: Percentages represent the proportion of households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that completed that number of
topicals (all, one of two, none). Sample sizes have been rounded to the nearest 10. The analysis excludes cases that did the screener on the TQA because these cases were not asked
to complete the entire topical questionnaire. ATES "same respondent” households are those where the screener respondent was sampled for ATES; ATES "different respondent"
households are those where a household member other than the screener respondent was sampled for ATES. In the single-topical condition, the unweighted eligible sample size
was 1,700 for ECPP, 3,590 for PFI-E, 120 for PFI-H and 1,400 for ATES. In the dual-topical condition, the unweighted eligible sample size was 1,230 for ECPP, 2,520 for PFI-E,
100 for PFI-H, and 2,860 for ATES. Sample sizes have been rounded t the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017
31
Incentive cost per topical complete
Finally, this section includes an analysis of the incentive cost per topical complete in the single
and dual-topical conditions. The analysis provides insight into the extent to which the dual
topical design might reduce the incentive cost per completed topical survey, since dual-topical
condition households received the same incentive as single-topical condition households. This
analysis was also conducted unweighted and includes all cases sampled for the NHES.
The incentive cost for each sampled household was determined based on the case’s
screener incentive condition ($2 or $5) and whether any topical mailings were sent to the
household (additional $5 incentive).
The cost was then summed for all households in each of the four conditions (singleversus dual-topical by $2 versus $5 screener incentive). The number of completes in each
condition was determined by summing up the number of completed topicals received in
that condition.
Finally, the cost per complete was calculated as the total incentive cost in that condition
divided by the total number of completes. This analysis was conducted unweighted.
As shown in figure 4.7 (and table 4.3 in appendix A), the incentive cost per complete was lower
in the dual-topical condition; the incentive cost per topical complete in the single-topical
condition was $21.73, while in the dual-topical condition it was $17.32. We see the same pattern
when we look at this result separately for those cases that were given a $2 screener incentive and
those that were given a $5 screener incentive, although the magnitude of the difference is larger
in the $5 condition (about $4.70 versus about $2.20).
Figure 4.7: Incentive cost per topical complete, by screener incentive condition and dual topical
condition: 2017
$25
Percentage
$20
$23.37
$21.73
$18.66
$17.32
$15
$11.36
$10
$9.21
$5
$0
Overall
$2 screener incentive
$5 screener incentive
Incentive condition
Single-topical condition
Dual-topical condition
NOTE: The cost per topical complete was calculated as the total incentive cost in that condition (for both screener and topical
incentives) divided by the total number of completed topicals received in that condition. In the single-topical condition, the
unweighted sample size was 67,000. In the dual-topical condition, the unweighted eligible sample size was 32,500.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
32
Although not the main aim of this analysis, it also demonstrates that the $5 incentive is much
more expensive than the $2 incentive per topical complete in both the single and dual-topical
conditions (single-topical condition: $23.37 versus $11.36; dual-topical condition: $18.66 versus
$9.21). This makes sense given that the $5 incentive is 2.5 times the size of the $2 incentive but
only added 3 percentage points to the screener response rate (and had no effect on the topical
response rate).
As noted for the yield analysis, while these results are useful for making comparisons between
conditions in 2017, they are not likely to be useful estimates of incentive cost per topical
complete in future NHES administrations due to differences in the methodology used in 2017
compared to what is likely to be used in future administrations.
4.2: Response Quality
The next section of this chapter compares response quality in the two conditions—as measured
by the breakoff rate and item missing rates—to determine if the additional burden respondents
face in the dual-topical condition has a negative impact on response quality.
Breakoff rate
The first analysis in this section compares the breakoff rate for each topical in the two conditions
(see figure 4.8 on the next page and table 4.4 in appendix A). If breakoff rates are measurably
higher for dual questionnaire households, this might suggest that sending two surveys is too
burdensome of a request.
The breakoff rates in the single-topical condition ranged from single digits for ATES (9
percent when the screener respondent was sampled, 10 percent when another household
member was sampled) up to 18 percent for PFI-H.
In most cases, dual-topical condition did not have a significant effect on the breakoff rate.
However, there was a small but significant increase in the PFI-E breakoff rate in the dualtopical condition versus the single-topical condition (13 percent versus 11 percent).
33
Figure 4.8: Topical breakoff rate among households eligible for two or more topical
Figure 4.4:questionnaires, by topical questionnaire and dual-topical condition: 20174.4
100%
Breakoff rate
80%
60%
40%
20%
13% 13%
11% 13%*
ECPP
PFI-E
18% 17%
9% 10%
10% 7%
ATES same
respondent
ATES different
respondent
0%
PFI-H
Topical questionnaire
Single-topical condition
Dual-topical condition
* p < 0.05.
NOTE: Percentages represent the proportion of eligible households with two or more individuals eligible for at least two of
ECPP, PFI (E or H), and ATES that was sampled for and reached the first item in the questionnaire but broke off before
completing it. ATES “same respondent” households are those where the screener respondent was sampled for ATES; ATES
“different respondent” households are those where a household member other than the screener respondent was sampled for
ATES. The analysis excludes cases that did the screener on the TQA because these cases were not asked to complete the entire
topical questionnaire. In the single-topical condition, the unweighted eligible sample size was 1,700 for ECPP, 3,580 for PFI-E,
120 for PFI-H, and 1,040 for ATES. In the dual-topical condition, the unweighted eligible sample size was 1,180 for ECPP,
2,420 for PFI-E, 100 for PFI-H, and 1,900 for ATES. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Within the dual-topical condition we also looked at whether there was a higher breakoff rate for
each topical when it was presented second (versus when it was presented first); however, we did
not find any evidence that this was the case. Looking at this for specific topical pairings (e.g.,
when ECPP is paired with PFI-E or when ATES is paired with PFI-H) led to the same
conclusion in general for all of the potential pairings, although it was not possible to conduct
statistical tests of differences by order for some of the pairings due to unreliable estimates
(particularly for pairings with PFI-H).
Item missing rates
We also compared the item missing rates for key items in the two conditions. Leaving questions
blank is one indicator of poor response quality; this analysis provides insight into whether
requesting that a household complete a second questionnaire has a negative impact on data
quality.
The item missing rate was very low in both conditions for all items included in this analysis (see
table 4.5 in appendix A). It was less than 4 percent for all items in both conditions—and less than
1 percent for most of them. Most of the estimates in this analysis are not reliable enough to
comment on potential statistical differences between the two conditions, but, overall, topical
condition appears to have had little impact on item missing rates for key items. This was also the
case when looking at the effect of topical order within the dual-topical condition.
34
4.3: Respondent Burden
We next calculated the mean number of minutes respondents took to complete each topical
survey.22 As shown in figure 4.9, it took respondents the most time to complete PFI-E (more than
20 minutes), in the high-teens to complete ECPP and PFI-H, and about 12 minutes to complete
ATES (see also table 4.6 in appendix A). Respondents to ATES took a similar amount of time to
complete the survey regardless of whether the screener respondent or a different household
member was sampled.
Figure 4.9: Mean number of minutes to complete topical among topical respondents from
households eligible for two or more questionnaires, by topical questionnaire and
dual-topical condition: 2017
30
22.2
Minutes
25
20
19.0
20.9*
16.8*
19.1
16.5
15
11.8
12.2
10
5
0
ECPP
PFI-E
PFI-H
ATES
Topical questionnaire
Single topical group
Dual topical group
* p < 0.05.
NOTE: Estimates represent the mean number of minutes for topical respondents to complete the questionnaire among respondent
households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES. Cases that completed the
topical over multiple days, took more than 6 hours to complete it, or spent more than 15 minutes on a page without taking any
actions are excluded from this analysis. A small number of respondents (less than 1 percent) could not be included in this analysis
because no information was available for them on the paradata file. The analysis excludes cases that did the screener on the TQA
because these cases were not asked to complete the entire topical questionnaire. In the single-topical condition, the unweighted
eligible sample size was 1,470 for ECPP, 3,150 for PFI-E, 90 for PFI-H, and 920 for ATES. In the dual-topical condition, the
unweighted eligible sample size was 1,000 for ECPP, 2,080 for PFI-E, 70 for PFI-H, and 1,650 for ATES. Sample sizes have
been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Looking at response time by dual-topical condition:
Respondents in the dual-topical condition completed ECPP and PFI-E in significantly
less time on average in the dual-topical condition than they did in the single-topical
condition (by 1 to 2 minutes). Respondents also completed PFI-H about 2 minutes more
quickly in the dual-topical condition than in the single-topical condition, but this
difference is not significant—probably due to the small number of cases sampled for PFIH.
22
Cases that completed the topical over multiple days, took more than 6 hours to complete it, or spent more than 15 minutes on a
page without taking any actions are excluded from this analysis. A small number of cases that were missing from the paradata file
are also excluded from this analysis.
35
However, the average ATES completion time was about the same in the single- and dualtopical conditions.
The shorter times for the child surveys in the dual-topical condition are likely caused by the
removal of overlapping questions from the second topical when two child topicals were
presented together (so that each household would only be asked these questions once). There
were also a few ATES items that were removed when this topical was presented second in the
dual-topical condition; however, much fewer items were removed, and so this would have
minimal effect on completion time.
We also looked at whether specific topical pairings had a particularly large or small effect on
mean response time in the dual-topical condition as compared to the single-topical condition. It
seemed to take respondents less time to complete the child topical surveys when they were
paired with another child survey than it did when they were paired with ATES (by about 1 to 3
minutes; likely because, as noted previously, overlapping parts of the child surveys were taken
out of the second child topical in the child-child topical scenario so that they would only be
asked once to each respondent).
Finally, we looked at whether topical order had a significant effect on response time in the dual
topical condition.
For ECPP, PFI-E, and ATES (when the screener respondent was sampled), the mean
response time was significantly shorter when the topical was presented second than it was
when it was first the first one presented (by about 3 to 5 minutes). The direction of the
relationship was also the same for PFI-H but narrowly missed statistical significance.
In particular, within the dual-topical condition, when ECPP and PFI-E were paired with
another child topical, respondents completed them significantly more quickly when they
were presented second than they did when they were presented first; about 8 to 9 minutes
were saved on average by eliminating overlapping questions from the second topical
when PFI-E and ECPP were presented together. The pattern is the same for PFI-H, but
the estimates are not realizable enough to make statistical comparisons. By contrast, there
was not a significant difference in time to complete the child surveys by presentation
order when they were paired with ATES.
4.4: Respondent Characteristics
Similar to the previous chapters, this section compares the distribution of respondent
characteristics in single and dual topical respondent households for each of the topical surveys
using variables available on the frame (age, race/ethnicity and education of the head of
household, annual household income, and whether a phone number was available on the frame).
This analysis provides insight into whether requesting that households complete a second topical
affected the characteristics of the responding households, and, in particular, whether one
condition or the other was more effective at getting underrepresented groups to respond to the
NHES.
36
Overall, there were very few significant differences between the respondents in the two
conditions (table 4.7 in appendix A). Selected notable findings include the following:
Education: For three of the four topicals (ECPP, PFI-E, and ATES), the head of
household was significantly less likely in the dual-topical condition to have educational
attainment of “some college” than they were in the single-topical condition.
Annual income: For two of the four topicals (ECPP, and PFI-H), respondent households
were significantly less likely to have an annual income of less than $21,000 in the dualtopical condition than they were in the single-topical condition.
4.5: Key Takeaways From the Dual-Topical Experiment
The dual-topical condition led to a significant decrease in the response rate for all topicals
except PFI-H. Although dual-topical condition households were less likely than singletopical condition households to complete all of the topicals they were sampled for, they
were more likely to complete at least one topical.
Being presented second in the dual-topical condition had a negative effect on the topical
response rate (as compared to being presented first) for all topicals except PFI-H
(although the pattern was in the same direction for PFI-H). Topical order generally did
not have a significant effect on whether or not households completed all sampled
topicals.
Specific topical pairings did not have an effect on response rates or the likelihood of
completing all sampled topicals—except when one of the requests was for a household
member other than the screener respondent to complete ATES (which suppressed the
ATES response rate and the likelihood of all topicals being completed).
The dual-topical condition led to significantly more breakoffs for PFI-E (but not for any
of the other topicals).
However, the dual-topical condition was still more efficient in terms of (1) the number of
screeners needed to yield a completed topical and (2) the incentive cost per complete.
The item missing rate was very low for key topical items in both conditions.
Respondents in the dual-topical condition completed ECPP and PFI-E significantly more
quickly on average (and the pattern was in the same direction for PFI-H), but no effect
was seen for ATES. In the dual-topical condition, topicals tended to be completed more
quickly when they were presented second than when then were completed first; this was
likely due to items being removed from the second topical that had already been asked in
the first topical (particularly notable for child-child topical pairings).
The experiment did not have much of an impact on the characteristics of topical
respondent households in terms of frame variables.
37
Chapter 5: ATES Split-Panel Experiments
This chapter presents the results of the ATES item-level experiments: (1) the certification
provider item wording experiment and (2) the usefulness item response option order experiment.
Each section of the chapter begins with a description of the experiment and then presents several
analyses of the effect of the experimental conditions on key outcomes. All analyses in this
chapter are restricted to households where the screener was completed online because TQA
screener respondents who were sampled for a topical were asked to complete the first topical
item but were not asked to complete a full topical questionnaire.
5.1: ATES Certification Provider Item Wording Experiment
In this experiment, sample members were randomly assigned to receive one of two wordings of
the ATES certification provider item (see exhibit 5.1).
Exhibit 5.1. Certification provider item wording, by experimental condition
Version
Version A (NHES:2016 version)
Version B (new NHES:2017 test version)
Wording
Is your [most/second-most/third-most] important
certification or license required by a federal, state, or local
government agency (such as a state board) in order to do
that kind of work?
Is your [most/second-most/third-most] important
certification or license required by a government agency
(such as a state licensing board) in order to do that kind of
work?
The ATES certification provider item was asked in reference to the respondent’s most-important
certification/license, second-most-important certification/license, third-most-important
certification/license, and the most-important certification/license that he or she is currently
working on getting. Respondents could be asked this item up to four times based on the number
of certifications they said they had or were working on getting; each respondent saw the same
version of the item every time they saw it.
The main purpose of this experiment was to determine if referring specifically to a “state
licensing board” that specifically mentions the word “licensing” would be preferable to referring
more generally to a “state board” because it would help respondents understand the meaning of
“state board”. However, it was also possible that making the reference more specific might also
backfire and lead to licensure underreporting if respondents were in fact more familiar with the
term “state board”.23
To determine which item version is preferable to use in the future, we examined response
distributions and data quality indicators for the two versions of the item. For all analyses
described in this section, we also conducted subgroup analyses to look specifically at the effect
In addition, there was concern that the 2016 “state board” wording might lead to licensure overreporting in some
occupations if it caused respondents to incorrectly report their certifications as licenses (e.g., doctors who are
“board” certified in medical specialties might incorrectly say their credential was required by a state board).
23
38
of item wording on respondents with higher or lower self-reported educational attainment (some
college or more versus high school or less).24 We also discuss how the results for each item
version in 2017 compare to the web respondent 2016 results for version A.
Response distributions
We first compared the response distributions for each of the four certification provider items by
item version to determine if item wording had an effect on the percentage of respondents
reporting that their credential was or was not required by the government (or that they were not
sure if it was required).
Version A (2016 version) respondents were significantly more likely than version B
(2017 version) respondents to answer “yes” for their most important certification (78
percent versus 73 percent, respectively) (see figure 5.1 below and table 5.1 in appendix
A).
No other significant differences were found in the response distributions for the four
items by item version, and there was not a clear pattern to any observed differences in the
responses to the other three items, with version A being endorsed more for some items
and less for others.
Figure 5.1: Percentage of respondents who chose each response option for items measuring
Figure 5.1: whether certification was required by the government, by item and item verrsion: 2017
100%
Percentage
80%
4%
5%
18%
22%
6%
6%
28%
32%
11%
34%
8%
9%
30%
28%
62%
63%
13%
30%
60%
40%
78%
73%*
66%
62%
55%
57%
20%
0%
Most important
Third-mostimportant
Second-mostimportant
New
certification
Item
Version A-Yes
Version B-Yes
Version A-No
Version B-No
Version A-Don't know
Version B-Don't know
* p < 0.05.
NOTE: Percentages represent the proportion of ATES respondents who selected the response option out of those who answered
the question. Cases that responded to the screener on the TQA are excluded from this analysis because they were not asked to
complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and 8,200 for Version B.
Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
24
We also had planned to do a second subgroup analysis to look separately at English and Spanish respondents.
However, there were too few ATES Spanish respondents in 2017 to make this analysis feasible. Only about 110
people responded to ATES in Spanish (less than 1 percent of all ATES respondents).
39
We also compared responses to versions A and B separately for those with lower and higher
educational attainment.
Among respondents with high school or less, those who saw version A were significantly
more likely than those who saw version B to answer “yes” for the most-important
certification question (78 percent versus 65 percent, respectively).
There were no other significant differences in the response distributions for the four items
by item version in the two educational attainment subgroups, and there was not a clear
pattern to any observed differences in the responses to the other three items. Several of
the results for respondents with high school or less are based on relatively small sample
sizes and should be interpreted with caution.
Finally, we compared the 2017 results for each version to the web responses received in 2016,
which only included version A (statistical tests were not conducted).25 We found varied results
across the three items. For the most-important certification, the 2016 results were more similar to
the 2017 version B results, while for the second-most-important certification the 2016 results
were more similar to the 2017 version A results. For the third-most-important certification, the
2016 results were not clearly closer to one 2017 version or the other. When interpreting
differences between 2016 and 2017, however, it is important to remember that the surveys in the
two years did not use the exact same methodology, and differences in data collection methods
beyond question wording might also be driving responses to these items across years.
Item missing rates
We next calculated the item missing rate for the provider items by item version to assess whether
question wording had an effect on the percentage of respondents who declined to answer the
item. The item missing rate was defined as the percentage of cases that should have answered the
item but did not.
Overall, the item missing rates for the three items in the certifications and licenses section
ranged from less than 1 percent to about 17 percent, with items later in the questionnaire
having higher item missing rates (see table 5.2 in appendix A).26
Item version did not have a significant effect on the item missing rate for any of these
items.
Looking at the results separately by educational attainment, there still was not a
significant or notable difference in the item missing rate for any of the three items by
25
Questions about new certifications the respondent was in progress of getting were not asked in 2016, so this
comparison could not be made for the “new certification” item.
26
As compared to item missing rates from 2016, the item missing rates for some ATES items are surprisingly high
in 2017. We are still investigating what might be driving the higher item missing rates. In particular, the item
missing rate for the “new certification” provider item was too high to possibly be correct; as a result, we do not
show the item missing rate for that item.
40
item version. However, nearly all of the results for the high school or less group were too
unreliable to make statistical comparisons between item versions.
We also compared the 2017 item missing rates to those for 2016 web respondents.27 We found
that the rates were very similar for the most-important certification item in both years (less than 1
percent missing). The item rate was slightly higher in 2017 than in 2016 for the second-most
important certification question (4 percent for both versions in 2017 versus 1 percent in 2016)
and notably higher for the third-most-important certification item (13-17 percent for the two
2017 versions versus 9 percent in 2016).
Takeaways for the certification provider item wording experiment
Overall, item version typically did not have a significant effect on response distributions
or the item missing rate for the certification provider items.
However, for the most important certification item, respondents were significantly more
likely to answer “yes” when presented with version A than when they were presented
with version B.
5.2: ATES Perceived Usefulness Items Response Option Order
Experiment
In this experiment, respondents were randomly assigned to one of two versions of the items,
which varied the order in which the response options were presented.
In version A (the 2016 version), the response options were presented from least to most
useful (“not useful,” “somewhat useful,” and “very useful”).
In version B (the new version for 2017), the response options appeared in the reverse
order, from most to least useful (“very useful,” “somewhat useful,” and “not useful”) in
2017.
In both versions, a fourth option of “too soon to tell” was the last response option. Respondents
could be asked the battery of usefulness items up to three times (for a total of up to 10 individual
items) based on which credentials they reported (in reference to their most-important
certification or license, their last postsecondary certificate, and their last work experience
program); they received the same version of the items each time they saw them. We compared
the response distributions and data quality indicators for the two versions of the items. We
conducted the same subgroup analyses by educational attainment as described in the previous
section. This section also discusses how the results for each item version compare to the results
obtained in 2016.
27
Questions about new certifications the respondent was in progress of getting were not asked in 2016, so this
comparison could not be made for the “new certification” item.
41
Response distributions
This section compares the response distribution for each item by item version to determine if
response option order had an effect on the percentage of respondents reporting that their
credential was useful.
There were no significant differences in the percentage of respondents who chose each
response option by item version for any of the 10 usefulness items (see figure 5.2 on the
next page and table 5.3 in appendix A).
There were only two significant differences when looking at the results separately for
respondents with a high school diploma or less and those with some college or more. Given that
this amounts to a significant difference for only 1 of 40 response options for each education
group, these differences could have occurred by chance.
The 2016 results were very similar to the 2017 results, both overall and by educational
attainment; generally, the percentage of respondents selecting each response option did not differ
by more than 5 percentage points between the three items (version A in 2016 and 2017, version
B in 2017).
Item missing rates
We also assessed the impact of response option order on two key data quality indicators: the item
missing rate and the straightlining rate. There were no significant or notable differences in the
item missing rate by item version, either overall or by educational attainment (see table 5.4 in
appendix A). However, many of the item missing rates for those with a high school degree or
less should be interpreted with caution because of small sample sizes or unreliable estimates. The
item missing rates were very similar in 2016 (among web respondents) and 2017 for the mostimportant certification or license and work experience program grids, but the rates were higher
for the post-secondary certificate grid in both conditions in 2017 than they were in 2016.
42
Figure 5.2: Percentage of respondents who chose each response option, by item version and usefulness item: 2017
100%
4% 3%
3%! 3%
3% 2%
1%! 1%
4% 4%
4% 4%
48% 43%
Percentage
4% 4%
28% 28%
80%
60%
1% 2%
79% 79%
65% 62%
62% 61%
78% 80%
1% 2%!
35% 38%
53% 48%
71% 70%
80% 79%
4% 4%
30% 25%
26% 24%
40%
27%
28%
33%
20%
0%
11% 13%
13% 13%
21%
8% 7%
6% 4%
Keeping Keeping yourself Improving
Getting
marketable to
a job
a job
your work
employers
or clients
______________________________________________
Most important certification or license
6% 5%
24% 25%
39% 42%
20% 21%
11% 13%
35%
35% 34%
29% 31%
25%
13% 15%
7% 6%
Getting
a job
Increasing
your pay
Improving
your work
skills
_________________________________
Postsecondary certificate
9% 10%
Getting
a job
5% 5%
Increasing
your pay
Improving
your work
skills
__________________________________
Last work experience program
Usefulness item
Version A—Not useful
Version A—Somewhat useful
Version A—Very useful
Version A—Too soon to tell
Version B—Not useful
Version B—Somewhat useful
Version B—Very useful
Version B—Too soon to tell
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
NOTE: Percentages represent the proportion of ATES respondents who selected the response option out of those who answered the question. Cases that responded to the screener
on the TQA are excluded from this analysis because they were not asked to complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and
8,200 for Version B. Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
43
Straightlining
Finally, we compared the straightlining rates for each usefulness item grid by item version (see
table 5.5 in appendix A). Straightlining was defined as the percentage of respondents who
selected the same answer for the full set of items (e.g., marking “very useful” for all usefulness
items that are presented together). A higher straightlining rate in one version could suggest that
respondents are not taking the time to think carefully about their responses when presented with
that version.
The straightlining rates were relatively high for these grids (ranging from 38 percent to 62
percent of respondents).
There were no significant or notable differences in the straightlining rate by item version
for any of the usefulness items, either overall or by educational attainment.
We compared the 2017 results to 2016 (among web respondents), and they were generally very
similar with one exception: for the usefulness of post-secondary certificate grid, the straightlining
rate was about 10 percentage points lower in 2017 than it was in 2016.
Takeaways for the perceived usefulness items response option order
experiment
There was little difference in the results for the usefulness items by response option order
in terms of response distributions, item missing rates, or straightlining.
44
Chapter 6: The Effectiveness of NHES Contact
Attempts Across Administrations
This chapter of the report examines the impact of the various contact attempts used across the last
three NHES administrations (2014, 2016, and 2017).28 Of particular interest is the effectiveness of
contact attempts that were newly tested in the NHES:2017 web test:
The 2017 administration forewent an advance letter because this was hypothesized to not
be necessary in a web-only administration.
In both the screener and topical phases, the 2017 administration included the first test of a
pressure-sealed envelope (instead of a postcard reminder), which allows for sample
members’ web tool access credentials to be included in the mailing (as compared to a
postcard which serves as a reminder without providing a direct way to respond).
In the topical phase, this administration also tested a reduced topical reminder protocol
and included the first test of using e-mail reminders.
This chapter of the report also includes a comparison of the effectiveness of all contact attempts
and mailing schedules for the last three NHES administrations that have used mail-based contact
strategies.29 The chapter concludes with an analysis of respondents’ willingness to provide their
own e-mail addresses, as well as their likelihood to respond to e-mails asking them to complete a
topical survey. The results of these analyses are intended to inform the contact strategy for the
2019 administration of the NHES.
6.1: Effectiveness of Screener Contact Attempts
This section discusses the effectiveness of the screener contact attempts in 2014, 2016, and 2017.
The contact attempts used in each administration are summarized in exhibit 6.1. There were
several differences across administrations:
In 2014 and 2016, there was an advance letter, while in 2017 there was not an advance
letter.
In 2014 and 2016, a postcard reminder was sent after the first screener mailing, whereas a
pressure-sealed envelope was sent in 2017.
In 2014 and 2016, three additional screener mailings were included after the
postcard/pressure-sealed envelope, while 2017 only included two additional mailings.
28
We had also hoped to include 2012 in this analysis, but AIR does not have the necessary data to calculate 2012
response rates in way that is consistent with the other years.
29
This comparison focuses on general patterns, instead of statistical significance because comparisons between this
many surveys and contact attempts could quickly become unwieldy. Statistical tests have not been conducted in this
chapter except for the e-mail outcomes analyses.
45
Finally, in 2016, a robocall was utilized as a final reminder for all households with a
phone number available on the frame; this was not done in 2014 or 2017.
Exhibit 6.1: Screener contact attempts, by administration
Advance letter
Initial mailing
Reminder postcard
or pressure-sealed
envelope
Second mailing
Third mailing
Fourth mailing
Robocall
2014 (paperonly survey)
Yes
Cover letter
Paper qnaire.
Postcard
Offers paper only
2016 (paper-only
condition)
Yes
Cover letter
Paper qnaire.
Postcard
Offers paper only
2016 (mixed-mode
condition)
Yes
Cover letter
Offers web only
Postcard
Offers web only
Cover letter
Paper qnaire.
FedEx
Cover letter
Paper qnaire.
Cover letter
Paper qnaire.
None
Cover letter
Paper qnaire.
FedEx
Cover letter
Paper qnaire.
Cover letter
Paper qnaire.
Yes
(If phone available)
Cover letter
Offers web only
FedEx
Cover letter
1st paper qnaire.
Cover letter
2nd paper qnaire.
Yes
(If phone available)
2017 (web-only
survey)
None
Cover letter
Offers web only
Pressure-sealed
envelope
Offers web only
Cover letter
Offers web only
FedEx/First Class
Cover letter
Offers web only
None
None
Final screener response rate
As in the earlier chapters of this report, we calculated the screener response rate using AAPOR
RR1. This information is presented for the 2014, 2016, and 2017 NHES administrations
separately and is broken down by mode condition in 2016 (paper-only versus mixed-mode).
The final screener response rate declined across the three years: from 69 percent in 2014 to 64
percent in the 2016 paper-only condition and 59 percent in the 2016 mixed-mode condition to 43
percent in 2017 (see figure 6.1 on the next page and table 6.1 in appendix A).30 The lower
response rates in 2017 and in the 2016 mixed-mode conditions are likely due to mode of
administration. For 2017, this is likely due to this administration being web-only; similarly, the
lower response rate for the 2016 mixed-mode condition is likely due to the delayed paper option
(those respondents who did not want to do the survey online may not have opened later mailings,
assuming that they too were only offering the option to do the survey online).
30
In 2016, the response rate was 41 percent when the analysis is limited to people who responded online or using the
TQA. This is not perfectly comparable to the 2017 response rate; in 2017, all mailings only offered a web option,
while in 2016 the first two mailings offered a web option, and the final two mailings offered a paper option (so most
of the later responses were received by paper).
46
Figure 6.1: Final screener response rate, by survey administration: 2014-2017
100%
Response rate
80%
69%
64%
60%
59%
43%
40%
20%
0%
2014
(paper-only survey)
2016
2016
(paper-only condition) (mixed-mode condition)
2017
(web-only survey)
Survey administration
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible sample size were 54,620 in 2014 (paper-only),
155,180 in 2016 (paper-only condition), 31,680 in 2016 (mixed-mode condition) and 89,485 in 2017 (web-only). Sample sizes
have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Increase in screener response rate after each contact attempt
We also calculated the increase in the screener response rate after each contact attempt. Response
was attributed to a screener mailing if it was received three or more days after that mailing was
sent and less than three days after the next mailing was sent (see figure 6.2 that follows and table
6.1 in appendix A).31 Key findings include the following:
The first screener mailing was more effective at increasing the response rate in
administrations that included a web option; it led to a 26 percentage point gain in the
response rate for the 2016 mixed-mode condition and a 13 percentage point increase in the
response rate for the 2017 web-only administration (compared to only 5 percentage points
in 2014 and 2 percentage points in the 2016 paper-only condition).
o The differences between web and paper administrations are most likely due to the
web option allowing for speedier response, while paper questionnaires need to be
mailed back and processed by Census before they could be counted as respondents.
For example, we see a large increase the 2016 paper-only response rate after the
reminder postcard (31 percentage points), but it is likely that several of the
responses attributed to that mailing are actually responses to the first screener
mailing that were slower to get mailed back or processed.
o Still, the response rate following the first mailing was much lower in 2017 than it
was in the 2016 mixed-mode condition even though the two requests would have
appeared similar at that point. This may have been because 2017 did not include an
advance letter while 2016 did. It also could be due to small changes to the
31
For web respondents, the date of response is the date the screener or topical was completed. For paper and TQA
respondents, the date of response is the date the form was scanned into the Census system.
47
introductory text of the initial mailing; for example, the 2017 introductory text was
longer and included an extra appeal to sample members about the utility of the
survey data that may have unintentionally backfired among some sample
members.32
In the paper-only administrations, the reminder postcard was where the first large increase
in response occurred (25 percentage points in 2014 and 31 percentage points in the 2016
mixed-mode condition); however, as discussed previously, it seems likely that many of
these are actually responses to the first mailing that were slow to be mailed back or
processed.
The best comparison for determining the relative effectiveness of the pressure-sealed
envelope in 2017 is the reminder postcard that was used in the mixed-mode condition in
2016, since both of these were web-only requests at this point. We see a larger percentagepoint response to the pressure-sealed postcard in 2017 (13 percentage points) than we do
for the postcard in the mixed-mode condition in 2016 (6 percentage points). It is important
to take into account that the 2017 response rate was 13 percentage points lower than the
2016 response rate leading into this mailing, leaving more room for the pressure-sealed
envelope to improve the response rate in 2017. However, at the very least, the pressuresealed envelope does not seem to have backfired as compared to a postcard reminder.
Across all years, the second and third screener mailings each continued to yield somewhat
notable gains to the screener response rate (ranging from 7 to 27 percentage points, with
most in the range of 7 to 14 percentage points). The third mailing tended to do just as well
as the second (at least in 2016 and 2017), likely because this has typically been a FedEx
mailing that may be more likely to catch respondents’ attention (and in the 2016 mixedmode condition, was the first opportunity sample members had to respond using a paper
questionnaire)
In 2014 and 2016, the fourth screener mailing yielded a noticeably smaller gain in the
response rate, although in the 2016 mixed-mode condition, it did still lead to a 5
percentage point gain in the response rate (versus only 2 percent to 3 percent in the mailonly administrations). There was not a fourth screener mailing in 2016.
The robocall, which was only conducted in 2016, generated less than half a percentage
point increase in the screener response rate in both conditions.
In 2016, the letter started: “The U.S. Census Bureau is administering an important national research study for the
U.S. Department of Education, and we need your help. This survey provides vital information that is used to improve
education for people of all ages—this information is not available anywhere else.” In 2017, the letter started: “I am
pleased to inform you that your household has been selected to participate in the 2017 National Household Education
Survey. This is a U.S. Department of Education survey administered by the U.S. Census Bureau. The study provides
vital information that is used to improve education for people in the United States—information that is not available
anywhere else. The results will help policymakers, researchers, and educators understand the educational needs of our
diverse population in changing times. This survey is about all of us!”
32
48
Increase in response rate
Figure 6.2: Percentage point increase in screener response rate after each mailing, by survey
administration: 2014-2017
100%
80%
60%
2%
10%
0.2%
3%
14%
40%
27%
13%
20%
25%
31%
0%
5%
2%
2014
(paper-only)
0.2%
5%
14%
7%
6%
26%
2016
2016
(paper-only condition) (mixed-mode condition)
9%
8%
13%
13%
2017
(web-only)
Survey administration
First mailing
Reminder postcard/pressure-sealed envelope
Second mailing
Third mailing
Fourth mailing
Robocall
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less than three days after the next mailing was sent. Unweighted eligible sample size
were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016 (mixed-mode condition) and 89,485 in
2017 (web-only). SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Weekly screener response rate
We next calculated the weekly gain in the screener response rate in each of the three years (again
looking separately at the paper-only and mixed-mode conditions in 2016), as shown in figure 6.3
on the next page. The lines in figure 6.3 are shorter for some administrations than others due to
shorter screener field periods in those administrations.33
As noted in the previous analysis, the response rate increased more quickly in the early
weeks of the screener field period in administrations that started by offering a web option
than it did for those that only offered a paper option. This was especially true in the 2016
paper-only condition, where there was not a noticeable increase in the screener response
rate until week 5.
In the two paper-only administrations, the 2014 response rate consistently tracked above
the 2016 paper-only response rate, though the difference in the final screener response
rates was only 5 percentage points. This may be due to slower processing of paper returns
in 2016 because it was a much larger collection with a higher volume of paper forms
being returned for processing.
In the two administrations that offered web options, the response rates in early weeks were
very similar to each other. However, around the seventh or eighth week of the field period,
the 2016 mixed-mode condition response rate pulled ahead of the 2017 web-only response
33
We considered the end of the screener field period to be the date when Census stopped accepting/keying screener
forms for admission into a topical mailing group.
49
rate. This is likely due to the mid-collection switch to offering a paper option in 2016,
which was not done in 2017.
In all four administrations, gains in the response rate slowed dramatically several weeks
before the end of the screener field period. This suggests that, if desirable to NCES, it may
be possible to shorten the screener field period by a few weeks without much of a negative
impact on the screener response rate.
Figure 6.3: Cumulative screener response rate, by week and survey administration: 2014-2017
Cumulative response rate
100%
80%
69%
64%
60%
59%
43%
40%
20%
0%
0%
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
Week
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web-only)
NOTE: Response rates were calculated using AAPOR RR1. Lines are of differing lengths due to variation in the screener field
period across years. Unweighted eligible sample size were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition),
31,680 in 2016 (mixed-mode condition) and 89,490 in 2017 (web-only). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Screener response rate by day after each contact attempt
To gain a more fine-grained understanding of how each mailing impacted the screener response
rate across administrations and whether mailings are spaced appropriately, an additional line
graph was created for each screener mailing (figures 6.4a-e).34 Each of these figures show the
cumulative screener response each day following that specific contact attempt in 2014, 2016, and
2017 (2016 paper-only and mixed-mode results are again presented separately). The response rate
on day 0 (the mailing day) is the screener response rate as of the day the mailing was sent. The
final response rate shown for each line is the response rate the day before the next mailing was
sent. The lines for some administrations are shorter than others because there were fewer days
between mailings in some administrations.
34
No figure was made for the robocall reminder because it was only conducted in one year and did not lead to any
notable gain in the response rate.
50
First mailing
We again see responses were slower to be received or scanned following the first mailing
when only a paper option was provided (figure 6.4a). As a result, there was very little
movement in the response rate before the next mailing was sent in 2014 or the 2016 paperonly condition in 2016. Given the relative speed with which responses were received
when a web option was offered, this suggests the delays were due to backups in the
processing of paper forms and that there is not necessarily a need to delay sending the next
contact attempt.
Even when a web option was provided, it took about 3 days for responses to start to be
received. In the 2016 mixed-mode condition, the response period for the first mailing was
19 days, while in 2017 it was only about 8 days. In 2016, the response rate increased an
additional 8 percentage points during those additional 11 days (beyond the 18 percentage
points that were achieved in the first 8 days); this suggests that it may not be necessary to
send the second contact as early as was done in 2017 (for example, the 2017 response rate
increased an additional 5 percentage points on days 9-13).
Figure 6.4a: Screener response rate following the first mailing, by number of days since mailing
Figure 6.x: and survey administration: 2014-2017
Cumulative response rate
100%
80%
60%
40%
26%
20%
0%
13%
0%
0
1
2
3
4
5
6
7
8
5%
9
2%
10
11
12
13
14
15
16
17
18
19
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web-only)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
Unweighted eligible sample size were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016
(mixed-mode condition) and 89,490 in 2017 (web-only). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Postcard/pressure-sealed envelope reminder
In 2017, the response rate increased the most in the first week following the pressuresealed reminder but then tapered off slightly, averaging out to about a percentage-point
gain per day (figure 6.4b).
51
In 2014 and the 2016 paper-only condition, there were relatively large increases in the
response rate that mostly came later on (days 5-11); this is likely again due to delays in
mailing back and processing forms.
In the 2016 mixed-mode condition (which was only offering a web option at this point),
the response rate did not increase as much following this reminder as it did in 2014 or
2017.
o This may be because the postcard did not provide a direct way for the sample
member to respond, while the pressure-sealed envelope used in 2017 did.
o In addition, in years where a paper option has already been offered, it seems
possible that sample members might be more likely to have noticed/saved/be able
to find the previously sent paper questionnaire package than they would be for the
single-page web invitation.
o In administrations that offer a web option, it seems preferable to instead use a
pressure-sealed envelope; in those that offer a paper option, it might be useful to
the next mailing more quickly, so that the postcard is still fresh in sample
members’ minds when they receive the next mailing.
Figure 6.4b: Screener response rate following the postcard/pressure-sealed envelope reminder,
Figure 6.x: by number of days since mailing and survey administration: 2014-2017
Cumulative response rate
100%
80%
60%
33%
32%
30%
27%
40%
26%
20% 13%
5%
0% 2%
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
Number of days since mailing
2014 (paper-only)
2016 (mixed-mode condition)
2016 (paper-only condition)
2017 (web-only)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years. A
postcard was sent in 2014 and 2016; a pressure-sealed envelope was sent in 2017. Unweighted eligible sample size were 54,620 in
2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016 (mixed-mode condition) and 89,490 in 2017 (webonly). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
52
Second mailing
In the administrations that offered a web option (2016 mixed-mode and 2017), there was a
small and gradual increase in the response rate following this mailing, likely because most
sample members that were willing to respond online had already done so. It may be
worthwhile to send the next mailing sooner in future administrations that offer a web
option—especially if that next mailing adds a paper option.
In the administrations that offered a paper option (2014 and 2016 paper-only), there were
periodic bumps in the response rate throughout this window, suggesting the spacing
between these mailing was reasonable.
Figure 6.4c: Screener response rate following the second mailing, by number of days since mailing
Figure 6.4c: and survey administration: 2014-2017
Cumulative response rate
100%
80%
60%
56%
46%
40%
40% 33%
32%
30%
20% 27%
34%
0%
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (mixed-mode condition)
2016 (paper-only condition)
2017 (web-only)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
Unweighted eligible sample size were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016
(mixed-mode condition) and 89,490 in 2017 (web-only). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Third mailing
In 2017, the third mailing was the final screener mailing; as also discussed for the fourth
mailing in the other years (which was the final mailing in those years), if desired by
NCES, the screener field period could probably have been closed about three weeks earlier
with minimal impact on the final screener response rate.
In all others years, the third mailing (which was a FedEx and the first time a paper option
was offered in the 2016 mixed-mode condition), led to a noticeable increase in the
response rate that popped 1-2 weeks after the mailing was sent (due to the lag time
associated with mailing back and processing paper forms) and it continued to slowly grow
for most of the days in the field period that were attributed to this mailing.
53
The reduced response to this mailing in 2017 may be due to it being the only
administration that did not offer a paper response option for this mailing. In addition,
some of the apparent greater response in the other years may due to batch processing of
forms that had been received earlier.
Figure 6.4d: Screener response rate following the third mailing, by number of days since mailing
Figure 6.4d: and survey administration: 2014-2017
Cumulative response rate
100%
80%
67%
61%
53%
60% 56%
46%
40% 40%
43%
34%
20%
0%
1
3
5
7
9 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49 51 53 55 57
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web-only)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
Unweighted eligible sample size were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016
(mixed-mode condition) and 89,490 in 2017 (web-only). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Fourth mailing
The response rate increased only slightly following the fourth mailing—and it did so very
gradually.
In 2016, in particular, where the screener field period was left open for a much longer time
after the fourth mailing, the field period likely could have been closed much earlier
without of a negative effect on the final response rate. For example, if the screener field
period had been closed after 24 days like it was in 2014 (instead of 49 days), the final
response rate would only have been 1 percentage point lower in each condition. There was
no additional gain in the paper-only response rate beyond 33 days and no gain the mixedmode response rate after 30 days.
54
Figure 6.4e: Screener response rate following the fourth mailing, by number of days since mailing
Figure 6.4e: and survey administration: 2014-2016
Cumulative response rate
100%
80%
67%
69%
64%
60% 61%
53%
40%
58%
20%
0%
1
3
5
7
9
11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47 49
Number of days since mailing
2014 (paper only)
2016 (mixed-mode condition)
2016 (paper-only condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years. There
was not a fourth screener mailing in 2017. Unweighted eligible sample size were 54,620 in 2014 (paper-only), 155,180 in 2016
(paper-only condition), and 31,680 in 2016 (mixed-mode condition). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
Takeaways for effectiveness of screener contact attempts
The screener response rate was lowest for administrations that only offered a web option
(2017) or started by only offering a web option (2016 mixed-mode); however, as
discussed further in the next section, web administrations also tended to lead to higher
topical response rates.
Offering a web option led to higher rates of response in earlier days and weeks—and to
earlier contacts overall—because web allowed for faster response (and processing of
responses) than did mail. Delays in processing paper forms make it difficult to attribute
response accurately to specific contact attempts or to know how quickly respondents
replied after receiving specific contact attempts.
The pressure-sealed envelope led to a 13 percentage point gain in the screener response
rate in 2017; in the 2016 mixed-mode condition, the postcard reminder led to a 6
percentage point increase in the response rate.
Although response to the second and third screener mailings remained strong, there
appeared to be a drop-off for the fourth screener mailing in 2014 and 2016 (though there
was still a 5 percentage point gain due to the fourth screener mailing in the 2016 mixedmode condition, likely due to this being only the second mailing to include a paper
questionnaire for this group).
55
When used as the final reminder in 2016, the robocall generated less than half a
percentage point increase in the screener response rate.
In all administrations, there was almost no gain in the screener response rate in the final
three weeks or so of the screener field period.
6.2: Effectiveness of Topical Contact Attempts
This next section focuses on the effectiveness of the topical contact attempts and compares their
effectiveness across the 2014, 2016, and 2017 administrations. A summary of the contact attempts
used in each year is shown in exhibit 6.2.
Exhibit 6.2: Topical contact attempts, by administration
First reminder email
Initial mailing
2014 (paper-only
survey)
2016 (paper-only and
mixed-mode conditions)
None
None
Cover letter
Paper qnaire.
Cover letter
Paper qnaire. OR offers
web only 1
Postcard
Offers paper or web only1
Reminder postcard
or pressure-sealed
envelope
First follow-up
mailing / e-mail
Postcard
Offers paper only
Cover letter
Paper qnaire.
Cover letter
Paper qnaire. OR offers
web only1
Second follow-up
mailing
FedEx
Paper qnaire.
FedEx
Cover letter
Paper qnaire.
Yes
If phone avail.
Cover letter
Paper qnaire.
Robocall
Third follow-up
mailing
None
Cover letter
Paper qnaire.
2017 (web-only survey,
single and dual-topical
conditions)
Yes
(If contacting screener R.
and if e-mail provided
during screener)
FedEx
Cover letter
Offers web only
Pressure-sealed envelope
Offers web only
Yes
(If contacting screener R.
and e-mail provided
during screener)
None
None
None
1. Households in the mixed-mode condition that responded to the screener online were only given the option to do the topical on
the web in this mailing. All other households (mixed-mode households that responded to the screener by paper or TQA and all
paper-only households) were only sent a paper questionnaire.
There was again some variation in the contact attempts used in the three years:
In 2017, screener respondents who provided their e-mail address in the screener and were
contacted about completing topicals received up to two reminder e-mails; e-mail
reminders were not used in the other two years.35
35
However, because the second e-mail was sent at about the same time as one of the mailings, it is not possible to
isolate the effect of the second e-mail on the response rate.
56
As in the screener phase, in 2014 and 2016, sample members received a postcard
reminder, while in 2017, they received a pressure-sealed envelope with web login
information.
In 2017, sample members only received up to two topical mailings. Screener sample
members who provided their e-mail address in the screener and were contacted about
completing topicals also received up to two reminder e-mails.
o In 2014 and 2016, sample members received up to three additional follow-up
mailings after the postcard/pressure-sealed envelope, but in 2017 they did not
receive any additional follow-up mailings (though some did receive the second
reminder e-mail).
In 2014 and 2016, the second follow-up mailing was sent using FedEx, but in 2017, this
was done for the initial topical mailing.
In 2016 only, a robocall reminder was made at about the same time as the second followup mailing.
In 2017 and in the 2016 mixed-mode condition, it was also possible for sample members
to respond to the topical before any topical contacts were made if they completed the
topical in the web instrument at (or around) the same time they completed the screener.
As was done for the screener analysis, we calculated the screener response rate using AAPOR
RR1. In 2016, the response rate was calculated separately for the mixed-mode and paper-only
conditions. In 2017, the response rate was calculated separately for the single-topical and dualtopical conditions. Finally, for ATES, the response rate was calculated separately when the
screener respondent was sampled for ATES (“same respondent”) and when someone other than
the screener respondent was sampled for ATES (“different respondent”).
Final topical response rate
In comparing the final topical response rates across years, a few key findings emerged (see figure
6.5 on the next page and table 6.2 in appendix A):
The topicals tended to have higher response rates than the screener, likely because only
households who already agreed to complete a screener were asked to complete a topical.
However, PFI-H tended to have a lower response rate than the other child topicals, likely
due to the difficulty of accurately identifying households that are eligible for this topical.
For the child surveys, the topical response rate was higher in administrations that offered a
web option (2016 mixed-mode condition and 2017), likely because web administration
allowed most screener respondents to go directly into the topical, while paper
administration required mailing out a separate topical survey request. This pattern was
also observed when the screener respondent was sampled for ATES but not when a
different household member was sampled for ATES—because that required mailing a
separate topical survey request to the household.
57
Figure 6.5: Final topical response rate, by survey administration, mode condition, dual-topical condition, topical questionnaire,
and contact effort: 2014-17
100%
84%
Response rate
80%
92%
88% 85%
85%
87%
73% 75% 76%
76% 75%
73%
77% 74%
82%
75%
70%
59%
60%
40%
20%
†
†
0%
ECPP
PFI-E
PFI-H
ATES
Topical questionnaire
2014 (paper-only)
2016 (paper-only condition)
2017 (web-only, single-topical condition)
2017 (web-only, dual-topical condition)
2016 (mixed-mode condition)
† Not applicable
NOTE: Response rates were calculated using AAPOR RR1. In 2014, ASPA was administered instead of the PFI and is used as a proxy for the PFI-E response rate in 2014. ECPP
and PFI-H were not administered in 2014. ATES seeded sample members (2014 and 2016) are excluded from this analysis. In 2017, these analyses exclude cases that completed
the screener on the TQA because they were not asked to complete the full topical. For ECPP, the unweighted eligible sample size was 6,700 in 2016 (paper-only), 1,230 in 2016
(mixed-mode condition), 1,720 in 2017 (single-topical condition), and 1,230 in 2017 (dual-topical condition). For PFI-E, the unweighted eligible sample size was 5,560 in 2014,
15,000 in 2016 (paper-only), 2,790 in 2016 (mixed-mode condition), 3,630 in 2017 (single-topical condition), and 2,530 in 2017 (dual-topical condition). For PFI-H, the
unweighted eligible sample size was 790 in 2016 (paper-only), 140 in 2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and 100 in 2017 (dual-topical
condition). For ATES, the unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paper-only), 9,980 in 2016 (mixed-mode condition), 13,310 in 2017 (singletopical condition), and 9,050 in 2017 (dual-topical condition). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
58
Increase in topical response rate after each contact attempt
We also calculated the increase in the topical response rate after each contact attempt.36
Responses were again attributed to a mailing if they were received three or more days after that
mailing was sent and less than three days after the next mailing was sent. Responses were
attributed to e-mails if they were received on or after the day the e-mail was sent and less than
three days after the next mailing was sent. Looking at the response rate after each specific mailing
(see figures 6.6a-e that follow and table 6.2 in appendix A):
In administrations where a web option was offered (the 2016 mixed-mode condition and
2017), most topical response came before the initial topical contact (due to screener
respondents completing the topical at the same time as the screener). The sole exception
was when a different household member was sampled for ATES because these sample
members could start the topical until they were sent a topical mailing.
In the paper-only administrations (2014 and the 2016 paper-only condition), there was a
relatively small gain in the response rate due to the initial mailing (1 to 6 percentage
points) and a much larger increase due to the postcard reminder that followed it (30 to 51
percentage points). However, as also discussed in the screener section of this chapter, it is
more likely that many of these responses were sent in response to the initial topical
mailing and either were slow to arrive at Census or slow to be scanned into the system.
In 2017, only two mailings were sent when a household member other than the screener
respondent was sampled for ATES (a FedEx mailing and a pressure-sealed envelope).
This resulted in a much lower response rate for this group in 2017 than in the other years
(around 50 percent in 2017 versus around 70 percent in the other years).
In the administrations where second and third follow-up topical mailing were sent (2014
and 2016), these mailings continued to increase the response rate, although this was more
true for the second follow-up mailing than the third one. The effectiveness of the second
(and especially third) follow-up mailings was noticeably smaller in the 2016 mixed-mode
condition than in 2014 or the 2016 paper-only condition (except when a different
household member was sampled for ATES), likely because so many of the sample
members had already responded to the topical directly after completing the screener and
fewer sample members needed to be sent topical mailings.
The initial e-mail reminder in 2017 had very little impact on the topical response rate,
likely in part because it was sent to relatively few individuals and these screener
respondents had already shown themselves to be reluctant to complete a topical
questionnaire.
36
There were a few contact attempts that we could not isolate because they were made too close to other attempts: (1)
It is not possible to isolate the effect of the robocall reminder in 2016 because it was made too close to when the
second follow-up mailing was sent. (2) It is not possible to isolate the effect of the second reminder e-mail in 2017
because it was sent too close to when the pressure-sealed envelope was sent.
59
Finally, it is difficult to assess the effectiveness of the pressure-sealed envelope (2017) at
the topical phase as compared to the reminder postcard (2014 and 2016) due to several
factors: (1) for the child surveys, few cases were sent topical mailings in 2017, and those
that were sent them were likely reluctant topical respondents, while previous
administrations sent topical mailings to a larger and more diverse group of sample
members); and (2) the delayed processing of receipts in paper-only administrations makes
it difficult to disentangle responsiveness to the postcard from responsiveness to the initial
mailing.
Figure 6.6a: Percentage point increase in ECPP response rate after each contact attempt, by survey
administration and contact attempt: 2016-2017
Increase in response rate
100%
80%
60%
4%
11%
2%
4%
5%
12%
1%
18%
40%
0.1%
0.1%
88%
85%
60%
20%
39%
0%
1%
2016 (paper-only
condition)
2016 (mixed-mode
condition)
2017 (web-only, single- 2017 (web-only, dualtopical condition)
topical condition)
Survey administration
Before initial contact
Initial topical e-mail
Initial topical mailing
Postcard/pressure-sealed envelope reminder
First follow-up topical mailing
Second follow-up topical mailing
Third follow-up topical mailing
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less three days after the next mailing was sent. Response is attributed to an e-mail if
the response was received from the day the e-mail was sent up to two days after the next mailing was sent. ECPP was not
administered in 2014. In 2017, these analyses exclude cases that completed the screener on the TQA because they were not asked
to complete the full topical. There was also a robocall in 2016, but it happened the same date as the second follow-up mailing and
is therefore not shown in the table. There was also a second e-mail reminder in 2017, but it was sent too soon after the pressuresealed envelope to isolate its effect on the response rate and is therefore not shown in the table. Unweighted eligible sample size
was 6,700 in 2016 (paper-only), 1,230 in 2016 (mixed-mode condition), 1,720 in 2017 (single-topical condition), and 1,230 in
2017 (dual-topical condition) Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
60
Figure 6.6b: Percentage point increase in PFI-E response rate after each contact attempt, by survey
Figure 6.x: administration and contact attempt: 2014-17
Increase in response rate
100%
2%
5%
4%
11%
1%
80%
4%
9%
60%
4%
11%
0.1%
0.4%
13%
19%
92%
40%
86%
63%
40%
39%
20%
0%
0.1%
0.1%
10%
2%
2014
(paper-only)
2016
(paper-only
condition)
2016
(mixed-mode
condition)
2017
2017
(web-only, single- (web-only, dualtopical condition) topical condition)
Survey administration
Before initial contact
Initial e-mail
Initial mailing
Postcard/pressure-sealed envelope reminder
First follow-up mailing
Second follow-up mailing
Third follow-up mailing
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less three days after the next mailing was sent. Response is attributed to an e-mail if
the response was received from the day the e-mail was sent up to two days after the next mailing was sent. In 2014, ASPA was
administered instead of the PFI and is used as a proxy for the PFI-E response rate in 2014. In 2017, these analyses exclude cases
that completed the screener on the TQA because they were not asked to complete the full topical. There was also a robocall in
2016, but it happened the same date as the second follow-up mailing and is therefore not shown in the table. There was also a
second e-mail reminder in 2017, but it was sent too soon after the pressure-sealed envelope to isolate its effect on the response rate
and is therefore not shown in the table. Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016 (paper-only), 2,790 in
2016 (mixed-mode condition), 3,630 in 2017 (single-topical condition), and 2,530 in 2017 (dual-topical condition). Sample sizes
have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
61
Figure 6.6c: Percentage point increase in PFI-H response rate after each contact attempt, by survey
Figure 6.x: administration and contact attempt: 2016-2017
Increase in response rate
100%
80%
2%
3%
7%
60%
40%
4%
8%
11%
16%
75%
76%
2017
(web-only, singletopical condition)
2017
(web-only, dual-topical
condition)
51%
20%
0%
30%
1%
2016
2016
(paper-only condition) (mixed-mode condition)
Survey administration
Before initial contact
Initial e-mail
Initial mailing
Postcard/pressure-sealed envelope reminder
First follow-up mailing
Second follow-up mailing
Third follow-up mailing
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less three days after the next mailing was sent. Response is attributed to an e-mail if
the response was received from the day the e-mail was sent up to two days after the next mailing was sent. PFI-H was not
administered in 2014. In 2017, these analyses exclude cases that completed the screener on the TQA because they were not asked
to complete the full topical. There was also a robocall in 2016, but it happened the same date as the second follow-up mailing and
is therefore not shown in the table. There was also a second e-mail reminder in 2017, but it was sent too soon after the pressuresealed envelope to isolate its effect on the response rate and is therefore not shown in the table. Unweighted eligible sample size
was 790 in 2016 (paper-only), 140 in 2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and 100 in 2017 (dualtopical condition). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
62
Figure 6.6d: Percentage point increase in ATES (same respondent) response rate after each
mailing, by survey administration and contact attempt: 2014-17
Increase in response rate
100%
80%
60%
2%
8%
11%
3%
9%
1%
4%
4%
13%
1%
0.1%
0.1%
92%
88%
15%
40%
64%
51%
46%
20%
0%
0.1%
0.2%
6%
2014
(paper-only)
3%
2016
2016
(paper-only
(mixed-mode
condition)
condition)
Survey administration
2017
2017
(web-only, single- (web-only, dualtopical condition) topical condition)
Before initial contact
Initial e-mail
Initial mailing
Postcard/pressure-sealed envelope reminder
First follow-up mailing
Second follow-up mailing
Third follow-up mailing
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less three days after the next mailing was sent. ATES “same respondent” households
are those where the screener respondent was sampled for ATES. Response is attributed to an e-mail if the response was received
from the day the e-mail was sent up to two days after the next mailing was sent. ATES seeded sample members (2014 and 2016)
are excluded from this analysis. In 2017, these analyses exclude cases that completed the screener on the TQA because they were
not asked to complete the full topical. There was also a robocall in 2016, but it happened the same date as the second follow-up
mailing and is therefore not shown in the table. There was also a second e-mail reminder in 2017, but it was sent too soon after the
pressure-sealed envelope to isolate its effect on the response rate and is therefore not shown in the table. Unweighted eligible
screener sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition), 7,700 in 2017
(single-topical condition), and 5,140 in 2017 (dual-topical condition). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
63
Figure 6.6e: Percentage point increase in ATES (different respondent) response rate after each
mailing, by survey administration and mode condition: 2014-17
Increase in response rate
100%
80%
4%
9%
60%
4%
10%
5%
11%
15%
13%
12%
17%
40%
44%
20%
17%
22%
40%
34%
30%
2017 (web-only,
single-topical
condition)
2017 (web-only,
dual-topical
condition)
20%
0%
5%
2014 (paper-only)
2%
2016
2016
(paper-only
(mixed-mode
condition)
condition)
Survey administration
Before initial contact
Initial topical e-mail
Initial topical mailing
Postcard/pressure-sealed envelope reminder
First follow-up topical mailing1
Second follow-up topical mailing
Third follow-up topical mailing
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less three days after the next mailing was sent. ATES “different respondent”
households are those where a household member other than the screener respondent was sampled for ATES. Response is attributed
to an e-mail if the response was received from the day the e-mail was sent up to two days after the next mailing was sent. ATES
seeded sample members (2014 and 2016) are excluded from this analysis. In 2017, these analyses exclude cases that completed the
screener on the TQA because they were not asked to complete the full topical. There was also a robocall in 2016, but it happened
the same date as the second follow-up mailing and is therefore not shown in the table. There was also a second e-mail reminder in
2017, but it was sent too soon after the pressure-sealed envelope to isolate its effect on the response rate and is therefore not shown
in the table. Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode
condition), 5,610 in 2017 (single-topical condition), and 3,910 in 2017 (dual-topical condition). Sample sizes have been rounded
to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Topical response rate by week
We next calculated the weekly gain the response rate for each topical in each of the three years
(again looking separately at the paper-only and mixed-mode conditions in 2016 and single and
dual-topical conditions in 2017). For the purposes of this analysis, “weeks” refer to how long a
particular topical sample member has had to respond to the topical, not how long the overall
topical phase has been going on (this approach was taken because topical sample members come
64
into the topical phase at varied points depending on their topical group or when their screener was
submitted).37 These figures can be found in appendix B (figures 6.7a–f).
As mentioned previously, in the 2017 web-only administration, there was nearly no gain
in the topical response rate during the topical field period for the child surveys or for
ATES when the screener respondent was sampled for ATES. Most of the responses came
at the same time as the screener response, and subsequent attempts to reach the screener
respondent did very little.
In both 2014 and 2016, almost no additional response was received after the tenth week of
the topical field period. In 2017, when a household member other than the screener
respondent was sampled for ATES, almost no additional response was received after about
the fifth week of data collection, likely due to the reduced topical protocol used in 2017.
Topical response rate by day after each contact attempt
To gain a more fine-grained understanding of how each mailing impacted the topical response
rate across administrations and whether mailings are spaced appropriately, an additional line
graph was created for each topical contact attempt for each topical survey.38 Each of these figures
show the cumulative screener response each day following the contact attempt for 2014, 2016,
and 2017 (2016 paper-only and mixed-mode results are again be presented separately, as are 2017
single and dual-topical conditions). The response rate on day 0 (the mailing day) is the screener
response rate as of the day the mailing was sent. The final response rate shown for each line is the
response rate the day before the next mailing was sent. The lines for some administrations are
shorter than others because there were fewer days between mailings in some administrations.
These figures can be found in appendix B (figures 6.8a through 6.13e).
For the child surveys and when the screener respondent is sampled for ATES, this discussion
focuses on the results for 2014 and 2016 because, as mentioned previously, there was almost no
gain in the child survey topical response rates in 2017 during the topical phase.
As seen for the screener, there was little response attributable to the initial topical mailing
in 2014 or the 2016 paper-only condition; this was especially the case in 2016, likely due
to the greater difficulty of processing a larger volume of topical returns quickly. The gain
was slightly greater for the 2016 mixed-mode condition, likely because some sample
members were given the option to respond by web. There is no indication that any
changes are needed to the timing of sending the next topical mailing.
As also mentioned for the screener, in the 2016 mixed-mode condition, the final mailing
that offered only a web option (in this case, the first follow-up) yielded comparatively
little response, and thus it may be preferable to send the next mailing (with a paper option
included) more quickly.
37
We considered the end of the topical field period to be the date when Census stopped accepting/keying topical
forms.
38
No figures were made for the e-mail reminder because they were only conducted in one year and had almost no
impact on the topical response rates.
65
In both 2014 and 2016, the pattern of response suggests that it may be worthwhile to send
the third follow-up mailing more quickly.
As previously mentioned, if desirable to NCES, the topical field period could also be
closed several weeks earlier without much negative effect on topical response rates.
Relatively similar conclusions are drawn when reviewing the results when a household
member other than the screener respondent is sampled for ATES.
Takeaways for effectiveness of topical contact attempts
When a web option was offered at the screener phase, most topical response came prior to
the topical contacts (due to screener respondents completing the topical at the same time
as the screener)—except for ATES when a household member other than the screener
respondent was sampled.
When only a paper response was offered, most responses came as a result of the
combination of the initial mailing and postcard reminder.
In 2014 and 2016, the second and third follow-up mailings continued to increase the
response rate, although there was a diminishing return for the third follow-up mailing
(especially for the mixed-mode condition in 2016).
In the 2017 web-only administration, topical contacts, which were mostly sent to screener
respondents who had already decided not to complete the topical questionnaire when it
was presented to them right after the screener, had almost no impact on the topical
response rate. The topical response rate for ATES when someone other than the screener
response rate was sampled was much lower in 2017 than in other years due to the reduced
topical protocol used in 2017 (and perhaps due to only offering web response).
The e-mail reminder had very little impact on the response rate.
Given the response pattern to the topical contact attempts, it may be desirable to shorten
the lag time before sending the second and third follow-up mailings. In general, there was
very little gain in the response rate following the tenth week of contacts. Some of this may
be due to the fact the only earlier topical groups actually had the full number of weeks
shown on the graph to respond to the topical (because responses continued to be accepted
for earlier topical groups while the mailing protocol for later topical groups was being
completed). However, topical response slows to a crawl even before the full contact
protocol has been completed, suggesting it could be possible to shorten the field period
with little negative impact on the response rate.
6.3: E-Mail Outcomes
This final section of the chapter presents findings related to the request for screener respondents
to provide their e-mail addresses before starting the topical to assess respondents’ willingness to
66
provide this information, the quality of the responses that are received, and the effectiveness of
the e-mails at garnering topical response.
Request for screener respondents to provide their e-mail address
The first part of this section examines the percentage of screener respondents that provided their
e-mail addresses after completing the screener.39 This request was also made of some respondents
in 2016 as part of an experiment. In that year, some respondents in the experimental conditions
were asked to provide their own e-mail addresses and others were asked to provide the e-mail
address of another household member (who was going to be asked to complete a topical survey);
we limit the analysis only to those respondents asked to provide their own e-mail address to
maximize comparability to 2017.
Overall, most screener respondents were willing to provide their e-mail address: 79 percent of
screener respondents provided their e-mail address in 2016 and 73 percent provided it in 2017
(see figure 6.5 on the next page and table 6.3 in appendix A). The percentage of screener
respondents who provided their e-mail addresses in 2017, however, significantly decreased as
compared to 2016. This was surprising given that the question wording was the same in both
years. The only difference between the two years is that the 2016 screener only asked for email
addresses of individuals who (for the child topicals) had already confirmed they were a parent or
guardian of the sampled child or (for ATES) had already confirmed they were the sampled
individual; this confirmation was not asked in 2017 (it was assumed that the screener respondent
was knowledgeable about the sampled child, and if person 1 was sampled for ATES, it was
assumed that the screener respondent was that person).
Looking specifically at households that were sampled for particular topicals, there was some
variation in the percentage of screener respondents who provided their e-mail addresses across
topicals, but the percentage was still high for all topicals in both years (ranging from 71 percent
for ATES in 2017 to 85 percent for PFI-H in 2016). The trend of more willingness to provide an
e-mail address in 2016 versus 2017 continued when looking at the topical-specific results. All of
the differences between the two years were significant except for PFI-H, likely due to the much
smaller number of cases sampled for this topical.
39
The exact wording of this question in 2017 varied slightly depending on which topical the household was sampled
for, but was similar to for all screener respondents: “Before we take you to the questions about (SAMPLED
CHILD)'s care and education, would you please give us your e-mail address in case we need to contact you further?”
The wording was almost identical in 2016.
67
Figure 6.5: Percentage of screener respondents that provided their e-mail address, by topical
questionnaire and survey administration: 2016-2017
100%
79%
Percentage
80%
81%
73%*
83%
77%*
85%
78%*
78%
78%
71%*
60%
40%
20%
0%
Overall
ECPP
PFI-E
PFI-H
ATES
Topical questionnaire
2016
2017
* p < 0.05.
NOTE: In 2016, a random sample of screener respondents were asked for an e-mail address for the topical respondent at the end of
the screener. In 2017, screener respondents were asked for their own e-mail addresses (unless the only topical sampling that
occurred was that a different household member was sampled for ATES—then the e-mail address request was not made).
Households that were not asked for an e-mail address are excluded from this analysis; households that were asked for another
household member’s e-mail address in 2016 (other than the screener respondent) are also excluded from this analysis. The number
of screener respondents in households sampled for a topical and asked to provide their own e-mail addresses was 3,560 in 2016
and 29,720 in 2017 (ECPP: 400 in 2016 and 3,000 in 2017; PFI-E: 920 in 2016 and 6,320 in 2017; PFI-H: 30 in 2016 and 220 in
2017; ATES: 2,210 in 2016 and 15,040 in 2017). Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Bouncebacks
In 2017, topical reminder e-mails were sent to households where the screener respondent was
asked to complete one or more topical surveys but did not answer any topical questions.40 In
2016, thank you e-mails were sent to households after they completed the survey. In both years,
the percentage of e-mails that bounced back was very low, with 2 percent of the e-mails resulting
in bouncebacks. However, the 2016 results are not perfectly comparable to those from 2017
because, based on the information available from 2016, we were not able to disentangle the
outcome of e-mails sent to the screener respondent from the outcome of e-mails sent to another
household member. Nevertheless, bouncebacks were rare in both years, suggesting that
respondents provided valid e-mail addresses.
It was AIR’s understanding that, in the dual-topical condition, if another household member was sampled for
ATES and the screener respondent failed to answer any items in the child topical, then the other household member
would be contacted and asked to complete both topicals. However, it appears that more than 1,100 e-mails were sent
to the screener respondent in this situation. This also raises questions about whether those screener respondents would
even still be able to access the screener using the screener access credentials included in the e-mail, given that the
case simultaneously should have been switched over to the topical access credentials so that these could be included
in the topical mailings that went out to the other household member.
40
68
Topical response as a result of e-mail reminders
Finally, we examined the percentage of 2017 screener respondents who were sent a topical
reminder e-mail that responded as a result of the e-mail. As mentioned previously, these e-mails
were only sent in a relatively specific situation: if the screener respondent was asked to complete
at least one topical survey but did not answer any topical items. Respondents were considered to
have responded to the topical as a result of the e-mail if their topical response was received on or
after the day the day the e-mail was sent and less three days after the next topical mailing was
sent.41 This analysis provides insight into how often the e-mail operation was successful at
garnering response.
Overall, only 0.2 percent of the respondents who were sent an e-mail responded to the topical as a
result of that e-mail (see table 6.4 in appendix A). In households sampled for child surveys, none
of the people who were sent an e-mail responded to the topical as a result of that e-mail. In
households sampled for ATES, 8 percent of the people who were sent an e-mail responded to
ATES as a result of that e-mail. All results presented in this section should be interpreted with
caution given small sample sizes; this is particularly true for PFI-H and ATES, where less than 30
cases were sent an e-mail for each topical.
Takeaways for e-mail address request and e-mail outreach
Most screener respondents were willing to provide their e-mail address.
Bouncebacks were very rare in both years, suggesting that the e-mail addresses
respondents provide are valid.
However, barely any of the people who were sent e-mails in 2017 responded as a result of
those e-mails.
41
This analysis does not take into account whether or not the respondent accessed the web instrument using the URL
provided in the email because AIR does not have the data necessary to do this. In addition, although two e-mail
reminders were sent, this analysis only looks at the effect of the first e-mail. The second e-mail was sent at about the
same time as the previous mailing, and as a result it is not possible to disentangle the effects of the two contact
efforts. However, given the lack of response to the first e-mail, it seems reasonable to believe that a similar result was
obtained for the second one as well.
69
Chapter 7: Summary and Conclusions
This final chapter of the report summarizes the key findings for each previous chapter. It also
notes important implications of the findings for the design of NHES:2019.
7.1: Screener Mailing Experiments
Incentive experiment
The screener response rate was significantly lower when a $2 screener incentive was
offered (by 3 percentage points). If the primary goal is to maximize the screener response
rate, then a $5 screener incentive should continue to be used in NHES:2019.
However, results from the incentive cost per complete analysis presented in Chapter 4 also
show that the $5 incentive is much more expensive per complete. If cost savings and
efficiency are the primary goals, then it may be preferable to use a $2 incentive in 2019, at
least for a subset of cases (building on the findings discussed in Jackson and McPhee
2017).
Ideally, further incentive sensitivity research would be conducted to identify two
subgroups of households: (1) those for whom a $5 incentive leads to a large gain in the
response rate, (2) those who are just as likely to respond when a $2 incentive is offered as
when a $5 incentive is offered. However, preliminary research conducted by AIR using
NHES:2016 paper-only data suggests that the variables currently available on the frame
may not have sufficient out-of-sample predictive power to reliably identify such
households (Jackson, Steinley, and McPhee 2017).
Letter size experiment
Using a letter-size envelope did not have a negative impact on the screener response rate,
topical response rates, or screener respondent characteristics. Given the lower postage cost
of the letter-size envelope and lack of effect on the response rate, we recommend using a
letter-size envelope in 2019 for advance letters or web invitations.
FedEx/First Class experiment
The screener response rate was significantly lower when the final screener mailing was
sent using First Class mail instead of FedEx (by 3 percentage points). If maximizing the
screener response is the primary goal, then NHES:2019 should continue to use FedEx for
the final screener mailing.
However, FedEx is considerably more expensive than First Class mailing. If cost savings
and efficiency are the primary goals, then it would be ideal to conduct further research to
determine if there are certain subgroups for whom the FedEx mailing is not effective
enough to justify the cost (or subgroups for which it is particularly effective). Analyses
included in the 2016 paradata report may shed some light on this (Megra et al. 2017).
70
When interpreting all findings reported in this chapter, it is important to keep in mind that
NHES:2017 only offered a web response option–and that it had a lower screener response rate
(likely as a result of this). If NHES:2019 uses a mixed-mode design with both web and paper
options, the screener response rate will likely be higher, and in this case, we might expect that the
difference in the screener response rates between the experimental conditions would be smaller in
2019 than they were in 2017.
7.2: Screener Split-Panel Experiment
Among web respondents, screener version did not have a significant effect on the screener
or topical response rates, or the screener breakoff rate.
There was some evidence that web screener respondents had more difficulty completing
the redesigned version, although the magnitude of these differences tended to be quite
small (e.g., more item nonresponse, more inconsistent responses, more unknown
eligibility status designations, longer completion times). This may be because the
characteristic-by-characteristic format is harder for respondents to follow. It seems
reasonable that it would be easier for respondents to report all of the details about one
household member before moving on to the next person.
The increased item missing rate for the name question in the redesigned version among
web screener respondents was surprising given that it is not actually possible to have
missing name information for household members 2 through 10 (since the list of names is
how the instrument knows how many people are living in the household). Therefore, all of
the missing name information had to have been for the screener respondent. The
redesigned screener starts by asking the screener respondent for his or her name, while the
2016 screener first asks how many people live in the household and then explains that the
characteristics questions will be asked about each household member before asking for
any specific information. This more gradual introduction to the name request may help to
ease screener respondents into the idea of providing their name on the screener.
The 2016 version of the screener resulted in slightly fewer household members being
reported on average by web respondents (more single-member households and fewer
households with about 3 to 6 members). This suggests that having respondents list the
names of the household members on the screener yields higher numbers of household
members as compared to asking respondents to simply report the number of household
members.
The item asking if anyone else lived in the household (who had not been listed in response
to the initial question) was very rarely endorsed among respondents who had not already
listed six names. This suggests that this item more likely functions as a way for people
living in large households to add more household members (as opposed to being a second
chance for respondents in smaller households to remember to list additional household
members).
71
o Even among those respondents who did endorse the item after entering fewer than
six names on the first page, a third of them did not list any additional names,
suggesting that they may have been confused by the item.
o Ultimately, asking this question of respondents who had initially listed fewer than
six names led to about 150 individuals being added to the screeners who would not
have been listed otherwise; however, this is a very small increase considering that
there were more than 17,000 respondents to the redesigned screener.
As a result, for web administration in NHES:2019, we recommend combining the best
functioning parts of both screener versions:
o Before asking for any specific information about individuals, make it clear that
there will be questions about each of the people who live in the household (to ease
respondents into the request and reduce item missingness for the household
member characteristic questions).
o Next, to determine how many people live in the household, ask for a list of the
names of the household members (to maximize the number of household members
reported).
Consider showing 10 spaces for names the first time the question is asked,
instead of starting with six and then asking if anyone else lives here (since
the question about additional household members was mostly used as a
way for those who had already listed six household members to finish
listing the rest of the people living there).
Also consider rewording or dropping the question about whether anyone
else lives here (to reduce confusion among those who have listed fewer
people than there are spaces for names on the initial page, given the
relatively high rate of such respondents endorsing this item and then listing
no additional names - and the relatively low rate of child-topical-eligible
individuals being added to the roster).
o Finally, ask the remaining questions in a person-by-person format (to minimize
item missingness, inconsistent responses, unknown eligibility sampling status
decisions, and so on).
Among TQA respondents, there was very little difference in terms of the two screeners
among TQA respondents, likely because (1) the interviewers are able to facilitate
completion of the questionnaire regardless of version; and (2) smaller households tended
to complete the screener on the TQA, which may reduce the effect of the different
presentation formats. As a result, for ease of administration, we recommend using the
same screener version on the TQA as is used online.
72
7.3: Dual-Topical Experiment
Topical response rates were lower in the dual-topical condition than in the single-topical
condition. Within the dual-topical condition, they were often lower for the second topical
than the first. There also was a higher breakoff rate for some of the topicals in the dualtopical condition. This all suggests that some sample members are not willing to complete
a second topical.
Nevertheless, the dual-topical condition was still more efficient in terms of (1) the
percentage of households that completed at least one of the topicals for which they were
sampled, (2) the number of screeners that needed be sent to yield a completed topical, (3)
the incentive cost per complete, and (4) the number of minutes needed to complete each
topical. It also did not have a negative effect on the item missing rate or the characteristics
of the households that responded to the topical surveys.
Therefore, for web administration in NHES:2019, we recommend using the dual topical
approach again. The 2019 administration could also experiment with ways of increasing
the response rate to the second topical (for example, reminding the respondent that this is
the final household member about whom they will be asked to respond, as a way to
reassure them that they are making progress toward completing the survey task).
7.4: ATES Item-Level Experiments
Overall, item version had little effect on response distributions or response quality in
either experiment.
Given the lack of significant differences between the two conditions and the benefits of
maintaining continuity in a repeated cross-sectional federal survey such as ATES, we
recommend that future administrations of ATES continue to use version A (the 2016
version):42
o Certification provider items: “Is your [most/second-most/third-most] important
certification or license required by a federal, state, or local government agency
(such as a state board) in order to do that kind of work?”
o Usefulness items: response options ordered from least to most useful (“not useful,”
“somewhat useful,” and “very useful”).
42
Though the effect was not significant for most of the items, version B of the provider item (the new 2017 version)
led to somewhat lower rates of licensure reporting than version A (the 2016 version) for three of the four items. If
there is concern that licenses have been overreported in prior administrations (and conversely, that certifications have
been underreported), then it may be preferable to use version B.
73
7.5: Effectiveness of NHES Contact Attempts
Screener contact attempts
Only offering a web option in 2017 had a negative impact on the screener response rate.
We recommend adding back in a paper option in 2019.
Offering a web option leads to higher rates of response to earlier contacts because it
allows for faster response than paper questionnaires. We recommend continuing to offer
this option in 2019.
Because the pressure-sealed envelope tested in 2017 was not administered as part of an
experiment, it is difficult to directly assess its effectiveness as compared to the reminder
postcard that was used in the 2016 mixed-mode condition. However, the pressure-sealed
envelope appears to have performed at least somewhat better than the reminder postcard.
We recommend using it again in 2019 for cases that have been offered a web option, given
the clear usefulness of including web login credentials in the reminder mailing.
The fourth screener mailing continued to generate response in 2014 and 2016, although it
only increased the response rate by about 2 to 5 percentage points. If maximizing the
screener response rate is the priority in 2019, then we recommend adding this mailing
back into the screener protocol in 2019. However, if costs become a concern, then cutting
this mailing would be a reasonable cost savings measure.
In 2016, the robocall reminder had almost no effect on the screener response rate. This
may have been because it was made after the final mailing had already been conducted. If
the robocall is not very expensive, then we recommend trying to use it earlier in the
screener (or topical) field period in 2019 to see if getting the robocall helps to get sample
members to open and respond to subsequent mailings that they receive.
There appears to be a lag between when mailings are sent and when paper questionnaires
are returned and processed. If it is important to have detailed mailing return date
information moving forward, NCES may want to speak with Census to learn more about
the procedures for checking-in returned paper questionnaires and (if this seems to be a
factor) determine if it is possible to improve the timeliness of the check-in process. The
currently available data makes it difficult to know when exactly sample members
responded to the screener request.
Due to the seemingly different patterns of response to web and paper options, it may be
worth considering different mailing schedules for cases that are offered different mode
options (though this should of course be weighed against any cost increases due to greater
operational complexity). Even if the same mailing schedule is used for all sample
members, there are some schedule changes that could be considered in 2019. For example,
given the pattern of response to first mailing in the 2016 mixed-mode condition, it may be
worth waiting a few more days to send the next contact attempt for sample members that
are offered a web option than was done in 2017. In addition, for any sample members that
are offered a paper option early in the administration, it may be preferable to send the next
74
mailing package closer to the date of the reminder postcard so that it arrives when the
postcard is still fresh in the sample member’s mind.
In all administrations, there was very little gain in the response rate in the final three
weeks or so of the screener field period. If desirable to NCES, it may be possible to
shorten the screener field period by a few weeks with little negative impact on the screener
response rate.
Topical contact attempts
When a web option is offered during the screener phase, the topical response rates tend to
be higher, and most topical response comes prior to the topical contacts (due to screener
respondents completing the topical at the same time as the screener). This is a big
efficiency benefit for the topical phase and another reason we recommend keeping a web
option in 2019.
In 2017, only sending topical contacts to screener respondents who had declined to start
the topicals had almost no positive impact on topical response rates, likely because these
individuals had already shown they were not interested in completing the topical. It may
be useful to conduct further research into whether this was also the case in 2016, and if so,
we would recommend dropping topical follow-up of screener respondents who fail to start
the topical. We do not think that the lack of topical response for this group is due to not
offering a paper response option in the topical phase because these individuals had already
shown themselves to be willing to do the screener online.
In 2017, most topical mailings were sent to households where someone other than the
screener respondent was sampled for ATES. Reducing the topical protocol to only two
mailings had a negative impact on the response rate for this topical. In future
administrations, we recommend sending more than two topical mailings when a different
household member is sampled for ATES.
Looking at the weekly and daily pattern of response to topical contacts suggests that it
may be preferable to send the second and third follow-up mailings more quickly – and that
it may be possible to shorten the topical field period overall.
E-mail outcomes
Most screener respondents were willing to provide their own e-mail addresses in both
2016 and 2017 and very few of the e-mails that were sent bounced back.
The e-mail operation used in 2017 led to almost no additional topical responses. This is
likely because e-mails were only sent to screener respondents who had already made it
clear that they were not interested in completing the topical.
Therefore, we do not recommend continuing to use the same e-mail operation in the
future. We suggest either dropping the e-mail operation entirely or experimenting with
75
ways to ask for another household member’s e-mail address when someone else is
sampled for ATES that might be more successful than the approach that was used in 2016.
76
References
Jackson, M., and McPhee, C. (2017), NHES:2016 Tailored Incentive Experiment Report.
Jackson, M., and Medway, R. (2017). NATES:2013 Nonresponse Bias Analysis Report:
Evidence from a Nonresponse Follow-up Study (NCES 2017-012). National Center for
Education Statistics, Institute of Education Sciences, U.S. Department of Education.
Washington, DC.
Jackson, M., Steinley, K., and McPhee, C. (2017). What Will Work for Whom? Identifying
Subgroups for Which a Higher Monetary Incentive Will Be Effective. American Institutes
for Research Working Paper.
Megra, M., Xing, Q., Kaiser, A., and Hanson, R. (2017). NHES:2016 Paradata Analysis Report.
Mercer, A., Caporaso, A., Cantor, D., and Townsend, R. (2015). How Much Gets You How
Much? Monetary Incentives and Response Rates in Household Surveys. Public Opinion
Quarterly, 79: 105–129.
Singer, E., and Ye, C. (2013). The Use and Effects of Incentives in Surveys. ANNALS of the
American Academy of Political and Social Science, 645: 112–141.
77
Appendix A. Tables
A-1
Table 2.1.
Response rate, by screener incentive condition, questionnaire, dual-topical
condition, and topical respondent: 2017
Screener incentive condition
Questionnaire
Screener
Topical
ECPP
Overall
Single-topical condition
$2 incentive1
41.3
$5 incentive2
43.6
t statistic
5.1 *
88.2
89.7
87.0
88.3
0.6
0.6
Dual-topical condition
PFI-E
Overall
Single-topical condition
Dual-topical condition
PFI-H
Overall
Single-topical condition
85.9
85.3
0.2
88.7
90.4
86.1
89.8
91.9
86.8
1.0
1.1
0.4
68.1
78.2
!
76.5
74.0
0.9
†
Dual-topical condition
ATES
Overall
Overall
Same respondent as screener
Different respondent than screener
56.4
!
79.4
†
72.9
90.3
49.4
73.1
90.8
49.4
0.2
0.2
0.2
74.0
91.4
50.5
75.1
92.2
51.6
1.0
0.9
0.6
71.5
88.7
47.7
70.1
88.7
46.2
1.1
0.1
0.7
Single-topical condition
Overall
Same respondent as screener
Different respondent than screener
Dual-topical condition
Overall
Same respondent as screener
Different respondent than screener
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Unweighted eligible sample size was 13,400 for the screener, 390 for ECPP, 890 for PFI-E, 30 for PFI-H, and 3,180 for ATES.
2Unweighted eligible sample size was 76,090 for the screener, 2,560 for ECPP, 5,270 for PFI-E, 190 for PFI-H, and 19,180 for ATES.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of sampled households (excluding
undeliverable and out-of-scope addresses) that were respondents to the questionnaire. Topical response rates exclude cases that did the
screener on the TQA because these cases were not asked to complete an entire topical questionnaire. Unweighted sample size was equal to
13,400 for the $2 incentive condition and 76,090 for the $5 incentive condition. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-2
Table 2.2.
Percentage point gain in response rate after each mailing, by screener
incentive condition and mailing: 2017
Screener incentive condition
Mailing
Initial screener mailing
Pressure sealed envelope
$2 incentive
11.2
$5 incentive
13.9
t statistic
6.8 *
12.2
13.2
3.4
Second screener mailing
8.2
7.9
1.5
Third screener mailing (FedEx/First Class)
9.7
8.6
3.9
*
*
*p < .05.
NOTE: Response rates were calculated using American Association for Public Opinion Research (AAPOR)
Response Rate 1 (RR1). Percentages represent the proportion of eligible sampled households that completed
the screener after the specified mailing. Response is attributed to a mailing if the response was received three
or more days after that mailing was sent and less than three days after the next mailing was sent. Unweighted
sample size (excluding undeliverable and out-of-scope addresses) is equal to 13,400 for the $2 incentive
condition and 76,090 for the $5 incentive condition. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household
Education Surveys Program (NHES), 2017.
A-3
Table 2.3.
Number of screener respondent households and percentage distribution, by
screener incentive condition and household characteristics: 2017
Percentage distribution of
screener respondents
Total number
of screener
respondents
$2 incentive
$5 incentive
37,330
100
100
†
27,820
9,510
74.6
25.4
74.9
25.1
0.4
0.4
19,980
3,210
58.3
7.0
56.1
6.9
2.8
0.0
Hispanic
3,130
6.8
7.1
0.7
Asian
1,420
3.4
3.9
1.8
850
8,740
2.5
22.1
2.4
23.6
0.3
2.5
2,980
7.6
7.4
0.4
High school
7,470
19.7
19.8
0.0
Some college
B.A.
8,020
6,110
21.6
17.7
21.4
16.6
0.3
1.7
4,000
8,740
11.3
22.1
11.2
23.6
0.3
2.5
18–24
420
1.0
1.1
0.9
25–34
2450
6.7
6.5
0.5
35–44
4520
12.4
12.1
0.6
45–54
5880
15.6
15.9
0.6
55–65
7730
21.5
20.7
1.2
Over 65
Missing
8560
7770
23.7
19.1
23.1
20.6
1.1
2.3
Household
characteristics
Total
Phone number available
(from sampling frame)
Yes
No
Race/ethnicity of head of
household (from sampling
frame)
White
Black
Other
Missing
Education of head of
household (from sampling
frame)
Less than high school
Graduate/professional
Missing
Age of head of household
(from sampling frame)
t statistic
*
*
*
*
See notes at end of table.
A-4
Table 2.3.
Number of screener respondent households and percentage distribution, by
screener incentive condition and household characteristics: 2017 —Continued
Percentage distribution of
screener respondents
Total number
of screener
respondents
$2 incentive
$5 incentive
5,220
13.4
13.5
0.2
$21,000–$36,000
3,570
8.9
9.5
1.4
$36,001–$56,000
4,390
11.0
11.4
0.8
$56,001–$85,000
5,810
15.2
15.5
0.6
$85,001–$120,000
6,340
18.5
17.0
2.6
Greater than $120,000
Missing
Reported at least one
topical-eligible household
member on the screener
ECPP
8,040
3,970
22.8
10.2
22.3
10.6
0.7
1.0
3,860
9.5
10.3
1.8
PFI-E
8,160
21.5
21.6
0.1
PFI-H
ATES
350
29,610
0.8
78.5
0.9
79.0
0.6
0.9
Household
characteristics
Annual income (from
sampling frame)
Less than $21,000
t statistic
*
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not
reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is
30 percent or greater.
*p < .05.
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES
that received either ATES or a child topical questionnaire and reached at least the first item in the
questionnaire. Unweighted eligible sample size was 1,520 for ECPP, 3,300 for PFI-E, 90 for PFI-H, and 950 for
ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES
that received either ATES and a child topical questionnaire or two child questionnaires and reached at least
the first item in the questionnaire. Unweighted eligible sample size was 1,050 for ECPP, 2,180 for PFI-E, 80 for
PFI-H, and 1,720 for ATES.
NOTE: Item missing rates represent the percentage of respondents who should have answered the item but
did not. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household
Education Surveys Program (NHES), 2017.
A-5
Table 2.4.
Response rate, by envelope size condition and
questionnaire: 2017
Envelope size condition
Questionnaire
Full size1
Letter size2
t statistic
Screener
Topical
ECPP
43.3
42.7
0.6
87.3
85.4
0.6
PFI-E
PFI-H
ATES
89.5
76.8
73.0
91.0
53.0
73.8
1.0
†
0.6
!
† Not applicable. Estimates are not reliable enough to make statistical
comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate
or the coefficient of variation is 30 percent or greater.
*p < .05.
1Unweighted eligible sample size was 85,010 for the screener, 2,820 for ECPP,
5,830 for PFI-E, 210 for PFI-H, and 21,290 for ATES.
2Unweighted eligible sample size was 4,480 for the screener, 130 for ECPP,
333 for PFI-E, 10 for PFI-H, and 1,070 for ATES.
NOTE: Response rates were calculated using American Association for
Public Opinion Research (AAPOR) Response Rate 1 (RR1). Percentages
represent the proportion of sampled households (excluding undeliverable and
out-of-scope addresses) that were respondents to the questionnaire. Topical
response rates exclude cases that did the screener on the TQA because these
cases were not asked to complete an entire topical questionnaire. Unweighted
sample size was equal to 85,010 for the full-size envelope condition and 4,480
for the letter-size envelope condition. Sample sizes have been rounded to the
nearest 10.
SOURCE: U.S. Department of Education, National Center for Education
Statistics, NHES, 2017.
Table 2.5.
Percentage point gain in response rate after each mailing, by
envelope size condition and mailing: 2017
Envelope size condition
Mailing
Full size
Letter size
t statistic
Initial screener mailing
13.5
13.4
0.1
Pressure sealed envelope
13.1
12.7
0.7
Second screener mailing
8.0
7.4
1.3
Third screener mailing (FedEx/First Class)
8.8
9.2
0.9
NOTE: Response rates were calculated using American Association for Public Opinion Research
(AAPOR) Response Rate 1 (RR1). Percentages represent the proportion of eligible sampled
households that completed the screener after the specified mailing. Response is attributed to a
mailing if the response was received three or more days after that mailing was sent and less than
three days after the next mailing was sent. Unweighted sample size (excluding undeliverable and
out-of-scope addresses) is equal to 85,010 for the full-size envelope condition and 4,480 for the
letter-size envelope condition. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National
Household Education Surveys Program (NHES), 2017.
A-6
Table 2.6.
Number of screener respondent households and percentage distribution,
by envelope size condition and household characteristics: 2017
Percentage distribution of
screener respondents
Total number
of screener
respondents
Full size
Letter size
37,330
100
100
†
27,820
74.9
74.5
0.3
9,510
25.1
25.5
0.3
White
19,980
56.4
56.6
0.1
Black
3,210
7.0
6.3
1.3
Hispanic
3,130
7.0
7.9
1.6
Asian
1,420
3.8
4.6
1.6
Other
850
2.4
2.3
0.3
8,740
23.4
22.3
1.3
2,980
7.4
7.7
0.6
High school
7,470
19.8
19.1
0.7
Some college
8,020
21.4
21.9
0.5
B.A.
6,110
16.8
16.9
0.2
Graduate/professional
4,000
11.1
12.0
1.0
8,740
23.4
22.3
1.3
420
1.1
0.6
25–34
2,450
6.5
6.7
0.3
35–44
4,520
12.2
11.5
0.9
45–54
5,880
15.8
17.1
1.3
55–65
7,730
20.8
20.4
0.5
Over 65
8,560
23.2
23.1
0.2
Missing
7,770
20.3
20.7
0.4
Household
characteristics
Total
Phone number available
(from sampling frame)
Yes
No
t statistic
Race/ethnicity of head of
household (from
sampling frame)
Missing
Education of head of
household (from
sampling frame)
Less than high school
Missing
Age of head of
household (from
sampling frame)
18–24
!
2.6
*
See notes at end of table.
A-7
Table 2.6.
Number of screener respondent households and percentage distribution, by
envelope size condition and household characteristics: 2017—Continued
Percentage distribution of
screener respondents
Total number
of screener
respondents
Full size
Letter size
5,220
13.4
15.0
1.9
$21,000–$36,000
3,570
9.5
8.8
1.0
$36,001–$56,000
4,390
11.4
10.4
1.5
$56,001–$85,000
5,810
15.6
14.0
1.8
$85,001–$120,000
6,340
17.3
17.2
0.1
Greater than $120,000
8,040
22.3
23.7
1.4
Missing
3,970
10.5
10.9
0.5
ECPP
3,860
10.2
9.5
0.9
PFI-E
PFI-H
ATES
8,160
350
29,610
21.5
0.9
79.0
23.1
1.1
78.4
1.5
0.7
0.6
Household
characteristics
Annual income (from
sampling frame)
Less than $21,000
t statistic
Reported at least one
topical-eligible household
member on the screener
† Not applicable.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30
percent or greater.
*p < .05.
NOTE: Percentages represent the proportion of eligible screener respondent households within that group. Race
categories exclude persons of Hispanic ethnicity. These analyses exclude cases that did the screener on the TQA,
since these cases were not asked to complete an entire topical questionnaire. Unweighted sample size was equal to
35,480 for the full-size envelope condition and 1,850 for the letter-size envelope condition. Sample sizes have
been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education
Surveys Program (NHES), 2017.
A-8
Table 2.7.
Response rate, by FedEx/First Class condition and
questionnaire: 2017
FedEx/First Class condition
Questionnaire
t statistic
FedEx1
First Class2
44.6
42.0
8.2
87.4
87.2
0.2
PFI-E
89.7
89.5
0.3
PFI-H
76.4
74.5
0.3
73.2
73.0
0.2
Screener
*
Topical
ECPP
ATES
*p < .05.
1Unweighted eligible sample size was 45,030 for the screener, 1,530 for ECPP, 3,240
for PFI-E, 130 for PFI-H, and 11,580 for ATES.
2Unweighted eligible sample size was 44,460 for the screener, 1,420 for ECPP, 2,920
for PFI-E, 90 for PFI-H, and 10,780 for ATES.
NOTE: Response rates were calculated using American Association for Public
Opinion Research (AAPOR) Response Rate 1 (RR1). Percentages represent the
proportion of sampled households (excluding undeliverable and out-of-scope
addresses) that were respondents to the questionnaire. Households with PO box
addresses are excluded because they cannot receive FedEx mailings. Topical
response rates exclude cases that did the screener on the TQA because these cases
were not asked to complete an entire topical questionnaire. Unweighted sample size
(excluding undeliverable and out-of-scope addresses) is 45,030 for the FedEx
condition and 44,460 for the First class condition. Sample sizes have been rounded
to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics,
NHES, 2017.
A-9
Table 2.8.
Number of screener respondent households and percentage distribution, by FedEx/First
Class condition and household characteristics: 2017
Percentage distribution of
screener respondents
Total number
of screener
FedEx
t statistic
Household characteristics
respondents
First Class
37,110
100
100
†
27,740
74.7
75.5
1.7
9,370
25.3
24.5
1.7
White
19,880
56.3
56.8
0.9
Black
3,200
6.9
7.1
0.8
Hispanic
3,120
7.1
6.9
1.0
Asian
1,420
3.9
3.8
0.6
Other
840
2.5
2.3
1.1
8,650
23.3
23.2
0.3
Less than high school
2,960
7.6
7.2
1.3
High school
7,420
19.6
19.9
0.7
Some college
7,990
21.4
21.6
0.4
B.A.
6,100
16.6
17.2
1.5
Graduate/professional
3,990
11.5
10.9
1.5
8,650
23.3
23.2
0.3
420
1.1
1.0
0.9
25–34
2,440
6.3
6.7
1.5
35–44
4,500
12.4
12.0
1.0
45–54
5,860
15.9
15.8
0.3
55–65
7,700
20.7
21.1
0.9
Over 65
8,530
23.0
23.5
1.2
Missing
7,660
20.5
19.8
1.6
Total
Phone number available (from
sampling frame)
Yes
No
Race/ethnicity of head of household
(from sampling frame)
Missing
Education of head of household
(from sampling frame)
Missing
Age of head of household (from
sampling frame)
18–24
See notes at end of table.
A-10
Table 2.8.
Number of screener respondent households and percentage distribution, by
FedEx/First Class condition and household characteristics: 2017 —Continued
Percentage distribution of
screener respondents
Total number
of screener
FedEx
Household characteristics
respondents
First Class
t statistic
Annual household income (from
sampling frame)
Less than $21,000
5,190
13.8
13.2
1.5
$21,000–$36,000
3,550
9.6
9.3
1.1
$36,001–$56,000
4,370
11.2
11.6
1.1
$56,001–$85,000
5,780
15.4
15.6
0.4
$85,001–$120,000
6,310
16.8
17.8
2.5
Greater than $120,000
8,030
22.5
22.6
0.2
Missing
3,880
10.7
10.0
2.1
ECPP
3,840
10.2
10.1
0.3
PFI-E
8,110
21.9
21.2
1.6
*
*
Reported at least one topical-eligible
household member on the screener
PFI-H
350
1.0
0.8
1.5
ATES
29,460
79.3
78.7
1.7
† Not applicable.
*p < .05.
NOTE: Percentages represent the proportion of eligible screener respondent households within that group. Race categories
exclude persons of Hispanic ethnicity. Households with PO box addresses are excluded because they cannot receive
FedEx mailings. Households that did the screener on the TQA are also excluded, since these cases were not asked to
complete an entire topical questionnaire. Unweighted sample size (excluding undeliverable and out-of-scope addresses) is
19,270 for the FedEx condition and 17,840 for the First Class condition. Sample sizes have been rounded to the nearest 10.
Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education Surveys
Program (NHES), 2017.
A-11
Table 3.1.
Response rate, by screener version and
questionnaire: 2017
Screener version
2016
version1
Redesigned
version2
t statistic
43.4
43.1
1.0
88.6
85.8
1.0
PFI-E
90.0
89.2
1.0
PFI-H
72.0
79.3
1.0
73.4
72.8
1.0
Questionnaire
Screener
Topical
ECPP
ATES
1Questions
were asked in a person-by-person format. Unweighted eligible sample
size is 44,780 for the screener, 1,420 for ECPP, 3,020 for PFI-E, 120 for PFI-H, and
11,200 for ATES.
2Questions were asked in a characteristic-by-characteristic format. Unweighted
eligible sample size was 44,710 for the screener, 1,530 for ECPP, 3,140 for PFI-E, 101
for PFI-H, and 11,160 for ATES.
NOTE: Response rates were calculated using American Association for Public
Opinion Research (AAPOR) Response Rate 1 (RR1). Percentages represent the
proportion of sampled households (excluding undeliverable and out-of-scope
addresses) that were respondents to the questionnaire. Topical response rates
exclude cases that did the screener on the TQA because these cases were not asked
to complete an entire topical questionnaire. Unweighted sample size was equal to
44,780 for the 2016 version and 44,710 for the redesigned version. Sample sizes
have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics,
NHES, 2017.
A-12
Table 3.2.
Web screener breakoff rates, by screener version and
household characteristics: 2017
Screener version
Household
characteristics (from
sampling frame)
Overall
2016
version1
Redesigned
version2
t statistic
3.2
3.4
0.7
High school or less
3.3
3.3
0.2
Some college or more
2.7
3.1
1.3
Missing
4.0
3.9
0.4
1-2
3.0
3.2
1.1
3-4
3.4
5 or more
2.4
Missing
5.1
4.8
0.5
Yes
3.3
3.1
0.4
No
3.2
3.4
1.0
Educational attainment of
head of household
Number of adults in the
household
3.2
!
1.0
0.4
!
†
Household is flagged as
having children
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the
coefficient of variation is 30 percent or greater.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of sampled households that accessed
the screener web instrument but did not complete the screener. Households that
accessed the screener via the TQA are excluded from this analysis. Unweighted
sample size was equal to 17,760 for the 2016 version and 17,650 for the
redesigned version.
SOURCE: U.S. Department of Education, National Center for Education Statistics,
National Household Education Surveys Program (NHES), 2017.
A-13
Table 3.3a.
Number of web screener respondent households and percentage with at least one household member or the sampled household member missing
a response to a screener item, by screener version, screener item, and household characteristics: 2017
At least one household member
Sampled household member
missing a response
missing a response
Total number
of web
Screener item and household
screener
characteristics (from sampling
respondent
Redesigned
Redesigned
frame)
households
2016 version1
version2
t statistic
2016 version1
version2
t statistic
Name
Overall
Educational attainment of head of
household
High school or less
Some college or more
26,190
0.5
1.1
-5.5
*
0.2
6,880
13,760
0.7
0.5
1.2
1.0
-2.2
-3.9
*
*
0.3
0.2
5,540
0.4
1.1
-2.6
*
0.2
1-2
19,750
0.5
1.0
-4.5
*
0.2
3-4
3,850
0.8
1.3
-1.6
Missing
0.4
-2.1
*
!
0.6
0.2
-2.3
-0.7
*
!
0.4
!
-1.0
Number of adults in the household
5 or more
180
1.0
!
2,410
0.4
!
Yes
6,260
0.7
1.0
-1.4
No
19,930
0.5
1.1
-5.6
26,190
0.6
0.8
High school or less
6,880
0.8
Some college or more
13,760
0.6
5,540
0.4
Missing
0.6
1.2
!
†
-2.2
0.3
*
0.3
!
-1.9
0.6
-1.1
0.0
!
0.2
!
0.0
0.3
!
0.2
!
0.3
!
†
†
Household is flagged as having children
*
0.2
0.4
-1.2
0.2
0.1
0.8
0.0
0.2
!
0.8
-1.3
0.1
!
0.6
-1.0
0.2
!
-0.6
-2.3
*
!
2.8
*
0.1
0.1
!
!
†
1.0
0.0
!
†
Date of birth/age
Overall
Educational attainment of head of
household
Missing
See notes at end of table.
A-14
Table 3.3a.
Number of web screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017 —Continued
At least one household member
missing a response
Screener item and household
characteristics (from sampling
frame)
Total number
of web
screener
respondent
households
Sampled household member
missing a response
Redesigned
version2
2016 version1
t statistic
2016 version1
Redesigned
version2
t statistic
Number of adults in the household
1-2
19,750
0.6
0.7
-1.0
0.2
3-4
3,850
0.9
0.9
0.1
0.1
!
†
-1.4
0.0
!
0.2
!
0.2
0.2
!
5 or more
Missing
Household is flagged as having children
Yes
No
Sex
Overall
Educational attainment of head of
household
High school or less
180
1.0
!
1.1
0.8
!
0.1
!
2.5
0.1
!
†
0.0
!
†
0.0
!
†
0.1
0.0
!
!
1.0
†
2,410
0.4
6,260
19,930
0.8
0.6
0.9
0.7
-0.4
-1.1
24,680
1.0
1.6
-4.5
6,470
1.0
1.5
-1.8
13,040
1.1
1.6
-2.8
*
5,180
1.0
1.8
-2.2
1-2
18,610
1.0
1.5
-3.2
3-4
3,620
1.2
2.0
-1.9
0.5
!
0.4
!
0.4
180
1.5
0.9
†
-1.8
0.0
0.5
!
0.0
0.6
!
2,280
‡
-0.5
Yes
6,110
1.2
2.1
-2.5
*
0.5
0.5
0.1
No
18,570
1.0
1.4
-2.9
*
0.4
0.4
0.2
Some college or more
Missing
!
*
0.4
0.4
0.2
0.4
-1.5
0.5
0.4
0.9
*
0.5
0.5
*
0.4
0.4
0.2
!
!
*
0.2
Number of adults in the household
5 or more
Missing
!
1.7
1.8
!
!
0.3
!
Household is flagged as having children
See notes at end of table.
A-15
Table 3.3a.
Number of web screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017 —Continued
At least one household member
Sampled household member
missing a response
missing a response
Total number
of web
Screener item and household
screener
characteristics (from sampling
respondent
Redesigned
Redesigned
frame)
households
2016 version1
version2
t statistic
2016 version1
version2
t statistic
School enrollment status
Overall
Educational attainment of head of
household
High school or less
24,680
0.9
1.2
-2.5
*
0.3
6,470
0.9
1.4
-2.1
*
0.2
13,040
0.8
1.2
-1.8
0.3
5,180
1.0
1.2
-0.7
0.5
1-2
18,610
0.8
1.2
-2.0
3-4
3,620
1.1
1.3
-0.5
0.3
!
180
†
-1.4
0.0
!
2,280
0.0
1.0
0.3
Yes
6,110
0.9
1.6
-2.2
0.3
No
18,570
0.9
1.1
-1.6
24,680
0.4
0.8
-4.1
High school or less
6,470
0.3
0.7
-1.9
Some college or more
13,040
0.4
0.9
-3.3
5,180
0.5
0.9
-1.7
Some college or more
Missing
!
!
0.4
-0.2
0.4
-1.3
0.3
0.0
0.3
!
0.9
0.4
!
-0.3
!
†
!
0.0
0.4
!
†
!
0.3
!
Number of adults in the household
5 or more
Missing
!
1.8
1.7
!
*
0.4
0.3
0.1
Household is flagged as having children
*
0.1
0.3
0.4
-0.3
*
0.2
0.3
-1.9
0.1
0.2
!
0.3
*
!
0.3
0.2
!
0.2
Current grade or equivalent
Overall
Educational attainment of head of
household
Missing
!
!
!
†
-1.2
!
-0.2
See notes at end of table.
A-16
Table 3.3a.
Number of web screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017 —Continued
At least one household member
missing a response
Screener item and household
characteristics (from sampling
frame)
Total number
of web
screener
respondent
households
2016 version1
Sampled household member
missing a response
Redesigned
version2
t statistic
2016 version1
Redesigned
version2
t statistic
Number of adults in the household
1-2
18,610
0.4
0.8
-3.4
3-4
3,620
0.4
!
0.8
-1.7
0.2
!
0.3
!
-0.2
180
0.0
!
!
0.0
!
†
0.4
!
†
-1.7
0.0
2,280
0.0
1.0
0.1
!
0.2
!
†
Yes
6,110
0.3
!
1.1
-3.9
*
0.2
!
0.4
!
-1.4
No
18,570
0.4
0.7
-2.7
*
0.2
5 or more
Missing
!
*
0.2
0.3
-1.9
Household is flagged as having children
0.3
-1.5
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of web screener respondent households with at least one household member or the sampled household member missing a response to that screener
item. Households that responded to the screener on the TQA are excluded from this analysis. Unweighted sample size was equal to 17,160 for the 2016 version and 17,040 for the redesigned
version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-17
Table 3.3b.
Number of TQA screener respondent households and percentage with at least one household member or the sampled household member missing
a response to a screener item, by screener version, screener item, and household characteristics: 2017
At least one household
member missing a response
Screener item and household
characteristics (from sampling
frame)
Total number of
TQA screener
respondent
households
2016 version1
Sampled household member
missing a response
Redesigned
version2
t statistic
2016 version1
Redesigned
version2
t statistic
Name
Overall
Educational attainment of head of
household
1,510
0.2
!
0.2
!
†
0.0
!
0.0
!
†
650
0.2
!
0.0
!
†
0.0
!
0.0
!
†
520
340
0.0
0.4
!
!
0.4
0.0
!
!
†
†
0.0
0.0
!
!
0.0
0.0
!
!
†
†
1-2
1,100
0.3
!
0.2
!
†
0.0
!
0.0
!
†
3-4
260
0.0
!
0.0
!
†
0.0
!
0.0
!
†
High school or less
Some college or more
Missing
Number of adults in the household
5 or more
Missing
10
0.0
!
0.0
!
†
0.0
!
0.0
!
†
150
0.0
!
0.0
!
†
0.0
!
0.0
!
†
220
0.0
!
1.1
!
†
0.0
!
0.0
!
†
1,300
0.2
!
0.0
!
†
0.0
!
0.0
!
†
1,510
0.8
!
0.5
!
0.81
0.0
!
0.0
!
†
Household is flagged as having children
Yes
No
Date of birth/age
Overall
Educational attainment of head of
household
High school or less
650
0.8
!
0.0
!
†
0.0
!
0.0
!
†
Some college or more
520
0.9
!
1.2
!
†
0.0
!
0.0
!
†
Missing
340
0.8
!
0.4
!
†
0.0
!
0.0
!
†
See notes at end of table.
A-18
Table 3.3b.
Number of TQA screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017—Continued
At least one household
member missing a response
Screener item and household
characteristics (from sampling
frame)
Total number of
TQA screener
respondent
households
2016 version1
Sampled household member
missing a response
Redesigned
version2
t statistic
2016 version1
Redesigned
version2
t statistic
Number of adults in the household
1-2
1,100
0.8
!
0.3
!
†
0.0
!
0.0
!
†
3-4
260
0.9
!
1.0
!
†
0.0
!
0.0
!
†
5 or more
Missing
Household is flagged as having children
Yes
No
10
0.0
!
0.0
!
†
0.0
!
0.0
!
†
150
1.0
!
1.0
!
†
0.0
!
0.0
!
†
220
0.0
!
2.4
!
†
0.0
!
0.0
!
†
1,300
1.0
!
0.2
!
†
0.0
!
0.0
!
†
1,240
0.3
!
1.5
!
†
0.2
!
0.2
!
†
0.0
!
0.4
!
†
Sex
Overall
Educational attainment of head of
household
High school or less
*
540
0.2
!
1.7
!
†
Some college or more
430
0.2
!
1.2
!
†
*
0.2
!
0.0
!
†
Missing
270
0.4
!
1.9
!
†
*
0.4
!
0.0
!
†
1-2
900
0.4
!
1.4
!
†
*
0.2
!
0.2
!
†
3-4
210
0.0
!
2.2
!
†
0.0
!
0.0
!
†
Number of adults in the household
5 or more
Missing
10
0.0
!
0.0
!
†
0.0
!
0.0
!
†
120
0.0
!
1.7
!
†
0.0
!
0.0
!
†
190
0.0
!
2.5
!
†
*
0.0
!
0.0
!
†
1,050
0.3
!
1.3
!
†
*
0.2
!
0.2
!
†
Household is flagged as having children
Yes
No
See notes at end of table.
A-19
Table 3.3b.
Number of TQA screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017 —Continued
At least one household member
Sampled household member
missing a response
missing a response
Total number of
Screener item and household
TQA screener
characteristics (from sampling
respondent
Redesigned
Redesigned
frame)
households
2016 version1
version2
t statistic
2016 version1
version2
t statistic
School enrollment status
Overall
Educational attainment of head of
household
High school or less
1,240
0.3
0.4
†
0.1
!
0.1
†
0.0
†
540
0.2
0.6
†
0.0
Some college or more
430
0.2
0.0
†
0.2
Missing
270
0.4
0.7
†
0.0
1-2
900
0.4
0.5
†
0.1
3-4
210
0.0
0.0
†
0.0
!
0.0
!
†
10
0.0
†
0.0
!
0.0
!
†
120
0.0
0.0
†
0.0
!
0.0
!
†
!
0.6
!
†
0.0
!
0.4
†
!
†
Number of adults in the household
5 or more
Missing
!
0.0
!
0.1
†
Household is flagged as having children
Yes
190
0.0
0.9
†
0.0
1,050
0.3
0.3
†
0.1
0.0
†
1,240
0.1
0.2
†
0.0
0.0
†
High school or less
540
0.0
0.6
†
0.0
!
0.0
Some college or more
430
0.0
0.0
†
0.0
!
0.0
Missing
270
0.4
0.0
†
0.0
!
0.0
No
Current grade or equivalent
Overall
Educational attainment of head of
household
!
!
!
†
†
!
†
See notes at end of table.
A-20
Table 3.3b.
Number of TQA screener respondent households and percentage with at least one household member or the sampled household member
missing a response to a screener item, by screener version, screener item, and household characteristics: 2017 —Continued
At least one household member
missing a response
Screener item and household
characteristics (from sampling
frame)
Total number of
TQA screener
respondent
households
2016 version1
Sampled household member
missing a response
Redesigned
version2
t statistic
2016 version1
Redesigned
version2
t statistic
Number of adults in the household
1-2
900
0.1
!
0.3
!
†
0.0
!
0.0
!
†
3-4
210
0.0
!
0.0
!
†
0.0
!
0.0
!
†
5 or more
Missing
10
0.0
!
0.0
!
†
0.0
!
0.0
!
†
120
0.0
!
0.0
!
†
0.0
!
0.0
!
†
190
0.0
!
0.0
!
†
0.0
!
0.0
!
†
1,050
0.1
!
0.3
!
†
0.0
!
0.0
!
†
Household is flagged as having children
Yes
No
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of TQA screener respondent households with at least one household member or the sampled household member missing a response to that screener
item. Households that responded to the screener on the web are excluded from this analysis. Unweighted sample size was equal to 1,600 for the 2016 version and 1,530 for the redesigned version.
Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-21
Table 3.4a.
Number of web screener respondent households and percentage who reported an
inconsistent response for at least one household member, by screener version
and household characteristics: 2017
Web screener respondent households
Household
characteristics (from
sampling frame)
Total number
of web
screener
respondents
2016 version1
Redesigned
version2
t statistic
Overall
Educational attainment of
head of household
34,200
2.5
2.8
-2.0
*
High school or less
9,120
2.0
2.9
-3.2
*
Some college or more
17,130
2.6
2.9
-1.0
7,950
2.7
2.5
0.6
1-2
26,170
2.5
2.9
-2.3
3-4
4,240
1.8
1.9
-0.4
Missing
Number of adults in the
household
5 or more
200
1.2
3,580
3.2
2.5
1.3
Yes
7,060
3.8
4.9
-2.2
No
27,140
2.1
2.3
-0.8
Missing
!
9.2
!
*
†
Household flagged as
having children
*
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent
or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of web screener respondent households that reported an inconsistent
response for at least one household member. Respondents were considered to have provided an inconsistent response
based on their responses to the age, enrollment, and grade level items. Households that responded to the screener on
the TQA are excluded from this analysis. The unweighted sample size was 17,160 for the 2016 version and 17,040 for
the redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education
Surveys Program (NHES), 2017.
A-22
Table 3.4b.
Number of TQA screener respondent households and percentage who reported
an inconsistent response for at least one household member, by screener version
and household characteristics: 2017
TQA screener respondent
households
Household
characteristics (from
sampling frame)
Total number
of TQA
screener
respondents
2016 version1
-0.8
!
2.5
-1.5
0.9
!
1.1
!
-0.5
800
1.4
!
0.3
!
†
1-2
2,440
1.0
3-4
340
1.1
!
0.8
!
†
10
0.0
2.1
!
!
0.0
!
†
340
0.4
!
†
340
3.3
!
6.2
!
-1.2
2,790
0.9
Some college or more
Missing
1.2
1,340
1.3
1,000
t statistic
1.5
Overall
Educational attainment
of head of household
High school or less
3,130
Redesigned
version2
Number of adults in the
household
5 or more
Missing
1.7
-1.4
Household flagged as
having children
Yes
No
0.9
-0.1
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent
or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of TQA screener respondent households that reported an inconsistent
response for at least one household member. Respondents were considered to have provided an inconsistent response
based on their responses to the age, enrollment, and grade level items. Households that responded to the screener on
the web are excluded from this analysis. Unweighted sample size was 1,600 for the 2016 version and 1,530 for the
redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education
Surveys Program (NHES), 2017.
A-23
Table 3.5a.
Number of web screener respondent households and percentage where at
least one household member received an "unknown eligibility" sampling
status, by screener version and household characteristics: 2017
Percentage of web screener
respondents
Household
characteristics (from
sampling frame)
Total number
of web
screener
respondents
2016 version1
Redesigned
version2
t statistic
Overall
Educational attainment of
head of household
High school or less
34,200
0.9
1.6
5.2
*
9,120
1.1
1.7
2.7
*
Some college or more
17,130
0.8
1.3
3.1
*
7,950
0.9
1.9
3.7
*
1-2
26,170
0.8
1.5
4.9
*
3-4
4,240
1.1
1.6
1.4
Missing
Number of adults in the
household
5 or more
200
2.0
3,580
1.1
2.1
2.4
Yes
7,060
1.0
1.4
1.6
No
27,140
0.9
1.6
5.0
Missing
!
1.6
!
†
*
Household flagged as
having children
*
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30
percent or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of web screener respondent households where at least one
household member was assigned an "unknown eligibility" status. This status was assigned when there was
insufficient information to determine whether the household member was eligible for one of the topical surveys
because either there was too much item nonresponse or there were inconsistent screener responses. Household
members that received this flag were not eligible for topical sampling. Households that responded to the screener
on the TQA are excluded from this analysis. Unweighted sample size was 17,160 for the 2016 version and 17,040
for the redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education
Surveys Program (NHES), 2017.
A-24
Table 3.5b.
Number of TQA screener respondent households and percentage where at least
one household member received an "unknown eligibility" sampling status, by
screener version and household characteristics: 2017
Percentage of TQA screener
respondents
Household
characteristics (from
sampling frame)
Total number of
TQA screener
respondents
2016 version1
Redesigned
version2
Overall
Educational attainment of
head of household
High school or less
3,130
0.7
1,340
0.9
!
0.7
†
Some college or more
1,000
0.2
!
0.9
1.4
Missing
Number of adults in the
household
1-2
800
0.9
!
0.5
†
2,440
0.7
!
0.7
!
0.2
340
0.7
!
0.8
!
†
3-4
5 or more
Missing
Household flagged as
having children
Yes
No
0.7
t statistic
0.0
0
†
340
1.1
!
0.4
†
!
0.8
†
340
0.4
!
2.3
!
†
2,790
0.7
!
0.5
!
0.8
† Not applicable. Either there are no cases in this group or estimates are not reliable enough to make statistical
comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30
percent or greater.
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of web screener respondent households where at least one household
member was assigned an "unknown eligibility" status. This status was assigned when there was insufficient
information to determine whether the household member was eligible for one of the topical surveys because either
there was too much item nonresponse or there were inconsistent screener responses. Household members that
received this flag were not eligible for topical sampling. Households that responded to the screener on the TQA are
excluded from this analysis. Unweighted sample size was 1,600 for the 2016 version and 1,530 for the redesigned
version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education
Surveys Program (NHES), 2017.
A-25
Table 3.6a.
Mean number of minutes web screener respondent
households spent on the screener questionnaire, by screener
version and household characteristics: 2017
Screener version
Household characteristics
(from sampling frame)
2016 version1
Redesigned
version2
3.9
4.4
4.5
*
High school or less
4.0
4.5
3.0
*
Some college or more
3.9
4.3
2.3
*
Missing
3.8
4.4
2.3
*
1-2
3.1
3.6
3.1
*
3-4
4.8
5.0
0.6
5-6
5.6
6.7
2.7
7 or more
9.4
11.9
1.2
Yes
4.5
4.9
1.1
No
3.7
4.3
4.6
Overall
Educational attainment of head
of household
t statistic
Number of household
members reported in screener
*
Household is flagged as having
children
*
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: The estimates represent the mean number of minutes web respondents spent in the
screener, including time spent on the transition items that appear after sampling. Cases that
completed the topical over multiple days, took more than 6 hours to complete it, or spent more
than 15 minutes on a page without taking any actions are excluded from this analysis.
Households that responded to the screener on the TQA are excluded from this analysis. A small
number of additional households are excluded from the analysis because there was no
information for them available on the paradata file. Unweighted sample size was equal to 17,160
for the 2016 version and 17,040 for the redesigned version.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National
Household Education Surveys Program (NHES), 2017.
A-26
Table 3.6b.
Mean number of minutes TQA screener respondent households
spent on the screener questionnaire, by screener version and
household characteristics: 2017
Screener version
Household characteristics
(from sampling frame)
2016 version1
Redesigned
version2
t statistic
Overall
Educational attainment of
head of household
High school or less
1.9
1.9
0.0
1.7
1.9
1.6
Some college or more
2.2
1.9
0.6
Missing
2.1
2.0
0.4
1-2
1.8
1.7
0.4
3-4
3.5
4.1
1.3
5-6
6.2
6.0
0.2
8.8
8.1
0.4
2.6
0.9
2.0
0.8
Number of household
members reported in
screener
7 or more
Household is flagged as
having children
Yes
No
3.8
1.9
!
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: The estimates represent the mean number of minutes TQA respondents spent in the
screener, including time spent on the transition items that appear after sampling. Cases that
completed the topical over multiple days, took more than 6 hours to complete it, or spent more
than 15 minutes on a page without taking any actions are excluded from this analysis.
Households that responded to the screener on the web are excluded from this analysis. A small
number of additional households are excluded from the analysis because there was no
information for them available on the paradata file. Unweighted sample size was equal to 1,600
for the 2016 version and 1,530 for the redesigned version. Sample sizes have been rounded to
the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National
Household Education Surveys Program (NHES), 2017.
A-27
Table 3.7a.
Number of web screener respondent households and percentage distribution, by screener
version and household characteristics: 2017
Percentage distribution of
web screener respondents
Total number of
web screener
respondents
2016 version1
Redesigned
version2
34,200
100
100
†
25,350
74.2
74.7
0.9
8,850
25.8
25.3
0.9
White
18,430
56.6
56.9
0.6
Black
2,740
6.5
6.5
0.2
Hispanic
2,930
7.2
7.1
0.5
Asian
1,370
3.9
4.1
0.7
Other
790
2.4
2.4
0.1
7,950
23.4
23.0
0.8
Household characteristics
Total
Phone number available (from
sampling frame)
Yes
No
t statistic
Race/ethnicity of head of household
(from sampling frame)
Missing
Education of head of household
(from sampling frame)
Less than high school
2,620
7.1
7.2
0.3
High school
6,490
18.6
18.8
0.5
Some college
7,490
21.7
22.0
0.6
B.A.
5,810
17.3
17.6
0.7
Graduate/professional
3,830
11.9
11.4
1.3
7,950
23.4
23.0
0.8
Missing
Age of head of household (from
sampling frame)
18–24
410
1.0
1.3
1.8
25–34
2,380
6.8
7.0
0.6
35–44
4,370
13.0
12.6
1.1
45–54
5,650
16.4
16.7
0.8
55–65
7,210
21.1
21.3
0.3
Over 65
6,940
20.9
20.3
1.6
Missing
7,250
20.6
20.9
0.5
See notes at end of table.
A-28
Table 3.7a.
Number of web screener respondent households and percentage distribution, by screener
version and household characteristics: 2017—Continued
Percentage distribution of
web screener respondents
Total number of
web screener
Redesigned
Household characteristics
respondents
2016 version1
version2
t statistic
Annual household income (from
sampling frame)
Less than $21,000
4,560
13.0
12.7
0.7
$21,000–$36,000
3,000
8.9
8.5
1.2
$36,001–$56,000
3,920
11.0
11.2
0.6
$56,001–$85,000
5,350
15.5
15.6
0.2
$85,001–$120,000
5,980
17.9
17.6
0.9
Greater than $120,000
7,770
23.4
23.8
1.0
Missing
3,630
10.4
10.6
0.6
† Not applicable.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of eligible screener respondent households within that group. Households that
completed the screener on the TQA are excluded from this analysis. Race categories exclude persons of Hispanic ethnicity.
Unweighted sample size was 17,160 for the 2016 version and 17,040 for the redesigned version. Sample sizes have been rounded
to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education Surveys
Program (NHES), 2017.
A-29
Table 3.7b.
Number of TQA screener respondent households and percentage distribution, by
screener version and household characteristics: 2017
Percentage distribution of TQA
screener respondents
Total number
of TQA
screener
respondents
2016 version1
Redesigned
version2
t statistic
3,130
100
100
†
2,470
78.9
79.7
0.5
660
21.1
20.3
0.5
White
1,550
53.5
52.5
0.6
Black
470
11.6
12.7
1.1
Hispanic
200
5.3
5.4
0.2
60
2.1
1.5
1.3
Household characteristics
Total
Phone number available (from
sampling frame)
Yes
No
Race/ethnicity of head of
household (from sampling frame)
Asian
Other
60
1.9
2.5
1.2
800
25.7
25.3
0.2
Less than high school
360
11.2
10.2
0.9
High school
980
31.2
31.9
0.5
Some college
530
17.2
16.7
0.4
B.A.
300
9.4
9.9
0.5
Graduate/professional
170
5.4
6.0
0.6
800
25.7
25.3
0.2
10
0.3
25–34
70
2.4
1.8
1.2
35–44
150
4.3
5.0
0.9
45–54
240
7.3
7.9
0.8
55–65
520
15.7
17.1
1.1
Over 65
1,630
53.9
51.5
1.3
Missing
520
16.0
16.2
0.1
Missing
Education of head of household
(from sampling frame)
Missing
Age of head of household (from
sampling frame)
18–24
!
0.4
!
0.4
See notes at end of table.
A-30
Table 3.7b.
Number of TQA screener respondent households and percentage distribution, by screener
version and household characteristics: 2017—Continued
Percentage distribution of TQA
screener respondents
Total number
of TQA
screener
respondents
2016 version1
Redesigned
version2
t statistic
Less than $25,000
660
20.7
20.5
0.1
$25,000–$34,999
570
18.2
18.5
0.2
$35,000–$49,999
480
14.3
15.3
0.7
$50,000–$74,999
460
15.7
14.0
1.3
$75,000–$124,999
350
11.5
11.5
0.0
Greater than $124,999
270
8.4
9.4
1.0
Missing
350
11.1
10.8
0.3
Household characteristics
Annual household income (from
sampling frame)
† Not applicable.
! Interpret data with caution. The coefficient of variation is between 30 and 50 percent.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of eligible screener respondent households within that group. Households that
completed the screener on the web are excluded from this analysis. Race categories exclude persons of Hispanic ethnicity.
Unweighted sample size was 1,600 for the 2016 version and 1,530 for the redesigned version. Sample sizes have been rounded
to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education Surveys
Program (NHES), 2017.
A-31
Table 3.8a.
Percentage distribution of the number of household
members reported in the web screener, by screener version:
2017
Screener version
Number of
household members
reported in screener
1
2016 version1
25.3
Redesigned
version2
21.5
2
36.6
37.4
1.6
3
14.9
16.2
3.1
4
14.1
14.6
1.2
5
5.6
6.3
2.6
*
6
2.2
2.6
2.9
*
7 or more
1.3
1.4
0.8
t statistic
8.5 *
*
*p < .05.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of web screener respondent households
within each condition that reported that number of household members. Households
that responded to the screener on the TQA are excluded from this analysis.
Unweighted sample size was 17,160 for the 2016 version and 17,040 for the redesigned
version. Sample sizes have been rounded to the nearest 10. Detail may not sum to totals
due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics,
NHES, 2017.
Table 3.8b.
Percentage distribution of the number of household members
reported in the TQA screener, by screener version: 2017
Screener version
Number of
household members
reported in screener
1
2016 version1
Redesigned
version2
52.5
50.9
t statistic
0.9
2
34.5
35.7
0.7
3
6.9
6.9
0.0
4
3.2
3.9
1.1
5
1.6
1.5
0.3
6
0.8
!
0.8
!
0.0
7 or more
0.6
!
0.4
!
0.7
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient
of variation is 30 percent or greater.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of TQA screener respondent households within
each condition that reported that number of household members. Households that responded
to the screener on the web are excluded from this analysis. Unweighted sample size was
1,600 for the 2016 version and 1,530 for the redesigned version. Sample sizes have been
rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES,
2017.
A-32
Table 3.9a.
Number of web screener respondent households and percentage who reported at least
one household member eligible for the topical surveys, by screener version: 2017
Percentage of web screener respondents
Total number of
web screener
respondents
2016 version
Redesigned
version
34,200
82.1
82.8
1.9
ECPP
34,200
10.5
11.3
2.3
*
PFI-E
34,200
22.4
23.5
2.5
*
PFI-H
34,200
0.9
1.0
0.8
ATES
34,200
81.9
82.6
1.8
Topical
Overall
t statistic
*p < .05.
NOTE: Percentages represent the proportion of web screener respondent households for which at least one reported
household member was eligible for a topical survey. Screener respondent households may have been eligible for more than
one topical; as a result the topical-specific results do not sum to the overall result. Households that responded to the
screener on the TQA are excluded from this analysis. Unweighted sample size was 17,160 for the 2016 version and 17,040
for the redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
Table 3.9b.
Topical
Overall
Number of TQA screener respondent households and percentage who reported at
least one household member eligible for the topical surveys, by screener version:
2017
Percentage of TQA screener
respondents
Total number of TQA
screener respondents
2016 version1
Redesigned
version2
t statistic
3,130
42.5
43.1
0.4
ECPP
3,130
1.9
2.1
0.4
PFI-E
3,130
6.6
6.3
0.3
PFI-H
3,130
0.3
ATES
3,130
41.8
!
0.3
42.5
!
0.2
0.4
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent
or greater.
1Questions were asked in a person-by-person format.
2Questions were asked in a characteristic-by-characteristic format.
NOTE: Percentages represent the proportion of TQA respondent households for which at least one reported household
member was eligible for a topical survey. Screener respondent households may have been eligible for more than one
topical; as a result the topical-specific results do not sum to the overall result. Households that responded to the
screener on the TQA are excluded from this analysis. Unweighted sample size was 1,600 for the 2016 version and
1,530 for the redesigned version. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-33
Table 4.1.
Topical response rate among households eligible for two or more topical questionnaires, by dual-topical condition, order of topicals, topical form, and
topical pairing: 2017
Difference
between
single2
Dual-topical condition
and dualSingle-topical
topical
Topical form
condition1
Overall
First topical Second topical
t statistic conditions
t statistic
ECPP
Overall
89.3
85.5
87.6
81.1
3.9
*
-3.8
3.0
When paired with PFI-E
†
84.6
87.5
82.0
3.0
*
†
†
When paired with PFI-H
†
82.8
†
†
When paired with ATES
†
86.2
87.3
80.6
3.0
*
†
†
!
93.8
!
75.8
!
†
*
PFI-E
Overall
91.8
86.8
90.3
80.3
8.2
*
-5.0
7.0
When paired with ECPP
†
86.7
92.4
80.8
3.9
*
†
†
When paired with ATES
†
86.8
89.3
80.1
6.9
*
†
†
83.9
74.8
0.3
-1.4
0.2
†
†
†
*
PFI-H
Overall
77.3
75.9
When paired with ECPP
†
81.1
When paired with ATES
†
74.1
87.6
70.9
0.5
†
†
-7.8
4.8
*
-8.0
5.5
*
*
!
79.8
!
83.2
!
ATES
Overall
67.6
59.9
†
†
†
Same respondent as screener
90.6
82.6
88.8
76.3
7.3
Different respondent than screener
46.9
37.0
†
†
†
-9.9
5.1
†
†
†
†
†
†
*
When paired with ECPP
Overall
†
58.3
†
†
†
Same respondent as screener
†
80.5
89.5
71.4
6.5
Different respondent than screener
†
34.9
†
†
†
*
See notes at end of table.
A-34
Table 4.1.
Topical response rate among households eligible for two or more topical questionnaires, by dual-topical condition, order of topicals, topical form, and
topical pairing: 2017—Continued
Difference
between
singleDual-topical condition2
and dualSingle-topical
topical
Topical form
condition1
Overall First topical Second topical
t statistic conditions
t statistic
When paired with PFI-E
Overall
†
60.4
†
†
†
†
†
Same respondent as screener
†
83.4
88.7
78.1
12.9
Different respondent than screener
†
37.8
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
*
When paired with PFI-H
Overall
†
60.0
†
Same respondent as screener
†
81.0
85.5
Different respondent than screener
†
35.0
†
!
75.2
†
!
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical questionnaire. Unweighted eligible sample size was
1,700 for ECPP, 3,590 for PFI-E, 120 for PFI-H and 1,400 for ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical questionnaire or two child questionnaires.
Unweighted eligible sample size was 1,230 for ECPP, 2,520 for PFI-E, 100 for PFI-H, and 2,860 for ATES.
NOTE: Response rates were calculated using AAPOR RR1. Percentages represent the proportion of eligible households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and
ATES that were respondents to the topical questionnaire. Child topical results in the dual-topical condition by topical order (first topical and second topical) exclude cases where the other topical sampling
for the household was that a household member other than the screener respondent was sampled for ATES. The analysis excludes cases that did the screener on the TQA because these cases were not
asked to complete the entire topical questionnaire. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-35
Table 4.2.
Topical unit response status among households with household members eligible for two or more topical questionnaires, by dual-topical condition,
order of topicals, and topical pairing: 2017
Topical unit response status
Single-topical
condition1
Overall
Dual-topical condition2
Reverse
Alphabetical
topical
topical order
order
t statistic
Difference
between
single- and
dual-topical
condition
t statistic
Overall
Respondent to all sampled topicals
Respondent to 1 of 2 topicals
Nonrespondent
ECPP/PFI-E
86.0
†
14.0
59.9
29.9
10.2
60.9
28.6
10.5
59.0
31.1
9.9
1.3
1.7
0.5
-26.0
†
-3.9
19.3
†
3.1
Respondent to all sampled topicals
†
81.2
80.5
82.0
0.4
†
†
Respondent to 1 of 2 topicals
†
8.9
7.3
10.4
1.3
†
†
Nonrespondent
†
9.8
12.2
7.6
1.7
†
†
Respondent to all sampled topicals
†
78.7
!
83.2
!
75.8
!
†
†
†
Respondent to 1 of 2 topicals
†
6.5
!
10.6
!
3.9
!
†
†
†
Nonrespondent
†
14.8
!
6.2
!
20.2
!
†
†
†
†
76.0
*
*
ECPP/PFI-H
ECPP/ATES same respondent
Respondent to all sampled topicals
2.0
*
†
†
15.8
2.0
*
†
†
10.5
12.7
0.7
†
†
33.1
31.9
34.6
0.6
†
†
†
57.3
57.2
57.3
0.0
†
†
†
9.6
10.9
8.1
1.0
†
†
Respondent to all sampled topicals
†
79.0
80.3
77.7
1.0
†
†
Respondent to 1 of 2 topicals
†
10.2
8.6
11.7
1.6
†
†
Nonrespondent
†
10.9
11.1
10.6
0.2
†
†
80.6
Respondent to 1 of 2 topicals
†
12.4
8.9
Nonrespondent
†
11.6
Respondent to all sampled topicals
†
Respondent to 1 of 2 topicals
Nonrespondent
71.4
ECPP/ATES different respondent
PFI-E/ATES same respondent
See notes at end of table.
A-36
Table 4.2.
Topical unit response status among households with household members eligible for two or more topical questionnaires, by dualtopical condition, order of topicals, and topical pairing: 2017—Continued
Dual-topical
Topical unit response status
PFI-E/ATES different respondent
Single-topical
condition1
Overall
Alphabetical
topical order
condition2
Reverse topical
order
t statistic
Difference
between
single- and
dual-topical
conditions
†
t statistic
†
Respondent to all sampled topicals
†
35.2
37.0
33.6
1.1
†
†
Respondent to 1 of 2 topicals
†
56.3
54.6
57.8
1.0
†
†
Nonrespondent
†
8.5
8.4
8.6
0.2
†
†
Respondent to all sampled topicals
†
72.8
70.9
!
75.2
!
†
†
†
Respondent to 1 of 2 topicals
†
13.6
!
14.5
!
12.4
!
†
†
†
Nonrespondent
†
13.6
!
14.5
!
12.4
!
†
†
†
Respondent to all sampled topicals
†
31.0
33.4
!
28.8
!
†
†
†
Respondent to 1 of 2 topicals
†
42.0
44.5
!
39.7
!
†
†
†
Nonrespondent
†
26.9
22.2
!
31.5
!
†
†
†
PFI-H/ATES same respondent
PFI-H/ATES different respondent
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical questionnaire. Unweighted eligible sample
size was 1,700 for ECPP, 3,590 for PFI-E, 120 for PFI-H and 1,400 for ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical questionnaire or two child questionnaires.
Unweighted eligible sample size was 1,230 for ECPP, 2,520 for PFI-E, 100 for PFI-H, and 2,860 for ATES.
NOTE: Percentages represent the proportion of households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that completed that number of topicals (all, 1 of 2,
none). ATES "same respondent” households are those where the screener respondent was sampled for ATES; ATES "different respondent" households are those where a household member other
than the screener respondent was sampled for ATES. Child topical results in the dual-topical condition by topical order (alphabetical order versus reverse order) exclude cases where the other topical
sampling for the household was that a household member other than the screener respondent was sampled for ATES. The analysis excludes cases that did the screener on the TQA because these
cases were not asked to complete the entire topical questionnaire. Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-37
Table 4.3.
Incentive cost
per complete
Incentive cost per topical complete, by dual-topical condition and screener incentive condition:
2017
Single-topical condition1
Dual-topical condition2
Overall
$2 screener
incentive
$5 screener
incentive
Overall
$2 screener
incentive
$5 screener
incentive
21.73
11.36
23.37
17.32
9.21
18.66
1Refers
to households that were assigned to receive only one topical questionnaire.
2Refers to households that were assigned to receive two topical questionnaires if they had two or more household members eligible for
at least two of ECPP, PFI (E or H), and ATES.
NOTE: The cost per topical complete was calculated as the total incentive cost in that condition (for screener and topical incentives)
divided by the total number of completed topicals received in that condition. Unweighted sample size for the single-topical condition was
65,000; unweighted sample size for the dual-topical condition was 32,500.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-38
Table 4.4.
Topical
form
ECPP
Topical breakoff rate among households eligible for two or more topical questionnaires, by dual-topical condition,
order of topicals, topical questionnaire, and topical pairing: 2017
Difference
Dual-topical condition2
between
single- and
Single-topical
Second
dual-topical
condition1
Overall
First topical
topical t statistic
conditions
t statistic
Overall
13.1
12.7
14.5
When paired with PFI-E
†
12.1
When paired with PFI-H
†
6.8
When paired with ATES
†
15.0
16.1
11.8
13.4
!
9.6
1.1
-0.3
0.3
0.8
†
†
†
†
†
13.7
0.7
†
†
10.8
!
4.5
!
PFI-E
Overall
10.5
13.0
10.4
12.4
1.2
2.5
3.2
When paired with ECPP
†
7.9
9.6
5.9
1.5
†
†
When paired with ATES
†
13.0
13.6
12.3
0.6
†
†
*
PFI-H
Overall
17.9
16.6
6.2
!
15.9
!
†
-1.2
0.2
When paired with ECPP
†
14.8
!
20.3
!
6.2
!
†
†
†
When paired with ATES
†
9.1
!
12.0
!
6.1
!
†
†
†
ATES same respondent
Overall
9.4
10.2
11.2
9.1
1.2
0.8
0.6
When paired with ECPP
†
12.0
10.5
13.8
1.0
†
†
When paired with PFI-E
†
9.6
11.5
7.5
1.9
†
†
When paired with PFI-H
†
9.1
†
†
†
!
10.9
!
6.8
!
*
See notes at end of table.
A-39
Table 4.4.
Topical breakoff rate among households eligible for two or more topical questionnaires, by dual-topical condition,
order of topicals, topical questionnaire, and topical pairing: 2017 —Continued
Difference
Dual-topical condition2
between
single- and
Topical
Single-topical
Second
dual-topical
t statistic
form
condition1
Overall First topical
topical
t statistic
conditions
ATES different respondent
Overall
9.9
6.8
When paired with ECPP
†
6.9
When paired with PFI-E
†
7.1
When paired with PFI-H
†
0.0
!
!
†
†
†
-3.1
1.8
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical
questionnaire and at least accessed the questionnaire. Unweighted eligible sample size was 1,700 for ECPP, 3,580 for PFI-E, 120 for PFI-H, and 1,040 for
ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical
questionnaire or two child questionnaires and at least accessed the questionnaire. Unweighted eligible sample size was 1,180 for ECPP, 2,420 for PFI-E, 100
for PFI-H, and 1,900 for ATES.
NOTE: Percentages represent the proportion of eligible households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that
was sampled for and reached the first item in the questionnaire but broke off before completing it. ATES "same respondent” households are those where the
screener respondent was sampled for ATES; ATES "different respondent" households are those where a household member other than the screener
respondent was sampled for ATES. Child topical results in the dual-topical condition by topical order (first topical and second topical) exclude cases where the
other topical sampling for the household was that a household member other than the screener respondent was sampled for ATES. This analysis excludes
cases that did the screener on the TQA because these cases were not asked to complete the entire topical questionnaire. Sample sizes have been rounded to
the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-40
Table 4.5.
Item missing rate for key topical survey items among topical respondent households eligible for two or more questionnaires, by dual-topical
condition, topical order, topical questionnaire, and selected items: 2017
Difference
Dual-topical condition2
between
singleand dualSingle-topical
Second
topical
Topical form
condition1
Overall
First topical
topical
t statistic conditions
t statistic
ECPP
Regular care from a relative
0.1
!
0.4
Regular care from a non-relative
Regular care from a daycare, preschool, or pre-k
General description of child's health
0.1
0.3
!
!
0.6
0.5
PFI-E
Type of school child attends
Educational expectations
Number of nights family eats evening meal
together
General description of child's health
PFI-H
Person who provides homeschool instruction
Educational expectations
Number of nights family eats evening meal
together
General description of child's health
0.3
!
0.6
!
†
0.3
!
†
!
!
0.3
0.6
!
!
1.5
0.8
!
!
†
†
!
!
0.5
!
0.3
!
1.0
!
†
0.5
0.2
0.1
†
†
-0.3
!
0.1
!
0.2
!
†
0.6
!
1.2
!
-1.1
0.0
0.3
!
0.3
0.1
0.7
†
-1.6
0.7
0.3
1.4
0.3
0.9
0.2
!
!
3.0
0.1
!
-2.7
†
0.4
0.1
!
*
0.7
0.0
-2.5
0.3
0.0
0.3
!
!
0.0
0.7
!
!
0.0
3.9
!
!
0.0
0.0
!
!
†
†
0.0
0.3
!
!
†
†
0.7
!
1.4
!
0.0
!
0.0
!
†
0.7
!
0.3
!
0.3
!
0.0
!
0.0
!
†
0.0
!
†
†
*
See notes at end of table.
A-41
Table 4.5.
Item missing rate for key topical survey items among topical respondent households eligible for two or more questionnaires, by dual-topical
condition, topical order, topical questionnaire, and selected items: 2017 —Continued
Difference
between
single- and
dual-topical
conditions
t statistic
†
0.3
†
2.2
0.4
0.2
1.0
0.8
0.4
0.7
!
†
Dual-topical condition2
Topical form
ATES same respondent
Single-topical
condition1
Certification or license
0.2
Post-secondary certificate
2.1
Work experience program
!
Overall
0.5
!
2.4
First topical
0.2
2.5
!
Second
topical
0.7
!
!
t statistic
!
-0.3
0.2
!
0.9
Certification or license
0.0
!
0.0
!
†
†
†
0.0
!
†
Post-secondary certificate
0.6
!
2.7
!
†
†
†
2.0
!
†
!
†
†
†
0.0
!
†
ATES different respondent
Work experience program
0.0
!
0.0
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Refers to topical respondent households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical questionnaire.
Unweighted eligible sample size was 1,520 for ECPP, 3,300 for PFI-E, 90 for PFI-H, and 950 for ATES.
2Refers to topical respondent households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical questionnaire or two
child questionnaires. Unweighted eligible sample size was 1,050 for ECPP, 2,180 for PFI-E, 80 for PFI-H, and 1,720 for ATES.
NOTE: Item missing rates represent the percentage of respondents who should have answered the item but did not. The analysis excludes cases that did the screener on the TQA because these
cases were not asked to complete the entire topical questionnaire. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-42
Table 4.6.
Topical form
Mean number of minutes to complete topical among topical respondent households eligible for two or more questionnaires, by
dual-topical condition, order of topicals, topical questionnaire, and topical pairing: 2017
Difference
Dual-topical condition2
between
single- and
Single-topical
Second
dual-topical
condition1
Overall
First topical
topical
t statistic
conditions
t statistic
ECPP
Overall
19.0
16.8
18.8
When paired with PFI-E
†
16.0
When paired with PFI-H
†
8.7
When paired with ATES
†
17.5
17.5
13.3
20.3
!
10.6
11.8
!
9.6
*
-2.2
3.2
3.8
*
†
†
†
†
†
15.6
1.1
†
†
7.3
!
*
PFI-E
Overall
22.2
20.9
21.4
17.7
3.6
*
-1.3
2.6
When paired with ECPP
†
18.5
22.4
13.7
4.9
*
†
†
When paired with ATES
†
21.4
20.9
19.4
1.2
†
†
*
PFI-H
Overall
19.1
16.5
17.3
!
13.0
!
†
-2.6
1.9
When paired with ECPP
†
14.5
!
16.9
!
9.9
!
†
†
†
When paired with ATES
†
17.3
!
17.7
!
14.3
!
†
†
†
0.4
0.4
0.3
0.3
ATES
Overall
11.8
12.2
†
†
†
Same respondent as screener
11.6
11.9
13.2
10.4
2.3
Different respondent than screener
12.2
12.9
†
†
†
0.6
0.5
†
12.2
†
†
†
†
†
Same respondent as screener
†
11.5
13.0
9.5
1.5
†
†
Different respondent than screener
†
14.0
†
†
†
†
†
*
When paired with ECPP
Overall
See notes at end of table.
A-43
Table 4.6.
Mean number of minutes to complete topical among topical respondent households eligible for two or more questionnaires, by
dual-topical condition, order of topicals, topical questionnaire, and topical pairing: 2017 —Continued
Dual-topical condition2
Topical form
When paired with PFI-E
Overall
Single-topical
condition1
Overall
First
topical
Second
topical
t statistic
†
12.3
†
†
†
Same respondent as screener
†
12.1
13.4
10.7
2.0
Different respondent than screener
†
12.5
†
†
†
Difference
between
single- and
dual-topical
conditions
t statistic
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
†
*
When paired with PFI-H
Overall
†
Same respondent as screener
†
10.9
10.4
!
8.7
Different respondent than screener
†
12.0
!
†
†
!
12.6
†
!
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical questionnaire that were
respondents to that questionnaire. Unweighted eligible sample size was 1,470 for ECPP, 3,150 for PFI-E, 90 for PFI-H, and 920 for ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical questionnaire or two
child questionnaires that were respondents to that questionnaire. Unweighted eligible sample size was 1,000 for ECPP, 2,080 for PFI-E, 70 for PFI-H, and 1,650 for ATES.
NOTE: Estimates represent the mean number of minutes for topical respondents to complete the questionnaire among respondent households with two or more individuals
eligible for at least two of ECPP, PFI (E or H), and ATES. Cases that completed the topical over multiple days, took over 6 hours to complete it, or spent more than 15 minutes on
a page without taking any actions are excluded from this analysis. A small number of respondents (less than 1 percent) could not be included in this analysis because there was
not any information available for them on the paradata file. The analysis excludes cases that did the screener on the TQA because these cases were not asked to complete the
entire topical questionnaire. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-44
Table 4.7
Percentage distribution of respondent households on frame variables among households eligible for two or more topical questionnaires, by topical questionnaire, dualtopical condition, and selected household characteristics: 2017
ECPP
PFI-E
PFI-H
ATES
Household
characteristics
Phone number available
(from sampling frame)
Yes
No
Race/ethnicity of head of
household (from sampling
frame)
White
Black
Hispanic
Singletopical
condition1
Dualtopical
condition 2
Singletopical
condition1
Dualtopical
condition2
Singletopical
condition1
Dual-topical
condition2
t statistic
64.1
68.8
2.6
*
77.5
77.1
0.3
73.8
79.5
0.9
74.0
75.6
1.0
35.9
31.2
2.6
*
22.5
22.9
0.3
26.2
20.5
0.9
26.0
24.4
1.0
55.0
51.3
1.8
54.4
53.3
0.7
57.1
53.1
0.5
56.3
54.5
0.8
6.2
6.1
0.1
7.8
6.0
2.9
t statistic
t statistic
*
Singletopical
condition1
Dualtopical
condition2
t statistic
8.9
!
5.1
!
†
6.6
5.6
1.2
10.2
10.4
0.2
10.9
11.3
0.5
5.4
!
8.0
!
0.7
9.9
9.9
0.0
Asian
5.4
4.5
1.1
5.2
5.5
0.5
1.2
!
1.4
!
†
5.5
6.0
0.5
Other
3.0
2.6
0.6
2.6
2.8
0.5
1.2
!
2.8
!
†
2.2
2.4
0.3
20.1
25.1
3.1
19.1
21.0
1.6
26.2
0.5
19.5
21.6
1.3
8.3
10.0
1.4
7.4
8.2
1.1
9.3
9.0
8.1
0.9
High school
15.0
14.3
0.5
14.6
14.8
0.2
16.8
15.0
1.1
Some college
26.4
22.2
2.5
25.7
23.2
2.0
26.9
22.9
2.3
*
B.A.
19.3
18.3
0.7
20.7
20.8
0.1
5.0
17.2
20.7
2.7
*
Graduate/professional
10.9
10.1
0.7
12.5
12.0
0.5
9.4
0.0
10.5
11.7
0.9
Missing
20.1
25.1
3.1
19.1
21.0
1.6
26.2
0.5
19.5
21.6
1.3
Missing
Education of head of
household (from sampling
frame)
Less than high school
*
*
*
*
29.5
!
4.4
!
1.3
23.5
7.8
!
2.8
26.7
21.6
0.9
!
27.3
†
!
9.3
29.5
!
*
*
See notes at end of table.
A-45
Table 4.7
Percentage distribution of respondent households on frame variables among households eligible for two or more topical questionnaires, by topical questionnaire, dualtopical condition, and selected household characteristics: 2017—Continued
ECPP
PFI-E
PFI-H
ATES
Singletopical
condition1
Dualtopical
condition2
Singletopical
condition1
Dualtopical
condition2
18–24
1.7
0.9
1.8
1.3
1.3
0.2
25–34
18.7
18.5
0.1
5.9
7.8
2.8
35–44
26.4
27.3
0.5
26.5
26.2
0.3
45–54
10.5
10.5
0.0
31.8
29.8
55–65
9.2
7.5
1.6
11.6
10.8
Over 65
5.6
4.7
1.2
5.1
Missing
27.8
30.6
1.7
16.2
12.0
3.2
Household
characteristics
t statistic
Singletopical
condition1
t statistic
Dualtopical
condition2
t statistic
Singletopical
condition1
Dualtopical
condition 2
t statistic
Age of head of household
(from sampling frame)
†
0.9
1.3
1.0
18.8
1.5
9.7
9.8
0.1
22.9
27.9
0.8
25.2
24.9
0.1
1.7
23.7
14.2
26.2
27.0
0.4
0.9
17.3
7.2
!
2.2
11.6
10.7
0.6
5.3
0.3
6.4
7.0
!
0.1
5.1
5.4
0.4
17.8
19.0
1.1
19.2
24.9
0.7
21.4
20.8
0.3
11.5
12.2
0.8
19.0
8.4
!
12.9
12.4
0.4
3.1
!
†
5.1
5.9
0.8
12.6
!
0.8
9.1
7.4
1.3
14.5
13.3
0.9
0.0
22.0
19.7
1.1
0.8
28.4
32.4
2.0
0.0
8.0
8.9
0.6
*
0.8
!
0.0
9.7
!
!
!
1.6
*
Annual income (from
sampling frame)
Less than $21,000
*
$21,000–$36,000
6.8
7.0
0.2
5.2
5.2
0.0
8.7
$36,001–$56,000
10.1
10.0
0.1
8.7
7.8
1.1
16.8
$56,001–$85,000
14.1
13.3
0.6
12.9
14.0
1.2
13.7
$85,001–$120,000
17.9
20.9
2.0
20.2
19.7
0.5
12.4
Greater than $120,000
24.6
24.4
0.2
33.8
32.5
1.1
19.9
Missing
10.2
12.5
1.7
7.6
8.6
1.1
9.5
!
!
29.2
12.3
2.4
!
25.0
!
9.4
2.0
!
*
*
*
† Not applicable. Either this estimate or comparison is not applicable for this subgroup, or estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05.
1Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES or a child topical questionnaire and reached at least the first item in the questionnaire.
Unweighted eligible sample size was 1,520 for ECPP, 3,300 for PFI-E, 90 for PFI-H, and 950 for ATES.
2Refers to households with two or more individuals eligible for at least two of ECPP, PFI (E or H), and ATES that received either ATES and a child topical questionnaire or two child questionnaires that were respondents to that
questionnaire. Unweighted eligible sample size was 1,050 for ECPP, 2,180 for PFI-E, 80 for PFI-H, and 1,720 for ATES.
NOTE: Percentages represent the proportion of eligible topical respondent households within each group among households with two or more household members eligible for at least two of ECPP, PFI, and ATES. Race
categories exclude persons of Hispanic ethnicity. The analysis excludes cases that did the screener on the TQA because these cases were not asked to complete the entire topical questionnaire. Sample sizes have been rounded
to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-46
Table 5.1.
Percentage of respondents who chose each response option for item measuring whether certification was required by the government, by
educational attainment and item version: 2017
By educational attainment
High school or less
Overall
Required by government item
Most important certification
Yes
No
Don't know
Second-most-important certification
Yes
No
Don't know
Third-most-important certification
Yes
No
Don't know
New certification
Yes
No
Don't know
Version A1
Version B2
t statistic
77.5
18.0
4.5
73.0
21.7
5.3
2.1
1.6
0.7
66.0
28.4
5.5
62.2
31.9
5.9
55.4
33.8
10.8
63.0
28.2
8.7
Some college or more
Version A1
Version B2
t statistic
78.4
14.4
7.1
!
64.9
26.0
9.1
2.0
1.6
0.5
1.0
1.0
0.2
79.1
15.1
5.8
!
!
66.9
30.8
2.3
61.8
30.2
8.0
0.9
0.6
0.8
68.7
7.4
23.9
!
!
!
56.9
30.2
12.9
1.3
0.4
1.3
55.6
25.2
19.2
!
!
*
Version A1
Version B2
t statistic
77.4
18.5
4.2
73.8
21.3
4.9
1.6
1.3
0.7
!
1.2
1.7
†
64.4
30.3
5.3
61.9
32.0
6.2
0.7
0.5
0.5
50.7
32.4
17.0
!
!
!
†
†
†
54.1
37.0
8.9
62.6
30.0
7.4
1.2
1.1
0.5
61.6
17.5
20.9
!
!
0.3
0.6
0.1
63.9
28.4
7.7
56.4
31.5
12.0
1.4
0.7
1.4
*
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
*p < .05
1In Version A, this item was worded as "Is your [most/second-most/third-most] important certification or license required by a federal, state, or local government agency (such as a state board)
in order to do that kind of work?"
2In Version B, the item was "Is your most important certification or license required by a government agency (such as a state licensing board) in order to do that kind of work?"
NOTE: Percentages represent the proportion of ATES respondents who selected the response option out of those who answered the question. Cases that responded to the screener on the
TQA are excluded from this analysis because they were not asked to complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and 8,200 for Version B.
Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-47
Table 5.2.
Item missing rate for item measuring whether certification was required by the government, by respondent characteristics and item version: 2017
By educational attainment
High school or less
Overall
Required by government item
Version A1
Version B2
Most important certification
Second-most-important
certification
Third-most-important certification
0.4
0.1
!
!
t statistic
Version A1
Version B2
†
1.2
!
0.7
!
Some college or more
t statistic
Version A1
Version B2
†
0.2
0.1
!
t statistic
†
4.5
4.4
0.1
2.3
!
0.0
!
†
4.8
4.7
0.1
13.3
17.3
1.1
8.2
!
9.5
!
†
14.0
17.8
1.0
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
1In Version A, this item was worded as "Is your [most/second-most/third-most] important certification or license required by a federal, state, or local government agency (such as a state board) in order to do
that kind of work?"
2In Version B, the item was worded as "Is your [most/second-most/third-most] important certification or license required by a government agency (such as a state licensing board) in order to do that kind of
work?"
NOTE: Item missing rates represent the proportion of ATES respondents who should have answered the item but did not. Cases that responded to the screener on the TQA are excluded from this analysis
because they were not asked to complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and 8,200 for Version B. Sample sizes have been rounded to the nearest
10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-48
Table 5.3.
Percentage of respondents who chose each response option, by respondent characteristics, item version, and usefulness item: 2017
By educational attainment
High school or less
Overall
Usefulness item
Most important certification or license
Getting a job
Not useful
Somewhat useful
Very useful
Too soon to tell
Keeping a job
Not useful
Somewhat useful
Very useful
Too soon to tell
Keeping you marketable to
employers or clients
Not useful
Somewhat useful
Very useful
Too soon to tell
Improving your work skills
Not useful
Somewhat useful
Very useful
Too soon to tell
Version A1
Version B2
5.5
11.1
79.7
3.6
4.9
13.1
79.1
2.9
6.9
11.0
79.3
2.8
Version A1
Version B2
0.6
1.3
0.3
0.9
7.9
7.8
80.2
4.1
3.6
17.7
76.7
2.0
!
!
5.7
12.5
78.6
3.2
1.1
1.0
0.3
0.3
6.9
11.5
79.1
2.6
!
!
4.4
11.5
73.9
10.2
!
5.6
12.8
78.3
3.3
4.1
13.2
80.2
2.5
1.3
0.3
0.9
0.7
6.5
10.5
80.8
2.3
!
5.4
12.5
76.2
6.0
!
8.2
19.6
71.2
1.0
7.2
21.4
70.5
1.0
0.8
0.8
0.3
0.0
8.9
13.8
76.1
1.2
6.6
27.5
62.8
3.1
!
!
!
t statistic
!
!
!
!
!
!
!
!
Some college or more
t statistic
Version A1
Version B2
t statistic
1.6
1.7
0.5
†
5.2
11.3
79.9
3.6
5.0
12.6
79.4
3.0
0.2
0.9
0.3
0.7
0.9
0.0
0.7
†
7.0
10.9
79.3
2.9
5.9
12.6
79.1
2.5
0.9
1.0
0.1
0.3
†
0.6
0.8
1.4
5.5
13.1
78.0
3.4
3.9
13.3
80.7
2.1
1.3
0.2
1.1
1.1
0.8
1.7
1.7
†
8.1
20.4
70.6
1.0
7.2
20.7
71.2
0.8
0.7
0.2
0.3
0.5
!
!
!
See notes at end of table.
A-49
Table 5.3.
Percentage of respondents who chose each response option, by respondent characteristics, item version, and usefulness item: 2017—Continued
By educational attainment
High school or less
Overall
Usefulness item
Post-secondary certificate
Getting a job
Not useful
Somewhat useful
Very useful
Too soon to tell
Increasing your pay
Not useful
Somewhat useful
Very useful
Too soon to tell
Improving your work skills
Not useful
Somewhat useful
Very useful
Too soon to tell
Version A1
Version B2
t statistic
Version A1
20.7
27.6
47.6
4.1
25.4
27.1
43.2
4.4
1.9
0.2
1.3
0.2
23.0
26.2
47.3
3.4
38.7
30.2
27.5
3.6
42.2
25.1
28.4
4.2
1.1
1.9
0.3
0.5
32.3
30.7
33.1
3.9
12.7
32.8
53.2
1.4
14.6
35.0
48.2
2.3
1.0
0.7
1.6
1.3
17.4
27.7
52.9
2.0
Version B2
!
21.3
31.3
44.1
3.3
!
41.1
27.8
28.0
3.1
!
17.3
36.8
44.9
1.0
Some college or more
t statistic
Version A1
Version B2
t statistic
!
0.3
0.7
0.4
†
20.2
27.9
47.7
4.2
25.7
26.6
43.2
4.5
1.9
0.4
1.2
0.2
!
1.1
0.4
0.7
0.3
39.8
30.1
26.5
3.6
42.4
24.6
28.6
4.3
0.8
2.0
0.8
0.6
!
0.0
1.3
1.1
†
11.8
33.7
53.2
1.3
14.2
34.7
48.6
2.4
1.3
0.3
1.3
1.5
*
See notes at end of table.
A-50
Table 5.3.
Percentage of respondents who chose each response option, by respondent characteristics, item version, and usefulness item: 2017—Continued
By educational attainment
High school or less
Overall
Usefulness item
Version A1
Version B2
9.5
24.2
62.3
4.0
10.2
24.9
60.7
4.2
Not useful
Somewhat useful
Very useful
Too soon to tell
Improving your work skills
Not useful
Somewhat useful
Very useful
34.7
26.2
34.9
4.1
Too soon to tell
t statistic
Version A1
Version B2
0.6
0.4
0.7
0.2
9.7
29.6
53.1
7.7
!
16.6
19.6
55.6
8.2
33.8
23.9
38.0
4.4
0.4
1.4
1.4
0.2
25.2
31.6
33.9
9.3
!
18.9
23.9
49.7
7.5
5.4
29.1
64.5
5.0
31.2
61.9
0.5
1.1
1.3
8.4
35.7
53.4
!
6.3!
26.0
63.8
0.9
1.9
1.5
2.5
!
4.0
Some college or more
t statistic
Version A1
Version B2
t statistic
!
1.2
1.1
0.3
0.2
9.5
23.9
62.9
3.7
9.8
25.2
61.0
3.9
0.3
0.7
0.9
0.2
!
0.9
0.9
2.0*
0.4
35.3
25.9
34.9
3.9
34.7
23.9
37.2
4.2
0.2
1.2
1.0
0.3
0.5
1.2
1.2
5.2
28.8
65.1
4.9
31.6
61.7
0.4
1.4
1.6
†
0.9
1.8
Last work experience program
Getting a job
Not useful
Somewhat useful
Very useful
Too soon to tell
!
Increasing your pay
!
!
!
!
1.4
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
1In Version A, the response options were listed in the order presented in this table.
2In Version B, the response options were listed as "Very useful", "Somewhat useful", "Not useful", and "Too soon to tell".
NOTE: Percentages represent the proportion of ATES respondents who selected the response option out of those who answered the question. Cases that responded to the screener on the TQA are excluded
from this analysis because they were not asked to complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and 8,200 for Version B. sample sizes have been rounded
to the nearest 10. Detail may not sum to totals due to rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-51
Table 5.4.
Item missing rates, by respondent characteristics, item version, and usefulness item: 2017
By educational attainment
High school or less
Overall
Usefulness item
Version A1
Version B2
t statistic
1.2
1.0
1.5
1.2
1.5
Version A1
Version B2
0.6
2.8
!
2.9
!
0.7
3.4
!
0.9
!
1.2
0.7
3.2
!
1.7
1.2
1.1
0.4
2.2
!
2.0
8.2
7.8
0.2
9.2
!
8.6
8.5
0.0
10.3
8.5
7.2
0.9
2.4
2.2
Increasing your pay
2.5
Improving your work skills
2.3
Some college or more
t statistic
t statistic
Version A1
Version B2
0.1
0.9
!
0.8
0.3
†
1.2
!
1.2
0.1
!
†
1.2
1.1
0.2
!
†
1.0
1.0
0.1
7.4
0.4
7.9
7.9
0.0
!
9.0
0.3
8.1
8.5
0.2
9.3
!
6.5
0.7
8.2
7.3
0.6
0.3
3.4
!
5.0
!
†
2.4
2.1
0.5
2.5
0.0
6.0
!
5.4
!
†
2.3
2.3
!
0.0
2.1
0.3
5.0
!
3.6
!
†
2.2
2.0
!
0.2
Usefulness of most important
certification or license
Getting a job
Keeping a job
Keeping you marketable to
employers or clients
Improving your work skills
Usefulness of post-secondary
certificate
Getting a job
Increasing your pay
Improving your work skills
Usefulness of work experience
program
Getting a job
† Not applicable. Estimates are not reliable enough to make statistical comparisons.
! Interpret with caution. Either there are too few cases for a reliable estimate or the coefficient of variation is 30 percent or greater.
1In Version A, the response options were listed as "Not useful", "Somewhat useful", "Very useful", and "Too soon to tell".
2In Version B, the response options were listed as "Very useful", "Somewhat useful", "Not useful", and "Too soon to tell".
NOTE: In both versions, item missing rates represent the proportion of ATES respondents who should have answered the item but did not. Cases that responded to the screener on the
TQA are excluded from this analysis because they were not asked to complete the full topical questionnaire. Unweighted eligible sample size was 8,080 for Version A and 8,200 for
Version B. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
A-52
Table 5.5.
Straightlining rates, by respondent characteristics, item version, and usefulness item: 2017
By educational attainment
High school or less
Overall
Usefulness item
Usefulness of most important
certification or license
Usefulness of post-secondary
certificate
Usefulness of work experience
program
Some college or more
Version A1
Version B2
t statistic
Version A1
Version B2
t statistic
Version A1
Version B2
t statistic
61.3
61.9
0.2
71.1
55.7
1.9
60.4
62.5
0.8
38.4
41.5
1.1
42.7
46.5
0.6
37.6
40.9
1.1
40.6
45.0
1.9
51.9
59.0
0.9
39.9
44.1
1.7
1In
Version A, the response options were listed as "Not useful", "Somewhat useful", "Very useful", and "Too soon to tell".
Version B, the response options were listed as "Very useful", "Somewhat useful", "Not useful", and "Too soon to tell".
NOTE: Percentages represent the proportion of respondents who straightlined (selected the same response for all items in the grid) out of those who should have answered the questions.
Cases that responded to the screener on the TQA are excluded from this analysis because they were not asked to complete the full topical questionnaire. The denominator for each analysis
is all ATES respondents who reported having the credential in question. Unweighted eligible sample size was 8,080 for Version A and 8,200 for Version B. Sample sizes have been rounded
to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2017.
2In
A-53
Table 6.1.
Percentage point increase in screener response rate after each screener mailing, by survey
administration, mode condition, and mailing: 2014-2017
Survey administration
2014
(paperonly)2
2016
(paper-only
condition)3
2016
(mixedmode)4
2017
(webonly)5
4.5
1.6
26.1
13.5
Reminder postcard/pressure-sealed envelope
25.3
31.4
6.1
13.1
Second screener mailing
26.6
13.3
7.5
7.9
Third screener mailing
10.1
14.4
13.7
Fourth screener mailing
2.2
†
3.2
4.9
8.8
†
0.2
0.2
†
68.7
64.2
58.5
43.3
Mailing
First screener mailing
Robocall
Final response rate
† Not applicable.
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received
three or more days after that mailing was sent and less than three days after the next mailing was sent. Unweighted eligible
sample sizes were 54,620 in 2014 (paper-only), 155,180 in 2016 (paper-only condition), 31,680 in 2016 (mixed-mode condition)
and 89,485 in 2017 (web-only). Sample sizes have been rounded to the nearest 10. Detail may not sum to totals due to
rounding.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-17.
A-54
Table 6.2.
Contact effort
Percentage point increase in topical response rate after each topical contact effort, by survey
administration, mode condition, dual-topical condition, topical questionnaire, and contact effort:
2014-2017
Survey administration
2017
2016
(web-only,
2017
2016 (
(mixedsingle(web-only,
2014
paper-only
mode
topical dual-topical
(paper-only)
condition)
condition)
condition)
condition)
ECPP1
Before initial contact
†
†
60.3
88.4
85.3
Initial topical e-mail
†
†
†
0.0
0.0
Initial topical mailing
†
1.1
0.9
0.0
0.0
Postcard/pressure-sealed envelope reminder
†
39.0
11.8
0.1
0.1
First follow-up topical mailing
†
18.0
4.9
†
†
Second follow-up topical mailing
†
10.8
4.0
†
†
Third follow-up topical mailing
†
4.0
2.3
†
†
Final response rate
†
72.8
84.2
88.5
85.3
Before initial contact
†
†
62.8
91.5
86.3
Initial topical e-mail
†
†
†
0.0
0.0
9.8
1.8
1.2
0.1
0.1
Postcard/pressure-sealed envelope reminder
40.4
39.3
10.6
0.1
0.4
First follow-up topical mailing
12.7
18.9
4.1
†
†
Second follow-up topical mailing
8.8
11.2
4.8
†
†
Third follow-up topical mailing
3.9
4.1
1.8
†
†
75.6
75.3
85.4
91.6
86.7
Before initial contact
†
†
51.0
74.6
76.2
Initial topical e-mail
†
†
†
0.0
0.0
Initial topical mailing
†
1.1
0.0
0.0
0.0
Postcard/pressure-sealed envelope reminder
†
29.9
10.6
0.0
0.0
First follow-up topical mailing
†
16.0
7.1
†
†
Second follow-up topical mailing
†
8.3
2.5
†
†
Third follow-up topical mailing
†
3.6
1.7
†
†
Final response rate
†
58.9
73.0
74.6
76.2
PFI-E2
Initial topical mailing
Final response rate
PFI-H3
See notes at end of table.
A-55
Table 6.2.
Contact effort
ATES
(Overall)4
Percentage point increase in topical response rate after each topical contact effort, by survey
administration, mode condition, dual-topical condition, topical questionnaire, and contact effort:
2014-2017—Continued
Survey administration
2017
2016
(web-only,
2017
2016
(mixedsingle(web-only,
2014 (paper-only
mode
topical dual-topical
(paper-only)
condition)
condition)
condition)
condition)
Before initial contact
†
†
40.8
53.1
50.1
Initial topical e-mail
†
†
†
0.0
0.0
5.8
2.6
7.8
14.4
12.9
Postcard/pressure-sealed envelope reminder
48.2
43.3
16.6
7.4
7.3
First follow-up topical mailing
11.4
15.2
7.7
†
†
Second follow-up topical mailing
8.7
9.7
6.4
†
†
Third follow-up topical mailing
3.0
3.4
2.5
†
†
77.1
74.2
81.7
74.9
70.3
Before initial contact
†
†
64.5
91.8
88.5
Initial topical e-mail
†
†
†
0.0
0.0
Initial topical mailing
Final response rate
ATES (Same
respondent)5
6.4
2.8
0.9
0.1
0.1
Postcard/pressure-sealed envelope reminder
51.3
46.1
13.3
0.2
0.1
First follow-up topical mailing
11.0
15.2
4.4
†
†
8.3
9.3
3.8
†
†
Initial topical mailing
Second follow-up topical mailing
Third follow-up topical mailing
Final response rate
2.5
3.2
1.1
†
†
79.5
76.6
88.0
92.1
88.7
.See notes at end of table.
A-56
Table 6.2.
Contact effort
Percentage point increase in topical response rate after each topical contact effort, by survey
administration, mode condition, dual-topical condition, topical questionnaire, and contact effort:
2014-2017—Continued
Survey administration
2017
2016
(web-only,
2017
2016
(mixedsingle(web-only,
2014 (paper-only
mode
topical dual-topical
(paper-only)
condition)
condition)
condition)
condition)
ATES (Different respondent)6
Before initial contact
†
†
†
†
†
Initial topical e-mail
†
†
†
†
†
4.9
2.4
19.6
34.1
29.8
Postcard/pressure-sealed envelope reminder
44.3
39.7
22.2
17.3
16.6
First follow-up topical mailing
11.9
15.1
13.3
†
†
Second follow-up topical mailing
9.3
10.1
10.8
†
†
Third follow-up topical mailing
3.6
3.8
5.0
†
†
74.0
71.2
70.9
51.4
46.4
Initial topical mailing
Final response rate
† Not applicable.
1 Unweighted eligible sample size was 6,700 in 2016 (paper-only), 1,230 in 2016 (mixed-mode condition), 1,720 in 2017 (single-topical condition),
and 1,230 in 2017 (dual-topical condition).
2 Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016 (paper-only), 2,790 in 2016 (mixed-mode condition), 3,630 in 2017 (singletopical condition), and 2,530 in 2017 (dual-topical condition).
3 Unweighted eligible sample size was 790 in 2016 (paper-only), 140 in 2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and
100 in 2017 (dual-topical condition).
4 Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paper-only), 9,980 in 2016 (mixed-mode condition), 13,310 in 2017 (singletopical condition), and 9,050 in 2017 (dual-topical condition).
5 Unweighted eligible screener sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition), 7,700 in 2017
(single household), and 5,140 in 2017 (dual household).
6 Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode condition), 5,610 in 2017 (singletopical condition), and 3,910 in 2017 (dual-topical condition).
NOTE: Response rates were calculated using AAPOR RR1. Response is attributed to a mailing if the response was received three or more days
after that mailing was sent and less three days after the next mailing was sent. ATES “same respondent” households are those where the screener
respondent was sampled for ATES. ATES “different respondent” households are those where a household member other than the screener
respondent was sampled for ATES. Response is attributed to an e-mail if the response was received from the day the e-mail was sent up to two
days after the next mailing was sent. In 2017, these analyses exclude cases that completed the screener on the TQA because they were not asked
to complete the full topical. In 2014, ASPA was administered instead of the PFI and is used as a proxy for the PFI-E response rate in 2014. ECPP
and PFI-H were not administered in 2014. ATES seeded sample members (2014 and 2016) are excluded from this analysis. There was also a
robocall in 2016, but it happened the same date as the second follow-up mailing and is therefore not shown in the table. There was also a second
e-mail reminder in 2017, but it was sent too soon after the pressure-sealed envelope to isolate its effect on the response rate and is therefore not
shown in the table. Detail may not sum to totals due to rounding. Sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017
A-57
Table 6.3.
Percentage of screener respondents that provided
their e-mail address, by survey administration, and
topical questionnaire: 2016-2017
Topical
questionnaire
20161
2017
Overall2
79.4
72.9
8.6
*
ECPP3
81.4
76.8
2.2
*
PFI-E4
82.7
77.7
4.0
*
PFI-H5
85.5
77.5
1.2
ATES6
77.6
71.0
7.0
t statistic
*
*p < .05.
1This is restricted to NHES:2016 respondents who were asked for their own e-mail
address to be comparable to 2017.
2The number of screener respondents in households sampled for a topical and
asked to provide their own e-mail address was 3,560 in 2016 and 29,720 in 2017.
3The number of screener respondents in households sampled for this topical and
asked to provide their own e-mail address was 400 in 2016 and 3,000 in 2017.
4The number of screener respondents in households sampled for this topical and
asked to provide their own e-mail address was 920 in 2016 and 6,320 in 2017.
5The number of screener respondents in households sampled for this topical and
asked to provide their own e-mail address was 30 in 2016 and 220 in 2017.
6The number of screener respondents in households sampled for this topical and
asked to provide their own e-mail address was 2,210 in 2016 and 15,040 in 2017.
NOTE: In 2016, a random sample of screener respondents were asked for e-mail
addresses for the topical respondent at the end of the screener. In 2017, screener
respondents were asked for their own e-mail address (unless the only topical
sampling that occurred was that a different household member was sampled for
ATES—then the e-mail address request was not made). Households that were not
asked for an e-mail address are excluded from this analysis. All sample sizes have
been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics,
NHES, 2017.
A-58
Table 6.4.
Percentage of screener respondents who were sent an e-mail
reminder that responded to the topical as a result of the email, by topical questionnaire: 2017
Topical questionnaire
Overall1
Percentage of screener respondents who were
sent an e-mail reminder1
0.2
!
ECPP2
0.0
!
PFI-E3
0.0
!
PFI-H4
0.0
!
ATES5
8.3
!
! Interpret with caution. Either there are too few cases for a reliable estimate or the
coefficient of variation is 30 percent or greater.
*p < .05.
1The number of screener respondents in households sampled for at least one topical and
sent an e-mail was 1,160.
2The number of screener respondents in households sampled for ECPP and sent an e-mail
was 280.
3The number of screener respondents in households sampled for PFI-E and sent an e-mail
was 830.
4The number of screener respondents in households sampled for PFI-H and sent an e-mail
was 30.
5The number of screener respondents in households sampled for ATES and sent an e-mail
was 20 in 2017. E-mails were only sent for ATES if the screener respondent was the
household member sampled for ATES—the screener respondent was not asked to provide
the e-mail address for another household member.
NOTE: Screener respondents were asked for their e-mail address at the end of the
screener. Response is attributed to an e-mail if the response is received starting the day
the day the e-mail was sent and within three days after the next mailing was sent. These
analyses only include response due to the first e-mail, not the second e-mail. The second email was sent 3 days after the pressure sealed envelope reminder, and, therefore, the effect
of the second e-mail cannot be distinguished from the effect of the pressure sealed
envelope reminder. All sample sizes have been rounded to the nearest 10.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES,
2017.
A-59
Appendix B. Additional Figures
B-1
Topical Response Rate by Week
Cumulative response rate
Figure 6.7a: Cumulative ECPP response rate, by week and survey administration: 2016-2017
100%
88%
88%
80% 85%
60%
85%
84%
73%
60%
40%
20%
0% 0%
0 1
2
3
4
5
6
7
8
9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
Weeks
2016 (paper-only condition)
2017 (web only, single topical condition)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible sample sizes were 6,700 in 2016 (paper-only),
1,230 in 2016 (mixed-mode condition), 1,720 in 2017 (single-topical condition), and 1,230 in 2017 (dual-topical condition).
ECPP was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. The final contact attempt was sent at week 23 for 2016 and at week 12
for 2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Cumulative response rate
Figure 6.7b: Cumulative PFI-E response rate, by week and survey administration: 2014-17
100% 91%
92%
80% 86%
87%
85%
76%
75%
63%
60%
40%
20%
0%
0%
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Weeks
2014 (paper-only)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
2016 (paper-only condition)
2017 (web only, single topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016
(paper-only), 2,790 in 2016 (mixed-mode condition), 3,630 in 2017 (single-topical condition), and 2,530 in 2017 (dual-topical
condition). In NHES:2014, ASPA was administered instead of the PFI. Given similarities in the eligibility criteria, ASPA is used
as a proxy for PFI-E response rates in 2014. TQA screener respondents are excluded from the 2017 topical response rate
calculation because they were not asked to complete a topical survey. The final contact attempt was sent at week 20 for 2014,
week 23 for 2016, and week 12 for 2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-2
Figure 6.7c: Cumulative PFI-H response rate, by week and survey administration: 2016-2017
Cumulative response rate
100%
76%
80% 76%
75%
60%
40%
73%
75%
59%
51%
20%
0%
0%
0
1
2
3
4
5
6
7
8
9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27
Weeks
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible sample size was 790 in 2016 (paper-only), 140
in 2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and 100 in 2017 (dual-topical condition). PFI-H was not
administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation because they were
not asked to complete a topical survey. The final contact attempt was sent at week 23 for 2016 and week 12 for 2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Figure 6.7d: Cumulative ATES (overall) response rate, by week and survey administration: 2014-17
Cumulative response rate
100%
82%
80%
70%
60%
53%
50%
40%
41%
75%
77%
74%
20%
0%
0%
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Week
2014 (paper-only)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
2016 (paper-only condition)
2017 (web only, single topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016
(paper-only), 9,980 in 2016 (mixed-mode condition), 13,310 in 2017 (single-topical condition), and 9,050 in 2017 (dual-topical
condition). TQA screener respondents are excluded from the 2017 topical response rate calculation because they were not asked
to complete a topical survey. The final contact attempt was sent at week 20 for 2014, week 23 for 2016, and week 12 for 2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-3
Cumulative response rate
Figure 6.7e: Cumulative ATES (same respondent) response rate, by week and survey
Figure 6.x: administration: 2014-17
100% 92%
92%
80% 88%
89%
88%
77%
80%
60% 64%
40%
20%
0%
0%
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Weeks
2014 (paper-only)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
2016 (paper-only condition)
2017 (web only, single topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Unweighted eligible screener sample size was 7,620 in 2014, 30,370
in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition), 7,700 in 2017 (single-topical condition), and 5,140 in 2017 (dualtopical condition). TQA screener respondents are excluded from the 2017 topical response rate calculation because they were not
asked to complete a topical survey. The final contact attempt was sent at week 20 for 2014, week 23 for 2016, and week 12 for
2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Figure 6.7f: Cumulative ATES (different respondent) response rate, by week and survey
Figure 6.x: administration: 2014-17
Cumulative response rate
100%
80%
74%
71%
71%
60%
51%
46%
40%
20%
0%
0%
0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30
Weeks
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. TQA screener respondents are excluded from the 2017 topical
response rate calculation because they were not asked to complete a topical survey. Unweighted eligible sample size was 6,090 in
2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode condition), 5,610 in 2017 (single-topical condition), and 3,910 in
2017 (dual-topical condition). The final contact attempt was sent at week 20 for 2014, week 23 for 2016, and week 12 for 2017.
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-4
Topical Response Rate by Day After Each Contact Attempt
Cumulative response rate
Figure 6.8a: ECPP response rate following the initial mailing, by number of days since mailing
Figure 6.8a: and survey administration: 2016-2017
100%
88%
88%
80%
85%
60%
60%
85%
61%
40%
20%
1%
0%
0%
0
1
2
3
4
5
6
7
8
9
10
Number of days since mailing
2016 (paper-only condition)
2017 (web only, single topical condition)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ECPP was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. Unweighted eligible sample size was 6,700 in 2016 (paper-only),
1,230 in 2016 (mixed-mode condition), 1,720 in 2017 (single-topical condition), and 1,230 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
100%
60%
88%
88%
80% 85%
73%
85%
61%
40%
40%
20%
0%
1%
0
3
6
9
12
15
18
21
24
27
30
33
36
39
42
45
48
51
54
57
60
63
66
69
72
75
78
81
84
87
90
93
96
99
102
105
108
111
114
Cumulative response rate
Figure 6.8b: ECPP response rate following the postcard/pressure-sealed envelope reminder, by
Figure 6.8b: number of days since mailing and survey administration: 2016-2017
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ECPP was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. Unweighted eligible sample size was 6,700 in 2016 (paper-only),
1,230 in 2016 (mixed-mode condition), 1,720 in 2017 (single-topical condition), and 1,230 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
B-5
Cumulative response rate
Figure 6.8c: ECPP response rate following the first follow-up, by number of days since mailing
Figure 6.8c: and survey administration: 2016-2017
100%
80%
78%
73%
58%
60%
40%
40%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ECPP was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. Unweighted eligible sample size was 6,700 in 2016 (paper-only),
1,230 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Cumulative response rate
Figure 6.8d: ECPP response rate following the second follow-up, by number of days since
Figure 6.8d: mailing and suvery administration: 2016
100%
80%
78%
60%
58%
82%
69%
40%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ECPP was not administered in 2014. There was no second follow-up mailing in 2017. Unweighted eligible sample size was 6,700
in 2016 (paper-only), 1,230 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016.
B-6
100%
82%
84%
60% 69%
73%
80%
40%
20%
0%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
Cumulative response rate
Figure 6.8e: ECPP response rate followingn the third follow-up, by number of days since
Figure 6.8e: mailing and suvery administration: 2016
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1 Day 0 is the day that the mailing was sent. The final day shown is the
day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ECPP was not administered in 2014. There was no third follow-up mailing in 2017. Unweighted eligible sample size was 6,700
in 2016 (paper-only), 1,230 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016.
Cumulative response rate
Figure 6.9a: PFI-E response rate following the initial mailing, by number of days since mailing
Figure 6.9a: and suvey administration: 2014-2017
100%
92%
80% 86%
92%
86%
64%
60% 63%
40%
20%
0%
0%
0
1
2
3
4
5
6
7
8
9
10%
2%
10
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
In 2014, ASPA was administered instead of the PFI-E. Given similarities in the eligibility criteria, ASPA is used as a proxy for
PFI-E response rates in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation because
they were not asked to complete a topical survey. Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016 (paperonly), 2,790 in 2016 (mixed-mode condition), 3,630 in 2017 (single-topical condition), and 2,530 in 2017 (dual-topical
condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-7
Cumulative response rate
Figure 6.9b: PFI-E response rate following the postcard/pressure-sealed envelope reminder, by
Figure 6.9b: numer of days since mailing and survey administration: 2014-2017
100% 92%
92%
80% 86%
64%
60%
75%
40%
41%
87%
50%
20% 10%
2%
0%
0 4
8 12 16 20 24 28 32 36 40 44 48 52 56 60 64 68 72 76 80 84 88 92 96 100104108112116
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
In 2014, ASPA was administered instead of the PFI-E. Given similarities in the eligibility criteria, ASPA is used as a proxy for
PFI-E response rates in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation because
they were not asked to complete a topical survey. This mailing was a postcard in 2014 and 2016; it was a pressure-sealed
envelope in 2017. Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016 (paper-only), 2,790 in 2016 (mixed-mode
condition), 3,630 in 2017 (single-topical condition), and 2,530 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Cumulative response rate
Figure 6.9c: PFI-E response rate following the first follow-up, by number of days since mailing
Figure 6.9c: and survey administration: 2014-2017
100%
80%
79%
75%
63%
60%
60% 50%
40%
41%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
In 2014, ASPA was administered instead of the PFI-E. Given similarities in the eligibility criteria, ASPA is used as a proxy for
PFI-E response rates in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation because
they were not asked to complete a topical survey. Unweighted eligible sample size was 5,560 in 2014, 15,000 in 2016 (paperonly), 2,790 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-8
Cumulative response rate
Figure 6.9d: PFI-E response rate following the second follow-up, by number of days since mailing
Figure 6.9d: and survey administration: 2014-2016
100%
80%
84%
72%
71%
79%
63%
60%
60%
40%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
In 2014, ASPA was administered instead of the PFI-E. Given similarities in the eligibility criteria, ASPA is used as a proxy for
PFI-E response rates in 2014. There was no second follow-up mailing in 2017. Unweighted eligible sample size was 5,560 in
2014, 15,000 in 2016 (paper-only), 2,790 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
Figure 6.9e: PFI-E response rate following the third follow-up, by number of days since mailing
Figure 6.9e: and survey administration: 2014-2016
Cumulative response rate
100%
85%
84%
80%
60%
71%
76%
72%
75%
40%
20%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
0%
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
In 2014, ASPA was administered instead of the PFI-E. Given similarities in the eligibility criteria, ASPA is used as a proxy for
PFI-E response rates in 2014. There was no third follow-up mailing in 2017. Unweighted eligible sample size was 5,560 in 2014,
15,000 in 2016 (paper-only), 2,790 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
B-9
Cumulative response rate
Figure 6.10a: PFI-H response rate following the initial mailing, by number of days since mailing
Figure 6.10a: and survey administration: 2016-2017
100%
76%
80% 76%
75%
60%
51%
51%
40%
20%
0%
0%
0
1
2
3
4
5
6
7
8
9
1%
10
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
PFI-H was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. Unweighted eligible sample size was 790 in 2016 (paper-only), 140 in
2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and 100 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Figure 6.10b: PFI-H response rate following the postcard/pressure-sealed envelope reminder, by
Figure 6.10b: number of days since mailing and survey administration: 2016-2017
Cumulative response rate
100%
76%
80% 76%
75%
60%
75%
62%
51%
40%
31%
20%
1%
0
3
6
9
12
15
18
21
24
27
30
33
36
39
42
45
48
51
54
57
60
63
66
69
72
75
78
81
84
87
90
93
96
99
102
105
108
111
114
0%
Number of days since mailing
2016 (paper-only condition)
2017 (web only, single topical condition)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
PFI-H was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. Unweighted eligible sample size was 790 in 2016 (paper-only), 140 in
2016 (mixed-mode condition), 120 in 2017 (single-topical condition), and 100 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
B-10
Cumulative response rate
Figure 6.10c: PFI-H response rate following the first follow-up, by number of days since mailing
Figure 6.10c: and survey administration: 2016-2017
100%
80%
60%
69%
62%
47%
40%
31%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
PFI-H was not administered in 2014. TQA screener respondents are excluded from the 2017 topical response rate calculation
because they were not asked to complete a topical survey. In all other administrations it was a paper mailing. Unweighted eligible
sample size was 790 in 2016 (paper-only), 140 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016-2017.
Cumulative response rate
Figure 6.10d: PFI-H response following the second follow-up, by number of days since mailing
Figure 6.10d: and survey administration: 2016
100%
80%
71%
69%
55%
60%
47%
40%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
PFI-H was not administered in 2014. There was no second follow-up mailing in 2017. Unweighted eligible sample size was 790
in 2016 (paper-only), 140 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016.
B-11
Figure 6.10e: PFI-H response rate following the third follow-up, by number of days since mailing
Figure 6.10e: and survey administration: 2016
Cumulative response rate
100%
80% 71%
73%
60% 55%
59%
40%
20%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
0%
Number of days since mailing
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
PFI-H was not administered in 2014. There was no third follow-up mailing in 2017. Unweighted eligible sample size was 790 in
2016 (paper-only), 140 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2016.
Cumulative response rate
Figure 6.11a: ATES (overall) response rate following the initial mailing, by number of days since
Figure 6.11a: mailing and survey administrationon: 2014-17
100%
80%
68%
63%
60%
53%
50%
40%
41%
49%
20%
0%
0%
0
1
2
3
4
5
6
7
8
9
6%
3%
10
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
TQA screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete
a topical survey. Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paper-only), 9,980 in 2016 (mixed-mode
condition), 13,310 in 2017 (single-topical condition), and 9,050 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-12
Figure 6.11b: ATES (overall) response rate following the postcard/pressure-sealed envelope
Figure 6.11b: reminder, by number of days since mailing and survey administration: 2014-17
Cumlative response rate
100%
80%
68%
75%
65%
60% 63%
70%
54%
46%
40% 49%
20%
6%
3%
0
3
6
9
12
15
18
21
24
27
30
33
36
39
42
45
48
51
54
57
60
63
66
69
72
75
78
81
84
87
90
93
96
99
102
105
108
111
114
0%
Number of days since mailing
2014 (paper-only)
2016 (mixed-mode condition)
2017 (web only, dual topical condition)
2016 (paper-only condition)
2017 (web only, single topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
TQA screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete
a topical survey. In 2014 and 2016, this mailing was a postcard; in 2017, it was a pressure-sealed envelope. Unweighted eligible
sample size was 13,710 in 2014, 53,850 in 2016 (paper-only), 9,980 in 2016 (mixed-mode condition), 13,310 in 2017 (singletopical condition), and 9,050 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Cumulative response rate
Figure 6.11c: ATES (overall) response rate following the first follow-up, by number of days since
Figure 6.11c: mailing and survey administration: 2014-17
100%
73%
65%
80%
65%
60%
54%
40% 46%
61%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
TQA screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete
a topical survey. Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paper-only), 9,980 in 2016 (mixed-mode
condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-13
Cumulative response rate
Figure 6.11d: ATES (overall) response rate following the second follow-up, by number of days
Figure 6.11d: since mailing and survey administration: 2014-2016
100%
79%
74%
80% 73%
65%
60%
61%
40%
71%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
There was no second follow-up mailing in 2017. Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paperonly), 9,980 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
100%
82%
79%
80%
74%
60% 71%
77%
74%
40%
20%
0%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
Cumulative response rate
Figure 6.11e: ATES (overall) response rate following the third follow-up, by number of days
Figure 6.11e: since mailing and survey administration: 2014-2016
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
There was no third follow-up mailing in 2017. Unweighted eligible sample size was 13,710 in 2014, 53,850 in 2016 (paper-only),
9,980 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
B-14
Cumulative response rate
Figure 6.12a: ATES (same respondent) response rate following the initial mailing, by number
Figure 6.12a: of days since mailing and survey administration: 2014-2017
100%
92%
80%
88%
92%
89%
65%
60% 64%
40%
20%
6%
3%
0%
0%
0
1
2
3
4
5
6
7
8
9
10
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
TES “same respondents” are screener respondents that were sampled for ATES. TQA screener respondents are excluded from the
2017 topical response rate calculation because they were not asked to complete a topical survey. Unweighted eligible screener
sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition), 7,700 in 2017 (singletopical condition), and 5,140 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
100% 92%
80% 89%
92%
89%
79%
60% 65%
58%
49%
40%
20%
6%
0%
3%
0
3
6
9
12
15
18
21
24
27
30
33
36
39
42
45
48
51
54
57
60
63
66
69
72
75
78
81
84
87
90
93
96
99
102
105
108
111
114
Cumulative response rate
Figure 6.12b: ATES (same respondent) response rate following the postcard /pressure-sealed
Figure 6.12b: envelope reminder, by number of days since mailing and survey administration:
Figure 6.12b: 2014-2017
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “same respondents” are screener respondents that were sampled for ATES. TQA screener respondents are excluded from
the 2017 topical response rate calculation because they were not asked to complete a topical survey. In 2014 and 2016, this
mailing was a postcard; in 2017, it was a pressure-sealed envelope. Unweighted eligible screener sample size was 7,620 in 2014,
30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition), 7,700 in 2017 (single-topical condition), and 5,140 in 2017
(dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-15
Cumulative response rate
Figure 6.12c: ATES (same respondent) response rate following the first follow-up, by number
Figure 6.12c: of days since mailing and survey administration: 2014-2017
100%
83%
80% 79%
60%
58%
40%
49%
69%
64%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “same respondents” are screener respondents that were sampled for ATES. TQA screener respondents are excluded from
the 2017 topical response rate calculation because they were not asked to complete a topical survey. Unweighted eligible screener
sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Cumulative response rate
Figure 6.12d: ATES (same respondent ) response rate following the second-follow up, by
Figure 6.12d: number of days since mailing and survey administration: 2014-2016
100%
87%
77%
83%
80%
60%
69%
73%
64%
40%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “same respondents” are screener respondents that were sampled for ATES. There was no second follow-up mailing in
2017. Unweighted eligible screener sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode
condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
B-16
100%
88%
87%
80% 77%
73%
60%
80%
77%
40%
20%
0%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
Cumulative response rate
Figure 6.12e: ATES (same respondent) response rate following the third follow-up, by number
Figure 6.12e: of days since mailing and survey administration: 2014-2016
Number of days since mailing
2014 (paper-only)
2016 (mixed-mode condition)
2016 (paper-only condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “same respondents” are screener respondents that were sampled for ATES. There was no third follow-up mailing in 2017.
Unweighted eligible screener sample size was 7,620 in 2014, 30,370 in 2016 (paper-only), 6,320 in 2016 (mixed-mode
condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
Cumulative response rate
Figure 6.13a: ATES (different respondent) response rate following the initial mailing, by number
Figure 6.13a: of days since mailing and survey administration: 2014-2017
100%
80%
60%
40%
34%
30%
20%
0%
0%
0
1
2
3
4
5
6
7
8
9
20%
5%
2%
10
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “different respondents” are household members other than the screener respondent who were sampled for ATES. TQA
screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete a
topical survey Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode
condition), 5,610 in 2017 (single-topical condition), and 3,910 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-17
Figure 6.13b: ATES (different respondent) response rate following the postcard/pressure-sealed
Figure 6.13b: envelope reminder, by number of days since mailing and survey administration:
Figure 6.13b: 2014-2017
80%
60%
51%
49%
46%
42%
40% 34%
46%
30%
20%
20%
5%
4%
0%
0
3
6
9
12
15
18
21
24
27
30
33
36
39
42
45
48
51
54
57
60
63
66
69
72
75
78
81
84
87
90
93
96
99
102
105
108
111
114
Cumulative response rate
100%
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
2017 (web only, single topical condition)
2017 (web only, dual topical condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “different respondents” are household members other than the screener respondent who were sampled for ATES. TQA
screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete a
topical survey. In 2014 and 2016, this mailing was a postcard; in 2017, it was a pressure-sealed envelope. Unweighted eligible
sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode condition), 5,610 in 2017 (singletopical condition), and 3,910 in 2017 (dual-topical condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
Cumulative response rate
Figure 6.13c: ATES (different respondent) response rate following the first follow-up, by number
Figure 6.13c: of days since mailing and survey administration: 2014-2017
100%
80%
60%
40%
61%
49%
57%
55%
42%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “different respondents” are household members other than the screener respondent who were sampled for ATES. TQA
screener respondents are excluded from the 2017 topical response rate calculation because they were not asked to complete a
topical survey. Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670 in 2016 (mixed-mode
condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2017.
B-18
Cumulative response rate
Figure 6.13d: ATES (different respondent) resonse rate following the second follow-up, by
Figure 6.13d: number of days since mailing and survey administration: 2014-2016
100%
80%
70%
61%
67%
66%
60%
57%
40% 55%
20%
0%
0
1
2
3
4
5
6
7
8
9
10 11 12 13 14 15 16 17 18 19 20 21 22 23
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “different respondents” are household members other than the screener respondent who were sampled for ATES. There
was no second follow-up mailing in 2017. Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only),
3,670 in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
100%
74%
80% 70%
67%
60% 66%
71%
71%
40%
20%
0%
0
4
8
12
16
20
24
28
32
36
40
44
48
52
56
60
64
68
72
76
80
84
88
92
96
100
104
108
112
116
120
Cumulative response rate
Figure 6.13e: ATES (different respondent) response rate following the third follow-up, by
Figure 6.13e: number of days since mailing and survey administration: 2014-2016
Number of days since mailing
2014 (paper-only)
2016 (paper-only condition)
2016 (mixed-mode condition)
NOTE: Response rates were calculated using AAPOR RR1. Day 0 is the day that the mailing was sent. The final day shown is
the day before the subsequent mailing was sent. Lines are of differing lengths due to variation in mailing schedules across years.
ATES “different respondents” are household members other than the screener respondent who were sampled for ATES. There
was no third follow-up mailing in 2017. Unweighted eligible sample size was 6,090 in 2014, 23,460 in 2016 (paper-only), 3,670
in 2016 (mixed-mode condition).
SOURCE: U.S. Department of Education, National Center for Education Statistics, NHES, 2014-2016.
B-19
Appendix C: Screener Experiment Results among
TQA Respondents
C-1
As mentioned previously, where applicable, all screener version analyses were repeated among
TQA screener respondents to determine if the ideal screener format is different for interviewer
administration than it is for self-administration; this appendix summarizes the result of those
analyses.
Breakoffs
Among those who started the screener on the TQA, there were almost no breakoffs (a breakoff
rate of less than 0.1 percent); therefore, we did not compare the breakoff rate by screener version
among those who started the screener on the TQA.
Item Missingness
The percentage of TQA screener respondent households with at least one person missing a
response to the household member characteristic items was very low in both conditions for all
five items (less than 2 percent for each item in both conditions; see table 3.3b in appendix A).
The same was true for the percentage of households with missing data for the sampled household
member (0.2 percent or less for all items in both conditions). However, the estimates are not
reliable enough to make statistical comparisons.
Inconsistent Responses
Screener version did not have a significant or notable effect on the percentage of TQA screener
respondent households with an inconsistent response for at least one household member (rounds
to 1 percent in both conditions; see table 3.4b in appendix A). There also were not significant
differences found for any subgroups by screener version, although about half of the subgroup
estimates were not reliable enough to comment on statistical comparisons between the two
conditions.
Unknown Eligibility Status
Respondents in both conditions were rather unlikely to report household members of unknown
eligibility status (less than 1 percent in each condition), and there was not a significant difference
in the likelihood of this outcome in the two screener versions (see table 3.5b in appendix A).
There also were not significant differences found for any subgroups by screener version,
although about half of the subgroup estimates were not reliable enough to comment on statistical
comparisons between the two conditions.
Time to Complete Screener
There was not a significant or meaningful difference in the mean number of minutes to complete
the screener by screener version (1.9 minutes in both conditions; see table 3.6b in appendix A).
This was also the case for all subgroup analyses that were conducted.
C-2
Respondent Characteristics
There were no significant or notable differences in the characteristics of screener respondent
households based on household characteristic variables available on the frame (see 3.7b in
appendix A).
Number of Household Members Reported
There was not a significant difference between versions in the mean number of household
members reported (1.7 in both versions). There also were not any significant or notable
differences in the percentage distribution of the number of household members reported in each
condition (see table 3.8b in appendix A).
Among respondents to the redesigned version, a very similar percentage of households reported
additional household members beyond the original six as we found among web screener
respondents (3 percent, with this again being more common among those who had already
reported six household members; not shown in tables).
Compared to web responds, much fewer TQA respondents reported zero additional
names after saying that more people live in the household (none of screener respondents
who had already reported six household members and only 6 percent of those who had
previously reported six or fewer members).
Screener respondents who had previously reported six household members always
provided an age for those new members.
For those who had initially listed six household members, all of the added household
members were age 18 or younger; for those who had initially listed fewer than six
household members, most of the added household members were age 19 or older (69
percent). As a result, about 20 additional children were listed on the screener who would
not have been listed if only six name slots had been provided and there had not been a
question asking those who initially listed less than six names whether or not anyone else
lived in the household.
Reporting at Least One Household Member Eligible for a Topical
Survey
Finally, there were not any significant or notable differences by screener version in the
percentage of TQA screener respondent households that reported at least one household member
eligible or a topical survey (43 percent in each condition; see table 3.9b in appendix A). We do,
however, see notably lower rates of reporting of eligible household members among TQA
respondents.
C-3
Key Takeaways from the Screener Experiment among TQA
Respondents
There was very little difference in key screener outcomes between the two screener
versions among TQA respondents. As a result, there is not a clearly preferable version of
the screener to use for TQA respondents in the future. For ease of administration, we thus
recommend using the same screener on the phone as is used online.
Overall, undesirable outcomes like item missingness, breakoffs, and inconsistent
responses were less common among TQA respondents than they were among web
respondents. Both this and the lack of difference between the two conditions on the
phone is likely due to interviewers being more skilled than respondents at navigating the
screener (since they have more experience with it).
In addition, households that completed the screener on the TQA tended to be smaller than
those that completed it on the web (for example, 86 percent of households that completed
the screener on the TQA had only 1 or 2 household members compared to only 60
percent of those who completed it online); the differences between the two versions
would be less notable when there are fewer household members reported.
Finally, given the much lower topical eligibility rates among households that completed
the screener on the TQA, it appears that households that responded to the screener on the
TQA were less likely than those who responded online to have children and were more
likely to only include senior citizens (the only age group not eligible for any of the
topicals).
C-4
Appendix D: Topical Survey Eligibility Decision Rules
from NHES:2017 Sampling Plan
D-1
Table D.1. Topical survey eligibility, by age, enrollment, and grade permutations
Age 0-2
Age 3
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
College
College
College
College
College
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Survey Eligibility
ECPP
ECPP
ECPP
ECPP
ECPP
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
Homeschool
Homeschool
College
College
College
College
College
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
ECPP
ECPP
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Age 4
Survey Eligibility
ECPP
PFI-E
ECPP
ECPP
ECPP
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
ECPP
ECPP
ECPP
PFI-H
ECPP
ECPP
ECPP
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
ECPP
ECPP
ECPP
PFI-E
ECPP
ECPP
ECPP
Homeschool
Homeschool
College
College
College
College
College
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
ECPP
ECPP
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Survey
Eligibility
ECPP
PFI-E
PFI-E
ECPP
ECPP
ECPP
ECPP
ECPP
PFI-H
PFI-H
ECPP
ECPP
ECPP
ECPP
ECPP
PFI-E
PFI-E
ECPP
Unknown
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
ECPP
PFI-E
PFI-E
ECPP
Unknown
ECPP
ECPP
D-2
Table D.1. Topical survey eligibility, by age, enrollment, and grade permutations—Continued
Age 5
Age 6
Survey
Survey
Enrollment
Grade
Eligibility
Enrollment
Grade
Eligibility
Public/private PK
ECPP
Public/private PK
ECPP
Public/private K
PFI-E
Public/private K
PFI-E
Public/private 1-2
PFI-E
Public/private 1-2
PFI-E
Public/private 3-12
PFI-E
Public/private 3-12
PFI-E
Public/private College
PFI-E
Public/private College
PFI-E
None of
None of
Public/private these
ECPP
Public/private these
PFI-E
Public/private Missing
ECPP
Public/private Missing
PFI-E
Homeschool PK
ECPP
Homeschool PK
ECPP
Homeschool K
PFI-H
Homeschool K
PFI-H
Homeschool 1-2
PFI-H
Homeschool 1-2
PFI-H
Homeschool 3-12
PFI-H
Homeschool 3-12
PFI-H
Homeschool College
PFI-H
Homeschool College
PFI-H
None of
None of
Homeschool these
ECPP
Homeschool these
PFI-H
Homeschool Missing
ECPP
Homeschool Missing
PFI-H
College
PK
ECPP
College
PK
ECPP
College
K
PFI-E
College
K
PFI-E
College
1-2
PFI-E
College
1-2
PFI-E
College
3-12
PFI-E
College
3-12
PFI-E
College
College
Unknown
College
College
Unknown
None of
None of
College
these
ECPP
College
these
PFI-E
College
Missing
ECPP
College
Missing
PFI-E
Not in school PK
ECPP
Not in school PK
ECPP
Not in school K
ECPP
Not in school K
ECPP
Not in school 1-2
ECPP
Not in school 1-2
ECPP
Not in school 3-12
ECPP
Not in school 3-12
ECPP
Not in school College
ECPP
Not in school College
ECPP
None of
None of
Not in school these
ECPP
Not in school these
ECPP
Not in school Missing
ECPP
Not in school Missing
ECPP
Missing
PK
ECPP
Missing
PK
ECPP
Missing
K
PFI-E
Missing
K
PFI-E
Missing
1-2
PFI-E
Missing
1-2
PFI-E
Missing
3-12
PFI-E
Missing
3-12
PFI-E
Missing
College
Unknown
Missing
College
Unknown
None of
None of
Missing
these
ECPP
Missing
these
PFI-E
Missing
Missing
ECPP
Missing
Missing
PFI-E
Age 7-10
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
College
College
College
College
College
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Survey
Eligibility
PFI-E
PFI-E
PFI-E
PFI-E
PFI-E
PFI-E
PFI-E
PFI-H
PFI-H
PFI-H
PFI-H
PFI-H
PFI-H
PFI-H
Unknown
PFI-E
PFI-E
PFI-E
Unknown
PFI-E
PFI-E
Unknown
Unknown
Unknown
Unknown
Unknown
Unknown
Unknown
Unknown
PFI-E
PFI-E
PFI-E
Unknown
PFI-E
PFI-E
D-3
Table D.1. Topical survey eligibility, by age, enrollment, and grade permutations—Continued
Age 11-15
Age 16-17
Survey
Survey
Enrollment
Grade
Eligibility
Enrollment
Grade
Eligibility
Public/private PK
PFI-E
Public/private PK
PFI-E
Public/private K
PFI-E
Public/private K
Unknown
Public/private 1-2
PFI-E
Public/private 1-2
PFI-E
Public/private 3-12
PFI-E
Public/private 3-12
PFI-E
Public/private College
PFI-E
Public/private College
ATES
None of
None of
Public/private these
PFI-E
Public/private these
PFI-E
Public/private Missing
PFI-E
Public/private Missing
PFI-E
Homeschool PK
PFI-H
Homeschool PK
PFI-H
Homeschool K
PFI-H
Homeschool K
Unknown
Homeschool 1-2
PFI-H
Homeschool 1-2
PFI-H
Homeschool 3-12
PFI-H
Homeschool 3-12
PFI-H
Homeschool College
PFI-H
Homeschool College
PFI-H
None of
None of
Homeschool these
PFI-H
Homeschool these
PFI-H
Homeschool Missing
PFI-H
Homeschool Missing
PFI-H
College
PK
PFI-E
College
PK
ATES
College
K
PFI-E
College
K
ATES
College
1-2
PFI-E
College
1-2
ATES
College
3-12
PFI-E
College
3-12
ATES
College
College
PFI-E
College
College
ATES
None of
None of
College
these
PFI-E
College
these
ATES
College
Missing
PFI-E
College
Missing
ATES
Not in school PK
Unknown
Not in school PK
ATES
Not in school K
Unknown
Not in school K
ATES
Not in school 1-2
Unknown
Not in school 1-2
ATES
Not in school 3-12
Unknown
Not in school 3-12
ATES
Not in school College
Unknown
Not in school College
ATES
None of
None of
Not in school these
Unknown
Not in school these
ATES
Not in school Missing
Unknown
Not in school Missing
ATES
Missing
PK
Unknown
Missing
PK
Unknown
Missing
K
PFI-E
Missing
K
Unknown
Missing
1-2
PFI-E
Missing
1-2
PFI-E
Missing
3-12
PFI-E
Missing
3-12
PFI-E
Missing
College
Unknown
Missing
College
ATES
None of
None of
Missing
these
PFI-E
Missing
these
ATES
Missing
Missing
PFI-E
Missing
Missing
PFI-E
Age 18
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
College
College
College
College
College
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Survey Eligibility
Unknown
Unknown
PFI-E
PFI-E
ATES
ATES
ATES
Unknown
Unknown
PFI-H
PFI-H
ATES
PFI-H
PFI-H
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
Unknown
Unknown
PFI-E
PFI-E
ATES
ATES
ATES
D-4
Table D.1. Topical survey eligibility, by age, enrollment, and grade permutations—Continued
Age 19-20
Age 21-24
Survey
Enrollment
Grade
Survey Eligibility Enrollment
Grade
Eligibility
Public/private PK
Unknown
Public/private PK
Unknown
Public/private K
Unknown
Public/private K
Unknown
Public/private 1-2
PFI-E
Public/private 1-2
Unknown
Public/private 3-12
PFI-E
Public/private 3-12
Unknown
Public/private College
ATES
Public/private College
ATES
None of
None of
Public/private these
ATES
Public/private these
ATES
Public/private Missing
ATES
Public/private Missing
ATES
Homeschool PK
Unknown
Homeschool PK
Unknown
Homeschool K
Unknown
Homeschool K
Unknown
Homeschool 1-2
PFI-H
Homeschool 1-2
Unknown
Homeschool 3-12
PFI-H
Homeschool 3-12
Unknown
Homeschool College
ATES
Homeschool College
ATES
None of
None of
Homeschool these
PFI-H
Homeschool these
ATES
Homeschool Missing
PFI-H
Homeschool Missing
ATES
College
PK
ATES
College
PK
ATES
College
K
ATES
College
K
ATES
College
1-2
ATES
College
1-2
ATES
College
3-12
ATES
College
3-12
ATES
College
College
ATES
College
College
ATES
None of
None of
College
these
ATES
College
these
ATES
College
Missing
ATES
College
Missing
ATES
Not in school PK
ATES
Not in school PK
ATES
Not in school K
ATES
Not in school K
ATES
Not in school 1-2
ATES
Not in school 1-2
ATES
Not in school 3-12
ATES
Not in school 3-12
ATES
Not in school College
ATES
Not in school College
ATES
None of
None of
Not in school these
ATES
Not in school these
ATES
Not in school Missing
ATES
Not in school Missing
ATES
Missing
PK
Unknown
Missing
PK
ATES
Missing
K
Unknown
Missing
K
ATES
Missing
1-2
PFI-E
Missing
1-2
ATES
Missing
3-12
PFI-E
Missing
3-12
ATES
Missing
College
ATES
Missing
College
ATES
None of
None of
Missing
these
ATES
Missing
these
ATES
Missing
Missing
ATES
Missing
Missing
ATES
Age 25-65
Enrollment
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Public/private
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
Homeschool
College
College
College
College
College
College
College
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Not in school
Missing
Missing
Missing
Missing
Missing
Missing
Missing
Grade
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
PK
K
1-2
3-12
College
None of
these
Missing
Survey
Eligibility
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
ATES
D-5
Table D.1. Topical survey eligibility, by age, enrollment, and grade permutations—Continued
Age Over 65
Age Missing
Survey
Survey
Enrollment
Grade
Eligibility
Enrollment
Grade
Eligibility
Public/private PK
Ineligible
Public/private PK
ECPP
Public/private K
Ineligible
Public/private K
PFI-E
Public/private 1-2
Ineligible
Public/private 1-2
PFI-E
Public/private 3-12
Ineligible
Public/private 3-12
PFI-E
Public/private College
Ineligible
Public/private College
ATES
None of
None of
Public/private these
Ineligible
Public/private these
Unknown
Public/private Missing
Ineligible
Public/private Missing
Unknown
Homeschool PK
Ineligible
Homeschool PK
ECPP
Homeschool K
Ineligible
Homeschool K
PFI-H
Homeschool 1-2
Ineligible
Homeschool 1-2
PFI-H
Homeschool 3-12
Ineligible
Homeschool 3-12
PFI-H
Homeschool College
Ineligible
Homeschool College
ATES
None of
None of
Homeschool these
Ineligible
Homeschool these
Unknown
Homeschool Missing
Ineligible
Homeschool Missing
Unknown
College
PK
Ineligible
College
PK
ATES
College
K
Ineligible
College
K
ATES
College
1-2
Ineligible
College
1-2
ATES
College
3-12
Ineligible
College
3-12
ATES
College
College
Ineligible
College
College
ATES
None of
None of
College
these
Ineligible
College
these
ATES
College
Missing
Ineligible
College
Missing
ATES
Not in school PK
Ineligible
Not in school PK
ECPP
Not in school K
Ineligible
Not in school K
PFI-E
Not in school 1-2
Ineligible
Not in school 1-2
PFI-E
Not in school 3-12
Ineligible
Not in school 3-12
PFI-E
Not in school College
Ineligible
Not in school College
ATES
None of
None of
Not in school these
Ineligible
Not in school these
Unknown
Not in school Missing
Ineligible
Not in school Missing
Unknown
Missing
PK
Ineligible
Missing
PK
ECPP
Missing
K
Ineligible
Missing
K
PFI-E
Missing
1-2
Ineligible
Missing
1-2
PFI-E
Missing
3-12
Ineligible
Missing
3-12
PFI-E
Missing
College
Ineligible
Missing
College
ATES
None of
None of
Missing
these
Ineligible
Missing
these
Unknown
Missing
Missing
Ineligible
Missing
Missing
Unknown
D-6
Appendix E. Envelopes Used in the 2017 Web Test
E-1
AN EQUAL OPPORTUNITY EMPLOYER
PARTMENT OF COMMERCE
mies and Statistics Administration
11sus Bureau
om Street
1ville IN 47132-0001
L BUSINESS
'or Private Use $300
i(7198SW) (9-2015)
201740000070 10 01
SEQ001-02260
5
•1111•••1 l1 l1 l111l1•11•1111•1•1•1••1 •11111•1 1111 l111111•1 l•111•1
ARLINGTON RESIDENT
1
Please respond within two weeks.
semanas.
�nsus
nited States"'
PRESORTED
FIRST-CLASS MAIL
POSTAGE & FEES PAID
U.S. CENSUS BUREAU
PERMIT NO. G-58
US DEPARTMENT OF COMMERCE
Economics and Statistics Administration
US Census Bureau
AN EQUAL OPPORTUNITY EMPLOYER
1201 E 10th Street
Jeffersonville IN 47132-0001
OFFICIAL BUSINESS
Penalty for Private Use $300
0939
PRESORTED
FIRST-CLASS MAIL
POSTAGE & FEES PAID
ll,S. CENSlJS IH!REAlJ
PERMIT NO. G-�8
BC-1325 (5-2011)
Please respond within two weeks.
t:.U I I -rvv---- -
SEQ001-00120
···1 !11''.'11 ,,1111 ,,.,,,,••111l11•1 1•1•l•1lll1l1l1 1
HYATTSVILLE RESIDENT
1
l ·•1·l·1
1 11 11
0939
. . .. -rrxa ..$2$14. ,,.,------·
I I I Il I I I I I I I I I I I I I I I I I I I I
201630000020
(/)
(/)
L...
c..
X
w
11
03
Grand Rapids RESIDENT
Grand Rapids MI 49507
ATTENTION
FEDEX COURIER/CSA
IF UNDELIVERABLE
STAT14 - NO REROUTE NO RETURN
ORIGIN ID=LOUA
SHIPPING DEPARTMENT
US CENSUS BUREAU
1621 DUTCH LANE
SHIP DATE:
04MAR17
ACTWGT: 0. 5 LB MAN
CAO: 01410111CAF"E2512
JEFFERSONVILLE, IN 47132
UNITED STATES US
BILL SENDER
_________
ro GRAND RAPIDS RESIDENT
.,
�
!,
GRAND RAPIDS Ml 49507
PO: 2016300000206103
REF: 52184
ilillli.i'
� 6695 7668 4154
SA GRRA
I 111
FfEfi
RELN
3785346
07 MAR 8:00P
STANDARD OVERNIGHT
I I
NSR RES
49
Ml-us
8��
111
Align bottom of peel-and-stick airbill or pouch here.
File Type | application/pdf |
Subject | Prepared for the National Center for Education Statistics (NCES) |
Author | Stephen Wenck |
File Modified | 2018-07-02 |
File Created | 2018-05-03 |