2017 National Household Education Survey (NHES)
OMB# 1850-0803 v.163
National Center for Education Statistics (NCES)
August 4, 2016
TABLE OF CONTENTS
JUSTIFICATION
A.1 Circumstances Necessitating Collection of Information
A.2 Purposes and Uses of the Data
A.3 Use of Improved Information Technology
A.4 Efforts to Identify Duplication
A.5 Consultations Outside the Agency
A.6 Payments to Respondents
A.7 Assurance of Confidentiality
A.8 Sensitive Questions
A.9 Estimated Response Burden
A.10 Annualized Cost to Respondents
A.11 Annualized Cost to the Federal Government
A.12 Publication Plans and Project Schedule
A.13 Approval for Not Displaying the Expiration Date for OMB Approval
A.14 Exceptions to the Certification Statement
B.1 Statistical Design and Estimation
B.2 Survey Procedures
B.3 Methods for Maximizing Response Rates
B.4 Individuals Responsible for Study Design and Performance
References
List of Tables
1 Estimated response burden for NHES:2017 Web Data Collection Test
2 Expected margin of error for NHES:2017 Web Data Collection Test percentage estimates, by subgroup size, topical survey, and estimate
3 NHES:2017 Web Data Collection Test expected sample size, interview count, design effect, and effective interview count, by survey
List of Exhibits and Figures
Exhibit
1 Surveys conducted under the National Household Education Surveys Program, by years administered: 1991 through 2016
2 NHES:2017 Web Data Collection Test schedule of major activities
Figure
1 Screener Data Collection
2 Single Within Household Topical Data Collection
3 Dual Within Household Topical Data Collection
Request for Clearance
In 2008, the National Household Education Surveys Program (NHES) began a redesign effort to convert from a system of landline random digit dial (RDD) surveys to a self-administered mail survey using an address-based sample (ABS). This redesign was prompted by declines in response rates to the telephone survey and concerns about population coverage using the landline telephone frame, due to households’ increasing conversion to cellular-only telephone service. The goals of the redesign effort were to develop and assess approaches to collecting important information on education topics from households while improving response rates and coverage over the previous design. A feasibility test of the new design was conducted in 2009, followed by a field test in 2011. The field test results helped to inform the final design of a full-scale NHES in 2012 (OMB# 1850-0768 v.9). During the same period, NCES, with input from the Interagency Working Group on Expanded Measures of Enrollment and Attainment (GEMEnA), conducted two pilot studies (OMB# 1850-0803), first a two-stage telephone survey (42% response rate) and then a single-stage self-administered mail survey (69% response rate). Finally, in 2014, to further improve the NHES design, NCES conducted a feasibility study, which included two topical surveys: the Adult Training and Education Survey (ATES) and the After-school Programs and Activities Survey (ASPA) (OMB# 1850-0803 v.85). The feasibility study informed the design of the full-scale NHES conducted in 2016 (OMB# 1850-0768 v.11-13).
One feature of the 2016 NHES was a mode experiment, testing response rates to a web data collection instrument. This web instrument closely paralleled the paper questionnaires used in the data collection. In preparation for the next full-scale NHES data collection, planned for 2019, NCES proposes to test in 2017 a web data collection instrument that is more robust and better leverages the functionality of the web environment. In addition, NCES will test the sampling of up to two household members for topical questionnaires (dual household sampling) and conduct a targeted incentive experiment. This package requests clearance for the NHES:2017 Web Data Collection Test, the dual household sampling experiment, the targeted incentive experiment, three experiments related to data collection procedures, and a split-panel experiment of ATES survey item wording. These activities will inform the final design of a planned 2019 full-scale NHES.
The NHES:2017 Web Data Collection Test will include a household screener survey and three topical surveys: the Adult Training and Education Survey (ATES), the Early Childhood Program Participation Survey (ECPP), and the Parent and Family Involvement in Education Survey (PFI).¹
The target populations for the ECPP, PFI, and ATES surveys are mutually exclusive, such that a single person can be eligible for only one topical survey. The target population for the ECPP survey consists of the U.S. noninstitutional population of children ages 6 or younger who are not yet in kindergarten. The target population for the PFI survey consists of the U.S. noninstitutional population of children and youths ages 20 or younger who are enrolled in kindergarten through twelfth grade, or homeschooled for equivalent grades. Finally, the target population for the ATES survey consists of the U.S. noninstitutional population of adults ages 16 through 65 who are not enrolled in grade 12 or below or homeschooled for equivalent grades. For the purpose of determining eligibility, age will be calculated as of December 31, 2016. The NHES:2017 Web Data Collection Test will screen 100,000 households. Of these households, it is expected that approximately 36,000 will return a screener survey. From these completed screeners, it is expected that approximately 18,700 households will contain an eligible adult but no eligible children; approximately 11,000 will contain an eligible adult and an eligible child; and approximately 120 will contain an eligible child but no eligible adults (for example, children who live with grandparents above age 65).
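The screener yield implied by these figures can be reproduced with simple arithmetic, shown in the sketch below (Python, illustrative only; the 10 percent undeliverable-address assumption appears later in the package, and the household composition counts are taken from this paragraph):

```python
# Illustrative check of the expected screener yield (figures from the text).
SAMPLED_ADDRESSES = 100_000
UNDELIVERABLE_RATE = 0.10      # ~10% of addresses returned by USPS as invalid
SCREENER_RESPONSE_RATE = 0.40  # assumed screener response rate

valid_addresses = SAMPLED_ADDRESSES * (1 - UNDELIVERABLE_RATE)   # 90,000
completed_screeners = valid_addresses * SCREENER_RESPONSE_RATE   # 36,000

# Expected household composition among completed screeners (from the text).
adult_only  = 18_700  # eligible adult, no eligible children
adult_child = 11_000  # eligible adult and eligible child
child_only  = 120     # eligible child, no eligible adults

# The remaining completed screeners would have no topical-eligible members.
no_eligible = completed_screeners - (adult_only + adult_child + child_only)
print(f"{completed_screeners:,.0f} completes; ~{no_eligible:,.0f} with no eligible members")
```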
Survey item justifications are provided in Appendix A. Respondent contact materials are in Appendix B. Information about the results of cognitive interviews designed to evaluate new ATES items is presented in Appendix C. Appendices D (screener) and E (topical) present details of each survey instrument, including variable names, question wording, response options, and skip instructions. Appendix F includes representative screen shots of the web instrument.
NCES developed NHES to complement its institutional surveys by serving as the principal mechanism for addressing education topics appropriate for households rather than establishments. Such topics have covered a wide range of issues, including early childhood care and education, children’s readiness for school, parent perceptions of school safety and discipline, before- and after-school activities of school-age children, participation in adult and continuing education, parent involvement in education, school choice, homeschooling, and civic involvement (see Exhibit 1 below). The NHES currently consists of three surveys which provide data on young children, school-aged children, and adults. NHES uses a two-stage design in which a household screener collects household membership and key characteristics for sampling and then appropriate topical survey(s) are mailed to sample members.
The Adult Training and Education Survey (ATES)
ATES provides a means to investigate issues related to adults’ education, training, and credentials that cannot be adequately studied through the Center’s institution-based data collection efforts. It targets non-institutionalized adults in the United States ages 16 to 65 who are not enrolled at grade 12 or below. The ATES collects information on educational attainment; the prevalence and characteristics of certifications and licenses and their holders; preparation for a new certification or license; the prevalence and characteristics of educational certificates and certificate holders; and the completion and key characteristics of work experience programs such as apprenticeships and internships. It also collects detailed employment and background information.
The Early Childhood Program Participation Survey (ECPP)
The ECPP, previously conducted in 1991, 1995, 2001, 2005, and 2012, surveys families of children ages 6 or younger who are not yet enrolled in kindergarten and provides estimates of children’s participation in care by relatives and non-relatives in private homes and in center-based daycare or preschool programs (including Head Start and Early Head Start). Additional topics addressed in ECPP interviews have included family learning activities; out-of-pocket expenses for nonparental care; continuity of care; factors related to parental selection of care; parents’ perceptions of care quality; child health and disability; and child, parent, and household characteristics.
The Parent and Family Involvement in Education Survey (PFI)
The PFI, previously conducted in 1996, 2003, 2007, and 2012, surveys families of children and youth enrolled in kindergarten through 12th grade or homeschooled for these grades, with an age limit of 20 years. It addresses specific ways that families are involved in their children’s school; school practices to involve and support families; involvement with children’s homework; and involvement in education activities outside of school. Parents of homeschoolers are asked about their reasons for choosing homeschooling and the resources they used in homeschooling. Information about child, parent, and household characteristics is also collected.
Exhibit 1. Surveys conducted under the National Household Education Surveys Program, by years administered: 1991 through 2016
| Topical survey | 1991 | 1993 | 1995 | 1996 | 1999¹ | 2001 | 2003 | 2005 | 2007 | 2012 | 2016 |
| Early childhood education/program participation | √ |  | √ |  | √ | √ |  | √ |  | √ | √ |
| Adult education and training | √ |  | √ |  | √ | √ | √ | √ |  |  | √ |
| School readiness |  | √ |  |  | √ |  |  |  | √ |  |  |
| School safety and discipline |  | √ |  |  |  |  |  |  |  |  |  |
| Parent and family involvement in education |  |  |  | √ | √ |  | √ |  | √ | √ | √ |
| Civic involvement |  |  |  | √ | √ |  |  |  |  |  |  |
| After-school programs and activities |  |  | √² |  | √ | √³ |  | √ |  |  |  |
| Household library use |  |  |  | √ |  |  |  |  |  |  |  |
| Homeschooling |  |  |  |  | √ |  | √ |  | √ | √ | √ |
¹ The NHES:1999 was a special end-of-decade administration that measured key indicators from the surveys fielded during the 1990s.
² The After-School Programs and Activities Survey of the NHES:1995 only asked about children in first through third grades.
³ The After-School Programs and Activities Survey of the NHES:2001 also included items on before-school programs.
SOURCE: U.S. Department of Education, National Center for Education Statistics, National Household Education Surveys Program (NHES), 1991–2016.
Data from the NHES are used to provide national cross-sectional estimates on populations of special interest to education researchers and policymakers. For surveys about children, the population of interest is defined by age or grade in school, or both, depending on the particular survey topic and research questions. For surveys of adults, the population of interest is those ages 16 to 65 who are not enrolled in grade 12 or below, excluding those on active duty military service and those who are institutionalized. The NHES targets these populations using specific screening and sampling procedures.
The NHES design also yields estimates for subgroups of interest for each child and adult survey, as defined by age (or grade for children), education level for adults, Hispanic origin, and racial background for all populations of interest.² In addition to providing cross-sectional estimates, the NHES is designed to produce estimates from repeated cross sections to measure changes over time in key statistics.
The NHES:2012 was the first full-scale data collection using an address-based sample and a self-administered questionnaire and included the PFI and the ECPP. The overall screener-plus-topical response rate was approximately 58 percent for both the PFI and the ECPP in 2012, compared to the 2007 overall response rate of 39 to 41 percent (depending on the survey). The results suggested that the new methodology had the ability to address the response rate and coverage issues identified in the 2007 data collection. Full-scale data collection for the NHES:2016, informed by the 2014 NHES Feasibility Study, will be completed in September 2016.
Survey data from the NHES have been used for a large number of descriptive and analytic reports and articles, including NCES publications, publications of other Federal agencies, policy analyses, theses and dissertations, conference papers, and journal articles. A list of NHES publications issued by NCES can be found on the NHES website, http://nces.ed.gov/nhes.
Dual Household Sampling Experiment
This experiment will test whether households with members eligible for two or more topical surveys can be asked to complete two topical surveys (either the ATES plus a child survey, or the ECPP plus one of the PFI surveys) without negatively impacting topical response rates and data quality. The initial experimental condition will be identified by a variable called DUAL_FLAG. Of the 100,000-household screener sample, 2/3 (66,666 or 66,667) will be assigned to DUAL_FLAG = 1 (no dual sampling); these households will be sampled for no more than one topical survey, regardless of the number of surveys for which they have eligible members. The remaining 1/3 of households will be assigned to DUAL_FLAG = 2 (dual sampling); these households will be sampled for two topical surveys if they have members eligible for two or more. Households with members eligible for only one topical survey will be routed to that topical survey regardless of their treatment group assignment.
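A minimal sketch of the DUAL_FLAG pre-assignment described above (Python; the function name and seed are illustrative, not part of the study specification):

```python
import random

def assign_dual_flag(addresses, seed=2017):
    """Randomly pre-assign 2/3 of addresses to DUAL_FLAG = 1 (single topical
    survey) and the remaining 1/3 to DUAL_FLAG = 2 (dual sampling)."""
    rng = random.Random(seed)
    shuffled = rng.sample(addresses, k=len(addresses))
    cutoff = (2 * len(shuffled)) // 3  # 66,666 of 100,000 addresses
    return {addr: (1 if i < cutoff else 2) for i, addr in enumerate(shuffled)}

flags = assign_dual_flag(list(range(100_000)))
```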
Unlike with other experiments, an initial 50/50 allocation would not be optimal for comparisons related to dual sampling, for two reasons. First, a significant portion of both groups will consist of households that only have members eligible for one survey and that will therefore need to be excluded from the comparisons (regardless of whether they are in the control or treatment group). Second, and more importantly, a larger allocation to the single-questionnaire control group is needed to compensate for the relatively low sampling rates for the ATES and PFI, in particular. For example, under the single-questionnaire condition, households with both adults and children will be sampled for ATES at a rate of only 20 percent; whereas under the dual-questionnaire condition, such households will be sampled with certainty for ATES (except those with both PFI- and ECPP-eligible children). This means that, for comparisons of ATES response rates between single- and dual-questionnaire households, a 50/50 allocation would yield a relatively large number of ATES cases in the treatment group, but a relatively small number in the control group. In designing the sample, estimates of experimental group sample sizes and level of precision (discussed in section B.1) suggested that the 1/3 allocation is close to optimal for the comparisons of ATES and PFI response rates between single- and dual-questionnaire households, which are the comparisons that are expected to have the smallest sample sizes.
Targeted Incentive Experiment
This experiment will test four alternative screener incentive protocols aimed at reducing nonresponse bias. The incentive that will be sent with the first screener mailing will vary only for households whose predicted response propensity (RP) is below the 25th percentile (referred to as “low-RP households”). Response propensity will be modeled using data from the NHES:2016 Web survey component (this model has not yet been created because the NHES:2016 is still being fielded). Based on the RP model, households will be assigned to one of three RP groups: high (the 25% of households with the highest propensity scores), medium (the next 50% of households), and low (the 25% of households with the lowest propensity scores). The percentage of cases allocated to the low-RP group is increased from what was used in NHES:2016 (from 15% to 25%) to ensure a sufficient low-RP household sample size.
The experimental condition will be identified by a variable called INC_FLAG. Of the 100,000-household screener sample, 30,000 will be assigned to INC_FLAG = 1 ($5 prepaid cash to low-RP households), 30,000 to INC_FLAG = 2 ($5 prepaid cash and $20 contingent debit card to low-RP households), 30,000 to INC_FLAG = 3 ($5 prepaid cash and $30 contingent debit card to low-RP households), and 10,000 to INC_FLAG = 4 ($5 prepaid debit card to low-RP households).³ For households whose RP is at or above the 25th percentile, the screener incentive will be the same regardless of the treatment group. “Medium-RP” households with a predicted RP at or above the 25th percentile but below the 75th percentile will receive a $5 prepaid cash incentive with the first screener mailing, regardless of the treatment group. “High-RP” households with a predicted RP at or above the 75th percentile will receive a $2 prepaid cash incentive with the first screener mailing, regardless of the treatment group.
The purpose of the prepaid debit card (sent to low-RP cases with INC_FLAG = 4) is to evaluate the proportion of low-RP cases that appear to be receiving the incentive, regardless of whether they respond (e.g., by estimating the proportion of cases for which the debit card is used). This will provide additional insight into the effectiveness of prepaid incentives for low-RP households: it will suggest what proportion of these households use the prepaid incentive without completing the survey, versus what proportion never use it at all, which would suggest they may not have even opened the mailing. It is not expected that the prepaid debit card will be adopted as a permanent incentive intervention in a future full-scale administration. For this reason, statistical comparisons of response rates under this treatment to response rates under the other incentive treatments are not necessary. The purpose of allocating only 10 percent of the sample to this treatment is to minimize the risk to the overall sample yield, while slightly increasing the available statistical power for comparisons between the other three incentive treatments relative to an equal allocation.
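The incentive logic across the three RP groups and four treatment flags can be summarized in a short sketch (Python; illustrative only, with amounts and cutoffs taken from the text):

```python
def rp_group(percentile):
    """Classify a household by its predicted response propensity percentile."""
    if percentile >= 75:
        return "high"    # top 25%: $2 prepaid cash regardless of treatment
    if percentile >= 25:
        return "medium"  # middle 50%: $5 prepaid cash regardless of treatment
    return "low"         # bottom 25%: incentive varies by INC_FLAG

def screener_incentive(percentile, inc_flag):
    """First-mailing screener incentive for a household (illustrative)."""
    group = rp_group(percentile)
    if group == "high":
        return "$2 prepaid cash"
    if group == "medium":
        return "$5 prepaid cash"
    return {
        1: "$5 prepaid cash",
        2: "$5 prepaid cash + $20 contingent debit card",
        3: "$5 prepaid cash + $30 contingent debit card",
        4: "$5 prepaid debit card",
    }[inc_flag]
```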
Envelope Size Experiment
This experiment will test whether initial login rates to the web instrument are affected by whether the web letters are mailed using large (i.e., full-size) or small (i.e., letter-size) envelopes. The experimental condition will be identified by a variable called ENVELOPE. Of the 100,000-household screener sample, 95,000 will be assigned to ENVELOPE = 1 (large envelope) and 5,000 to ENVELOPE = 2 (small envelope). Although an allocation of 50 percent each to the control and treatment groups would maximize statistical power for response rate comparisons, the lower allocation to the treatment group is proposed to prevent a significant loss in yield if the small envelope leads to a large reduction in login rates. This will reduce the risk associated with the experiment while maintaining sufficient statistical power to test the screener-level effect of the treatment. It will be assumed that any effect of envelope size on login rates from topical mailings will be similar to the screener-level effect.
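As a rough illustration of why the 95,000/5,000 split still supports the screener-level comparison, the sketch below computes the approximate power of a two-sided two-proportion z-test; the 40 percent baseline login rate and the 3-point difference are assumed values for illustration, not study parameters:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def two_prop_power(p1, p2, n1, n2):
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05."""
    p_bar = (p1 * n1 + p2 * n2) / (n1 + n2)      # pooled proportion
    se = math.sqrt(p_bar * (1 - p_bar) * (1 / n1 + 1 / n2))
    return normal_cdf(abs(p1 - p2) / se - 1.96)  # 1.96 = two-sided 5% critical value

# Assumed: 40% login rate for large envelopes vs. a 3-point drop for small ones.
print(f"{two_prop_power(0.40, 0.37, 95_000, 5_000):.2f}")  # ~0.99
```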
Advance Letter Experiment
This experiment will test whether initial login rates to the web instrument are affected by whether households receive an advance letter prior to the first web letter mailing. The experimental condition will be identified by a variable called ADVANCE. Of the 100,000-household screener sample, 50,000 will be assigned to ADVANCE = 1 (receiving an advance letter) and 50,000 to ADVANCE = 2 (no advance letter). Advance letters were first tested with paper screeners in the NHES:2011 Field Test. Cases that did not receive an advance letter did not show significantly lower final response rates, so a 50/50 allocation is used to maximize statistical power.
Automated Telephone Reminder Experiment
This experiment will test whether households that do not log in by a certain point in the field period (when the third screener letter is mailed), and that have a phone number available on the sample file, can be encouraged to respond using a pre-recorded telephone message. Telephone reminders were used in NHES:2012. For the NHES:2017 Web Data Collection Test, the reminder calls will be placed on the day that the third screener letter is mailed. The experimental condition will be identified by a variable called PH_REMIND. Of the 100,000-household screener sample, 50,000 will be assigned to PH_REMIND = 1 (no phone reminder) and 50,000 to PH_REMIND = 2 (phone reminder). It is unlikely that telephone reminders would significantly change screener login rates, so a 50/50 allocation is used to maximize statistical power.
ATES Split Panel Experiment
This experiment will test revisions to the wording and structure of some key ATES items. The initial experimental condition will be identified by a variable called ATES_VERS. This flag will be used only if a household completes the screener and is sampled for ATES. Of the 100,000-household screener sample, 50,000 will be assigned to ATES_VERS = 1 and will be routed to ATES version A (with the old wording and structure for the items being tested). The remaining 50,000 will be assigned to ATES_VERS = 2 and will be routed to ATES version B (with the revised wording and structure for the items being tested). It is not expected that the split panel will significantly affect response rates to the ATES topical.
The Education Sciences Reform Act of 2002 (ESRA 2002: 20 U.S. Code § 9543) defines the legislative mission of NCES to collect, report, analyze, and disseminate statistical data related to education in the United States and in other nations. The NHES is specifically designed to support this mission by providing a means to investigate education issues that cannot be adequately studied through the Center’s institution-based data collection efforts. For example, some school-age children are homeschooled rather than attending a public or private school. There is no available sample frame that includes all of the homeschooled students across the United States. Similarly, there is no available sample frame that includes all child care providers from which to sample.
Likewise, although attaining a postsecondary credential has become increasingly important for securing opportunities to get high-return jobs in the United States in the 21st century, NCES has traditionally only collected data on postsecondary certificates and degrees awarded through credit-bearing instruction in institutions of higher education that participate in Title IV federal student aid programs. These comprise only a portion of the subbaccalaureate education and training that American adults seek and complete in order to learn the skills they need for finding and keeping good-paying jobs. It is most efficient to interview adults through a household-based approach rather than trying to obtain lists from a myriad of public and private credential awarding bodies. Also, the household approach allows for capture of adults who do not participate in training or have a credential, providing points of comparison.
There are also methodological reasons necessitating the NHES:2017 Web Data Collection Test. Preliminary analysis of NHES:2016 data indicated a high likelihood that a screener respondent to a web instrument will immediately complete a topical questionnaire. Topical response rates for the child surveys in the web sample are about 94 percent. To the respondent, the NHES can be completed in one step through the web instrument. The efficiency per response is greatly increased when the need to send out a separate topical mailing is eliminated. The efficiency of the overall sample is increased with the completion of multiple topical surveys by one household. Therefore, it is also necessary to test the feasibility of asking households to complete more than one topical survey via a web instrument.
The data collected in the NHES:2017 Web Data Collection Test will be used to evaluate methodological and operational issues in the NHES. Most critically, it will demonstrate whether or not households will respond to multiple topical surveys via a web data collection instrument. The NHES:2017 Web Data Collection Test data will also be used to evaluate contact strategies for encouraging web response, the impact of response propensity tailored incentives, and question item and web instrument performance. Data will not be used to generate official national estimates of the population. Information gathered from this study will be used to make recommendations for methodological approaches and survey measures.
The NHES:2017 Web Data Collection Test will be administered for NCES by the U.S. Census Bureau using a web instrument developed by Census. The web instruments are designed to minimize respondent burden by eliminating the cumbersome skip patterns required in the pencil and paper instruments. Electronic specifications will be developed to guide the coding of these instruments. The instruments will be securely hosted on the Census server.
PFI and ECPP
Population: Most other surveys do not address the topics covered in NHES for the populations of interest. For example, the Head Start Family and Child Experiences Survey (FACES) focuses on children in Head Start, whereas all children who have not yet started kindergarten are of interest in the ECPP Survey. Likewise, the National Survey of Early Care and Education (NSECE) focuses primarily on low-income children and their program participation. The National Survey of Parents of Public School Students and Survey of Family and School Partnerships in Public Schools focus on parents of children in public schools. Those whose children attend private schools or are homeschooled are not represented. Some studies, such as the Early Childhood Longitudinal Study, Birth Cohort (ECLS-B); the Early Childhood Longitudinal Study, Kindergarten Class of 1998-1999 (ECLS-K); and the Early Childhood Longitudinal Study-Kindergarten Class of 2010-11 (ECLS-K:2011) focus on single-year cohorts that are followed over time and therefore do not provide nationally representative data on different age groups. The NHES surveys are designed to complement these longitudinal collections with more frequent and more inclusive cross-sectional data.
Survey Content: Extant studies are limited in the content that they include relative to the goals of the NHES surveys. Studies such as the National Survey of America’s Families and the National Study of the Changing Workforce collect some information on child care or program participation, but their primary emphasis is on other topics, and the depth of information on early care and education experiences is limited. The Head Start FACES project collects information on Head Start program participation and some family measures, but does not account for all nonparental care and programs. The Current Population Survey October Education Supplement is limited to a relatively small number of items on education participation and does not address the roles that parents play in their children’s school, schoolwork, and home activities. Also, no nationally representative study other than the NHES collects detailed data on homeschooling.
Current Estimates and Measuring Change over Time: Many of the extant surveys follow one cohort or periodic cohorts (e.g., the ECLS-K, Head Start FACES, NSECE) or are no longer conducted (e.g., the National Survey of America’s Families, Family Involvement in Education: A National Portrait). As a result, they cannot meet the NHES goal of providing up-to-date cross-sectional estimates and measures of change over time for all children who have not started kindergarten or for children in kindergarten through 12th grade.
ATES
In the 2000s, senior policy officials in the Departments of Education, Commerce, and Labor; foundations including the Gates Foundation and Lumina; and research organizations such as the Georgetown Center for Education and the Workforce recognized a lack of valid statistical information on the prevalence of industry-recognized certifications and education certificates and called for the development of new data sources. A series of meetings during the fall of 2009 launched a broad effort to begin to define and enumerate these credentials, which became GEMEnA. NCES conducted a review of research literature and data collections since the work of a previous Interagency Committee in 2000, from which NCES developed a bank of existing survey items on certifications (completed 11/2009) and education certificates (completed 1/2010). This research found no surveys that adequately captured comprehensive data on the extent to which adults participate in training or non-Title IV credit-bearing education and attain non-degree credentials.
Due to these limitations in extant studies and the household-based sampling of the NHES, NCES plans to conduct the PFI-E, PFI-H, ECPP, and ATES surveys under the NHES program. A review of surveys that cover topics similar to those in the NHES child surveys showed there is little overlap between the NHES and these other surveys. Although GEMEnA’s work has resulted in the addition of survey items on certifications and licenses to the Current Population Survey and other federal surveys, the ATES is the only survey that collects detailed information on the attainment of non-degree credentials from the general U.S. adult population.
A Technical Review Panel (TRP) comprising leading experts in survey methodology was established to provide input to the redesign of the NHES system. Most members of the panel met in February 2010 to discuss the proposed design for the field test, and their comments and suggestions led to changes reflected in this submission.
Technical Review Panel Participants and Their Affiliation at the Time of TRP Recruitment
Nancy Bates
U.S. Census Bureau
649 A. St. N.E.
Washington, DC 20002
Tel: 301-763-5248
E-mail: nancy.a.bates@census.gov
Paul Beatty
National Center for Health Statistics
Division of Health Care Statistics
3311 Toledo Road,
Hyattsville, MD 20782
Tel: 301-458-4090
E-mail: pbeatty@cdc.gov
Johnny Blair
Survey Sampling and Methodology
Abt Associates Inc.
4550 Montgomery Avenue
Bethesda, MD 20814-3343
Tel: 301-634-1825
E-mail: Johnny_Blair@AbtAssoc.com
Stephen Blumberg
National Center for Health Statistics
3311 Toledo Road
Hyattsville, MD 20782
Tel: 301-458-4107
E-mail: stephen.blumberg@cdc.hhs.gov
Mick Couper
Survey Research Center
University of Michigan
ISR, 426 Thompson Street
Ann Arbor, MI 48104
Tel: 734-647-3577
E-mail: mcouper@umich.edu
Don Dillman
Professor, Social and Economic Sciences Research Center
Washington State University
133 Wilson Hall
Pullman, WA 99164-4014
Tel: 509-335-1511
E-mail: dillman@wsu.edu
Robert Groves
Survey Research Center, Institute for Social Research
University of Michigan
426 Thompson Street
Ann Arbor, MI 48106-1248
Tel: 734-764-8365
E-mail: bgroves@isr.umich.edu
Scott Keeter
Pew Research Center
1615 L. St. NW. Suite 700
Washington, DC 20036
Tel: 202-419-4362
E-mail: skeeter@pewresearch.org
Kristen Olson
Survey Research and Methodology
University of Nebraska-Lincoln
201 N. 13th St.
Lincoln, NE 68588-0241
Tel: 402-472-7737
E-mail: kolson5@unl.edu
Roger Tourangeau
Joint Program in Survey Methodology
University of Maryland
1218 LeFrak Hall, University of Maryland
College Park, MD 20742
Tel: 240-595-0057
E-mail: RTourango@survey.umd.edu
Gordon Willis
Division of Cancer Control / Population Sciences
National Cancer Institute
6130 Executive Blvd, MSC 7344, EPN 4005
Bethesda, MD 20892-7344
Tel: 301-594-6652
E-mail: willisg@mail.nih.gov
The content of the NHES:2017 Web Data Collection Test child-focused topical surveys repeats the content developed for the NHES:2012 and NHES:2016 administrations and prior NHES administrations. As a result, the PFI and ECPP surveys reflect the cumulative input of many experts in the field and past NHES Technical Review Panels. In order to ensure that the ECPP and PFI surveys address important issues in the topical areas of interest and incorporate important emerging issues, the design phase of the 2012 study included consultations with experts in the substantive areas addressed in the surveys. These experts included persons in government agencies, academe, and research organizations.
Substantive Experts: ECPP and Their Affiliation at the Time of TRP Recruitment
Jerry West - Mathematica
Mathematica Policy Research, Inc.
600 Maryland Ave., SW, Suite 550
Washington, DC 20024-2512
E-mail: jwest@mathematica-mpr.com
Ann Collins – Abt Assoc. Cambridge, MA
Abt Associates Inc.
55 Wheeler Street
Cambridge, MA 02138-1168
E-mail: Ann_Collins@abtassociates.com
Ron Haskins – Brookings Institution and Casey Foundation
The Brookings Institution
1775 Massachusetts Ave., NW
Washington, DC 20036
E-mail: rhaskins@brookings.edu
Ivelisse Martinez-Beck – HHS Division of Child and Family Development
Administration for Children and Families
370 L’Enfant Promenade, S.W.
7th Floor West, Room 7A011
Washington, D.C. 20447
E-mail: ivelisse.martinezbeck@acf.hhs.gov
Lynda Laughlin – Census
U.S. Census Bureau
4600 Silver Hill Road
Suitland, MD 20746
E-mail: lynda.l.laughlin@census.gov
Substantive Experts: PFI and Their Affiliation at the Time of TRP Recruitment
Richard Brandon – Univ. of Washington
Human Services Policy Center, Evans School of Public Affairs
University of Washington
1107 NE 45th St.
Seattle, WA 98105
E-mail: brandon@u.washington.edu
Annette Lareau – Univ. of Pennsylvania
Department of Sociology
University of Pennsylvania
McNeil Hall
Philadelphia, PA 19104
E-mail: alareau@sas.upenn.edu
Joyce Epstein – The Johns Hopkins University
Center for Social Organization of Schools
3003 N. Charles St., Suite 200
Baltimore, MD 21218
E-mail: jepstein@csos.jhu.edu
Lawrence Aber - NYU
Steinhardt School of Culture, Education, and Human Development
New York University
82 Washington Square East
New York, NY 10003
E-mail: lawrence.aber@nyu.edu
As noted above, the ATES is a product of work guided by GEMEnA, which met monthly or bi-monthly from 2009 to 2016. It consisted of senior staff from the Bureau of the Census, the Bureau of Labor Statistics, the Council of Economic Advisers, the National Center for Education Statistics, the National Center for Science and Engineering Statistics, the Office of Statistical and Science Policy (OMB), and the Office of the Under Secretary of Education. In addition, GEMEnA established an Expert Panel of substantive experts in the fields of workforce education, economic development, and non-degree credentials that met in November of 2012, March and December of 2014, and October of 2015 to provide input on ATES content.
Substantive Experts: GEMEnA Member Agency Representatives
Census Bureau
Bob Kominski
Stephanie Ewert
Bureau of Labor Statistics
Dori Allard
Harley Frazis
National Center for Science and Engineering Statistics
Dan Foley
John Finamore
OMB Office of Statistical and Science Policy
Shelly Martinez
Department of Education – Office of the Under Secretary
Jon O’Bergh
National Center for Education Statistics
Sharon Boivin
Lisa Hudson
Kashka Kubzdela
Andy Zukerberg
NHES Screener incentives background. The NHES:2003 included an extensive experiment in the use of small cash incentives to improve unit response. The experiment demonstrated that gains in respondent cooperation could be realized with relatively modest cash incentives (Brick et al. 2006). Such incentives were used in NHES:2005 and NHES:2007. The NHES:2011 Field Test included an incentive experiment at the screener level testing the effect on response rates of including a $2 vs. a $5 cash incentive in the initial screener mailing. The $5 incentive was associated with higher response rates, so $5 was used with the NHES:2012 screener. NHES:2016 also contained an incentive experiment that tested differential incentive amounts of $0, $2, $5, or $10 based on response propensity. Results from this experiment showed high response rates for $0 or $2 among addresses predicted to respond at high rates to the screener and no effect for the $10 incentive.
NHES Topical incentives background. The NHES:2012 included an incentive experiment at the topical level to further refine an optimal strategy for the use of incentives in the NHES. For those households in which a child was selected as the subject of an ECPP or PFI questionnaire, cases that responded to the first or second mailing of the screener received a $5 cash incentive with the initial topical survey mailing. Evidence from the 2011 Field Test indicated that topical response rates could benefit significantly by providing later screener respondents with a larger topical incentive. To confirm this finding, NCES subsampled late screener respondents (those responding to the 3rd or 4th questionnaire mailing) to receive either a $5 or $15 cash incentive with their first topical survey mailing. The results from the NHES:2012 indicate that among later screener responders, the $15 incentive was associated with higher response rates compared to the $5 incentive. Based on these findings, the same strategy was used for NHES:2016.
NHES:2017 Web Data Collection Test Incentives. Building on the NHES:2016 screener experiment results, which showed high response rates for $0 or $2 among addresses predicted to respond at high rates to the screener, the NHES:2017 will offer $2 to all respondents modeled as being among the 25 percent of the sample most likely to respond to a web survey. Five dollars will be offered to the middle half of respondents on the response propensity scale. Because a $10 screener incentive was ineffective in NHES:2016, the NHES:2017 Web Data Collection Test will experiment with incentives that are contingent on survey completion for those in the 25 percent of the sample least likely to respond. At the topical level, all households that complete a screener survey but not a topical survey will receive either a $5 or a $15 topical incentive, depending on the date of their screener response.
As noted earlier, the NHES:2017 Web Data Collection Test will include a targeted incentive experiment designed to examine the effectiveness of sending a promised incentive, contingent upon survey completion, to sampled households least likely to respond to the web survey. Response likelihood will be determined by a response propensity model using data collected during the NHES:2016 administration. The contingent incentive will be sent in addition to the standard $5 prepaid cash incentive and will be delivered via a debit card, which will be mailed in the initial screener package but not activated until the respondent completes the survey.
Households in the low-propensity stratum will be placed in one of four treatment groups: a $5 prepaid cash incentive; a $5 prepaid cash incentive plus a $20 contingent debit card; a $5 prepaid cash incentive plus a $30 contingent debit card; or a $5 prepaid debit card. The contingent debit cards will be activated by the vendor at the direction of Census after the household has completed the survey. The $5 prepaid debit card will be used to evaluate the propensity of respondents to open their mail, with use of the prepaid debit card serving as a proxy for opening and reading the mailing; it is being evaluated for research purposes and is not being considered for NHES:2019.
All households in the high-propensity stratum (the approximately 25% of households predicted to be most likely to respond) will receive only a $2 cash screener incentive. The remaining 50% of households (those with response propensities between the 25th and 75th percentiles) will receive a $5 cash screener incentive only. The four treatment groups from the lowest response propensity stratum will be compared to each other to determine whether sending the contingent incentive along with the $5 prepaid cash incentive yields a meaningful increase in response rates for this hardest-to-reach group.
Respondents will be informed of the voluntary nature of the survey and of the confidentiality provision in the initial cover letter and on the questionnaires, stating that their responses may be used for statistical purposes only and may not be disclosed, or used, in identifiable form for any other purpose except as required by law [Education Sciences Reform Act of 2002 (ESRA 2002), 20 U.S. Code § 9573].
Additionally, all staff members and subcontractors working on the NHES and having access to the data are required to sign the NCES Affidavit of Nondisclosure. Notarized affidavits are kept on file by the contractor and submitted to NCES quarterly. In addition, all contractor staff members who have access to confidential data and work on the project more than 30 days are required to have a federal background check.
The NHES is a voluntary survey, and no persons are required to respond to it. In addition, respondents may decline to answer any question in the survey. Respondents are informed of the voluntary nature of the survey in the cover letter as well as in the burden statement and Frequently Asked Questions via the web data collection instrument. At the same time, some items in the surveys may be considered sensitive by some respondents:
ATES: The ATES includes a question about earnings that may be considered sensitive:
Personal earnings in the past year (categorical)
A measure of earnings is important because educational attainment is statistically associated with earnings, and the empirical properties of the survey measures may differ for people with different earnings levels. The American Community Survey (ACS) was the source for most of the ATES employment and background items. Item response rates for earnings questions were reasonably high in the 2014 NHES Feasibility Study. The item response rate for personal earnings was 96.4 percent.
PFI and ECPP: Child development and education experts consider economic disadvantage and children’s disabilities to be important factors in children’s school experiences and their activities outside of school. As a result, the child surveys contain measures of these characteristics, including:
Household income;
Receipt of public assistance in the form of Temporary Assistance for Needy Families (TANF), food stamps, and the Women, Infants, and Children program (WIC); and
Children’s disability conditions.
Measures of household income and government assistance are important because access to early childhood programs by at-risk children and the education involvement of families of children from different socioeconomic backgrounds are of interest to policymakers, child development specialists, and educators. These items are important in identifying children at risk and have been administered successfully in previous NHES studies. Respondents are also asked the age at which they first became a parent. This may be sensitive for parents in some situations.
The 2012 response rates for these items were very high. For total household income, the 2012 PFI survey had an item response rate of 95.4 percent. Item response rates for receipt of public assistance were also high: for Temporary Assistance for Needy Families, 97.9 percent; for the Women, Infants, and Children Program, 97.7 percent; and for Food Stamps, 98.4 percent. In the 2012 mail survey, it is not possible to examine item missing data for child disability because of the multiple-response list format of the question: missing data may indicate either unreported data or that the child does not have a disability. However, in prior NHES collections, response to this item was high; in the 2007 PFI, the item response rate was over 99 percent. In the 2012 PFI, the item response rate for the age at which the child’s parent first became a parent to any child was 96.2 percent for the first parent reported and 96.0 percent for the second parent reported.
ECPP: In addition to the items above, the ECPP survey also includes questions about assistance to pay for child care. This measure is important to understand families’ and children’s access to early childhood programs.
PFI: The PFI survey includes items concerning children’s school performance and difficulties in school. Among these are:
Children’s school performance and difficulties, including school grades, grade retention, suspensions, and expulsions; and
Identification of children’s schools.
Items concerning school performance and difficulty are important to the PFI survey as correlates of parent and family involvement in children’s education. These items were asked in the NHES:2012 PFI and item response rates for these items were high: 99.0 percent for children’s grades, 97.6 percent for out-of-school suspension, and 97.5 percent for expulsion.
Another element of the surveys that may be sensitive to some parents is the identification of children’s schools. This feature allows analysts to link the NHES data to other NCES datasets containing additional information about schools, greatly enhancing the ability to examine the relationships between students’ and families’ experiences and the characteristics of schools. The item response rate for the identification of the child’s school was 97.0 percent in NHES:2012.
The response burden per instrument and the total response burden are shown in table 1. The administration times for the questionnaires are based on NHES:2016 results. The expected number of respondents and number of responses are based on the expected numbers of completed surveys of each type, discussed in section B.1.3. The hourly rate of $23.25 is based on the average wage and salary for all civilian workers from the March 2016 National Compensation Survey by the U.S. Department of Labor (http://www.bls.gov/news.release/ecec.t02.htm). For the NHES:2017 Web Data Collection Test, a total of 8,900 burden hours are anticipated, resulting in an estimated burden time cost to respondents of approximately $206,925.
Table 1. Estimated response burden for NHES:2017 Web Data Collection Test
| Interview form | Number sampled | Anticipated response rate | Estimated number of respondents | Estimated number of responses | Burden per respondent (minutes) | Total burden (hours) |
| Screener¹ | 90,000 | 40% | 36,000 | 36,000 | 4 | 2,400 |
| ECPP questionnaire | 1,825 | 90% | 1,643 | 1,643 | 20 | 548 |
| PFI-Enrolled questionnaire | 4,062 | 90% | 3,656 | 3,656 | 20 | 1,219 |
| PFI-Homeschooled questionnaire | 94 | 90% | 85 | 85 | 20 | 28 |
| ATES questionnaire | 20,175 | 80% | 16,140 | 16,140 | 11 | 2,959 |
| ECPP plus PFI-Enrolled questionnaire | 526 | 90% | 473 | 473 | 36 | 284 |
| ECPP plus PFI-Homeschooled questionnaire | 12 | 90% | 11 | 11 | 36 | 7 |
| ATES plus ECPP questionnaire | 750 | 90% | 675 | 675 | 31 | 349 |
| ATES plus PFI-Enrolled questionnaire | 2,324 | 90% | 2,092 | 2,092 | 31 | 1,081 |
| ATES plus PFI-Homeschooled questionnaire | 54 | 90% | 48 | 48 | 31 | 25 |
| Study total |  |  | 60,823 | 60,823 |  | 8,900 |
¹ Approximately 10% of addresses will be returned by USPS as invalid, reducing the final sample size to 90,000 addresses. Calculations of the number of screener respondents are based on 90,000 addresses rather than 100,000.
NOTE: Eligibility and response rates for the national sample are estimated based on NHES:2016.
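A quick arithmetic check of the totals in table 1 (Python; all values are taken directly from the table):

```python
# (estimated respondents, burden in minutes) per instrument, from table 1.
rows = {
    "Screener": (36_000, 4),
    "ECPP": (1_643, 20),
    "PFI-Enrolled": (3_656, 20),
    "PFI-Homeschooled": (85, 20),
    "ATES": (16_140, 11),
    "ECPP + PFI-Enrolled": (473, 36),
    "ECPP + PFI-Homeschooled": (11, 36),
    "ATES + ECPP": (675, 31),
    "ATES + PFI-Enrolled": (2_092, 31),
    "ATES + PFI-Homeschooled": (48, 31),
}
HOURLY_RATE = 23.25  # average civilian wage, March 2016 National Compensation Survey

# Hours are rounded per row, matching how the table reports them.
total_hours = sum(round(n * minutes / 60) for n, minutes in rows.values())
print(f"Total burden: {total_hours:,} hours")             # 8,900
print(f"Burden cost: ${total_hours * HOURLY_RATE:,.0f}")  # $206,925
```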
There are no recordkeeping requirements associated with NHES and no other costs to respondents.
The total cost of NHES:2017 Web Data Collection Test to the government is approximately 3.6 million dollars over a period of 20 months. This includes all direct and indirect costs of the design, data collection, analysis, and reporting phases of the study, as well as the delivery of data sets to NCES.
The primary objectives of the NHES:2017 Web Data Collection Test are to evaluate how a web data collection instrument can best be utilized; whether and how dual household sampling can be incorporated into the NHES data collection operations; the impact of tailored incentives; the impact of contact strategies and materials; and item performance in the ATES. Future NHES data collections will implement these findings and produce datasets, statistics, and reports. The following are the planned outcomes of the NHES:2017 Web Data Collection Test:
Web data collection instrument: Compare response rates and data quality from the matrix format screener to the non-matrix screener used in 2016 and the Census platform versus the Department of Education platform. Evaluate operational challenges and improvements and their impact on the efficiency and cost of the survey operations.
Dual household sampling: Use experimental data to evaluate topical response rates and data quality among households when one and two household members are asked to complete topical surveys.
Tailored incentives: Overall, and for each experimental manipulation, evaluate response rates, return rates, and any appropriate refusal conversion rates, and their impact on helping to reach hard-to-reach respondents.
Contact strategies and materials: Track and evaluate the impact of the different contact strategies on response and timeliness of response.
ATES item performance: Compare response distributions and item missing data for items in the split panel experiment and compare to extant data to evaluate reasonableness.
Exhibit 2 presents the schedule of project activities for NHES:2017 Web Data Collection Test.
Exhibit 2. NHES:2017 Web Data Collection Test schedule of major activities
| Task | Date of scheduled conduct/completion |
| Survey Letters to Formatting and Printing | October–December 2016 |
| Data Collection Begins (advance letter mailing) | February 2017 |
| Data Collection Ends | August 2017 |
The OMB authorization number and expiration date will be displayed on the bottom of every page in the web instrument, as well as in the link to the burden statement and Frequently Asked Questions.
There are no exceptions to the certification statement.
The NHES:2017 Web Data Collection Test will be an address-based sample covering the 50 states and the District of Columbia and will be conducted from February through August 2017. Households will be randomly sampled as described in section B.1.1, and an invitation to complete a screener questionnaire will be sent to each sampled household. Demographic information about household members provided on the screener will be used to determine whether anyone is eligible for the Adult Training and Education Survey (ATES), the Early Childhood Program Participation Survey (ECPP), the Parent and Family Involvement in Education Survey (PFI), or more than one survey. In order to limit respondent burden, regardless of the number of eligible people in a household, no more than two members (either one adult and one child, or two children) will be sampled for the topical surveys.
The initial sample will consist of 110,000 addresses selected from an ABS frame by Marketing Systems Group (MSG), based on the United States Postal Service (USPS) Computerized Delivery Sequence File (CDS). After invalid addresses are removed from the 110,000-household sample, it will be subsampled by NCES’s sample design contractor, the American Institutes for Research (AIR), yielding a 100,000-household final screener sample. Addresses not selected for the subsample will remain unused to protect respondent confidentiality.
The NHES:2017 Web Data Collection Test is designed to meet precision requirements that allow for comparisons among the various experiments included in the study. Precision is expressed as the expected margin of error for each estimate, which is expected to be between 1 percentage point and 5.5 percentage points depending on the subgroup size. Table 2 shows the expected margin of error for all respondents and for subgroups that are 50 and 25 percent of the full sample size.
Table 2. Expected margin of error for NHES:2017 Web Data Collection Test percentage estimates, by subgroup size, topical survey, and estimate
| Topical survey and estimate | Margin of error: all respondents | Margin of error: 50% subgroup | Margin of error: 25% subgroup |
| ECPP |  |  |  |
| 10% or 90% estimate | 1.61% | 2.27% | 3.22% |
| 20% or 80% estimate | 2.14% | 3.03% | 4.29% |
| 30% or 70% estimate | 2.46% | 3.47% | 4.91% |
| 40% or 60% estimate | 2.63% | 3.71% | 5.25% |
| 50% estimate | 2.68% | 3.79% | 5.36% |
| PFI |  |  |  |
| 10% or 90% estimate | 1.16% | 1.64% | 2.32% |
| 20% or 80% estimate | 1.55% | 2.19% | 3.09% |
| 30% or 70% estimate | 1.77% | 2.51% | 3.54% |
| 40% or 60% estimate | 1.89% | 2.68% | 3.79% |
| 50% estimate | 1.93% | 2.73% | 3.87% |
| ATES |  |  |  |
| 10% or 90% estimate | 0.68% | 0.96% | 1.36% |
| 20% or 80% estimate | 0.91% | 1.28% | 1.82% |
| 30% or 70% estimate | 1.04% | 1.47% | 2.08% |
| 40% or 60% estimate | 1.11% | 1.57% | 2.22% |
| 50% estimate | 1.13% | 1.61% | 2.27% |
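The margins of error in table 2 are consistent with the standard large-sample formula MOE = 1.96 × √(p(1 − p)/n_eff), where n_eff is the effective interview count reported in table 3. A minimal sketch (Python):

```python
import math

def margin_of_error(p, effective_n, z=1.96):
    """95% margin of error for a percentage estimate from a complex sample,
    where effective_n = interview count / design effect (see table 3)."""
    return z * math.sqrt(p * (1 - p) / effective_n)

# Reproduce the ATES 50% row using its effective interview count of 7,455.
print(f"{margin_of_error(0.50, 7_455):.2%}")  # ~1.13%, matching table 2
```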
Among households that complete the web screener instrument and report household members eligible for one or more topical surveys, one or two household members will be sampled for the topical phase of the survey. The process will vary depending on whether a household was randomly pre-assigned to receive the dual household sampling treatment (refer to Section A.1: Justification for additional information about the dual household experiment). Within the control group that will not receive dual sampling, no more than one person in a household will be selected for a topical survey. Within the treatment group that will receive dual sampling, up to two persons in a household may be selected, each for a different topical survey.
In addition to the dual treatment group flag (DUAL_FLAG, described in section A.1), the topical sampling procedure for the Web Test will use two topical sampling flags that will be randomly pre-assigned to each address in the screener sample: CorA_smpflg, indicating whether the household will receive an adult survey or child survey(s); and chld_smpflg, indicating whether the household will receive the PFI or ECPP.⁴ The designations will be assigned at the same rates used in NHES:2016, which were chosen to balance the sample requirements for each of the surveys. Depending on the composition of the household, and on whether the household is assigned to the dual sampling treatment, some or all of these pre-designations are used to assign the household to one or two of the topical surveys. If the household has more than one member eligible for the survey(s) for which it is selected, the next step of within-household sampling randomly selects one of these members. For households in which the screener respondent is expected to complete two topical surveys in the same web session, a third pre-assigned sampling flag (DUAL_ORDER) will be used to determine the order in which the topical surveys appear.
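A sketch of the pre-assigned topical sampling flags described above (Python; the 50/50 probabilities are placeholders, since the actual NHES:2016-based assignment rates are not stated here):

```python
import random

def preassign_topical_flags(p_adult=0.5, p_pfi=0.5, rng=random):
    """Randomly pre-assign the three topical sampling flags to an address.
    The probabilities are placeholders for the (unstated) NHES:2016 rates."""
    return {
        "CorA_smpflg": "adult" if rng.random() < p_adult else "child",  # ATES vs. child survey
        "chld_smpflg": "PFI" if rng.random() < p_pfi else "ECPP",       # which child survey
        "DUAL_ORDER": rng.choice([1, 2]),  # topical order for dual-sampled households
    }
```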
As described above, the initial mailed sample will consist of approximately 100,000 addresses. An expected screener response rate of 40 percent and an address ineligibility rate⁵ of 10 percent are assumed, based on results from prior NHES administrations. Under these assumptions, the expected number of completed screeners is 36,000.
No differences in screener or topical response rates between treatment groups were assumed (although, as noted above, a relatively conservative screener response rate was assumed to account for the possibility of unexpectedly large reductions in one or more treatment groups). Therefore, it is important to note that the expected sample sizes, and the resulting precision and power calculations, are themselves dependent on the results of experiments whose effects cannot be known with certainty. For the advance letter and dual household experiments, prior tests of similar interventions with paper surveys (in the NHES:2011 Field Test and the NHES:2014 Feasibility Study, respectively) did not show statistically significant impacts on response rates. The envelope, phone recording follow-up, incentive, screener, and ATES split panel experiments represent new interventions that have not yet been tested for NHES, either with paper or web surveys. Therefore, these experiments, in particular, introduce a degree of uncertainty into response rate assumptions.
Note that assumptions specific to the low response propensity (RP) cohort were derived using the RP scores available on the NHES:2016 file. These RP scores are based on the model used for the NHES:2016 tailored incentive experiment, as the model for the NHES:2017 Web Test experiment has not yet been created. To the extent that the NHES:2017 Web Test model leads to a different ordering of cases by RP than what was generated by the NHES:2016 model, actual outcomes for this cohort may vary from the assumptions.
Based on the assumptions described above, table 3 shows the expected sample size, interview count, final design effect among respondents, and effective interview count for the screener and each topical survey. The effective interview count is equal to the interview count divided by the design effect, and can be interpreted as the size of the simple random sample (SRS) that would yield approximately the same variance in estimates as the complex NHES:2017 Web Test sample.
Table 3. NHES:2017 Web Data Collection Test expected sample size, interview count, design effect, and effective interview count, by survey

Survey             | Sample size | Interview count | Design effect | Effective interview count
Screener1          | 90,000      | 36,000          | 1.16          | 31,169
ECPP               | 3,113       | 2,802           | 2.10          | 1,337
PFI2               | 7,072       | 6,365           | 2.48          | 2,570
  PFI-Enrolled     | 6,912       | 6,221           | 2.48          | 2,512
  PFI-Homeschooled | 160         | 144             | 2.48          | 58
ATES               | 23,303      | 18,643          | 2.50          | 7,455

1 Expected screener sample size assumes a 90% address eligibility rate.
2 Decomposition between PFI-Enrolled and PFI-Homeschooled assumes that approximately 2.26% of PFI respondents will be PFI-Homeschooled respondents, the same as in NHES:2012. Unlike in NHES:2016, households will not be sampled separately for the PFI-Homeschooled in the NHES:2017 Web Data Collection Test.
NOTE: Because of rounding, the effective interview count may not exactly equal the interview count divided by the design effect.
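For readers replicating table 3, the sketch below recomputes the effective interview counts from the published interview counts and design effects; small discrepancies from the tabled values are expected because, as noted, the published design effects are rounded.

# interview counts and design effects from table 3
surveys = {
    "Screener": (36_000, 1.16),
    "ECPP": (2_802, 2.10),
    "PFI": (6_365, 2.48),
    "ATES": (18_643, 2.50),
}
for name, (interviews, deff) in surveys.items():
    print(f"{name}: {interviews / deff:,.0f} effective interviews")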
The data sets from the NHES:2017 Web Data Collection Test will have weights assigned to facilitate estimation of nationally representative statistics to help in the evaluation of methodological effects. All households responding to the screener will be assigned weights based on their probability of selection and a non-response adjustment, making them representative of the household population. All individuals responding to the topical questionnaires will have a record with a person weight designed such that the complete data set represents the target population.
The estimation weights for the NHES:2017 Web Data Collection Test surveys will be formed in stages. The first stage is the creation of a base weight for the household, which is the inverse of the probability of selection of the address. The second stage is a screener nonresponse adjustment to be performed based on characteristics available on the frame and discussed below. These weights may be used to produce national household-level estimates to aid in evaluating the performance of questionnaire items.
The household-level weights are the base weights for the person-level weights. For each completed topical questionnaire, the person-level weights also undergo a series of adjustments. The first stage is the adjustment of these weights for the probability of selecting the person within the household. The second stage is the adjustment of the weights for topical survey nonresponse, to be performed based on characteristics available on the frame and discussed below. The third stage is the raking adjustment of the weights to Census Bureau estimates of the target population. The variables that may be used for raking at the person level include race and ethnicity of the sampled person, household income, home tenure (own/rent/other), region, age, grade of enrollment, gender, family structure (one parent or two parent), and highest educational attainment in household. These variables have been shown to be associated with response rates. The final raked person-level weights include undercoverage adjustments as well as adjustments for nonresponse.
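The raking step can be sketched as iterative proportional fitting: the nonresponse-adjusted person weights are repeatedly scaled so that their sums match the control totals on each raking dimension in turn. The function below is a minimal illustration, assuming categorical raking variables and a fixed iteration count; convergence checks and the production weighting system are omitted, and all names are illustrative.

import numpy as np

def rake(weights, categories, targets, iterations=25):
    # weights:    nonresponse-adjusted person weights, shape (n,)
    # categories: dict of raking dimension -> array of category codes, shape (n,)
    # targets:    dict of raking dimension -> {category: population control total}
    w = weights.astype(float).copy()
    for _ in range(iterations):
        for dim, codes in categories.items():
            for cat, total in targets[dim].items():
                mask = codes == cat
                current = w[mask].sum()
                if current > 0:
                    w[mask] *= total / current   # scale weights to the control total
    return w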
Standard errors of the estimates will be computed using a jackknife replication method. The replication process repeats each stage of estimation separately for each replicate. The replication method is especially useful for obtaining standard errors for statistics such as quantiles. The standard errors may be computed using the complex survey data analysis package WesVar Complex Samples Software or other software packages that use replication methods such as Stata, SAS, SUDAAN, or the AM software package. Also, PSU and STRATUM variables will be available for use in Taylor series linearization or to compute standard errors for internal analysis.
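As a simplified illustration of how replication yields standard errors, the sketch below applies a JK1-style jackknife, assuming a set of replicate weights has already been constructed upstream; the estimator shown is a weighted mean, but any statistic computed from the weights, including quantiles, is handled the same way. Function names are illustrative.

import numpy as np

def jackknife_se(estimate_fn, y, full_weights, replicate_weights):
    # recompute the estimate under each replicate weight, then combine
    # squared deviations from the full-sample estimate
    theta = estimate_fn(y, full_weights)
    reps = np.array([estimate_fn(y, w) for w in replicate_weights])
    r = len(reps)
    return np.sqrt((r - 1) / r * np.sum((reps - theta) ** 2))

def weighted_mean(y, w):
    return np.sum(w * y) / np.sum(w)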
To the extent that those who respond to surveys and those who do not differ in important ways, there is a potential for nonresponse biases in estimates from survey data. The estimates from NHES:2017 Web Data Collection Test are subject to bias because of unit nonresponse to both the screener and the extended topical surveys, as well as nonresponse to specific items.
Unit nonresponse
To identify characteristics associated with unit nonresponse, a multivariate analysis will be conducted using a categorical search algorithm called Chi-Square Automatic Interaction Detection (CHAID). CHAID begins by identifying the characteristic of the data that is the best predictor of response. Then, within the levels of that characteristic, CHAID identifies the next best predictor(s) of response, and so forth, until a tree is formed with all of the response predictors that were identified at each step. The final result is a division of the entire data set into cells, formed by sequentially selecting the splits that discriminate most strongly with respect to unit response rates. In other words, it divides the data set into groups so that the unit response rate within cells is as constant as possible, and the unit response rate between cells is as different as possible. Since the variables considered for use as predictors of response must be available for both respondents and nonrespondents, demographic variables from the sampling frame provided by the vendor (including household education level, household race/ethnicity, household income, number of children in the household, number of adults in the household, age of head of household, whether the household owns or rents the dwelling, and whether there is a surname and/or phone number present on the sampling frame) will be included in the CHAID analysis.
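The sketch below illustrates one step of a CHAID-style search: among the frame variables, select the one whose categories discriminate most on unit response, judged by a chi-square test. Full CHAID also merges statistically similar categories and applies multiple-comparison corrections, which are omitted here; variable and column names are illustrative.

import pandas as pd
from scipy.stats import chi2_contingency

def best_split(df, predictors, response="responded"):
    # choose the frame variable most associated with unit response
    best_var, best_p = None, 1.0
    for var in predictors:
        table = pd.crosstab(df[var], df[response])
        _, p, _, _ = chi2_contingency(table)
        if p < best_p:
            best_var, best_p = var, p
    return best_var, best_p

# applied recursively within each level of best_var, this grows the CHAID tree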
In addition to the above, the magnitude of unit nonresponse bias, and the likely effectiveness of statistical adjustments in reducing that bias, will be examined by comparing estimates computed using adjusted weights to those computed using unadjusted weights (a sketch of this comparison follows the list below). The unadjusted weight is the reciprocal of the probability of selection, reflecting all stages of selection. The adjusted weight is the extended interview weight adjusted for unit nonresponse (without the raking adjustment). In this analysis, the statistical significance of differences in estimates will be investigated for key survey estimates including, but not limited to, the following:
All surveys
Age/grade of child or age of adult
Census region
Race/ethnicity
Mother’s or adult’s employment status
Mother’s or adult’s home language
Educational attainment of mother/adult
Family type
Home ownership
Adult Training and Education Survey (ATES)
Highest degree or level of school
Certification or license
Certificate
Completed work-related training last 12 months
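As a sketch of the weight comparison described before the list, the difference below would be computed for each key estimate, with its statistical significance assessed via the replication variance methods discussed above; function names are illustrative.

import numpy as np

def weighted_pct(y, w):
    # weighted percentage of a 0/1 characteristic
    return 100 * np.sum(w * y) / np.sum(w)

def nonresponse_bias_gap(y, w_unadjusted, w_adjusted):
    # adjusted minus unadjusted estimate; a large gap suggests the
    # nonresponse adjustment is moving the estimate, i.e., potential bias
    return weighted_pct(y, w_adjusted) - weighted_pct(y, w_unadjusted)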
The final component of the bias analysis will include comparisons between respondent characteristics and known population characteristics from extant sources including the Current Population Survey (CPS) and the American Community Survey (ACS). Additionally, for substantive variables, weighted estimates will be compared to prior NHES administrations if available. While differences between estimates and those from external sources as well as prior NHES administrations could be attributable to factors other than bias, differences will be examined in order to confirm the reasonableness of the estimates.
Item nonresponse
In order to examine item nonresponse, all items with response rates below 85 percent will be listed. Alternative sets of imputed values will be generated by imposing extreme assumptions on the item nonrespondents. For most items, two new sets of imputed values—one based on a “low” assumption and one based on a “high” assumption—will be created. For most continuous variables, a “low” imputed value variable will be created by resetting imputed values to the value at the 5th percentile of the original distribution; a “high” imputed value variable will be created by resetting imputed values to the value at the 95th percentile of the original distribution. For dichotomous and most polytomous variables, a “low” imputed value variable will be created by resetting imputed values to the lowest value in the original distribution, and a “high” imputed value variable will be created by resetting imputed values to the highest value in the original distribution. Both the “low” imputed value variable distributions and the “high” imputed value variable distributions will be compared to the unimputed distributions. This analysis helps to place bounds on the potential for item nonresponse bias through the use of “worst case” scenarios.
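A minimal sketch of the "low"/"high" bounding procedure follows, assuming a flag identifying imputed cases and reading "the original distribution" as the unimputed values; the percentile rule applies to continuous items and the min/max rule to categorical items, per the text.

import numpy as np

def bounded_imputations(values, imputed_mask, continuous=True):
    observed = values[~imputed_mask]       # the unimputed distribution
    if continuous:
        lo, hi = np.percentile(observed, [5, 95])
    else:
        lo, hi = observed.min(), observed.max()
    low = values.copy()
    low[imputed_mask] = lo                 # "low" assumption
    high = values.copy()
    high[imputed_mask] = hi                # "high" assumption
    return low, high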
B.2 Survey Procedures
This section describes the data collection procedures to be used in the NHES:2017 Web Data Collection Test. These procedures represent a combination of best practices to maximize response rates based on findings from the NHES:2016, within NCES's budget constraints. The NHES is a two-phase self-administered survey. For the NHES:2017 Web Data Collection Test, each sampled household will be sent a letter asking it to participate in the survey. The letter will contain the URL for the survey and the username assigned to the household. Upon completion of the screener questionnaire via the web instrument, the respondent will be notified which household member or members were selected to complete a topical questionnaire. If a child is sampled, the screener respondent will be asked to complete the child questionnaire. If the sampled ATES respondent is either the screener respondent or is available, the topical survey can be completed immediately. If the sampled ATES respondent is not available, a letter will be sent inviting the sampled member to complete the appropriate topical questionnaire(s). The NHES employs multiple contacts with households to maximize response: an advance letter and up to three mail invitations to participate for the screener survey, and up to three mail and up to four email invitations to participate for the topical surveys. In addition, households will receive one reminder in a pressure-sealed envelope after the initial mailing of a screener or topical, and nonrespondents will receive an automated phone call reminder in conjunction with the third mailing of the screener survey.
Mailout Procedures
Figure 1 presents a flow chart for the NHES:2017 Web Test data collection, which will begin with the mailing of an advance notification letter to 50% of the sample in mid-February 2017. This experiment will test whether initial login rates to the web instrument are affected by whether households receive an advance letter prior to the first web letter mailing. Advance letters were first tested with paper screeners in the NHES:2011 Field Test; cases that did not receive an advance letter did not show significantly lower final response rates, so a 50/50 allocation is used to maximize statistical power. Both the portion of the sample receiving an advance letter and the remaining 50% of the sample will receive an invitation to participate in the survey during the third week of February. The packages will contain a letter and incentive as described in section A.6 above. All subsequent nonresponse follow-up mailings will contain only an invitation to participate in the survey. A thank you/reminder letter in a pressure-sealed envelope will be sent to all sampled addresses approximately one week after the first mailing. Reminder and final thank you emails will also be sent approximately 1.5 weeks prior to each topical mailing. A second mailing will be sent to nonresponding households approximately two weeks after the letter in the pressure-sealed envelope. Approximately three weeks after the second mailing, a third mailing will be sent to nonresponding households using rush delivery (FedEx or UPS). For addresses for which the frame includes a telephone number, an automated reminder phone call will be made on the same day as the third screener mailing to encourage households to complete the study as soon as possible.
In instances where the adult household member sampled for a topical survey is not available at the time the screener is completed, up to three letters and up to four emails inviting the sampled member to participate in the topical survey will be sent. Invitations to complete the topical surveys will also be sent in instances where the screener survey was completed but the topical survey was not, regardless of the circumstances.
B.3 Methods for Maximizing Response Rates
The NHES:2017 Web Data Collection Test design incorporates a number of features to maximize response rates. This section discusses those features.
Total Design Method/Respondent-Friendly Design. Surveys that take advantage of respondent-friendly design have demonstrated increases in survey response (Dillman, Smyth, and Christian 2008; Dillman, Sinclair, and Clark 1993). We have honed the design of the NHES forms through multiple iterations of cognitive interviewing and field testing. These efforts have included the design, content, and Spanish translation of all respondent contact materials. As noted previously, we will include a respondent incentive in the initial screener mailing; many years of testing in the NHES have shown the effectiveness of incentives in increasing response. The Census Bureau will maintain an email address and a toll-free questionnaire assistance (TQA) line to answer respondent questions or concerns. If a respondent chooses to provide answers to TQA staff, the staff will be able to enter the respondent's information into the web instrument. Additionally, the web data collection instrument contains frequently asked questions (FAQs) and contact information for the Project Officer.
Engaging Respondent Interest and Cooperation. The content of respondent letters and FAQs is focused on communicating the legitimacy and importance of the study. Past experience has shown that the NHES child survey topics are salient to most parents, while the new ATES contact materials specifically inform respondents who may not have formal education that the survey is for everyone.
Nonresponse Follow-up. The data collection protocol includes several stages of nonresponse follow-up at each phase. In addition to the number of contacts, changes in method (USPS First Class mail, FedEx, and automated reminder phone calls) and materials (letters, emails, pressure sealed envelopes) are designed to capture the attention of potential respondents.
B.4 Individuals Responsible for Study Design and Performance
From NCES, the following persons participated in the study design and are responsible for the collection and analysis of the data: Sarah Grady, Sharon Boivin, Lisa Hudson, and Andrew Zukerberg, and from the Census Bureau, Carolyn Pickering.
References
Brick, J.M., Hagedorn, M.C., Montaquila, J., Brock Roth, S., and Chapman, C. (2006). Monetary Incentives and Mailing Procedures in a Federally Sponsored Telephone Survey: Methodology Report. U.S. Department of Education. Washington, DC: National Center for Education Statistics.
Dillman, D.A., Sinclair, M.D., and Clark, J.R. (1993). Effects of questionnaire length, respondent-friendly design, and difficult questions on response rates for occupant-addressed Census mail surveys. Public Opinion Quarterly, 57, 289-304.
Dillman, D.A., Smyth, J.D., and Christian, L.M. (2008). Internet, mail, and mixed mode surveys: The Tailored Design Method. New York: Wiley.
U.S. Department of Labor. (2016). Employer Costs for Employee Compensation – March 2016. Washington, DC: Bureau of Labor Statistics. Available online at http://www.bls.gov/news.release/ecec.t02.htm
Figure 1: Screener Data Collection
Figure 2: Single Within Household Topical Data Collection
Figure 3: Dual Within Household Topical Data Collection
1 Children enrolled in school receive the PFI-Enrolled questionnaire while those who are homeschooled receive the PFI-Homeschooled questionnaire. Unlike in NHES:2016, households will not be sampled separately for the PFI-Homeschooled questionnaire in the NHES:2017 Web Data Collection Test. Therefore, unless otherwise stated, all references to the PFI refer to both the PFI-Enrolled and the PFI-Homeschooled.
2 NCES collects information on the following racial/ethnic groups in the NHES topical surveys: American Indian or Alaska Native, Asian, Black or African American, Hispanic, Native Hawaiian or other Pacific Islander, and White. Reported estimates by race typically use the following categories: White, non-Hispanic; Black, non-Hispanic; Hispanic; Asian or other Pacific Islander, non-Hispanic; and other, non-Hispanic. Areas with higher concentrations of Blacks and Hispanics are typically oversampled in order to ensure sufficient sample sizes to generate reliable estimates for these subgroups.
3 The contingent debit cards will be activated after the household completes all parts of the web instrument that are applicable to them. For cases that are sampled for a topical survey, and for which the topical respondent is the same as the screener respondent, the debit card would be activated upon topical completion. For cases that are not sampled for a topical survey, or for which the person sampled for the topical differs from the screener respondent, the debit card would be activated upon screener completion. The prepaid debit card will be available for immediate use regardless of whether any part of the survey is completed.
4 In NHES:2016, an additional sampling flag, PFIH_smpflg, was used prior to the other two flags to determine whether households with homeschoolers would receive the PFI-Homeschooled or a different topical survey. In the NHES:2017 Web Data Collection Test, households will not be sampled separately for the PFI-Homeschooled to reduce the variability in the weights, thereby preserving statistical power for experimental comparisons. As in NHES:2012, households will be routed to the PFI-Homeschooled if they are sampled for the PFI and the selected child is homeschooled. All other households sampled for the PFI will be routed to the PFI-Enrolled.
5 Ineligible addresses are those that are undeliverable. Once a screener mailing for an address is returned as undeliverable as addressed (UAA), the address will be coded ineligible.