APPENDIX O
Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015
Head Start Family and Child Experiences Survey (FACES)
OMB Control Number: 0970-0151
Nonresponse Bias Analysis for the FACES Core Study Parent Survey in Fall 2014 and Spring 2015
In fall 2014 and spring 2015, we administered parent surveys as part of the FACES 2014 Classroom + Child Outcomes Core study. At both times, parents were asked to complete a 20- to 25-minute survey that included questions about their child and family, as well as about themselves. Parents could complete the survey online (self-administered) or by telephone (interviewer-administered).
In fall 2014, we attempted to conduct the survey with the parents of the 2,462 sampled children who consented for their child to participate in FACES. Of these, 1,909 (77.5 percent) completed the survey. In spring 2015, 2,226 of these children remained in Head Start; of these, 2,206 remained in their sampled Head Start center and therefore were still eligible for FACES.1 We attempted to complete a spring survey with a parent of each of these 2,206 children, whether or not the parent completed the fall survey. Of these, 1,641 (74.4 percent) completed the survey in the spring. Across the two waves, 85.5 percent of the original sample completed a survey in fall or spring, providing basic demographic information commonly used in analysis.2
Some FACES cross-sectional weights require a completed parent survey from the concurrent wave. Estimates using these weights therefore exclude children who lack a parent survey but who may have child outcome data. Dropping these cases potentially biases those estimates if the outcomes for children without parent surveys differ from those for children with parent surveys.
Because the parent response rate was less than 80 percent for a given wave (fall or spring), we analyzed the potential for nonresponse bias (bias that results when respondents differ in meaningful ways from nonrespondents). As response rates decrease, the risk for nonresponse bias increases if nonrespondents would have responded differently from respondents. Our goal was to assess the potential risk for nonresponse bias and whether we could properly account for nonresponse using the FACES 2014 analysis weights, thereby mitigating any significant differences between the Head Start parents who responded and the sample as a whole.
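To see why both conditions matter, consider the standard decomposition of nonresponse bias for a sample mean (a textbook identity, not stated in the original memo):

\mathrm{Bias}(\bar{y}_R) \;=\; \bar{Y}_R - \bar{Y} \;=\; \frac{M}{N}\,\left(\bar{Y}_R - \bar{Y}_M\right),

where N is the number of eligible sampled cases, M is the number of nonrespondents, and \bar{Y}_R and \bar{Y}_M are the means for respondents and nonrespondents. Bias is substantial only when both the nonresponse rate M/N and the difference between respondent and nonrespondent means are large; a response rate in the 70s does not by itself create bias if the two groups are similar.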
This memorandum describes:
Our approach to conducting the nonresponse bias analysis
The results of this analysis
Implications for the users of the Core child-level data
In addition, we have included a summary of these findings, as well as a description of the sample design, in the FACES 2014 User's Manual (Kopack Klein et al. 2016).
Approach to the nonresponse bias analysis. Bias usually cannot be directly measured; in this case, however, we can do so. We have key outcomes (outcome data from the child assessments) for nearly all sampled children, so we can examine what happens to estimates of those outcomes with and without children whose parents completed the parent survey.
In this analysis, we compared estimates of child outcomes for parent survey respondents and nonrespondents and looked for significant differences between the two groups. We then examined whether the child-level nonresponse-adjusted weights mitigated the bias. We did this for the parent survey in the fall and spring by focusing on all sampled and consented children (for program-level characteristics and parent survey contact options obtained from the consent form) and then those sampled and consented children with completed child assessments (for child outcomes).
Our analysis involved three steps:
Identifying key child outcomes, as well as parent survey contact options and program characteristics not obtained from the parent survey
Determining which of these variables had significantly different response profiles for the fall or the spring survey
Looking at whether these differences diminished after applying nonresponse weights
The key child outcomes were:
An indicator of teacher-reported child disability status
An indicator of children’s performance on the English language screener3
PPVT-4 Raw Score (a measure of children’s English receptive vocabulary)
Teacher-reported social skills score
Teacher-reported problem behavior score
The other child characteristics we examined (from sources other than the parent survey) were:
An indicator of child language (primarily from the parental consent form)
Child gender
Child age (as of September 1, 2014, in months)
Child status as newly entering Head Start
The parent survey contact options from the parental consent form were:
An indicator of parent Internet access4
Unlimited cell phone minutes5
Whether the parent agreed to receive text messages6
The program-level characteristics we examined were:
Census region
Funded enrollment
Percentage of enrolled children who are Hispanic/Latino
Percentage of enrolled children who are American Indian or Alaska Native
Percentage of enrolled children who are Black
Percentage of enrolled children who are White
Percentage of enrolled children who have a disability
Whether the program’s zip code is in a metropolitan statistical area
Whether it is a public school program
We then examined whether the values for these 21 variables differed between respondents and nonrespondents to the fall and/or spring parent survey. Among them, five variables for fall and five variables for spring were significantly associated with the probability of responding to the survey at α = 0.05, using the Rao-Scott chi-square test in the SAS SURVEYFREQ procedure. The analysis accounted for the complex sample design, reflecting unequal weights, stratification, and clustering in the variance estimates,7 and used the child base weight (adjusted for all stages of sampling and parental consent). Seven unique variables had significantly different response profiles in the fall, the spring, or both.
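To make the testing step concrete, the following is a minimal sketch of how such a design-based test can be requested in SAS. The dataset and variable names are hypothetical stand-ins, not the actual FACES file layout; the CHISQ option on the TABLES statement requests the Rao-Scott chi-square.

   /* Hedged sketch with hypothetical names: test whether fall parent
      survey response status is associated with a candidate variable,
      accounting for strata, clusters, and unequal base weights. */
   proc surveyfreq data=faces_fall14;
      strata  stratum_id;        /* sampling strata */
      cluster psu_id;            /* primary sampling units */
      weight  child_base_wt;     /* child base weight, adjusted for consent */
      tables  responded*child_language / chisq;  /* Rao-Scott chi-square */
   run;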
Analysis results. Tables 1 and 2 present these significant variables, both before and after weighting adjustments for nonresponse, separately for the fall and spring surveys. For each variable, we show five columns. Column 1 shows the Parent Survey weighted response rate for each category of the variable, column 2 shows the variable’s weighted distribution among all sampled and eligible cases, column 3 shows the weighted distribution among respondents only, and column 4 shows the weighted distribution among respondents after we applied a nonresponse adjustment to one of the child-level base weights. The distributions in columns 1, 2, and 3 use the base weight before this nonresponse adjustment. Column 5 shows the relative bias of the estimate after nonresponse adjustment. This is calculated as the bias (absolute difference between columns 2 and 4), relative to column 2, where the best scenario would be a value of 0. This helps put the size of the difference in perspective.
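In symbols, the column 5 quantity is

\text{relative bias} \;=\; \frac{\lvert \hat{p}_{\text{adj}} - \hat{p}_{\text{full}} \rvert}{\hat{p}_{\text{full}}},

where \hat{p}_{\text{full}} is the base-weighted estimate for all sampled and eligible cases (column 2) and \hat{p}_{\text{adj}} is the nonresponse-adjusted estimate for respondents (column 4). As a check against Table 1: for the "No" category of teacher-reported child disability status, |86.42 − 87.08| / 87.08 = 0.66 / 87.08 ≈ 0.008, which matches the value shown in column 5.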
The tables show how differences in response rates across categories of a variable (column 1) can lead to differences between the variable's distribution for all eligible cases (column 2) and its distribution for respondents only (column 3). Such differences indicate a need for nonresponse-adjusted weights. Ideally, these weights correct for the differential response behavior, so that the adjusted respondent distribution (column 4) closes the gap between columns 2 and 3.
In the fall (Table 1), two of the child characteristics, one of the parent survey contact option questions, and two of the program-level characteristics showed significant differences between respondents and nonrespondents. The base-weighted distributions for these five variables were not markedly different for the 1,909 respondents relative to the entire group of 2,462 parents whose children were sampled and eligible in fall 2014; each category was off by less than 2 percentage points.
In the spring (Table 2), one of the child characteristics, two of the parent survey contact option questions, and two of the program-level characteristics showed significant differences between respondents and nonrespondents. Once again, the base-weighted distributions for these five variables were not markedly different for the 1,641 respondents relative to the entire group of 2,226 parents we examined; each category was off by less than 3 percentage points.
The final step of the analysis examined whether the respondent distribution matched the full population distribution after nonresponse adjustments to the weights. In particular, we compared whether the nonresponse-weighted distribution for respondents (column 4) matched the base-weighted distribution for the full sample (column 2).
Because we did not create weights specifically for this nonresponse bias analysis (that is, weights adjusting only for fall parent survey nonresponse or only for spring parent survey nonresponse), we used the most appropriate existing nonresponse-adjusted weights for column 4. For the fall, we used a weight (P1_RA1WT) that is positive if the child had a fall parent survey and completed either a Teacher Child Report or a child assessment in the fall; it is positive for 1,908 children. Because there was only one child with a completed fall parent survey but neither a Teacher Child Report nor a child assessment in the fall, the number of parents with a positive nonresponse-adjusted weight is off by only one relative to the 1,909 parents treated as respondents in this analysis. For the spring, the corresponding weight allowed the child to have either a fall or a spring parent survey, so it was not appropriate for assessing nonresponse bias associated with missing the spring parent survey alone. Instead, we used a weight (PRA2WT) that is positive if the child had a spring parent survey, a spring Teacher Child Report, and a spring child assessment; it is positive for 1,499 children. There were 142 children who had a completed spring parent survey (and are treated as respondents in this analysis) but who were missing the Teacher Child Report, the child assessment, or both, and who are therefore excluded from column 4. We see no reason to believe, however, that the results would have been markedly different had we constructed a separate weight that allowed us to include these 142 children.
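Although the FACES weights themselves were constructed elsewhere, the following is a minimal sketch of the weighting-class logic that a typical nonresponse adjustment follows: within each adjustment cell, respondents' base weights are inflated by the inverse of the cell's base-weighted response rate, so that respondents' adjusted weights sum to the cell's full-sample weight total. All dataset, variable, and cell definitions here are hypothetical; this is not the procedure actually used to build P1_RA1WT or PRA2WT.

   /* Generic weighting-class nonresponse adjustment (hypothetical names). */
   proc sql;
      create table nr_adjusted as
      select s.*,
             case when s.responded = 1
                  then s.base_wt * c.wt_total / c.wt_resp
                  else 0
             end as nr_wt                              /* adjusted weight */
      from sample as s
           inner join
           (select cell,
                   sum(base_wt)             as wt_total,  /* all eligible */
                   sum(base_wt * responded) as wt_resp    /* respondents  */
            from sample
            group by cell) as c
           on s.cell = c.cell;
   quit;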
In the fall, after adjusting the base weight for nonresponse to the parent survey, the respondent distributions came into complete alignment with the entire sample for the two program-level variables and into closer alignment for the two child characteristics and the parent survey contact option variable (although some differences remained). In the spring, the findings were similar for three of the variables; the other two (whether the parent agreed to receive text messages and whether the Head Start program was a public school program) were slightly further out of alignment after the nonresponse adjustment, but the differences were still quite small (less than 2 percentage points).
Although there is no rule of thumb for how large a relative bias is acceptable, the larger it is, the more caution is merited in analysis. In a modeling context, potential bias due to nonresponse can be mitigated by controlling for any possibly problematic variables in an analysis.
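As a hedged illustration of that point, a design-based regression in SAS could include the flagged variables as covariates. The dataset, outcome, and predictor names below are hypothetical stand-ins (PRA2WT is the spring weight described above).

   /* Sketch: model a child outcome while controlling for the variables
      flagged in Tables 1 and 2 (hypothetical names throughout). */
   proc surveyreg data=faces_spring15;
      strata  stratum_id;
      cluster psu_id;
      weight  pra2wt;                      /* nonresponse-adjusted weight */
      class   child_language public_school;
      model   ppvt_raw = child_language unlimited_minutes
                         pct_white public_school;
   run;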
Implications for use of the child-level data.8 More than three-quarters of the variables we examined did not have significantly different distributions between respondents and nonrespondents, even before nonresponse adjustments to the weights. Among those that did, the nonresponse adjustments generally either resolved or lessened the differences. Furthermore, among the parents of the 2,462 children who were in the study in the fall, 2,105 (85.5 percent) completed at least one of the two surveys; among the parents of the 2,206 children who were in the study in both fall and spring, 1,951 (88.4 percent) did so. This matters because parents who completed the spring survey but not the fall survey were asked the key demographic questions from the fall instrument in the spring. Most spring or program-year weights therefore require that either the fall or the spring parent interview be completed, but not necessarily both. For these reasons, we believe researchers can comfortably make child-level estimates from the FACES 2014 Classroom + Child Outcomes Core study using the appropriate weights.
Reference
Kopack Klein, Ashley, Barbara Lepidus Carlson, Nikki Aikens, Anne Bloomenthal, Jerry West, Lizabeth Malone, Emily Moiduddin, Melissa Hepburn, Sara Skidmore, Sara Bernstein, Annalee Kelly, Felicia Hurwitz, and Grace Lim. “Head Start Family and Child Experiences Survey (FACES 2014) Draft User’s Manual.” Draft report submitted to the U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research and Evaluation. Washington, DC: Mathematica Policy Research, April 2016.
Table 1. Variables found to have statistically significant associations with fall 2014 Parent Survey response
| Variable | Response categories | (1) Fall Parent Survey response rate (percentage) | (2) Unadjusted weighted distribution: all sampled and eligible cases in fall (percentage; n = 2,462) | (3) Unadjusted weighted distribution: fall Parent Survey respondents only (percentage; n = 1,909) | (4) Nonresponse-adjusted weighted distribution: respondents with positive adjusted weights (percentage; n = 1,908) | (5) Relative bias after nonresponse weighting adjustment |
| --- | --- | --- | --- | --- | --- | --- |
| Child characteristics | | | | | | |
| Teacher-reported child disability status | No | 77.61 | 87.08 | 86.14 | 86.42 | 0.008 |
| | Yes | 84.16 | 12.92 | 13.86 | 13.58 | 0.051 |
| Child language | English | 76.96 | 78.19 | 76.79 | 77.45 | 0.009 |
| | Non-English | 83.40 | 21.81 | 23.21 | 22.55 | 0.034 |
| Parent Survey contact options | | | | | | |
| Parent had unlimited cell phone minutes | No | 83.43 | 20.75 | 22.16 | 21.86 | 0.053 |
| | Yes | 76.73 | 79.25 | 77.84 | 78.14 | 0.014 |
| Head Start program-level characteristics | | | | | | |
| Percentage Black | 20 or less | 80.27 | 59.32 | 61.03 | 59.32 | 0.000 |
| | >20 to 60 | 74.69 | 26.29 | 25.17 | 26.29 | 0.000 |
| | More than 60 | 74.79 | 14.40 | 13.80 | 14.40 | 0.000 |
| Percentage White | 50 or less | 75.76 | 52.04 | 50.54 | 52.04 | 0.000 |
| | More than 50 | 80.46 | 47.96 | 49.46 | 47.96 | 0.000 |
Source: FACES fall 2014 child assessment, FACES 2014 parental consent form, or 2013 Head Start Program Information Report.
Note: In column 4, we used a weight (P1_RA1WT) that was positive if the child had a fall Parent Survey and completed either a Teacher Child Report or a Child Assessment in the fall, which is positive for 1,908 children. Because there was only one child for whom we had a completed fall Parent Survey but for whom we had neither the Teacher Child Report nor the Child Assessment in the fall, the number of parents with a positive nonresponse-adjusted weight is off by only one, relative to the 1,909 parents treated as respondents in this nonresponse bias analysis.
Table 2. Variables found to have statistically significant associations with spring 2015 Parent Survey response
| Variable | Response categories | (1) Spring Parent Survey response rate (percentage) | (2) Unadjusted weighted distribution: all sampled and eligible cases in spring (percentage; n = 2,226) | (3) Unadjusted weighted distribution: spring Parent Survey respondents only (percentage; n = 1,641) | (4) Nonresponse-adjusted weighted distribution: respondents with positive adjusted weights (percentage; n = 1,499) | (5) Relative bias after nonresponse weighting adjustment |
| --- | --- | --- | --- | --- | --- | --- |
| Child characteristics | | | | | | |
| Child language | English | 72.21 | 76.83 | 73.92 | 75.75 | 0.014 |
| | Non-English | 84.45 | 23.17 | 26.08 | 24.25 | 0.047 |
| Parent Survey contact options | | | | | | |
| Parent had unlimited cell phone minutes | No | 81.33 | 21.21 | 23.49 | 23.27 | 0.097 |
| | Yes | 71.30 | 78.79 | 76.51 | 76.73 | 0.026 |
| Parent agreed to receive text messages | No | 83.49 | 7.95 | 9.01 | 9.35 | 0.176 |
| | Yes | 72.81 | 92.05 | 90.99 | 90.65 | 0.015 |
| Head Start program-level characteristics | | | | | | |
| Percentage White | 50 or less | 70.94 | 52.52 | 50.81 | 52.52 | 0.000 |
| | More than 50 | 75.96 | 47.48 | 49.19 | 47.48 | 0.000 |
| Public school | No | 72.48 | 89.65 | 88.53 | 87.98 | 0.019 |
| | Yes | 81.31 | 10.35 | 11.47 | 12.02 | 0.161 |
Source: FACES fall 2014 child assessment, FACES 2014 parental consent form, or 2013 Head Start Program Information Report.
Note: In column 4, we used a weight (PRA2WT) that was positive if the child had a spring Parent Survey, a spring Teacher Child Report, and a spring Child Assessment, which is positive for 1,499 children. There were 142 children with a completed spring Parent Survey (included among the respondents in this nonresponse bias analysis) but who were missing the Teacher Child Report, the Child Assessment, or both, and who are therefore excluded from column 4.
1 Although 2,206 children were eligible from an operational standpoint (they remained in the sampled Head Start program), 2,226 were eligible from the perspective of weighting. This means that, although they may have changed Head Start programs between fall and spring, we account for them in the sampling weights because they formed part of the target population at the time of sampling.
2 There were 1,445 parents who completed both the fall and the spring surveys, and there were 2,105 parents who completed at least one of these.
3 This indicator is a sum score of items on the two preLAS subtests of the language screener, with higher scores indicating more correct responses and greater proficiency in English.
4 “Do you have access to a smart phone, laptop, computer or other device that gives you access to the Internet?”
5 “Does your cellular phone plan have unlimited minutes?”
6 “May we send you text messages?”
7 We did not adjust the significance level (α) for multiple comparisons, so we could expect one of these tests to yield false significance purely by chance. In that sense, our nonresponse bias analysis is conservative.
8 The User Manual includes the following summary: “Among the 2,462 parents of consented children at baseline, 1,909 (77.5 percent) completed the fall parent survey. Among the 2,226 parents of children who were still in the sampled Head Start program in spring 2015, 1,641 (74.4 percent) completed the spring parent survey. Given the response rate was less than 80 percent at each wave, we conducted analysis to assess nonresponse bias. As response rates decrease, the risk for nonresponse bias for an estimate increases if nonrespondents would have responded differently from respondents. Bias usually cannot be directly measured; in this case, however, we can do so. We have key outcomes (outcome data from the child assessments) for nearly all sampled children, so we examined what happens to estimates of those outcomes with and without children whose parents completed the parent survey. In this analysis, we compared estimates of child outcomes for parent survey respondents and nonrespondents and looked for significant differences between the two groups. We then examined whether the child-level nonresponse-adjusted weights mitigated the bias. We did this for the parent survey in the fall and spring by focusing on all sampled and consented children (for program-level characteristics and parent survey contact options obtained from the consent form) and then those sampled and consented children with completed child assessments (for child outcomes).
More than three-quarters of the variables we examined did not have significantly different distributions between respondents and nonrespondents, even before nonresponse adjustments to the weights. Among those that did have different distributions, nonresponse adjustments to the weights generally either resolved or lessened those differences that had been significant. Furthermore, among the parents of the 2,462 children who were in the study in the fall, 2,105 (85.5 percent) completed at least one of the two surveys. Among the parents of the 2,206 children who were in the study in both fall and spring, 1,951 (88.4 percent) completed at least one of the two surveys. This is important to note, because those parents who completed the spring survey but did not complete the fall survey were asked key demographic questions from that fall survey instrument in the spring. Therefore, most spring or program-year weights (see Chapter VI) require that either the fall or spring parent interview be completed, but not necessarily both. Because of this, we feel researchers should feel comfortable making child-level estimates from the FACES 2014 Classroom + Child Outcomes Core study using the appropriate weights.”