Alternative Supporting Statement for Information Collections Designed for
Research, Public Health Surveillance, and Program Evaluation Purposes
Survey of Staff Recruitment, Training, and Professional Development in Early Head Start
OMB Information Collection Request
0970-0629
Supporting Statement
Part B
JANUARY 2024
Revised MAY 2024
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officers:
Jenessa Malin
Krystal Bichay-Awadalla
Part B
The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval to collect descriptive information for the Survey of Staff Recruitment, Training, and Professional Development in Early Head Start. The information collection is intended to provide nationally representative information on Early Head Start (EHS) grant recipients’ strategies, successes, and challenges related to recruitment, training, and professional development practices. The objective of the information collection is to describe how EHS grant recipients ensure teachers and home visitors meet or exceed the Head Start Program Performance Standards (HSPPS) qualification and competency requirements1. The research team plans to ask EHS program administrators how they recruit and hire competent and qualified teachers and home visitors as well as support ongoing career development through training and professional development. The information collection is intended to guide program planning, technical assistance, and research.
The research team plans to collect data from a nationally representative sample of EHS grant recipients. The results are intended to be generalizable to the EHS program as a whole, with a few restrictions. EHS grant recipients in U.S. territories are excluded, as are those under the direction of ACF Region XI (American Indian and Alaska Native Head Start), Region XII (Migrant and Seasonal Worker Head Start), and EHS grant recipients under transitional management. These limitations to generalizability will be clearly stated in published results.
The research team selected a nationally representative survey design to understand EHS grant recipients’ practices, successes, and challenges with recruitment, training, and professional development across Head Start regions 1-10. With a survey, the research team intends to ask EHS grant recipients about many of their practices with minimal burden. The resulting data will allow the research team to describe the practices and experiences of EHS grant recipients in recruiting, training, and maintaining their workforce, as well as variation in those experiences based on characteristics of and practices used by the EHS grant recipient. The study’s sample is designed so that the resulting weighted estimates will be unbiased and sufficiently precise, with adequate power to detect relevant differences at the national level. A web-based survey is an appropriate study design to gather descriptive information that can yield nationally representative estimates intended to inform national-level program operations and training and technical assistance efforts.
As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.
The target population is EHS grant recipients, excluding those described in B1 (Generalizability of Results), grant recipients under transitional management, and any grant recipients that do not directly provide services to children and families. Based on administrative data, there are approximately 1,500 EHS grant recipients eligible for inclusion in the study population. In most cases, we anticipate that the respondent will be the program director. In some cases, such as with larger grant recipients, the respondent may be another EHS program administrator within the sampled grant recipient, such as a human resources employee or an education manager.
The research team plans to use grant recipient contact information contained in the most recent and available Office of Head Start (OHS) Program Information Report (PIR)2 as the sampling frame. All EHS grant recipients are required to submit data on their program to OHS. OHS compiles this data into the PIR and updates it at the end of every program year. The research team expects that the PIR data that will be available for sampling will have been last updated in 2023.
The research team plans to use a probability proportional to size (PPS) stratified sample design to select EHS grant recipients for inclusion in the study. For each sampled EHS grant recipient, the research team will collect information from respondents who can provide information on the study’s key constructs. The research team plans for the PPS sample design to include stratification by program type. The program type variable from the PIR data will likely be used to create an explicit stratification in which each eligible EHS grant recipient is assigned to one of four strata: grant recipients that provide 1) center-based services only; 2) home-based services only; 3) both center- and home-based services; or 4) other service types, including family-based services, locally designed options, or various combinations of services. The sample will be allocated and selected separately within each stratum, allowing for more control over how the sample is distributed across program types. Within each stratum, the research team plans to sort the grant recipients by OHS region, which can improve the final sample distribution across OHS regions within program types. The research team selected program type and OHS region for stratification because these characteristics are known to produce significant variation in EHS grant recipients’ practices, successes, and challenges with recruitment, training, and professional development.
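For illustration only, the minimal sketch below draws a systematic PPS sample within each program type stratum after sorting by OHS region. The frame column names (grant_id, program_type, ohs_region, size_measure), the file name, and the allocation figures are hypothetical placeholders and do not represent the study’s production sampling code.

```python
"""Minimal sketch of stratified PPS selection; column names, file name, and
allocation are illustrative placeholders, not the study's actual code."""
import numpy as np
import pandas as pd


def pps_systematic(stratum: pd.DataFrame, n: int, rng: np.random.Generator) -> pd.DataFrame:
    """Draw n units from one stratum with probability proportional to size,
    using systematic PPS selection (certainty units are not handled here)."""
    sizes = stratum["size_measure"].to_numpy(dtype=float)
    cum = np.cumsum(sizes)
    interval = cum[-1] / n                                # selection interval on the size scale
    points = rng.uniform(0, interval) + interval * np.arange(n)
    return stratum.iloc[np.searchsorted(cum, points)]


def select_sample(frame: pd.DataFrame, allocation: dict, seed: int = 2024) -> pd.DataFrame:
    """Sort each stratum by OHS region (implicit stratification), then draw its PPS sample."""
    rng = np.random.default_rng(seed)
    draws = [
        pps_systematic(frame[frame["program_type"] == s].sort_values("ohs_region"), n, rng)
        for s, n in allocation.items()
    ]
    return pd.concat(draws, ignore_index=True)


# Illustrative wave 1 allocation summing to the 600-case target (see Table 1).
frame = pd.read_csv("pir_frame.csv")  # hypothetical PIR-based sampling frame
wave1 = select_sample(frame, {"center_only": 232, "home_only": 56,
                              "center_and_home": 227, "other": 85})
```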
The research team plans to select the PPS sample from the four program type strata in two waves, with the combined goal of completing 600 surveys. The wave 1 sample would be a PPS sample of 600 EHS grant recipients across the four program type strata. A large portion of the responses to an online survey typically arrive shortly after the initial survey invitation. As such, after 5 days there would be enough data from the wave 1 sample release to estimate expected response rates overall and for each of the four program type strata. The size of the wave 2 sample will depend on these estimated wave 1 response rates. For instance, if the research team expects 450 completes from wave 1, the research team would release about 200 more EHS grant recipients in wave 2. The wave 2 sample would be selected 6 days after the wave 1 sample was released, and wave 2 survey invitations would be sent out one week after the wave 1 release. If the research team decides to release a higher proportion of sample from a stratum that has a lower wave 1 response rate, then the wave 2 sample size would be increased so that the goal of 600 completed surveys could still be achieved. If the research team’s initial response rate estimates turn out to be lower than desired, then the research team would consider a small third sample release.
Using the PIR data, the research team plans to determine whether any of the key program types or regions has a lower response rate or is more likely to include unreachable grant recipients. This analysis will be part of the early wave 1 review so that the wave 2 sample can be drawn disproportionately from the four strata, making the final sample more representative of the PIR sampling frame. The two-wave approach therefore helps ensure that the target number of completes is achieved and, because it proactively addresses potential coverage and nonresponse bias, reduces the size and range of the survey weights needed to make the sample nationally representative.
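For illustration, the arithmetic behind the wave 2 sizing decision can be sketched as follows, using the hypothetical figures cited above (600 cases released in wave 1 and roughly 450 expected completes); the actual calculation would use observed response rates overall and by stratum.

```python
"""Illustrative wave 2 sizing arithmetic using the hypothetical figures
from the text; the study would substitute observed early returns."""
TARGET_COMPLETES = 600
wave1_released = 600
expected_wave1_completes = 450          # projected from the first five days of returns

response_rate = expected_wave1_completes / wave1_released   # 0.75
shortfall = TARGET_COMPLETES - expected_wave1_completes     # 150 completes still needed

# Releasing shortfall / response_rate additional cases is expected to close
# the gap: 150 / 0.75 = 200 more grant recipients in wave 2. The same
# calculation can be repeated within each stratum to oversample strata
# with lower wave 1 response rates.
wave2_release = round(shortfall / response_rate)
print(wave2_release)   # -> 200
```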
Table 1 shows the precision for the full population and key subgroups based on the sample design described above. The next-to-last column shows the minimum detectable effect (MDE) for a variable with a 50/50 proportion, a variable with a 90/10 proportion, and a 5-point scale mean with a standard deviation of 0.6. The MDE estimates are shown for the full population and separately for the four key EHS program types. The research team calculated the MDE using an adjusted (effective) sample size that reflects an expected design effect of about 1.2. Because the effective sample size represents a large portion of the overall population, the final column shows the MDE after applying a finite population correction.
Table 1. Minimum detectable effects for the full population and key EHS program type subgroups

Population | Dependent Variable | Eligible Population | Sample Size | Design Effect | Effective Sample Size | Minimum Detectable Effect | Minimum Detectable Effect with FPC**
Full Population | 50/50 proportion | 1529 | 600 | 1.2 | 500 | 5.5% | 4.5%
Full Population | 90/10 proportion | 1529 | 600 | 1.2 | 500 | 3.5% | 2.9%
Full Population | 5-point scale | 1529 | 600 | 1.2 | 500 | 6.7% | 5.5%
Center-Based Only | 50/50 proportion | 596 | 232 | 1.2 | 194 | 8.9% | 7.3%
Center-Based Only | 90/10 proportion | 596 | 232 | 1.2 | 194 | 5.7% | 4.7%
Center-Based Only | 5-point scale | 596 | 232 | 1.2 | 194 | 10.7% | 8.8%
Home-Based Only | 50/50 proportion | 178 | 56 | 1.2 | 46 | 18.0% | 15.5%
Home-Based Only | 90/10 proportion | 178 | 56 | 1.2 | 46 | 12.2% | 10.5%
Home-Based Only | 5-point scale | 178 | 56 | 1.2 | 46 | 21.9% | 18.9%
Center & Home Based | 50/50 proportion | 536 | 227 | 1.2 | 190 | 9.0% | 7.2%
Center & Home Based | 90/10 proportion | 536 | 227 | 1.2 | 190 | 5.8% | 4.7%
Center & Home Based | 5-point scale | 536 | 227 | 1.2 | 190 | 10.8% | 8.7%
Other* | 50/50 proportion | 219 | 85 | 1.2 | 71 | 14.5% | 11.9%
Other* | 90/10 proportion | 219 | 85 | 1.2 | 71 | 9.8% | 8.1%
Other* | 5-point scale | 219 | 85 | 1.2 | 71 | 17.9% | 14.7%
Note: Power calculation assumptions include a significance level (two-tailed test) of 0.05 and power to detect of 0.8.
* Other includes family-based services, locally designed options, and combinations of center-based, home-based, family-based, or locally designed services.
** Minimum detectable effect including the finite population correction.
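For illustration, the sketch below shows the structure of the adjustments behind Table 1: deflating the nominal sample size by the expected design effect and applying the finite population correction to the MDE. The multiplier uses a conventional (z for alpha/2 plus z for power) formulation; the exact constants in Table 1 reflect the research team’s own power-calculation conventions, so this sketch illustrates the adjustment steps rather than reproducing the table entries.

```python
"""Sketch of the design-effect and finite-population adjustments behind
Table 1; illustrative only, not the research team's power-calculation code."""
from math import sqrt
from statistics import NormalDist


def mde_proportion(p, n_nominal, deff, pop_size, alpha=0.05, power=0.80):
    """Return (MDE, MDE with FPC) for an estimated proportion."""
    z = NormalDist()
    multiplier = z.inv_cdf(1 - alpha / 2) + z.inv_cdf(power)
    n_eff = n_nominal / deff                 # e.g., 600 / 1.2 = 500
    se = sqrt(p * (1 - p) / n_eff)
    fpc = sqrt(1 - n_eff / pop_size)         # e.g., sqrt(1 - 500 / 1529)
    return multiplier * se, multiplier * se * fpc


mde, mde_fpc = mde_proportion(p=0.5, n_nominal=600, deff=1.2, pop_size=1529)
print(round(mde, 3), round(mde_fpc, 3))
```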
For subgroups or regions where response rates are falling below the average, the research team plans to conduct targeted follow-up to encourage participation. This additional targeted nonresponse follow-up will reduce the size of the survey weights needed to make the sample representative of all EHS grant recipients and improve the analyses of subgroups that tend to have a lower response rate.
The research team developed three survey data collection protocols for this study (see Instruments 1, 2, and 3), where each one was designed to be appropriate for the respondent’s EHS program option (i.e., home-based only, center-based only, both center- and home-based). The research team plans to distribute the surveys to a randomly selected sample of EHS grant recipients. The survey respondent in most situations will be the EHS program director.
The research team designed these surveys based on items in existing instruments, a review of the literature, and input from EHS experts. The team reviewed existing instruments including, but not limited to: the School District Staffing Survey3, Home Visiting Career Trajectories4 Program Manager Survey, CareerPlug Benchmark, Baby FACES 2018 and 20205 Program Director Surveys, and FACES 20196 Program Director Survey. The research team heavily modified existing survey questions because the existing questions did not measure key constructs in sufficient depth or were not specific enough to EHS. EHS experts reviewed the survey to ensure questions were accessible and not burdensome for respondents. Wherever possible, the instrument uses established items. However, there were few existing measures to capture the constructs of interest and expert consultation supported the development of new items tapping these constructs.
By combining questions from existing surveys and generating new questions, these surveys fill a gap in knowledge by creating a more comprehensive understanding of EHS grant recipients’ teacher and home visitor recruitment, training, and professional development practices. The data collection instruments address the key research questions of the project. Each question in the survey instrument aligns with key constructs relevant to the research questions. The research team developed three protocols that align with respondents’ EHS program options and, as such, with their areas of experience and knowledge.
The research team pre-tested the survey with EHS program administrators and those with infant and toddler education and home visiting expertise. Pre-testers completed the survey and supplied feedback on item wording and clarity7. The research team revised the survey based on their feedback.
The research team may also release a small sample replicate in advance of the wave 1 survey to serve as a second pre-test group. This small sample will allow the research team to receive responses and feedback from EHS grant recipients before releasing the survey to the remaining sampled respondents.
The research team plans to collect most of the data through an online platform. For respondents who request it, the team can collect the data over the phone or through a paper survey. The team plans to send a pre-notification email describing the study to the prospective respondent, followed by a survey invitation that includes a unique URL link to the survey. The team intends to send multiple follow-up email reminders and phone reminders, as needed, for groups where the participation rate is below average. The research team intends to provide information on how and whom respondents can contact if they have any questions about the study. In the first week of data collection, the team plans to monitor incoming responses daily to correct any technological or survey programming issues that may arise. For example, the team plans to assess responses for missing data patterns that could suggest an issue with skip patterns. The team also intends to generate daily response-rate estimates by stratum to plan for wave 2 sampling. The team plans to continue monitoring incoming responses and generating response rates by stratum on a weekly basis until data collection concludes. If the team finds an error in the data (e.g., the respondent provided implausible information or did not see all questions in the survey), the team plans to correct it as soon as possible. The team plans to follow up with respondents to fill in missed questions or correct information. The team created a follow-up protocol that minimizes burden on respondents.
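To illustrate the planned monitoring, the sketch below computes response rates by program type stratum from a hypothetical tracking file; the file name and column names (program_type, completed) are placeholders rather than the study’s actual data structures.

```python
"""Sketch of daily response-rate monitoring by stratum; file and column
names are hypothetical placeholders."""
import pandas as pd

tracking = pd.read_csv("survey_tracking.csv")      # one row per released grant recipient

rates_by_stratum = (
    tracking.groupby("program_type")["completed"]  # 'completed' coded 0/1
    .agg(released="count", completes="sum")
    .assign(response_rate=lambda d: d["completes"] / d["released"])
)
print(rates_by_stratum.sort_values("response_rate"))
```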
The research team expects a relatively high response rate given the length and focus of the survey. Prior surveys of similar populations conducted by OPRE have achieved response rates of 65 percent and above; the team therefore anticipates a similar response rate but has set a conservative expectation of 60 percent or above. The research team intends to calculate overall, within-stratum, and item-level response rates. The research team plans to calculate the response rate as the proportion of responses received relative to the targeted sample size (see Table 1).
The research team anticipates some nonresponse. Using PIR administrative data, the research team plans to conduct a thorough comparison between respondents and the full PIR sample frame. The team plans to create survey weights by post-stratifying on variables that differ between the sample respondents and the full population. The survey weights will align the sample with the full population of EHS grant recipients along key variables, including size, region, and grantee type, and will reduce potential nonresponse bias. Depending on the size and range of the weights, the research team will consider creating replicate weights that researchers can use to estimate variances when using the survey weights.
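A minimal post-stratification sketch follows, assuming hypothetical files and column names (program_type, ohs_region, base_weight); the actual weighting cells and adjustment method (for example, raking across margins) may differ.

```python
"""Sketch of post-stratification to PIR population counts; inputs and
column names are hypothetical placeholders."""
import pandas as pd

frame = pd.read_csv("pir_frame.csv")             # full eligible PIR population
resp = pd.read_csv("survey_respondents.csv")     # respondents with design base weights

cells = ["program_type", "ohs_region"]           # post-stratification cells

# Population count and weighted respondent total in each cell.
pop_n = frame.groupby(cells).size().rename("pop_n")
resp_wsum = resp.groupby(cells)["base_weight"].sum().rename("resp_wsum")

# Post-stratification factor: population count over weighted respondent total.
factor = (pop_n / resp_wsum).rename("ps_factor")

resp = resp.join(factor, on=cells)
resp["final_weight"] = resp["base_weight"] * resp["ps_factor"]
```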
For survey-level nonresponse, the research team plans to conduct statistical modeling to compare nonresponding EHS grant recipients to responding EHS grant recipients on characteristics available in PIR administrative data, including program type, region, number of children served, and agency type. For survey items that have missing data, we plan to conduct missing data analyses to see whether patterns in item nonresponse are related to the same program characteristics.
The research team plans to create survey weights that adjust for the sample design and differential nonresponse. The research team plans to apply survey weights to the survey data to develop target population estimates. The research team plans to use weighted estimates for internal reports as well as in public-facing products, such as briefs. Unweighted estimates may also be used in internal reports, and the research team plans to clearly note this in the text.
The research team plans to use survey weights because they reduce potential bias in the survey estimates. Nonetheless, weighting also introduces design effects that typically increase sampling error relative to what a simple random sample of the same size would yield. The research team plans either to create a set of replicate weights that make it straightforward to obtain accurate sampling errors or to use an average design effect multiplier to adjust sampling error estimates. If the average design effect is greater than 1.1 (greater than 10 percent), then the research team would use the replicate weights approach.
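For illustration, the unequal-weighting component of the average design effect can be approximated with Kish’s formula and compared against the 1.1 threshold described above; the weight column and file names are hypothetical placeholders.

```python
"""Sketch of the design-effect check that informs the replicate-weights
decision; names are hypothetical placeholders."""
import numpy as np
import pandas as pd

resp = pd.read_csv("survey_respondents_weighted.csv")
w = resp["final_weight"].to_numpy(dtype=float)

# Kish approximation to the design effect from unequal weighting:
# deff = n * sum(w^2) / (sum(w))^2
deff = len(w) * np.sum(w ** 2) / np.sum(w) ** 2

# Per the plan above, replicate weights would be created if deff exceeds 1.1;
# otherwise an average design-effect multiplier may be applied instead.
print(round(deff, 3), "use replicate weights" if deff > 1.1 else "use deff multiplier")
```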
The survey archive data file will contain analytic variables created by the research team for analyses, along with detailed documentation of their construction. The research team plans to include in the documentation (user’s guide) illustrations of how to use the weights to obtain correct estimates and how to use the replicate weights (or other procedure) to generate margins of error for percentage estimates.
The research team plans to merge Head Start PIR data with the survey data. As mentioned in B2, the research team plans to use information about all EHS grant recipients, such as program type and OHS region, to sample grant recipients. The research team intends to use the PIR data to correct for instrument-level nonresponse bias in the survey weights by post-stratifying the survey respondents’ PIR data so that the weighted sample aligns with the population of Early Head Start grant recipients represented in the PIR.
The research team will establish data cleaning rules to correct problems with data that are implausible or out of range given the participant’s other answers. These rules could involve top coding or setting outliers to missing. Additionally, the research team plans to assess whether imputation is necessary to adjust for bias due to item nonresponse. The research team plans to use imputation techniques for survey items that respondents did not see and that therefore have missing data. The research team plans to use imputed or corrected estimates in internal reports and in public-facing products, such as briefs. The research team may use unimputed and uncorrected estimates in internal reports, and the research team plans to clearly note this in the text.
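As one illustration of how item nonresponse caused by a skip-pattern error could be addressed, the sketch below applies a simple within-stratum hot-deck fill; the item and column names are hypothetical, and the research team may use different imputation techniques.

```python
"""Illustrative hot-deck imputation within program type for an item missed
because of a skip-pattern error; names are hypothetical placeholders."""
import numpy as np
import pandas as pd

rng = np.random.default_rng(2024)
resp = pd.read_csv("survey_respondents.csv")


def hot_deck(group: pd.Series) -> pd.Series:
    """Fill missing values in one group by drawing from observed donors."""
    donors = group.dropna().to_numpy()
    filled = group.copy()
    missing = filled.isna()
    if len(donors) and missing.any():
        filled.loc[missing] = rng.choice(donors, size=int(missing.sum()))
    return filled


# Impute within program type so donors resemble the cases they stand in for.
resp["item_q12"] = resp.groupby("program_type")["item_q12"].transform(hot_deck)
```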
The research team plans to test the survey instrument once programmed into the online platform to ensure correct programming (see B3 for more information). Additionally, the team plans to use a Qualtrics tool to simulate random respondents and create a test data set to check programming.
The research team plans to include data validation in the web-based data collection tool as a built-in preventive method for mitigating respondent reporting errors. As previously discussed under item B4, during the data collection period the research team will regularly monitor incoming responses. As part of this monitoring process, the research team plans to conduct the following checks: (1) logic checks (e.g., compare responses to the recruitment tracking file to verify that EHS grant recipients’ characteristics are consistent); (2) range and validity checks (e.g., review variable descriptives for out-of-range responses); and (3) missing data checks (e.g., review variable-level missing data patterns). When a team member identifies a problem, the research team plans to discuss, select, implement, and document the solution.
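For illustration, the range/validity and missing-data checks could be scripted as below; the item naming convention and the 10 percent flag threshold are hypothetical choices for the sketch, not the study’s specifications.

```python
"""Sketch of range/validity and missing-data checks on incoming responses;
item names and thresholds are hypothetical placeholders."""
import pandas as pd

resp = pd.read_csv("survey_respondents.csv")

# Range/validity check: 5-point scale items should fall between 1 and 5.
scale_items = [c for c in resp.columns if c.startswith("item_scale_")]
out_of_range = resp[scale_items].apply(lambda col: col.notna() & ~col.between(1, 5)).sum()

# Missing-data check: item-level missingness, flagged above 10 percent.
missing_rates = resp[scale_items].isna().mean()
flagged = missing_rates[missing_rates > 0.10]

print(out_of_range[out_of_range > 0])   # items with out-of-range responses
print(flagged)                          # items with elevated missingness
```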
At the close of the data collection period, the research team plans to summarize the final results of the checks. The team also plans to examine the distribution of all variables to identify potential outliers. The team plans to handle outliers on a case-by-case basis using the proper statistical method based on the type of outlier, variable distribution, and conceptual congruence.
Lastly, all team members complete required research ethics training and will follow all ethical practices for the handling and maintenance of data.
Most survey questions are closed-ended with discrete response options. Therefore, the instruments included in this request will yield data that will primarily be analyzed using quantitative methods. We will carefully link the study’s research questions with the data we collect, constructs we measure, and our analyses.
To answer the research questions detailed in section A2, the research team plans to perform descriptive statistical analyses of closed-ended questions. The research team plans to use software that enables the use of analytic weights in statistical modeling to account for sampling design. For interval and ratio data, the research team plans to calculate means and standard deviations. For nominal and ordinal response options, the research team plans to calculate frequencies. For ordinal response options, the research team also plans to calculate medians and modes. The research team also plans to create cross-tabulations of data to show the weighted differences in the share of respondents engaged in specific strategies by specific grant recipient characteristics (e.g., enrollment, urbanicity, agency type). The research team may perform appropriate inferential analyses (e.g., analysis of variance, regression) to assess correlations.
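To illustrate the planned weighted descriptive analyses, the sketch below computes a weighted mean and standard deviation, weighted frequencies, and a weighted cross-tabulation; the analysis-file and variable names are hypothetical placeholders rather than the study’s actual analysis code.

```python
"""Sketch of weighted descriptive estimates; file and variable names are
hypothetical placeholders."""
import numpy as np
import pandas as pd

df = pd.read_csv("analysis_file.csv")
w = df["final_weight"]

# Weighted mean and standard deviation for an interval-scaled item.
mean = np.average(df["item_scale"], weights=w)
sd = np.sqrt(np.average((df["item_scale"] - mean) ** 2, weights=w))

# Weighted frequencies for a nominal item.
freq = df.groupby("recruitment_strategy")["final_weight"].sum() / w.sum()

# Weighted cross-tabulation: share using each strategy within agency type.
xtab = pd.crosstab(df["agency_type"], df["recruitment_strategy"],
                   values=df["final_weight"], aggfunc="sum", normalize="index")

print(round(mean, 2), round(sd, 2))
print(freq)
print(xtab)
```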
As described in additional detail in section A2, the research team plans to merge the survey data with PIR data. This will allow the research team to examine relationships between selected PIR and survey variables.
Some questions include an “Other” option with a follow-up open-ended response. The research team plans to treat these open-ended responses as qualitative data, review them, and recode them where appropriate, either adding responses to existing codes or creating new codes. The research team plans to carefully document all coding decisions. A team leader will conduct a quality check on a subset of responses coded into existing categories and approve all new categories.
The research team plans to pre-register the study with the Center for Open Science. The team intends to include primary research questions, hypotheses, data sources, and variables in the pre-registration materials. We plan to pre-register the study before starting the analysis.
The research team plans to write a methodology report summarizing the technical approach. The report may include (1) background information about the study, including the research questions; (2) an overview of the data collection procedures, data collection instruments, and measures; (3) information about the sample design; and (4) the number of study participants, response rates, and weighting procedures. The research team plans to make this methodology report publicly available.
The research team plans to summarize survey findings in written report(s) and presentations. The research team plans to create a report for internal use that can inform ACF in its program management and improvement efforts. This report may contain weighted and unweighted estimates.
The research team also plans to create externally facing products targeted to federal staff, training and technical assistance providers, and Early Head Start grant recipients. Public products will only include weighted estimates. These products will contain a subset of variables. In these products, the research team plans to present national descriptions of Early Head Start hiring and staffing practices and to document variations in practices.
Urban Institute and MEF Associates are conducting this project under Contract No. HHSP233201500064I, Task Order No. 75P00120F37027. To complement the contracted research team’s knowledge and experience, we also consulted with a technical working group of outside experts, as described in Section A8 of Supporting Statement Part A.
Diane Schilder, Ed.D
Senior Fellow
Urban Institute
Timothy Triplett, Ph.D
Senior Fellow
Urban Institute
Rebecca Berger, Ph.D
Senior Research Associate
Urban Institute
Jenessa Malin, Ph.D.
Federal Project Officer
Senior Social Science Research Analyst
Office of Planning, Research, and Evaluation
Krystal Bichay-Awadalla, Ph.D.
Federal Project Officer
Social Science Research Analyst
Office of Planning, Research, and Evaluation
Attachments
Instrument 1: Survey of Staff Recruitment, Training, and Professional Development in Early Head Start - Center-Based Only
Instrument 2: Survey of Staff Recruitment, Training, and Professional Development in Early Head Start - Home-Based Only
Instrument 3: Survey of Staff Recruitment, Training, and Professional Development in Early Head Start - Center-Based and Home-Based
Attachment A: Grant Recipient Outreach Email Scripts
Attachment B: Grant Recipient Outreach Call Script
Attachment C: Survey Pre-Notice
Attachment D: Instrument Preamble (Informed Consent)
Attachment E: Grant Recipient Follow-Up Email and Phone Scripts
1 OMB #: 0970-0148
2 OMB #: 0970-0427
3 OMB #1850-0598
4 OMB # 0970-0512
5 OMB # 0970-0354
6 OMB # 0970-0151
7 Fewer than 10 pre-testers participated in these activities.