
Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes








Financing for Early Care and Education Quality and Access for All (F4EQ)


OMB New Information Collection Request





Supporting Statement

Part B



NOVEMBER 2023



Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officer: Paula Daneri, PhD



Part B


B1. Objectives

Study Objectives

The Office of Planning, Research, and Evaluation (OPRE) at the Administration for Children and Families (ACF), within the U.S. Department of Health and Human Services (HHS), proposes to conduct a nationwide descriptive study of coordinated funding in early care and education (ECE), including surveys of Head Start programs and state ECE administrators. The primary objective is to better understand the landscape of Head Start's participation in, and use of, coordinated funding models by (1) identifying common approaches and describing their implementation, (2) identifying the local, state, and federal conditions that affect programs' decision making around coordinated funding and broader ECE systems engagement, (3) exploring potential associations between coordinated funding models, program implementation, and Head Start's engagement with broader ECE systems, and (4) studying state-level approaches to funding coordination. OPRE aims to address these objectives through two nationwide surveys. The program1 survey will be a census of Head Start program directors, inclusive of grantee and delegate programs across all 12 Head Start regions. The state ECE administrator survey will invite three state-level ECE administrators from each of the 50 states and Washington, DC: the state Head Start Collaboration Office Director (HSCO), the lead state pre-k administrator, and the lead Child Care and Development Fund (CCDF) administrator. Both surveys will inform eventual case studies to be completed under a future information collection. Findings will inform future ACF data collections and be used to test hypotheses about coordinated funding models.


Generalizability of Results

This study is intended to produce nationally representative estimates of the extent to which, and the ways in which, Head Start programs use multiple funding streams to support programming. All Head Start programs, inclusive of all grantees and delegates from all 12 regions, will be asked to participate in the program survey; as such, results will represent the universe of Head Start programs. The research team will monitor incoming survey results for representativeness based on agency type, Early Head Start-Child Care Partnership (EHS-CCP) models, Head Start region, and size, prompting participation among under-represented groups along the way. If the sample of survey respondents is not nationally representative in these areas after data collection ends, we will generate and use survey weights to reflect the population of Head Start programs.


The survey of state ECE administrators is not intended to be generalizable and is instead intended to provide state-specific context for understanding responses to the Head Start program director survey. This information will provide context on the state-level policies and structures within which those programs operate and make decisions. The state survey will include three respondents from each of the 50 states and the District of Columbia (DC) but will not include respondents from tribal nations or U.S. territories.


Appropriateness of Study Design and Methods for Planned Uses

The two surveys included in this collection will help ACF achieve the objectives listed under Section B.1 above and develop a nationally representative understanding of Head Start programs’ involvement in coordinated funding models and the individual state policy contexts in which programs make funding decisions. The survey of state-level ECE administrators will ensure that we capture the perspectives of three key roles in each state and DC.


The program survey is intended to be descriptive and is not designed to measure or identify impact in any way. The state survey is intended to provide information about within-state (and DC) contexts. Although we will survey individuals with the same roles across the 50 states and DC through the state-level survey, it is possible that their duties and perspectives could look very different from one state to the next, which would impede the ability to make direct comparisons across states. The key limitations of the study design listed here will be included in all public products associated with this study. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.



B2. Methods and Design

Target Population & Sampling

Program Survey. All Head Start program directors will receive an invitation to take the program survey. We will identify and contact program directors using data from the Head Start Enterprise System (HSES; OMB #0970-0207) and the Program Information Report (PIR; OMB #0970-0427). This survey will be a census of the approximately 1,825 unique Head Start programs. This estimate of the number of programs is based on the number of programs identified for ACF's Study of Disability Services Coordinators and Inclusion in Head Start (OMB #0970-0585), with data from the HSES provided in January 2021. While the population of roughly 1,825 Head Start programs is relatively large, subgroups of interest quickly become very small. In addition, we believe, based on prior project activities (i.e., a review of the literature and key informant interviews), that there may be many different ways programs approach coordinated funding. We currently do not have enough prior evidence to suggest a sampling approach that would capture a nationally representative sample on this topic. We also considered the power necessary to make key subgroup comparisons, as described under Section B7, Data Analysis. Therefore, we plan to conduct a census survey to gain a true national picture of approaches and experiences. In addition, the role of the state context is a critical question this project seeks to answer. Because early childhood funding policies vary from state to state, it is desirable to ensure a sufficient sample from each state to gain an understanding of state-level interplay that a nationally representative sample alone would not provide. Finally, Regions 11 and 12, representing tribal nations and Migrant and Seasonal Head Start (MSHS), respectively, are of particular interest to the Office of Head Start (OHS) and OPRE but have relatively few programs. Thus, a census is required to maintain adequate sample sizes for subgroup analyses.


State Survey. For each U.S. state and DC, the HSCO, the lead state pre-k administrator, and the lead CCDF administrator will receive an invitation to take the state survey, for a total of up to 153 potential state-level respondents. HSCOs will be identified from a directory provided by ACF. The remaining individuals will be identified via a systematic web search of CCDF Plans (to identify CCDF administrators) and the National Institute for Early Education Research's (NIEER) "State of Preschool" documentation (to identify state pre-k administrators). The research team may need to rely on state ECE websites to fill in any gaps. We will not include administrators from U.S. territories or tribal nations in the state survey.2



B3. Design of Data Collection Instruments

Development of Data Collection Instruments

As discussed in Supporting Statement A, Exhibit A2.1: Data Collection Activities, the study instruments include the two surveys and example recruitment materials for Head Start program directors and state-level ECE administrators.


To develop the two surveys, the research team engaged federal staff and experts of Head Start and ECE policy, practice, and research to refine a set of research questions and develop constructs to address those research questions.


The research team identified potential survey items from the following works:

  • National Survey of Early Care and Education 2019 (NSECE; OMB # 0970-0391)

  • Early Childhood Training and Technical Assistance Cross-System Evaluation Project (EC T/TA; OMB # 0970-0356)


The research team created a matrix of research questions and constructs for investigation, mapping each potential survey question from existing sources onto a construct and research question. Upon review, all existing survey questions required some revision and most constructs required new project-generated questions. However, in the cases where survey questions and items were adapted from existing sources, the research team kept note of those revisions and the origin. This survey creation process ensured that each construct was represented and thoroughly investigated in each of the two surveys. Survey items underwent review by experts internal to the research team’s organizations, as well as three experts in the field. See Supporting Statement A, Section A.8 for more information.


Every survey item has also undergone cognitive testing among purposively selected individuals from the target populations. No single item was tested with more than nine respondents; therefore, the testing was not subject to the Paperwork Reduction Act. To reduce the potential for measurement error, items have been optimized to reduce respondent burden and collect only the necessary information. During the development and review process, the research team pared down response options, simplified language, and reduced the overall complexity of individual items, skip patterns, and fills.



B4. Collection of Data and Quality Control

Recruitment Protocol

Every Head Start program director (n ~1,825) and each of the identified ECE administrators (up to n = 153) will receive an email invitation and a hardcopy letter. Both the letters and the emails will explain the purpose of the survey, provide information about the anticipated time commitment and incentives, and share a unique URL to access the survey online. The research team may include a letter of support from the Office of Head Start to encourage participation. We will engage in ongoing weekly outreach via email and add phone follow-ups after the first four weeks to prompt respondents throughout the data collection window. See "Appendix A: Recruitment Materials" for example scripts.


Data Collection

The data will be collected electronically via a survey platform, such as Voxco, by the contractor, NORC at the University of Chicago. Respondents will complete the survey at their convenience.


Quality and Consistency

Survey protocols have been tested via cognitive interviews to ensure items are clearly written and consistently interpreted. Each respondent group will receive the same recruitment materials and survey items. Respondents will be given the option to download a PDF copy of their survey on the introductory page of the online survey platform. This will allow them to preview the questions, collect any helpful documentation, and/or ask colleagues for question-specific input. This option may reduce the total time needed to complete the surveys and may lead to more thorough and accurate data.


To improve data collection quality, the research team will conduct a half-day training for all NORC field staff on administering the phone recruitment scripts and survey instruments. This will ensure consistent, efficient, and culturally responsive data collection.


To ensure survey programming quality and to prevent potential widespread data collection issues, the research team will "soft launch" the surveys. The soft launch will involve administering the surveys to 5% of the program survey's potential respondents and 5% of the state survey's potential respondents. During this period, the research team will perform quality checks to ensure the integrity of the production environment, now using live respondent cases. These checks encompass reviews of the questionnaire, dataset, reporting procedures, and distribution channels. As respondents complete their surveys, the research team will monitor the completed cases to ensure previously tested logic, skip patterns, and response coding are all accurate and working as designed. Cases that begin but stop short of completion will be reviewed to identify any areas of the survey that may need to be addressed to increase completion rates. Any questionnaire or dataset issues identified during the soft launch will be discussed by the research team to ensure that any patch to the code or process, if needed, is appropriately handled prior to full sample release.


Throughout data collection, the research team will monitor questionnaire administration to detect potential technical issues and possible misinterpretation of questions by respondents. In addition to questionnaire functioning, the research team will monitor data collection progress carefully throughout the fielding period to ensure good response rates and representative data. Twice-weekly production reports will show how data collection is progressing, enabling the identification of problem areas and timely remedial action if needed. These reports will also allow the research team to monitor completion rates by sample subgroups in order to detect potential bias in response rates and pursue remedial action as needed. When the research team detects that a subgroup is completing surveys at lower rates, field procedures will be adjusted to boost completion among that group.
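
A minimal sketch of this subgroup monitoring, assuming the tracking data are kept in a file keyed by each respondent's unique ID and merged with PIR-derived subgroup variables; the file name and column names (hs_region, agency_type, completed) are illustrative assumptions, not the study's actual variables.

```python
import pandas as pd

# Hypothetical tracking file: one row per invited program, with
# PIR-derived subgroup variables and a 0/1 completion flag that is
# refreshed as completed surveys come in.
tracking = pd.read_csv("survey_tracking.csv")  # illustrative path

# Completion rates by subgroup for the twice-weekly production report.
report = (
    tracking.groupby(["hs_region", "agency_type"])["completed"]
    .agg(invited="size", completes="sum")
    .assign(completion_rate=lambda d: d["completes"] / d["invited"])
    .sort_values("completion_rate")
)
print(report.to_string())
```

Subgroups appearing at the top of such a report (lowest completion rates) would be the ones flagged for additional outreach.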



B5. Response Rates and Potential Nonresponse Bias

Response Rates

The research team aims to achieve a response rate between 70% and 90% on both surveys. NORC has a proven track record of obtaining similar response rates in prior surveys of Head Start grant recipients through the NSECE (OMB #0970-0391) and the EC T/TA Evaluation (OMB #0970-0356). In a recent data collection effort for the Study of Disability Services Coordinators and Inclusion in Head Start (OMB #0970-0585), conducted in 2022 while programs were recovering from COVID-19, NORC obtained responses from 73% of directors invited to take the director survey. Other recent data collection efforts achieved higher rates on smaller samples. For example, ACF's Early Care and Education Leadership Study (ExCELS; OMB #0970-0582) recently obtained survey response rates between 86% and 96%; however, these rates were obtained within centers that had already agreed to participate (the study ultimately identified 132 participating centers out of more than 3,000 contacted).3


Non-Response

Survey non-response. We will encourage participation through clear and attractive materials and tokens of appreciation (see Supporting Statement A, Section A9), and we will offer the flexibility to complete the survey online at participants' convenience. However, we do anticipate some survey non-response. Each Head Start program director and state administrator invited to participate in the online survey will be assigned a unique ID that will be used to track, in real time, who has responded to the survey. For program survey respondents, we will establish subgroups of interest based on a priori information about Head Start programs available through the PIR. Potential subgroups of interest may be determined by agency type, EHS-CCP models, Head Start region, and size. We will regularly monitor response rates by these factors to identify where additional outreach may be needed to obtain representativeness. In reporting our results, we will calculate non-response rates according to the standards promulgated by the American Association for Public Opinion Research (AAPOR), which define the response rate as the ratio of the number of eligible completed cases to the number of eligible cases. Respondent demographics will be documented and reported in written materials associated with the data collection. If we have disproportionate response rates in key subgroup areas (for example, if respondents were more likely to be from particular ACF regions), we will create and use statistical weights for analysis so that responses are representative of the target population.
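
For illustration, under this AAPOR definition the response rate is simply the ratio of eligible completed cases to eligible cases. Using the census size from Section B2 and the 70% planning assumption from Section B7 (planning figures, not observed results):

\[ RR = \frac{C}{E} = \frac{1{,}278}{1{,}825} \approx 0.70 \]

where \(C\) is the number of eligible completed cases and \(E\) is the number of eligible cases.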


Item non-response. Questionnaires are designed to minimize item non-response based on design work the research team has conducted on other questionnaires, such as the NSECE and the EC T/TA surveys. For example, we reduced the complexity of questions and narrowed their focus to reduce the possibility of respondents skipping questions. In addition, input on survey drafts and cognitive testing prior to administration helped identify questions that were difficult to complete and provided opportunities to reformat them in ways that will increase item response. When the final surveys are in the field, the study team will examine item non-response to identify whether there are patterns of missingness. We will ensure any such patterns are documented clearly as a potential source of bias in the analyses. We may also discuss potential issues of bias with the appropriate populations to gain perspectives on why that bias may exist.





B6. Production of Estimates and Projections

For the program survey, we will produce estimates for official external release by OPRE that are intended to be generalizable to the population of Head Start programs described in Section B1. As our intent is to produce nationally representative estimates for Head Start programs, we will use weights to adjust for nonresponse if needed. As discussed above in Section B5, we will create calibrated weights to increase the precision of our estimates and account for nonresponse. Weights for the program survey will incorporate information on grant recipient characteristics provided through the Head Start PIR and HSES. We will select characteristics that are associated both with nonresponse and with participants' responses to the survey questions. We anticipate that these will include agency type, partnership models (e.g., EHS-CCPs), Head Start region, and size.


Within weighting cells defined by these characteristics, the weighting adjustment factor is computed as the inverse of the weighted response rate in each cell. Use of these weights will enable unbiased estimation of descriptive statistics for the survey variables. Selected data from the information collection will be made available to the public for secondary analysis. Datasets will include the weights as well as sample design variables to allow analysts to produce design-unbiased standard errors for their analyses. Study documentation will describe how these variables can be used with commonly available statistical software to produce valid population estimates.
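
A minimal sketch of this cell-based adjustment, assuming a pandas data frame with one row per invited program, a base weight of 1.0 for every case (as in a census), illustrative weighting-cell variables (agency_type, hs_region), and a 0/1 respondent indicator; the variable names are hypothetical, not the study's actual file layout.

```python
import pandas as pd

def nonresponse_adjusted_weights(df: pd.DataFrame,
                                 cell_vars=("agency_type", "hs_region"),
                                 base_wt="base_weight",
                                 responded="responded") -> pd.Series:
    """Base weight divided by the weighted response rate of the case's
    weighting cell; nonrespondents receive a weight of zero, so respondents
    carry the weight of nonrespondents in the same cell."""
    keys = [df[c] for c in cell_vars]
    # Weighted response rate per cell: respondents' base weights over
    # all eligible cases' base weights.
    num = (df[base_wt] * df[responded]).groupby(keys).transform("sum")
    den = df[base_wt].groupby(keys).transform("sum")
    cell_rr = num / den
    return (df[base_wt] / cell_rr).where(df[responded] == 1, 0.0)

# Toy example: a census starts every case with a base weight of 1.0.
programs = pd.DataFrame({
    "agency_type": ["CAA", "CAA", "School", "School"],
    "hs_region": [5, 5, 5, 5],
    "base_weight": [1.0, 1.0, 1.0, 1.0],
    "responded": [1, 0, 1, 1],
})
programs["analysis_weight"] = nonresponse_adjusted_weights(programs)
```

In this toy example, the responding program in the half-responding cell receives a weight of 2.0, while respondents in the fully responding cell keep weights of 1.0.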


Data Archiving

Survey data collected from Head Start programs via this study will be archived and made available to the public for secondary data analysis. Selected program-level data from the HSES and Head Start PIR administrative data systems will be incorporated into the archived data, to the extent they do not disclose information about the respondents. In addition, state policy information collected through the state survey may be incorporated into the dataset, to allow for other researchers to examine program survey responses within their state context. The research team will implement masking strategies to ensure the privacy of survey participants. We will prepare documentation for each data file, including codebooks and user manuals, which will describe each variable on each data file, methods for accessing each data file, guidance for using the weights, and any editing strategies employed. If sampling weights are needed, datasets will include sampling weights to allow secondary analysts to produce nationally representative estimates for the program survey data. Study documentation will describe how these variables can be used with commonly available statistical software to produce valid population estimates.


We do not plan to base policy decisions on data that are not representative, nor to publish biased population estimates.





B7. Data Handling and Analysis

Data Handling

To ensure the survey will perform well once in the field, the research team will implement testing procedures during the survey programming process, including fielding general scenarios for end-to-end testing of the initial survey programming. Testing general scenarios ensures anticipated real-world experiences will perform as expected. The research team will then move on to a section-by-section test. Section-level testing will be paired with a review of any new content from the final approved questionnaire. During this period, the research team will review the dataset to ensure all approved response coding and technical specifications are adhered to. The research team will then perform another end-to-end test to ensure any changes and fixes made to this point did not break anything else in the programming. If no issues are found, we will perform a smoke test of the production environment using fake sample data. This ensures the questionnaire, its dataset, and its distribution environment all behave as expected. Once the smoke test is successfully completed, we will deem the questionnaire code ready for soft launch. The research team will conduct a soft launch of the survey as described in Section B4 above.


In addition to these quality checks of the questionnaire and dataset environments, the soft launch will also be used to confirm the survey distribution process prior to full field release. The research team will review the initial email distribution to ensure all emails appear to have been received in the intended recipients' inboxes. This process helps minimize any technical issues with email delivery that could prevent survey completion. If all reviews pass their quality check inspection, the full sample will be released to take the survey per the project's schedule. The questionnaire will then be "frozen," with no further editing allowed, to ensure respondents have a uniform experience.


Data Analysis

The data analysis will begin with univariate data inspection, including descriptive measures of distribution (e.g., range, standard deviation), center (e.g., mean, median), and missingness as appropriate. At this time, the research team may also construct new variables, such as scales based on multiple survey items. The research team will then progress to bivariate comparisons and possibly multivariate modeling. For example, this might include examining responses to a particular survey item by program characteristic or conducting a latent profile analysis to detect different approaches to coordinating funds. This may involve merging collected survey data with other existing data sets, such as the PIR or the CCDF Policies Database, to incorporate additional context. Statistical tests for differences in means or distributions, including t-tests and chi-square tests, may be used as appropriate. The research team may also use survey responses to develop regression models to predict indicators of equity. For example, the research team might examine whether programs that use a particular approach to coordinating funds report different capacities to meet the needs of underserved populations.
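
A minimal sketch of these descriptive and bivariate steps, assuming an analysis file that contains the nonresponse-adjusted weight described in Section B6 along with hypothetical item names; the file path and variable names are illustrative, and design-based variance estimation would be handled with dedicated survey software in practice.

```python
import numpy as np
import pandas as pd
from scipy import stats

# Hypothetical analysis file: one row per responding program.
df = pd.read_csv("program_survey_analysis_file.csv")  # illustrative path

# Weighted descriptive statistics for a 5-point Likert item.
w = df["analysis_weight"]
item = df["funding_coordination_ease"]  # hypothetical survey item
weighted_mean = np.average(item, weights=w)
weighted_sd = np.sqrt(np.average((item - weighted_mean) ** 2, weights=w))
print(f"Weighted mean = {weighted_mean:.2f}, weighted SD = {weighted_sd:.2f}")

# Bivariate check: association between a program characteristic and a
# categorical survey response (unweighted chi-square shown for simplicity).
table = pd.crosstab(df["agency_type"], df["uses_layered_funding"])
chi2, p, dof, _ = stats.chi2_contingency(table)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3f}")
```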


Assuming a minimum program survey response rate of 70%, there would be 1,278 respondents.4 For a sample of 1,278, the research team can estimate a predicted level of precision for survey responses. Using a power analysis and assuming a 95% confidence level and a standard deviation of 1.0 for a given mean or proportion, the 1,278-respondent sample would have an estimated two-sided confidence interval width of 0.11, or about one-tenth of a standard deviation. The research team can also estimate detectable effect sizes across different subgroups and survey item arrangements. A common example may be testing responses across Head Start regions on a 5-point Likert scale survey question. Using our sample size of 1,278, a power of 0.80, and a two-sided alpha level of 0.05, we could detect an effect size difference of 0.12 across regions. Any response rate above 70% would provide even more power to detect differences across groups.
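
As a check on the stated precision, the interval width follows directly from the normal approximation, using the same inputs (standard deviation of 1.0, n = 1,278, 95% confidence):

\[ \text{CI width} = 2 \times z_{0.975} \times \frac{\sigma}{\sqrt{n}} = 2 \times 1.96 \times \frac{1}{\sqrt{1{,}278}} \approx 0.11 \]

The 0.12 detectable effect size is likewise consistent with the approximate Cohen's f detectable in a one-way comparison across the 12 Head Start regions at power 0.80 and a two-sided alpha of 0.05 with n = 1,278.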


Analyses will be developed and conducted by the contractor in consultation with the ACF study team, with input from experts and engagement with key constituents in the ECE sector affected by coordinated funding. The study will be pre-registered with OpenScience prior to survey fielding; however, no analysis plan will be publicly posted prior to the start of analysis.


Data Use

Data Tables. A "Data Tables Report" will serve as the primary reference for the information collected. This report will provide estimates from the program and state surveys. It will also provide a description of the study design, methods, analytic approaches, and sampling information. The project may also highlight findings through study briefs. Any briefs resulting from analyses of these data will be published on OPRE's website and disseminated to various audiences by the research team. Topics will be selected based on ACF interest, the research objectives, and feedback gathered through active engagement with the broader field.


Data Archiving. We will archive the data with supporting materials (e.g., codebooks, instruments) so that a wide variety of researchers and stakeholders can access, use, and duplicate any analyses conducted by the project. The codebooks will include data variables, data labels, and response options for each question. The accompanying User Guide will describe each dataset (from the program survey and state survey), explain the weights, and detail the processes for linking datasets if they are designed to be linked. The User Guide will also include: a description of the study design and methods used to collect and analyze data; documentation of study approval (i.e., OMB and IRB) and consent forms; and the survey questionnaires. The data will be stored in the Child and Family Data Archive at the University of Michigan, or another data archive of ACF’s choice.




B8. Contact Persons

Name | Affiliation | Email Address
Paula Daneri, PhD | Office of Planning, Research, and Evaluation at the Administration for Children and Families | Paula.Daneri@acf.hhs.gov
Jacquelyn Gross, PhD | Office of Planning, Research, and Evaluation at the Administration for Children and Families | Jacquelyn.Gross@acf.hhs.gov
Elleanor Eng, MPH | Office of Planning, Research, and Evaluation at the Administration for Children and Families | Elleanor.Eng@acf.hhs.gov
Stacy Loewe, PhD | NORC at the University of Chicago | loewe-stacy@norc.org
Mitch Barrows, MA | NORC at the University of Chicago | barrows-mitchell@norc.org
Cristina Carrazza, PhD | NORC at the University of Chicago | Carrazza-cristina@norc.org
Jill Ghandi, PhD | NORC at the University of Chicago | ghandi-jill@norc.org
Sarah Kabourek, PhD | NORC at the University of Chicago | kabourek-sarah@norc.org
Kelly Pudelek | NORC at the University of Chicago | pudelek-kelly@norc.org
Lekha Venkataraman | NORC at the University of Chicago | venkataraman-lekha@norc.org




Attachments

Instrument 1: Head Start Program Survey

Instrument 2: State Systems Administrator Survey

Appendix A: Recruitment Materials



1 “Program” includes both Head Start grantees and delegate agencies.

2 This decision was made in consultation with OPRE and OHS and based on recent experiences with other OPRE-funded studies. Through these discussions, the research team determined that tribal nations may not consistently have the three types of administrators (HSCO, CCDF, pre-K) that the state-level survey is intended for. The unique nature of each tribal nation would also have implications for disclosure and what data or findings could be shared broadly. The exclusion of tribal nation leaders from the state survey does not preclude the research team from addressing the research questions using just the program-level survey responses from tribal nations.

3 L. Malone, E. Litkowski, B. Eiffes, D. Straske, S. Albanese, Y. Xue, K. Gonzales, R. Gilliard, E. Appel, and G. Kirby. “Early Care and Education Leadership (ExCELS) Data User’s Guide.” OPRE Report 2023-130. Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2023. 

4 As described above in Section B5, the study team anticipates a response rate between 70% and 90% and has calculated burden using the upper limit of that range.

