Variations in Implementation of Quality Interventions (VIQI)
OMB Information Collection Request
New Collection
Supporting Statement
Part B
Original Submission: November 2017
Updated as of April 2018
Submitted By:
Office of Planning, Research and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officers:
Ivelisse Martinez-Beck
Amy Madigan
B1. Respondent Universe and Sampling Methods
Although the results of this study are designed to generalize to the center-classroom-child combinations eligible for the study, they will not be statistically representative of broader populations of children, classrooms, or centers. Convenience sampling methods and qualitative judgment are necessary to ensure both diversity and feasibility.
Sampling.
As discussed in Supporting Statement A, we plan to recruit 165 centers that are spread across 7 metropolitan areas in the United States for the Impact Evaluation and Process Study. We plan to recruit an additional 40 centers for the Pilot Study, likely in about three of the metropolitan areas used in the Impact Evaluation and Process Study. The same screening and recruitment approach will be used for the Pilot Study and the Impact Evaluation and Process Study.
To identify metropolitan areas where participating centers will be located for either the Pilot Study or the Impact Evaluation and Process Study, the study team will seek to gather information from state and local stakeholders, such as ECE program administrators, local leaders in ECE, and local ECE practitioners, to identify particular metropolitan areas that could be good fits for the VIQI project. The study team will use a purposeful, snowball selection strategy to determine which informants to engage and when. Informants will be selected in an iterative fashion based on their expertise and the study’s need for information from different sources and localities. The team will engage informants in waves, returning to the list of potential informants to determine which individuals with relevant expertise should be engaged next, given the study team’s remaining gaps in knowledge. We anticipate meeting with up to 120 state and local informants over the period covered under this package.
To screen and recruit centers for either the Pilot Study or the Impact Evaluation and Process Study within the metropolitan areas identified for the VIQI project, the study team will reach out to key informants at local administrative entities that are connected to large numbers of Head Start and community-based child care centers, such as Head Start grantee or delegate agencies that receive funding directly from the Office of Head Start or community-based child care programs that operate or oversee multiple child care centers. The study team will use a purposeful, snowball selection strategy to determine which informants to engage and when. Informants will be selected in an iterative fashion based on their expertise and the study’s need for information about different ECE programs and centers. The team will engage informants in waves, returning to the list of potential informants to determine which individuals with relevant expertise should be engaged next, given the study team’s remaining gaps in knowledge. These discussions will occur by phone and in person. We anticipate meeting with 132 staff in Head Start grantee and community-based child care oversight agencies by phone over the period covered under this package. We anticipate that in-person discussions will occur with large groups of staff across multiple Head Start grantees and other community-based child care programs and centers in a given metropolitan area, for a total of 610 staff participating in those group meetings over the period covered under this package.
Upon obtaining initial screening and eligibility information, the study team will then refine and narrow the list of prospective programs and centers and begin outreach to key informants at the program level (and at individual centers when appropriate). The study team will continue to gather information to further refine and narrow the list of potential programs and centers in an iterative fashion, taking stock of what has been learned to guide the next set of conversations and contacts. Doing so allows us to assess the extent to which the combination of programs and centers on the list of potential candidates provides a distribution of center and classroom characteristics with sufficient power to investigate the guiding research questions for the different phases of the VIQI project. The conversations will occur through a combination of phone calls and in-person meetings. In total, we expect to meet by phone with 336 staff from Head Start centers and community-based child care centers over the period covered under this package. We anticipate meeting in person with 950 staff from Head Start centers and community-based child care centers over the period covered under this package.
Prior to the Impact Evaluation and Process Study, we will conduct a Pilot Study. Our sample for the Pilot Study will include about 40 centers that serve 3- and 4-year-olds in about three metropolitan areas (likely three of the 7 metropolitan areas), with up to three classrooms selected per center (for a total of about 120 classrooms). Within these centers and the participating classrooms, we plan to identify administrators and lead and assistant teachers who will be asked to participate in baseline, follow-up, and implementation fidelity instrument data collection activities. The targeted participants for these activities will be up to one administrator per participating center, all lead and assistant teachers in participating classrooms, and all coaches serving the participating centers. We anticipate up to 48 administrators across the participating centers (assuming one per center, with some additional administrators added to the group of participants when turnover occurs). We also expect up to 150 lead teachers and 150 assistant teachers across the participating centers (assuming one lead teacher and one assistant teacher per classroom, with some additional lead and assistant teachers added when turnover occurs). We expect up to 22 coaches across the participating centers; we assume 11 coaches will provide support for the installation of the interventions in centers and classrooms assigned to one of the intervention conditions, and 11 coaches will provide support to control group centers and classrooms, with some additional coaches added when turnover occurs. (Note: The total number of respondents in Exhibit 5 in Supporting Statement A represents the total respondents across both the Pilot Study and the Impact Evaluation and Process Study.) Additionally, we expect to identify and recruit a select group of children being served in these classrooms to potentially participate in direct assessments. To obtain this group of participating children, we anticipate asking the parents/guardians of all children in these classrooms to provide consent and complete a baseline information form. The information gathered on these forms will be used to identify a list of candidate families and children who are open to participating in the Pilot Study and who vary in their demographic characteristics, with a particular interest in including children from different racial, ethnic, immigrant, and socioeconomic backgrounds. We expect 1,620 parents/guardians of children being served in the centers to be asked a set of baseline information questions, and from this group we expect to identify and select about 4 children per classroom to participate in the study (a group of 480 children to be asked to participate in data collection activities for the Pilot Study).
Our selection plan for the Impact Evaluation and Process Study involves recruiting about 165 centers that serve 3- and 4-year-olds in 7 metropolitan areas (an average of about 24 centers per locality). Across the 165 centers, our selection plan assumes an average of 3 classrooms per center (495 classrooms). Within these centers and the participating classrooms, we plan to identify administrators and lead and assistant teachers who will be asked to participate in baseline, follow-up, and implementation fidelity instrument data collection activities. The targeted participants for these activities will be up to one administrator per participating center, all lead and assistant teachers in participating classrooms, and all coaches serving the participating centers. In line with this, we anticipate up to 198 administrators across the participating centers (assuming one per center, with some additional administrators added to the group of participants when turnover occurs). We also expect 619 lead teachers and 619 assistant teachers across the participating centers (assuming one lead teacher and one assistant teacher per classroom, with some additional lead and assistant teachers added when turnover occurs). We expect 208 coaches across the participating centers (assuming 111 coaches providing support for the installation of the interventions in centers and classrooms assigned to one of the intervention conditions and 55 coaches providing support to control group centers and classrooms, with some additional coaches added when turnover occurs). For participating classrooms in the Impact Evaluation and Process Study, we also expect to identify and recruit a group of children being served in these classrooms to participate in direct assessments. To obtain this group of children, we anticipate asking the parents/guardians of all children in these classrooms to provide consent and complete a baseline information form. The information gathered on these forms will be used to identify a list of candidate families and children who are open to participating in the Impact Evaluation and Process Study and who meet the selection criteria. We expect 6,948 parents/guardians of children being served in the centers to be asked a set of baseline information questions, and from this group we expect to identify about 4 children per classroom to participate in the study (a sample of 1,980 children to be asked to participate in data collection activities for the Impact Evaluation).
The recruitment, screening, and selection activities will aim to generate a group of 3-year-old children in classrooms participating in the Impact Evaluation and Process Study who complete the baseline and follow-up direct assessments of children’s skills and whose teachers complete reports on their social and behavioral skills at follow-up. We will aim to achieve a group of children with sufficient variation in their background characteristics (e.g., family income [at or below the federal poverty level and below 200% of the federal poverty level], race/ethnicity [e.g., White, Black, Hispanic], parent’s level of education [e.g., at least a high school diploma], and dual language learner background [e.g., learning English as a second language]), so that the selected group provides sufficient power to detect impacts of the interventions and to explore the relationship of quality to child outcomes for subgroups defined by these characteristics of interest.
Participation in all of these data collection activities will be voluntary.
Statistical Power. For the Impact Evaluation and Process Study, centers will be randomly assigned to one of three groups: a group that receives Intervention 1 (Group 1), a group that receives Intervention 2 (Group 2), or a group that continues to conduct “business as usual” (Control). There will be on average 3 classrooms per center participating in the study. An equal number of centers will be assigned to each group (55 centers per group in the Impact Evaluation and Process Study). Each of the two interventions will target a different dimension of classroom quality (structural/process quality or instructional quality). Random assignment to one of the three groups will be blocked by metro area, and possibly by baseline quality (low and high) and by setting (community-based, Head Start). The blocks used for random assignment will be defined so as to maximize the precision gained from blocking; the definition of each random assignment stratum will be informed by the results of the Pilot Study. At minimum, there will be three centers per random assignment block, because the study design has 3 groups.
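To illustrate the mechanics of the blocked random assignment described above, the sketch below shows one way centers could be allocated to the three groups within blocks. It is a minimal illustration under assumed inputs; the center records, block definitions, and balancing rule shown here are placeholders, and the actual randomization procedure and block definitions will be finalized after the Pilot Study.

import random
from collections import defaultdict

GROUPS = ["Intervention 1", "Intervention 2", "Control"]

def blocked_random_assignment(centers, seed=12345):
    """Randomly assign centers to the three study groups within blocks.

    `centers` is a list of dicts with placeholder keys, e.g.
    {"id": "center_001", "metro": "Metro A", "quality": "low", "setting": "Head Start"}.
    """
    rng = random.Random(seed)
    blocks = defaultdict(list)
    for center in centers:
        # Blocks defined by metro area, baseline quality, and setting (placeholders).
        blocks[(center["metro"], center["quality"], center["setting"])].append(center)

    assignments = {}
    for members in blocks.values():
        rng.shuffle(members)                 # random order within the block
        offset = rng.randrange(len(GROUPS))  # random starting group for each block
        for i, center in enumerate(members):
            # Rotate through the three groups so each block contributes
            # approximately equally to each experimental condition.
            assignments[center["id"]] = GROUPS[(offset + i) % len(GROUPS)]
    return assignments

Applied to 165 centers, a procedure of this kind yields roughly 55 centers per group, with each block contributing centers to all three conditions whenever it has at least three members.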
The Impact Evaluation design will be used to estimate the following quantities of interest: (1) the impact of each intervention on classroom quality and child outcomes; (2) the impact of both interventions pooled together on classroom quality and child outcomes; (3) the effect of global quality on child outcomes; and (4) the effect of each targeted dimension of quality on child outcomes. These analyses will be conducted for all centers in the Impact Evaluation and for subgroups of interest (e.g., Head Start and community-based care; centers with low and high baseline quality).
The remainder of this section discusses the minimum detectable effect size (MDES) for each type of effect in the Impact Evaluation and Process Study. The MDES is the smallest true impact (scaled as an effect size) that can be detected with a reasonable degree of power (in this case, 80 percent) for a given level of statistical significance. For example, if the MDES is 0.15, the true impact would need to be at least 0.15 standard deviations for the study to have at least an 80 percent chance of detecting a statistically significant effect. For the Impact Evaluation and Process Study, we will use a 10 percent significance level (two-tailed test), which has been used in prior studies to identify effects that are of both practical and policy significance (e.g., HS CARES [0970-0364]).
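For reference, a standard approximation (not necessarily the exact formula used for the calculations reported below) expresses the MDES as a multiple of the standard error of the estimated effect size: MDES ≈ M × SE(effect size estimate), where M ≈ z(1−α/2) + z(1−β) ≈ 1.645 + 0.84 ≈ 2.49 for a 10 percent significance level (two-tailed) and 80 percent power. The MDES values reported below additionally reflect the clustering of classrooms within centers and of children within classrooms, as well as the explanatory power of baseline covariates, per the assumptions listed in the exhibit notes.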
For the Impact Evaluation, the impact analyses will be based on the study classrooms with quality data at follow-up and the sampled children with outcomes data at follow-up. (Missing data on baseline characteristics or outcomes used as covariates will be imputed using an appropriate method.) Therefore, the MDES calculations presented in this section account for nonresponse at follow-up for the classroom or child outcomes.
Intervention Effects on Quality and Child Outcomes (Impact Evaluation and Process Study). In the Impact Evaluation, intervention effects will be estimated by comparing the classroom and child outcomes of centers in each of the two groups receiving an intervention with those of the control group. For the purposes of powering the study, we assume that the VIQI study should be able to statistically detect effects on child-level outcomes that are small in magnitude (e.g., between 0.15 and 0.20 standard deviations) and effects on classroom-level outcomes that are moderate in magnitude (e.g., between 0.40 and 0.60 standard deviations). These ranges are based on prior studies showing that successful interventions typically achieve impacts of this magnitude and that an intervention’s effects on classroom-level outcomes are typically 3 to 4 times larger than its effects on children’s outcomes.
Exhibit B.1 shows the MDES for the impact of both interventions pooled together (top panel) and for each intervention separately (bottom panel). MDESs are presented for the full group of participating centers, as well as for subgroups consisting of 50% and 33% of centers (the latter represents a situation where two subgroups would not be evenly split). For classroom quality, MDESs are shown for measures of process quality and instructional quality based on classroom observations. For children’s outcomes, MDESs are shown for outcomes measured using a direct assessment, because we expect our primary child outcomes to be based on direct assessments (as opposed to teacher reports).
As shown in this table, the Impact Evaluation will be well powered to statistically detect pooled intervention effects in the target range, for the full group of participating centers and classrooms as well as for subgroups consisting of 50% and 33% of these groups. The study will also be able to detect effects in the target range for each intervention separately, for the full sample and for a subgroup of 50% of centers. For child outcomes, it may not be possible to detect intervention-specific effects in the target range for a 33% subgroup of centers. For this reason, analyses based on a 33% subgroup will be considered more of an exploratory analysis.
Exhibit B.1. MDES for the Impact of the Intervention(s) on Quality and Child Outcomes
|                                             | Classroom Quality: Instructional quality | Classroom Quality: Process quality | Children’s Outcomes: Direct assessments |
| Both Interventions Combined (Pooled Effect) |       |       |       |
| Full Sample                                 | 0.254 | 0.220 | 0.109 |
| 50% of Sample                               | 0.362 | 0.314 | 0.156 |
| 33% of Sample                               | 0.447 | 0.387 | 0.192 |
| Effect of Each Intervention                 |       |       |       |
| Full Sample                                 | 0.295 | 0.255 | 0.127 |
| 50% of Sample                               | 0.422 | 0.365 | 0.181 |
| 33% of Sample                               | 0.523 | 0.453 | 0.225 |
Note: These calculations assume 80 percent power and a 10 percent significance level (two-tailed test). They assume 165 centers, 3 classrooms per center, 4 children sampled per classroom, and 110 intervention centers (55 per experimental group). It is assumed that follow-up data will be available for all classrooms (3 per center on average) and for 83.3% of children (3.3 children per classroom on average). Assumptions about the intraclass correlations and the variance explained by the baseline covariates are based on data from other studies. For instructional quality, we assume an intraclass correlation of .18, a between-center variance explained of 0.40, and a between-classroom variance explained of 0.01. For process quality, we assume an intraclass correlation of .14, a between-center variance explained of 0.97, and a between-classroom variance explained of 0.03. For child outcomes, we assume intraclass correlations of .11 between centers and .01 between classrooms, a between-center variance explained of 0.98, a between-classroom variance explained of 0.27, and a within-classroom variance explained of 0.27.
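To make the assumptions in the note above concrete, the sketch below illustrates the type of calculation that produces MDES values like those in Exhibit B.1, using a standard multilevel MDES formula (centers randomized, classrooms nested within centers, children nested within classrooms) and a normal approximation to the power multiplier. It is an illustrative reconstruction, not necessarily the exact procedure used to produce the exhibit, and small discrepancies (e.g., from degrees-of-freedom adjustments) are expected.

from math import sqrt

# Design assumptions from the Exhibit B.1 note (pooled interventions vs. control).
K = 165            # centers
J = 3              # classrooms per center with follow-up data
n = 3.3            # children per classroom with follow-up data (83.3% of 4)
P = 110 / 165      # proportion of centers assigned to an intervention group
M = 1.645 + 0.842  # approximate multiplier for 80% power, 10% significance (two-tailed)

def mdes_child(icc_center, icc_class, r2_center, r2_class, r2_within):
    """Approximate MDES for a child-level outcome (three-level design)."""
    var = (icc_center * (1 - r2_center) / K
           + icc_class * (1 - r2_class) / (K * J)
           + (1 - icc_center - icc_class) * (1 - r2_within) / (K * J * n))
    return M * sqrt(var / (P * (1 - P)))

def mdes_classroom(icc_center, r2_center, r2_class):
    """Approximate MDES for a classroom-level outcome (two-level design)."""
    var = (icc_center * (1 - r2_center) / K
           + (1 - icc_center) * (1 - r2_class) / (K * J))
    return M * sqrt(var / (P * (1 - P)))

print(round(mdes_classroom(0.18, 0.40, 0.01), 3))          # ~0.253 (Exhibit B.1: 0.254)
print(round(mdes_classroom(0.14, 0.97, 0.03), 3))          # ~0.218 (Exhibit B.1: 0.220)
print(round(mdes_child(0.11, 0.01, 0.98, 0.27, 0.27), 3))  # ~0.108 (Exhibit B.1: 0.109)

The MDESs for each intervention separately (bottom panel of Exhibit B.1) reflect the same type of calculation applied to the smaller comparison of one intervention group with the control group.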
Effects of Quality on Child Outcomes (Impact Evaluation and Process Study). As explained in Supporting Statement A.16, the VIQI study will examine the effect of global quality and of each targeted dimension of quality (structural/process quality and instructional quality) on child outcomes, using an instrumental variables approach.
In terms of what size of effect the study should be able to detect, a reasonable rule of thumb is that the MDES for the effect of quality on child outcomes should be about 0.25-0.33. This rule of thumb is based on previous studies showing that the effect of an intervention on child outcomes is typically 25-33% of the size of its effect on classroom quality.
Exhibit B.2 shows the range of MDESs for the effect of quality on child outcomes. The MDES for the effect of quality is approximately equal to the MDES for the effect of the interventions on children (as shown in Exhibit B.1) divided by the impact of the intervention(s) on quality. In Exhibit B.2, we show the MDES based on the assumption that the impact of the interventions on global quality or their targeted dimension of quality will be in the target range and therefore moderately sized (effect size = 0.40 to 0.60), and that the interventions will not have an effect (or only a very small effect) on the dimension of quality that they do not target.
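For example, using the full-sample values, the MDES for the effect of the interventions on child outcomes is 0.109 (Exhibit B.1); if the pooled impact of the interventions on global quality is 0.60, the implied MDES for the effect of global quality on child outcomes is approximately 0.109 / 0.60 ≈ 0.18, and if the impact on global quality is 0.40, it is approximately 0.109 / 0.40 ≈ 0.27, matching the first row of Exhibit B.2.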
As shown in Exhibit B.2, the VIQI study will be adequately powered to detect global quality effects in the target range (0.25-0.33) for the full group of participating centers and for a 50% subgroup of centers. However, the study may not be able to detect global quality effects in the expected range for a 33% subgroup. The study will also be able to statistically detect effects in the expected range for each targeted dimension of quality, for the full group of participating centers. If intervention effects are in the upper range (0.60), the VIQI study may also be able to detect dimension-specific effects for a 50% subgroup, though not for a 33% subgroup. For this reason, analyses based on a 33% subgroup will be considered more of an exploratory analysis.
Exhibit B.2. MDES for the Effect of Quality on Child Outcomes
|              | MDES for the effect of global quality |      | MDES for the effect of each dimension of quality (structural/process and instructional) |      |
| Sample       | If the pooled effect of the interventions on global quality is 0.60 | If the pooled effect is 0.40 | If the effect of each intervention on its targeted quality dimension is 0.60 | If the effect is 0.40 |
| Full sample  | 0.18 | 0.27 | 0.21 | 0.32 |
| 50% subgroup | 0.26 | 0.39 | 0.30 | 0.45 |
| 33% subgroup | 0.32 | 0.48 | 0.37 | 0.56 |
Thresholds in the Effects of Quality on Child Outcomes (Impact Evaluation and Process Study). As explained in Supporting Statement A.16, the VIQI study will also explore whether the effect of quality on children’s outcomes is nonlinear, by comparing the effect of quality on children across subgroups of centers defined by their baseline quality. Differences in the effect of quality across subgroups are harder to statistically detect, because the comparison of effects across subgroups introduces additional uncertainty (above and beyond the uncertainty associated with each subgroup effect). For this reason, in the VIQI study, it will only be possible to statistically detect a sharp nonlinearity. Specifically, the effect of quality on outcomes would need to be 0.42 larger for one subgroup than for the other to conclude that the difference between them is statistically significant. As noted earlier, the effect of quality on children is expected to be about 0.25 to 0.33, so to statistically detect a difference of 0.42, the effect for one subgroup would have to be larger than the expected range (e.g., if one subgroup’s effect is 0.50, the other subgroup’s effect would have to be 0.08 in magnitude). For this reason, the analysis of nonlinearity will be considered exploratory.
Process Study. For the Process Study, we will conduct analyses of fidelity of implementation that will be purely descriptive (means, standard deviations, correlations); comparisons between the experimental groups in the design will be descriptive. We will also measure the extent to which there are treatment differentials between the experimental groups in the curricular models used and in the use of teacher practices targeted by the interventions, and we will conduct hypothesis tests for the differences between groups (which will be exploratory and used to inform the findings from the Impact Evaluation). Treatment differentials will be measured using the teacher logs, which will be completed by teachers in all experimental conditions, as well as fidelity observations. One of the goals of the Pilot Study will be to determine how to create measures from the logs that are reliable and valid. Once these measures have been developed (after the Pilot Study) and more information is available about their properties, the MDES for the service contrast will be examined. Because measures of the service contrast are more proximal (and effects of larger magnitude are expected), we expect the study to be well powered to detect them.
Pilot Study. For the Pilot Study, each participating center (up to 40) will be randomly assigned to one of three groups: a group that receives Intervention 1 (Group 1), a group that receives Intervention 2 (Group 2), or a group that will continue to conduct “business as usual” (Control). There will be up to 3 classrooms per center in the study. Centers will be randomly assigned such that 30 centers will install one of the targeted interventions (evenly split across the two interventions) and 10 centers will be in a business-as-usual control condition. Intervention 1 will target structural/process quality, while Intervention 2 will target instructional quality. Although the sample size is small, the VIQI study will attempt to achieve equal representation across baseline quality (low and high) and setting (community-based, Head Start) in the different intervention groups. Random assignment to the 3 groups will be blocked by metro area, and possibly by setting (community-based, Head Start), to ensure that each intervention is implemented and can be studied across different contexts. As noted earlier, data from the Pilot Study will be used to inform and refine the definition of blocks for the Impact Evaluation and Process Study.
One goal of the Pilot Study is to explore the likelihood that each intervention has the potential to achieve effects on quality that are sufficiently large to meet the goals of the Impact Evaluation and Process Study. This goal will be informed by a descriptive analysis of changes over time and differences between groups with respect to different dimensions of quality. This information will be taken into consideration, along with information about the implementation of the interventions, the experiences of centers and teachers, and the challenges and barriers that may have inhibited implementation of the interventions with fidelity, when planning for the impact evaluation. These analyses in the Pilot Study will be considered exploratory and descriptive. Given the number of participating centers and classrooms, the Pilot Study will not be adequately powered to definitively answer the questions underlying the Impact Evaluation and Process Study, particularly those related to rigorously testing the impacts of each of the interventions and to understanding the nature of the quality-child outcome relationships that are central to the VIQI project. The group of participating centers for the Pilot Study will also not be sufficient to reliably estimate differences or changes in different dimensions of quality for subgroups of centers or classrooms, so any subgroup analyses conducted will be considered exploratory as well. Therefore, MDESs for the statistical tests and analyses conducted during the Pilot Study are not shown.
B2. Procedures for Collection of Information
This section focuses on procedures for data collection activities in the Pilot Study, Impact Evaluation, and Process Study. The strategies used to collect this information aim to minimize burden and disruption to participants and typical activities in centers.
Data Collected from Screening and Recruitment Instruments (Attachments A.1-A.3)
For the Screening and Recruitment Instruments, regional and local ECE informants, many of whom are expected to be lead staff at Head Start grantees or community-based child care programs that operate or oversee multiple child care centers, will be asked to participate in small-group (or one-on-one) discussions. We expect these informants to be individuals who can provide detailed information about the extent to which the landscape of ECE programming and individual programs and centers align with the study’s screening and sampling criteria. Because the landscape of ECE programming and the availability of administrative data sources that can inform these characteristics vary across metropolitan areas, we believe the most efficient way to gather these data is through small-group (or one-on-one) semi-structured discussions guided by protocols. This approach allows the study team to tailor the questions to the locality or program and to ask follow-up questions depending upon what is known from existing data sources and what gaps remain in our understanding of ECE programming in a given locality, with the goal of efficiently collecting the information needed to apply the study’s screening and sampling criteria.
Each facilitator team (a pair of two study team members) will make initial e-mail contacts, secure informant participation, and conduct the tailored, semi-structured phone or in-person discussions. The team will draw upon its prior experience gathering information to inform the early design of the Pilot Study, Impact Evaluation, and Process Study of the VIQI project. In addition, all staff involved in this data collection will receive training to ensure that informants are engaged in a consistent manner.
The remainder of this section describes the facilitator teams’ procedures for contacting informants.
Informants will be selected and contacted in waves, so that the facilitator teams can use the information obtained in previous waves to refine their selection of subsequent informants on an ongoing basis. We envision a combination of phone and in-person discussions with regional and local ECE informants. Prior to any in-person visits, the decision to conduct a discussion in person instead of by phone will be made by the study team in mutual agreement with OPRE.
In all waves, the facilitator teams will:
Send informants an e-mail invitation to participate in the discussion (see Attachment A.3: Protocol for In-person Visits for Screening and Recruitment Activities and Related Materials). The email communication will introduce the study, its goals, and the facilitator team and will offer suggested times for the discussion. The email will also state that participation in the discussion is voluntary.
Send informants the project description (see Attachment A.2: Screening Protocol for Phone Calls and Related Materials) and an agenda to guide the discussion (see Attachment A.3: Protocol for In-person Visits for Screening and Recruitment Activities and Related Materials).
Seek to involve multiple (approximately 2-3) informants in discussions where possible and appropriate, rather than conducting only one-on-one meetings. This strategy makes more efficient use of informants’ and the study team’s time.
Lead the discussion using a subset of the most relevant questions from the semi-structured protocol based on each informant’s expertise and our current gaps in knowledge (see Attachment A.1: Landscaping Protocol with Stakeholder Agencies and Related Materials).
Ensure that the discussion and any follow-up discussion use no more than a total of 1.5 hours of each informant’s time.
Ultimately, the purpose of these discussions will be to determine interest, eligibility, and the extent to which Head Start and community-based centers fulfill the selection goals for the VIQI project, so that the study can recruit a combination of centers that is balanced across Head Start and community-based settings and across high- and low-quality services. Upon identifying centers that meet these criteria, the study team will sign MOUs to confirm programs’ and centers’ participation in either the Pilot Study or the Impact Evaluation and Process Study.
Once programs and centers are on board, the information gathered through the screening and recruitment instruments will also be used to identify the classrooms that meet the study’s eligibility criteria to participate in the Pilot Study or Impact Evaluation and Process Study.
Data Collected from Baseline Instruments (Attachments B.1-B.6)
The study team will conduct baseline observations of classroom quality and will ask administrators, lead and assistant teachers, and coaches to complete baseline surveys. The study team will also ask a subset of children in the Pilot Study and the Impact Evaluation and Process Study to complete direct child assessments at baseline.
Each set of instruments aims to collect unique, but complementary, information about the context and characteristics of centers and programs; experiences, perceptions and activities of staff (teachers, assistant teachers, coaches, and administrators) in the classrooms and centers; classroom quality; and implementation fidelity. Because limited existing data can inform these constructs of interest in ECE programming, we plan to collect data from multiple sources to enhance our ability to appropriately measure these constructs.
Baseline classroom observations. The study team will aim to conduct observations of classroom quality in all of the participating classrooms at baseline. The baseline observations will consist of two time-points, conducted in Fall 2018 for the Pilot Study and in Winter/Spring 2020 for the Impact Evaluation and Process Study.
In conducting the observations, we would like to capture times when instruction is occurring in the classrooms and to remain unobtrusive while observing and coding classroom activities, so as not to disrupt typical classroom schedules, activities, and teacher practices. Based on the study team’s past experience collecting classroom observations, mornings have been the optimal time to conduct such observations.
To schedule and conduct these observations, the study team will first contact centers (and any programs overseeing multiple centers) through study liaisons and, if necessary, teachers to identify potential times that will work for the centers and classrooms and allow the study team to capture instructional time to the extent possible. At this time, information will also be provided to the liaisons about what is entailed in the observations. A protocol will be used to guide the introductions and follow-up/wrap-up activities with the teachers before and after the observations. Upon arriving at the centers, the study team member or observer will use this protocol to provide teachers with information about the observations and to answer any questions they may have (e.g., observation purpose, length of time, privacy, voluntary nature of the observations, OMB statement). This protocol also asks teachers a series of questions about their classroom structure and their practices from that day; these questions will take approximately 18 minutes in total. The key points covered and the related materials used to contact and gather information from centers, to introduce the observations to center staff and teachers, and to guide the pre- and post-observation discussions are included in Attachment B.4: Baseline Protocol for Classroom Observations.
Baseline surveys. The procedures for collecting the surveys will vary with the study participant. However, in all cases, participants will receive introductory materials about the study and the purpose of the data collection activity and how the information being gathered will be handled to maintain their privacy. Study participants will be asked to provide consent or assent prior to completing the surveys. Participants will also be informed that they can refuse to complete the survey, or refuse to answer any of the questions on the survey, and will not be penalized in any way.
Administrators. The study team will collect surveys from administrators in participating centers in Fall 2018 for the Pilot Study and Winter/Spring 2020 for the Impact Evaluation and Process Study, on a rolling basis as programs and centers are recruited into the respective phase of the project. The surveys will be administered via a mixed-mode methodology that consists of online, web-based and paper-and-pencil formats. With all approaches, the survey is meant to be self-administered. Administrators will be contacted and will receive the survey in electronic format via an email sent by the study team with an embedded link to access the survey. If the administrator does not complete the survey upon initial receipt via email, s/he will be sent an email reminder and, if necessary, mailed a hard copy (with a pre-addressed, pre-paid FedEx envelope for returning the completed survey). The survey instruments will include a brief introduction that provides information about the purpose of the data collection activity, how the information will be used, and how respondents’ information will be protected. Contact information for the study team will also be provided, so that participants can have their questions answered, if needed. The survey will take 36 minutes to complete. Assent from administrators to complete the baseline survey will be obtained if the participant chooses to complete and return the survey to the study team. The key points covered and information gathered from administrators on the baseline survey are included in Attachment B.1: Baseline Administrator Survey. Information regarding communication with administrators (e.g., email) can also be found at the end of Attachment B.1.
Lead and assistant teachers. The study team will collect surveys from lead and assistant teachers in Fall 2018 for the Pilot Study and Spring 2020 for the Impact Evaluation and Process Study. To account for turnover, data collection in the Spring of 2019 and 2021 will target new replacement teachers.
Prior to administering the baseline instrument, the study team will provide lead and assistant teachers with an informed consent form that references the other data collection activities that the study team will ask them to participate in throughout the course of the Pilot Study or Impact Evaluation and Process Study. The consent form and baseline survey will be available in web-based and paper-and-pencil formats, depending upon the phase of the study. With all approaches, the survey is meant to be self-administered.
For the Pilot Study, the consent form and baseline survey will be provided in paper-and-pencil format and sent to the centers, via care of study liaisons, for distribution to lead and assistant teachers in the participating classrooms.
In the Impact Evaluation and Process Study, the consent forms and baseline surveys will be made available through online web-based and paper-and-pencil formats. The study team will work with designated study liaisons to distribute the consent forms and baseline surveys in hard copy to lead and assistant teachers in the participating classrooms. A letter accompanying the consent form and baseline survey will also provide an electronic link for the consent form and lead and assistant teacher baseline survey.
If a teacher would like to participate in the data collection activities for the study, s/he will sign (either electronically or in hard copy, depending upon the phase of the study) the consent form, complete the baseline survey, and return the consent form and baseline survey to the study team by mail or electronically. A copy of the consent form will be made available to lead and assistant teachers to keep with them as reference. The key points covered and related materials used to contact, consent and gather information from lead and assistant teachers at the time the baseline survey is administered are included in Attachment B.2: Baseline Teacher Survey. Information regarding communication with teachers (e.g., letters, email) can also be found at the end of Attachment B.2.
The baseline survey will include a brief introduction that provides information about the purpose of the data collection activity, how the information will be used, and how respondents’ information will be protected. Contact information for the study team will also be provided, so that participants can have their questions answered, if needed. The survey will take 36 minutes to complete. A $10 honorarium will be provided to centers for each lead and assistant teacher who completes the baseline survey.
Coaches. The study team will collect surveys from coaches in Summer/Fall 2018 for the Pilot Study and Summer/Fall 2020 for the Impact Evaluation and Process Study, with some allowance for coaches who are on-boarded late. The surveys will be administered via a mixed-mode methodology that consists of online, web-based and paper-and-pencil formats. With all approaches, the survey is meant to be self-administered. Coaches will receive the survey in electronic format via an email sent by the study team with an embedded link to access the survey. If the coach does not complete the survey upon initial receipt via email, s/he will be sent a reminder email and, if necessary, a hard copy (with a pre-addressed, pre-paid FedEx envelope for returning the completed survey) by mail or when members of the study team visit the centers. The survey will include a brief introduction that provides information about the purpose of the data collection activity, how the information will be used, and how respondents’ information will be protected. Contact information for the study team will also be provided, so that participants can have their questions answered, if needed. The survey will take 36 minutes to complete.
Assent from coaches to complete the baseline survey is obtained if the participant chooses to complete and return the survey to the study team. The key points covered and information gathered from coaches at the time the baseline survey is administered are included in Attachment B.3: Baseline Coach Survey. Information regarding communication with coaches (e.g., email) can also be found at the end of Attachment B.3.
Parents/Guardians of children in participating classrooms. Parents or guardians of children being served in classrooms selected to participate in the study will be asked to complete a baseline information form to facilitate identification and selection of the children who will be asked to participate in data collection activities for the study (see Supporting Statement A for more details). The demographic and background characteristics of parents/guardians and children being served in the classrooms can only be obtained via self-reported measures completed by the parents/guardians, as this information is not available in existing administrative records and often cannot be shared with the study team without parent/guardian consent. Parents/guardians will be approached to participate in the study during the Pilot Study and the Impact Evaluation and Process Study.
Prior to administering the baseline instrument, the study team will provide all parents/guardians of children in participating classrooms with an informed consent form that references the data collection activities that the study team will ask them and their children to participate in throughout the course of the Impact Evaluation and Process Study. The consent form will be available in paper-and-pencil format only for the Pilot Study and in both online, web-based and paper-and-pencil formats for the Impact Evaluation and Process Study. It will be available in English and Spanish. The study team will work closely with designated site liaisons to distribute the consent forms to parents/guardians in hard copy. Attached to the consent forms will be the parent/guardian baseline information form in hard copy. Accompanying the consent form will be a letter that provides an electronic link to the consent form and the parent/guardian baseline information form. If a parent/guardian would like their child to participate in the data collection activities for the study, s/he will sign the consent form (either electronically or in hard copy), complete the baseline information form, and return both to the study team. The baseline information form is expected to take 10 minutes to complete. The key points covered and the related materials used to contact, consent, and gather information from parents/guardians at the time the baseline information form is administered are included in Attachment B.5: Baseline Parent/Guardian Information Form in Impact Evaluation.
Baseline child assessments. Baseline direct child assessments will be conducted in Fall 2018 of the Pilot Study and Fall 2020 of the Impact Evaluation and Process Study, after parental/guardian consent has been obtained. Once consent has been obtained, the study team will identify and select a subset of 3-year-old children in each classroom (anticipated to be about 4 children per classroom) whose parents have agreed to allow them to participate in the study, and will attempt to stratify children based upon different subgroup characteristics of interest (such as family income [e.g., at or below the federal poverty level and below 200% of the federal poverty level], race/ethnicity [e.g., White, Black, Hispanic], parent’s level of education [e.g., at least a high school diploma], dual language learner background [e.g., learning English as a second language], and the schedules on which children are enrolled in the centers [e.g., the child is cared for by the center for at least 6 hours, 5 days per week]). For the Pilot Study, this group of participants will be considered exploratory and used for descriptive purposes. For the Impact Evaluation and Process Study, we will aim to achieve a group with sufficient variation of children from low-income and racially and ethnically diverse backgrounds, so that the selected group of children provides sufficient power to detect impacts of the interventions and to explore the relationship of quality to child outcomes for subgroups defined by these characteristics of interest. This sample will constitute the child impact evaluation sample. These children will be asked to complete a set of direct assessments at baseline.
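As a purely illustrative sketch of the kind of stratified selection described above, the code below chooses about four consented children per classroom while rotating across strata defined by the characteristics of interest. The field names, strata, and selection rule are placeholder assumptions; the actual selection procedure will be specified by the study team.

import random
from collections import defaultdict

def select_children(consented, per_classroom=4, seed=2020):
    """Illustrative stratified selection of ~4 consented children per classroom.

    `consented` is a list of dicts with placeholder fields, e.g.
    {"child_id": "c001", "classroom_id": "k01",
     "stratum": ("below 200% FPL", "Hispanic", "dual language learner")}.
    """
    rng = random.Random(seed)
    by_classroom = defaultdict(list)
    for child in consented:
        by_classroom[child["classroom_id"]].append(child)

    selected = []
    for children in by_classroom.values():
        # Group the classroom's consented children by stratum and shuffle each group.
        by_stratum = defaultdict(list)
        for child in children:
            by_stratum[child["stratum"]].append(child)
        for group in by_stratum.values():
            rng.shuffle(group)

        # Draw one child per stratum in turn until the classroom target is met,
        # so the selected group varies across the characteristics of interest.
        strata = list(by_stratum)
        rng.shuffle(strata)
        picks, i = [], 0
        while len(picks) < per_classroom and any(by_stratum.values()):
            stratum = strata[i % len(strata)]
            if by_stratum[stratum]:
                picks.append(by_stratum[stratum].pop())
            i += 1
        selected.extend(picks)
    return selected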
Direct child assessments provide standardized and consistent information about children’s skills across centers, classrooms, and metropolitan areas, since no consistent administrative data sources on children’s skills are available. These assessments will be used to measure children’s skills in areas such as math, language, literacy, science, self-regulation, and executive functioning, for which there are valid and reliable standardized assessments that have been used in prior studies with 3- and 4-year-old children. This strategy will be used at baseline and follow-up for the Pilot Study and the Impact Evaluation and Process Study.
Children. The study team will schedule and conduct the child assessments by first contacting the centers (and any programs overseeing multiple centers) via designated study liaisons, and if necessary teachers, to identify targeted weeks that will work for the centers and classrooms for conducting the direct child assessments. The study team will also attempt to identify areas in the centers that can be used to conduct these assessments outside of the classrooms. At this time, information will also be provided to the centers about what is entailed in the assessments. The study team will plan to conduct the assessments at the potential times identified by the centers and classrooms to minimize disruptions. Upon arriving at the centers, the study team member or assessor will use a protocol to provide teachers with information about the assessments and to answer any questions they may have (e.g., assessment purpose, length of time, privacy, voluntary nature of the assessments, OMB statement). The assessor will then ask teachers to introduce them to the children being assessed in the classroom. The assessor will make small talk with children in the classroom, beginning to build rapport, before bringing them to the assessment area (a predetermined spot in the center where assessments can be conducted with minimal interruptions or distractions). The assessment battery will take about 30 minutes to complete per child at baseline. The assessments will be offered in English and Spanish. The assessments will be programmed on tablets or laptops, to the extent possible, to facilitate and streamline administration, to reduce errors in administration, and to minimize burden on children. Upon the completion of the assessments, children will be given stickers to thank them for their participation in the activities. The proposed assessments and the related materials used to contact centers, introduce the assessments, and gather information from center staff, teachers, and children when the assessments are administered are included in Attachment B.6: Baseline Protocol for Child Assessments in Impact Evaluation.
Due to the young age of the participating children, we will not require signed consent from them to participate. We will have the signed consent of their parents/guardians, and we will collect verbal assent from each child at the start of each assessment period. Should a child not provide assent or wish to stop participating once the assessment has started, the child will be returned to his or her classroom. If a child is unwilling to participate, we will make up to two attempts to assess him or her. If a child selected for the child impact evaluation sample is absent on a day when the study team is scheduled to be at a center to complete the assessments, the study team will attempt to find an alternative day that works with the center and classroom schedules to complete the assessment with the targeted child.
Follow-up Data Collection
At follow-up, towards the end of the Pilot Study or Impact Evaluation and Process Study, the study team will collect follow-up observations of classroom quality and will ask administrators, lead and assistant teachers, and coaches to complete follow-up surveys. The study team will also ask a subset of children to complete direct child assessments. Similar procedures to collecting each data source at baseline will be employed at follow-up. These procedures are detailed below.
Follow-up classroom observations. The study team will aim to conduct follow-up observations of classroom quality in all of the same classrooms participating at baseline in the Pilot Study or the Impact Evaluation and Process Study. The follow-up observations will consist of three time-points, conducted in Winter/Spring 2019 for the Pilot Study and Winter/Spring 2021 for the Impact Evaluation and Process Study.
In conducting the observations, we would like to capture when instruction is occurring in the classrooms, and we would like to remain unobtrusive while observing and coding classroom activities, so as not to disrupt typical classroom schedules and activities and teacher practices. Based on the study team’s past experiences collecting classroom observations and to mirror the timing of the day of baseline classroom observations, the study team will target the morning time for the follow-up classroom observations.
To schedule and conduct these observations, the study team will first contact centers (and any programs overseeing multiple centers) via designated study liaisons and, if necessary, teachers to identify potential times that will work for the centers and classrooms and allow the study team to capture instructional time to the extent possible. Information will also be provided to the centers about what is entailed in the observations. A protocol will be used to guide the introductions and follow-up/wrap-up activities with the teachers before and after the observations. Upon arriving at the centers, the study team member or observer will use this protocol to provide teachers with information about the observations and to answer any questions they may have (e.g., observation purpose, length of time, privacy, voluntary nature of the observations, OMB statement). This protocol also asks teachers a series of questions about their classroom structure and their practices from that day; these questions will take approximately 18 minutes per observation. The key points covered and the related materials used to contact and gather information from centers, to introduce the observations to center staff and teachers, and to guide the pre- and post-observation discussions are included in Attachment C.4: Follow-up Classroom Observation Protocol.
Follow-up surveys. The procedures for collecting the surveys will vary with the study participant. However, in all cases, participants will receive introductory materials about the study, the purpose of the data collection activity, and how the information being gathered will be handled to maintain their privacy. Participants will also be informed that they can refuse to complete the survey, or refuse to answer any of the questions on the survey, and will not be penalized in any way.
Administrators. The study team will collect surveys from administrators in participating centers in Spring 2019 for the Pilot Study and Spring 2021 for the Impact Evaluation and Process Study. The surveys will be administered via a mixed-mode methodology that consists of online, web-based and paper-and-pencil formats. With all approaches, the survey is meant to be self-administered. Administrators will receive the survey in electronic format via an email sent by the study team with an embedded link to access the survey. If the administrator does not complete the survey upon initial receipt via email, s/he will be sent a reminder email and, if necessary, given a hard copy (with a pre-addressed, pre-paid FedEx envelope for returning the completed survey) by mail or when members of the study team visit the centers. The survey instruments will include a brief introduction that provides information about the purpose of the data collection activity, how the information will be used, and how respondents’ information will be protected. Contact information for the study team will also be provided, so that participants can have their questions answered, if needed. The survey will take 30 minutes to complete. Assent from administrators to complete the follow-up survey is obtained if the participant chooses to complete and return the survey to the study team. The key points covered and information gathered from administrators on the follow-up survey are included in Attachment C.1: Follow-up Administrator Survey. Information regarding communication with administrators (e.g., email) can also be found at the end of Attachment C.1.
Lead and assistant teachers. The study team will collect surveys from lead and assistant teachers in Spring 2019 for the Pilot Study and Spring 2021 for the Impact Evaluation and Process Study. If there has been turnover in teachers since Fall of 2018 or 2020, depending upon the phase of the study, only the new replacement teachers will be targeted for the follow-up lead or assistant teacher survey.
The study team will only target lead and assistant teachers who have provided informed consent to participate in data collection throughout the course of the Pilot Study or Impact Evaluation and Process Study.
For the Pilot Study, the surveys will be administered via paper-and-pencil formats. For the Impact Evaluation and Process Study, the surveys will be administered via mixed-mode methodology that consists of online, web-based and paper-and-pencil formats. With all approaches, the survey is meant to be self-administered.
Depending upon the phase of the project, lead and assistant teachers will receive the survey in electronic format via an email sent by the study team with an embedded link to access the survey. If the teacher does not complete the survey upon initial receipt via email, s/he will be sent an email reminder or, if necessary, a hard copy (with a pre-addressed, pre-paid FedEx envelope for returning the completed survey) by mail or when members of the study team visit the centers. The survey instruments will include a brief introduction that provides information about the purpose of the data collection activity, how the information will be used, and how efforts will be taken to maintain the privacy of the respondents. Contact information for the study team will also be provided, so that participants can have their questions answered, if needed. At follow-up, lead teachers will be asked to complete a slightly longer survey that includes a set of questions about how children in the child impact evaluation sample are doing in the classroom. In total, the survey will take about 45 minutes for teachers to complete at follow-up. Centers will receive a $15 honorarium for each lead and assistant teacher who completes the follow-up survey. Centers will receive an additional $16 honorarium for each lead teacher who completes the set of questions about how select children in the impact evaluation are doing in their classroom. The key points covered and information gathered from lead and assistant teachers at the time the follow-up survey is administered are included in Attachment C.2: Follow-up Teacher Survey. Information regarding communication with teachers (e.g., letters, email) can also be found at the end of Attachment C.2.
The teacher-reported questions about how children in the impact evaluation are doing in the classroom will primarily focus on children's social, emotional, and behavioral skills as exhibited in the classroom, since teachers have been shown to be valid and reliable reporters of children's skills in these areas and no standardized child assessments are available that capture children's skills in these domains. Lead teachers will be asked to complete these reports at follow-up during the Pilot Study and the Impact Evaluation and Process Study. The key points covered are included in Attachment C.6: Teacher Reports on Children.
Coaches. The study team will collect surveys from coaches in Spring 2019 for the Pilot Study and Spring 2021 for the Impact Evaluation and Process Study. Only coaches who have provided informed consent to participate in data collection throughout the course of the Pilot Study or Impact Evaluation and Process Study will be asked to complete the follow-up survey. Further, if there has been turnover in coaches since Fall 2018 or 2020, depending upon the phase of the study, only the new replacement coaches will be targeted for the follow-up survey. The surveys will be administered via a mixed-mode methodology that consists of online, web-based and paper-and-pencil formats. With all approaches, the survey is meant to be self-administered. Coaches will receive the survey in electronic format via an email sent by the study team with an embedded link to access the survey for completion. If the coach does not complete the survey upon initial receipt via email, s/he will be sent a hard copy (with a pre-addressed, pre-paid FedEx envelope for returning a completed survey) by mail or when members of the study team visit the centers. On the survey, a brief introduction will be shared that provides information about the purpose of the data collection activity, how the information will be used, and how efforts will be taken to protect respondents' information. Contact information for the study team will also be available, so that participants can ask questions and have them answered, if needed. The survey will take 30 minutes to complete. The key points covered and information gathered from coaches at the time the follow-up survey is administered are included in Attachment C.3: Follow-up Coach Survey. Information regarding communication with coaches (e.g., email) can also be found at the end of Attachment C.3.
Follow-up child assessments. Follow-up direct child assessments will be conducted in Spring 2019 of the Pilot Study and Spring 2021 of the Impact Evaluation and Process Study. In both phases, only select children will be asked to complete a set of direct assessments at follow-up.
Children. The study team will schedule and conduct the child assessments by first contacting the centers (and any oversight agencies over these centers) via designated study liaisons, and if necessary teachers, to identify targeted weeks that will work for the centers and classrooms for conducting the direct child assessments. The study team will also attempt to identify areas in the centers that can be used to conduct these assessments outside of the classrooms. At this time, information will also be provided to the centers about what is entailed in the assessments and reminder pamphlets will be sent home to parents to let them know the follow-up assessments are going to occur. The study team will plan to conduct the assessments at the potential times identified by the centers and classrooms to minimize disruptions. Upon arriving at the centers, the member of the study team or assessor will use a protocol that will provide teachers information about the assessments and answer any questions they may have (e.g., assessment purpose, length of time, privacy, voluntary nature of assessments, OMB statement). The assessor will then ask teachers to introduce the assessor to the children being assessed in the classroom. The assessor will make small talk with children in the classroom, beginning to build rapport, before bringing them to the assessment area (a predetermined spot in the center where assessments can be conducted with minimal interruptions or distractions). The assessment battery will take about 60 minutes to complete per child at follow-up. The assessments will be offered in English and Spanish. The assessments will be programmed on tablets or laptops, to the extent possible, to facilitate and streamline administration, to reduce errors in administration, and to minimize burden on children. Children will be given stickers to thank them for their participation in the activities. The proposed assessments and related materials used to contact centers, introduce the assessments, and gather information from center staff, teachers, and children when assessments are administered are included in Attachment C.5: Follow-up Protocol for Child Assessments in the Impact Evaluation.
Due to the young age of the participating children, we will not require signed consent from them to participate. We will have the signed consent of the parents/guardians, and we will collect verbal assent from each child at the start of each assessment period. Should a child not provide assent or wish to stop participating once the assessment has started, he or she will be returned to the classroom. We will make up to two attempts to assess each child if s/he is unwilling to participate. If a child selected for the child impact evaluation sample is absent on a given day when the study team is scheduled to be at a center to complete the assessments, the study team will attempt to find an alternative day that works with the center and classroom schedules to attempt to complete the assessment with the targeted child.
Data Collected from Implementation Fidelity Instruments
The implementation fidelity instruments will be collected throughout the Pilot Study and Impact Evaluation and Process Study. The procedures for collecting the information vary, depending upon the data source.
Coach Logs. Beginning in September 2018 and ending in June 2019 of the Pilot Study and beginning in September 2020 and ending in June 2021 of the Impact Evaluation and Process Study, the study will ask coaches hired to support the installation of one of the interventions to complete logs after each coaching session with teachers in participating centers and classrooms. If there has been turnover in coaches since Fall 2018 or 2020, depending upon the phase of the study, only the new replacement coaches will be asked to complete the logs.
The logs will be available online, and coaches will be asked to complete them after each coaching session (assumed to be two visits per classroom per month). The information being gathered will serve the purpose of supporting coaches' management of their caseloads. We expect that coaches would need to track and monitor this information regardless of whether the Process Study is ongoing. This information will be shared with the research team to track and monitor the delivery of professional development and implementation of the interventions as well.
A data system will be used to collect the logs. During the onboarding process, coaches will be trained on how to access and log into the data system to complete the logs. Each log is expected to take about 15 minutes to complete. Email and text message notifications will be sent to coaches to remind them to complete the logs. Information about who to contact for questions or to address technical issues in completing the logs will also be provided to coaches. The key points covered and related materials used to introduce the logs to coaches, explain how to complete the logs, describe how the information will be used, and describe how the information will be protected are included in Attachment D.2: Coach Log. Information regarding communication with coaches (e.g., email, text messages) can also be found at the end of Attachment D.2.
Teacher logs. Beginning in September 2018 and ending in June 2019 of the Pilot Study and beginning in September 2020 and ending in June 2021 of the Impact Evaluation and Process Study, the study will ask all lead and assistant teachers in participating classrooms across research conditions to complete weekly logs. If there has been turnover in teachers since Fall of 2018 or 2020, depending upon the phase of the study, only the new replacement teachers will be asked to complete the logs.
The logs will be available online, and teachers will be trained to log onto and use a data system to complete the logs. Each log is expected to take about 15 minutes to complete. Centers will be offered a $10 honorarium per month for each teacher who completes the logs. Email and text message notifications will be sent to teachers to notify them when it is time to complete the log and how to access the log. Information about who to contact for questions or to address technical issues in completing the logs will also be provided. If a teacher does not respond, s/he will receive an email/text with a thank-you for previous logs and confirmation of email address/other contact information. The key points covered and related materials used to introduce the logs to teachers, explain how to complete the logs, describe how the information will be used, and describe how the information will be protected are included in Attachment D.1: Teacher Log. Information regarding communication with teachers (e.g., email, text messages) can also be found at the end of Attachment D.1.
Implementation fidelity observations. In the Pilot Study and Impact Evaluation and Process Study, the study team will aim to conduct observations to assess fidelity of implementation of the interventions in a subset of classrooms assigned to each of the intervention conditions. We will also conduct these observations in a subsample of classrooms assigned to the control condition to assess the extent to which the specific behaviors, practices, and activities supported by the interventions are evident in the control classrooms, which will inform the potential relative treatment contrast across research conditions. The observations will consist of one time-point of observations conducted in Winter/Spring 2019 for the Pilot Study or Winter/Spring 2021 for the Impact Evaluation and Process Study.
To identify classrooms to participate in these observations, the research team will select a subset of centers that are stratified by whether they provide Head Start or community-based child care services and by high or low quality at baseline. Within these centers, the study team will select one classroom at random to participate in the implementation fidelity observation (for up to 90 classrooms in the intervention conditions in the Pilot Study and up to 48 classrooms constituting a subset of classrooms in the intervention conditions in the Impact Evaluation and Process Study, for a total of 138 classroom implementation fidelity visits).
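To illustrate this two-stage selection, the following is a minimal sketch of one way the stratified center selection and the within-center random draw of a classroom could be implemented. It is not the study's production procedure; the field names (e.g., center_type, baseline_quality) and the per-stratum count are hypothetical placeholders.

```python
import random

random.seed(20190101)  # fixed seed so the draw is reproducible and can be documented

def select_fidelity_classrooms(centers, n_per_stratum):
    """Select centers within strata defined by program type and baseline quality,
    then pick one classroom at random within each selected center.

    centers: list of dicts with keys 'center_id', 'center_type' ('HS' or 'CC'),
             'baseline_quality' ('high' or 'low'), and 'classrooms' (list of IDs).
    Returns a mapping of center_id -> selected classroom_id.
    """
    # Group centers into the four strata (program type x baseline quality).
    strata = {}
    for c in centers:
        key = (c["center_type"], c["baseline_quality"])
        strata.setdefault(key, []).append(c)

    selected = {}
    for stratum_centers in strata.values():
        # Sample centers within the stratum without replacement.
        chosen = random.sample(stratum_centers, min(n_per_stratum, len(stratum_centers)))
        for c in chosen:
            # One classroom per selected center.
            selected[c["center_id"]] = random.choice(c["classrooms"])
    return selected
```

Fixing the random seed is a design choice made here only so that the selection could be rerun and audited; the study's actual randomization procedures are governed by its own protocols.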
In conducting the observations, we would like to capture when instruction is occurring in the classrooms and to remain unobtrusive while observing and coding classroom activities, so as not to disrupt typical classroom schedules and activities and teacher practices. Based on the study team’s past experiences collecting classroom observations and to mirror the timing of the day of baseline and follow-up classroom observations, the study team will target the morning time for the implementation fidelity observations.
To schedule and conduct these observations, the study team will first contact centers (and any programs overseeing multiple centers) via study liaisons, and, if necessary, teachers to identify potential times that will work for the centers and classrooms for conducting the observations and allow the study team to capture instructional time to the extent possible. Information will also be provided to the centers about what is entailed in the observations. A protocol will be used to guide the introductions and follow-up/wrap-up activities with the teachers before and after the observations. Upon arriving at the centers, the member of the study team or observer will use this protocol to provide teachers with information about the observations and to answer any questions they may have (e.g., observation purpose, length of time, privacy, voluntary nature of the observations, OMB statement). This protocol also asks teachers a series of questions about their classroom structure and their practices from that day; these questions will take approximately 18 minutes per observation. The key points covered and related materials used to contact and gather information from centers, to introduce the observations to center staff and teachers, and to guide the pre- and post-observation discussions are included in Attachment D.3: Implementation Fidelity Observation Protocol.
Interviews/focus groups. The study team will conduct qualitative interviews with a subset of participating administrators, coaches, and teachers (both leads and assistants) in Winter 2019 in the Pilot Study and in Winter 2021 in the Impact Evaluation and Process Study. A random subset of administrators (up to 16 across all research conditions in the Pilot Study and up to 8 administrators within 4 localities in the Impact Evaluation and Process Study) will be asked to participate in a one-on-one interview. Only the coaches for the intervention conditions will be asked to participate in a one-on-one interview (6 coaches in the Pilot Study and up to 3 coaches within 4 localities in the Impact Evaluation and Process Study). A random subset of lead and assistant teachers (up to 48 across all research conditions in the Pilot Study and up to 48 teachers within 4 localities in the Impact Evaluation and Process Study) will be asked to participate in a small-group/one-on-one interview where lead and assistant teachers across centers are interviewed in separate groups by position.
Each one-on-one interview or small-group interview will last up to 1.5 hours. The purpose of the interviews is to gain insights from study participants on their experiences implementing the interventions, engaging in professional development and completing the data collection instruments. The interviews will be facilitated and led by a member of the research team using a semi-structured protocol that will be adapted depending upon the participants being interviewed. The one-on-one interviews are expected to be conducted by phone and the small-group interviews are expected to be conducted in person.
To identify individuals who will be asked to participate in these interviews, the research team will select a random subset of centers (up to 16 centers in the Pilot Study and up to 8 centers within 4 localities in the Impact Evaluation and Process Study) that are stratified by whether they provide Head Start or community-based child care services and by high or low quality at baseline. Within these centers, the study team will look to interview staff at the different levels within the centers or who are providing coaching support to the centers. The study team will contact the coaches serving the centers directly to ask if they would be willing to participate in a one-on-one interview about their experiences in the study. The administrators in these centers will be contacted and asked if they would be willing to participate in a one-on-one interview. They will also be informed that lead and assistant teachers will be contacted separately and asked whether they would be interested in participating in a small-group or one-on-one interview. The study team will then contact lead and assistant teachers directly to ask if they would be interested and willing to participate in a small-group interview about their experiences in the study. The study team will work with the administrators and coaches to identify times for the individual interviews. The study team will propose times for the small-group interviews with lead or assistant teachers, and those who are available and interested will confirm whether or not the proposed times work for them. Information will also be provided to administrators, coaches, and lead and assistant teachers about the purpose of the interviews and what information will be gathered, how the information will be used, how the study team will protect their information, the voluntary nature of the data collection activity, and who to contact should they have questions about the data collection activity (e.g., interview purpose, length of time, OMB statement). The key points covered and related materials used to introduce the interviews, how the information will be used, and how the information will be handled to maintain privacy are included in Attachment D.4: Interview/Focus Group Protocol.
B3. Methods to Maximize Response Rates and Deal with Nonresponse
Expected Response Rates
The expected response rates vary by instrument, time point, and participant type, but we generally expect high response rates. This is because we plan to develop strong relationships with participating centers through coordination with our study team's operational and technical assistance staff, to identify a center liaison who will coordinate with the study team to facilitate data collection activities within the centers, and to leverage mixed-mode administration of data collection to the extent possible to minimize burden on study participants. We also draw upon our experience developing instruments and protocols that are streamlined, cleanly formatted, and as brief as possible in order to facilitate responses from targeted study participants. Further, we will closely monitor and track responses to ensure that appropriate follow-up and steps are taken to meet the targeted response rates. In past studies, such as Making Pre-K Count, a foundation-funded, large-scale evaluation of a math enrichment intervention in 69 community-based and public school pre-k programs and nearly 200 classrooms across New York City, and Head Start CARES (0970-0363), a large-scale evaluation of 3 social-emotional interventions conducted in 104 Head Start centers and 207 classrooms spread across the United States, we have been able to achieve response rates comparable to or higher than those expected in the VIQI project. Across these studies, members of the study team were able to consistently achieve very high consent and response rates (upwards of 90% for each data source and respondent group; Morris et al., 2014; Morris, Mattera, & Maier, 2016). We discuss expected response rates, as well as our strategies for achieving and maximizing response rates, in more detail below (see the Dealing with Nonresponse and Maximizing Response Rates sections). Note that below we state our expected response rates, but in Section A.12 we base our estimated burden on 100% of participants completing the planned data collection instruments. As such, our burden estimates are likely an overestimate and allow for flexibility should we exceed our expected response rates during the fielding of the data collection instruments.
Screening and Recruitment Instruments
For screening and recruitment materials, maximum response rates are critical to ensuring that the study team selects the most appropriate centers that meet the sampling criteria for participating in the VIQI project. As such, we expect little nonresponse. We further anticipate that the vast majority of informants will likely be interested in providing their insights to help inform the screening and recruitment of metropolitan areas and centers. Based on the study team's past experiences engaging similar informants to collect information for screening and recruitment purposes, the team expects approximately 80 percent of targeted participants to respond to each protocol for screening and recruitment for the Pilot Study and Impact Evaluation and Process Study.
Baseline Instruments
At baseline for the Pilot Study and the Impact Evaluation and Process Study, we will ask administrators in centers participating in the study to complete a baseline survey. We will also ask coaches supporting the installation of the interventions to complete a baseline survey. We expect nearly 100 percent to respond to the baseline surveys for administrators and coaches in each phase of the study.
At baseline for the Pilot Study and the Impact Evaluation and Process Study, we will ask lead and assistant teachers to first consent to participating in the data collection activities for the respective phases of the project. We expect about 85 percent of lead and assistant teachers to consent to participating in the study and to complete a baseline survey in participating classrooms.
At baseline for the Pilot Study and the Impact Evaluation and Process Study, we will aim to collect two time-points of classroom observations in all participating classrooms. We expect nearly 100 percent completion of the baseline observations in participating classrooms.
At baseline for the Pilot Study and Impact Evaluation and Process Study, we will aim to collect consent forms and baseline parent/guardian information forms from parents/guardians of children being served in participating classrooms, so that the children may engage in data collection activities as part of the study. We will target almost all parents/guardians of children in the classrooms, and we expect that 85 percent of parents/guardians of children in participating classrooms will return the consent forms on behalf of their children and will complete the baseline parent/guardian information form that accompanies the consent form.
At baseline for the Pilot Study and Impact Evaluation and Process Study, we will aim to collect baseline direct child assessments for a selected sample of children in participating classrooms. We expect 85 percent of selected children to complete the baseline child assessments.
Follow-up Instruments
At follow-up for the Pilot Study and the Impact Evaluation and Process Study, we expect similarly high response rates.
At follow-up for the Pilot Study and the Impact Evaluation and Process Study, we will ask administrators in centers participating in the study to complete a follow-up survey. We will also ask coaches supporting the installation of the interventions to complete a follow-up survey. We expect nearly 100 percent to respond to the follow-up surveys for administrators and coaches in each phase of the study.
At follow-up for the Pilot Study and the Impact Evaluation and Process Study, we will ask lead and assistant teachers to complete a follow-up survey. We expect about 95 percent of lead and assistant teachers who consented and completed the baseline survey to complete a follow-up survey in participating classrooms.
At follow-up for the Pilot Study and the Impact Evaluation and Process Study, we will aim to collect three time-points of classroom observations in all participating classrooms. We expect nearly 100 percent completion of the follow-up observations in participating classrooms.
At follow-up for the Pilot Study and Impact Evaluation and Process Study, we will aim to collect follow-up direct child assessments for a selected subset of children in participating classrooms. We expect 85 percent of selected children to complete the follow-up child assessments.
Implementation Fidelity Instruments
Throughout the Pilot Study and the Impact Evaluation and Process Study, we will aim to collect logs from coaches supporting the installation of the interventions and from teachers across research conditions. We expect nearly 100 percent of coaches to respond to at least one log throughout each phase of the study, with about 90 percent of the total number of logs expected to be completed. We expect a slightly lower response rate for lead and assistant teachers for the weekly logs. We expect that 80 percent of lead and assistant teachers will respond to at least one log, with about 50 percent of the total number of logs completed. In Making Pre-K Count, for example, 100% of coaches completed logs on a weekly basis.
As part of the Process Study, a subset of centers and their underlying classrooms will be selected to participate in implementation fidelity observations. We expect nearly 100 percent of the subset of classrooms will participate in the implementation fidelity observations across research conditions.
As part of the Process Study, a set of one-on-one and small group interviews will be conducted with staff in participating centers. We expect nearly 100 percent of administrators and coaches will participate in one-on-one interviews. We expect a slightly lower response rate for lead and assistant teachers. We expect that 80 percent of lead and assistant teachers will participate in small group interviews, primarily because the scheduling of the small group interviews may not align with all targeted lead and assistant teachers’ schedules.
Dealing with Nonresponse
To minimize nonresponse to the data collection instruments, we will adopt several strategies to maximize response by study participants, which are discussed in more detail below.
To assess the impact of nonresponse to the data collection instruments, an analysis will be conducted to determine whether the results from the baseline and follow-up surveys, observations or assessments may be biased by non-response. In particular, two types of bias will be assessed: (1) differences in response rates and respondents’ characteristics across the experimental groups in the design (differential response) and (2) differences in the characteristics of respondents compared to non-respondents. The first type of bias affects whether the impacts of the interventions are confounded with pre-existing differences between experimental group and control group respondents (internal validity), while the second type of bias affects whether the results from the study can be generalized to the wider group of eligible participants (external validity).
Several tests will be conducted to assess whether differential non-response is compromising the internal validity of the experimental design. For each data source:
Response rates by experimental group will be compared to make sure the response rate is not significantly higher for one research group. A multinomial logistic regression will be conducted among respondents. The outcome ("left-hand side") variable will be random assignment group membership, while the explanatory variables will include a range of baseline characteristics. An omnibus test, such as a log-likelihood test, will be used to test the hypothesis that the set of baseline characteristics is not significantly related to a respondent's experimental group. Failure to reject this null hypothesis will provide evidence that respondents are similar across experimental groups (see the illustrative sketch below).
The guidelines provided by the What Works Clearinghouse at the Institute of Education Sciences (Department of Education) will be used to determine whether attrition is “low” or “high” based on these analyses. If these tests indicate that differential non-response is “high”, we will regression-adjust the impact analyses using respondents’ baseline characteristics and outcomes. To make sure that the regression-adjustment is adequately removing the bias, we will conduct a sensitivity test where we will drop the random assignment blocks where differential response rates are the largest, and then estimate impacts based on this smaller sample.
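As an illustration of the omnibus log-likelihood test described above, the sketch below fits a multinomial logistic regression of research-group membership on baseline characteristics among respondents and compares it to an intercept-only model. This is a minimal sketch under stated assumptions, not the study's analysis code: the DataFrame layout, the covariate names, and the group coding are all hypothetical.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def differential_response_test(respondents, covariates, group_col="group"):
    """Omnibus likelihood-ratio test of whether baseline covariates predict
    research-group membership among respondents.

    respondents: pandas DataFrame, one row per respondent, with a categorical
                 group column (e.g., 0/1/2) and numeric baseline covariates.
                 Column names here are hypothetical.
    """
    X_full = sm.add_constant(respondents[covariates])
    X_null = np.ones((len(respondents), 1))  # intercept-only comparison model

    full = sm.MNLogit(respondents[group_col], X_full).fit(disp=False)
    null = sm.MNLogit(respondents[group_col], X_null).fit(disp=False)

    # Log-likelihood (likelihood-ratio) omnibus test: failure to reject suggests
    # respondents look similar across experimental groups on these baselines.
    lr_stat = 2 * (full.llf - null.llf)
    df_diff = full.df_model - null.df_model
    p_value = stats.chi2.sf(lr_stat, df_diff)
    return {"lr_stat": lr_stat, "df": df_diff, "p_value": p_value}
```

A p-value above the chosen threshold would be taken as evidence (not proof) that differential nonresponse is not compromising internal validity for that data source.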
To examine whether the results are generalizable to the eligible population (externally valid), the following analysis will be conducted for each data source:
The baseline characteristics of respondents will be compared to the baseline characteristics of non-respondents. This will be done using a logistic regression where the outcome variable is whether someone is a respondent and the explanatory variables are baseline characteristics. An omnibus test, such as a log-likelihood test, will be used to test the hypothesis that the set of baseline characteristics is not significantly related to being a respondent. Failure to reject this null hypothesis will provide evidence that non-respondents and respondents are similar at baseline (an illustrative sketch follows below).
If these tests indicate that respondents are different from non-respondents, the presentation of the findings will clarify that the results may not be generalizable to the full group of eligible respondents. As a sensitivity analysis, we will also reweight the respondent groups to reflect the characteristics of the full group of eligible participants, to explore whether the results could differ.
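A comparable sketch for this external-validity check is shown below: a logistic regression of a response indicator on baseline characteristics, an omnibus log-likelihood test, and, for the sensitivity analysis, inverse-probability-of-response weights so that respondents reflect the full eligible group on observed baseline measures. The column names and the weighting approach shown are illustrative assumptions, not the study's specified method.

```python
import numpy as np
import statsmodels.api as sm
from scipy import stats

def external_validity_check(sample, covariates, resp_col="responded"):
    """Compare respondents with non-respondents on baseline characteristics and
    construct illustrative nonresponse weights.

    sample: pandas DataFrame covering the full eligible sample, with a 0/1
            response indicator column; column names are hypothetical.
    """
    X = sm.add_constant(sample[covariates])
    model = sm.Logit(sample[resp_col], X).fit(disp=False)

    # Omnibus log-likelihood test: are baseline characteristics jointly related
    # to whether a sample member responded?
    lr_stat = 2 * (model.llf - model.llnull)
    p_value = stats.chi2.sf(lr_stat, model.df_model)

    # Sensitivity reweighting: inverse of the predicted response probability,
    # so respondents resemble the full eligible group on observed baselines.
    response_prob = model.predict(X)
    weights = np.where(sample[resp_col] == 1, 1.0 / response_prob, 0.0)
    return p_value, weights
```

In practice the weights would typically be trimmed or normalized before use; that step is omitted here for brevity.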
For the impact analyses, baseline data will be used as covariates in the analysis to describe the respondents and improve precision. Therefore, it will be acceptable to impute these baseline variables using an appropriate method such as multiple imputation. Follow-up data will not be imputed.
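For example, missing baseline covariates could be multiply imputed with chained equations along the lines of the sketch below; the statsmodels MICE utilities are used here only for illustration, and the study's actual imputation method may differ. Follow-up outcomes are deliberately excluded from the imputation, consistent with the plan above.

```python
from statsmodels.imputation import mice

def impute_baseline_covariates(baseline, n_imputations=20):
    """Multiple imputation of missing baseline covariates only (follow-up data
    are never imputed).

    baseline: numeric pandas DataFrame of baseline variables with missing values.
    Returns a list of completed datasets, one per imputation.
    """
    imp = mice.MICEData(baseline)
    completed = []
    for _ in range(n_imputations):
        imp.update_all()              # one cycle of chained-equation imputation
        completed.append(imp.data.copy())
    return completed
```

Each completed dataset would then be used as baseline covariates in the impact model, with results combined across imputations in the usual way.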
Maximizing Response Rates
To ensure that the VIQI project has sufficient power to address the research questions of interest for different phases of the project, it will be important to reach the expected response rates described above. We fully recognize these challenges and structured a data collection plan accordingly. Our plan draws upon our extensive experience managing and collecting similar sets of data from children, teachers, classrooms, coaches, and ECE centers in multiple large-scale, longitudinal, and experimental studies.
Our research team comprises seasoned operations staff across MDRC and MEF Associates who have worked extensively with ECE centers across the United States in large-scale studies, not only to maintain strong relationships and work collaboratively with centers but also to troubleshoot and provide technical assistance when necessary to minimize disruptions and facilitate data collection activities within the centers. We will also establish MOUs with Head Start grantees and programs that operate or have oversight over multiple child care centers (and individual centers to the extent necessary) that designate liaisons who will coordinate with the study team to facilitate data collection activities within the centers during critical periods of the Pilot Study or Impact Evaluation and Process Study. Given that most of the data collection will occur in participating centers, we see this as a critical aspect of reaching expected response rates.
For each data collection activity and instrument, the study team will explain to study participants, or provide materials describing, the importance of the data collection activities for advancing the ECE field before proceeding with any data collection. We will also draw upon our expertise and experience to put in place mixed-mode administration for the instruments whenever possible to minimize burden on study participants, particularly for the survey instruments in the Impact Evaluation and Process Study, when the scale of the samples being targeted is larger and spread across multiple metropolitan areas. Further, the instruments and protocols will be developed to be streamlined, cleanly formatted, and as brief as possible in order to facilitate responses from targeted study participants. We will draw upon principles from behavioral economics to tailor contact and communication with study participants to encourage responses as well. We will also aim to balance the breadth of data being collected against the burden and disruptions to centers, staff, and children by optimizing the amount of data collected at each observation or assessment point. Last, we will be flexible in accommodating the schedules of centers and classrooms when collecting data, while still adhering to the planned timeline for data collection activities for the respective phases of the study.
Our team also includes Abt/DSET, which will lead data collection efforts in participating centers and has extensive experience collecting high-quality classroom-, teacher-, and child-level data in large-scale studies. Abt/DSET's senior data collection manager will provide centralized oversight of the collection of lead and assistant teacher consents and surveys, classroom observations, parent/guardian consents and baseline information forms, and child assessments. Staff experienced in managing early childhood data collection efforts will be hired as field supervisors to oversee data collection efforts at each locality. Field supervisors will hire and train local field staff to conduct each data collection activity (hiring of data collectors is discussed below). Abt/DSET will design, implement, maintain, and document an integrated study database that will provide oversight of all data collection activities. Such a system is critical for allowing project staff to monitor the flow of information and ensure that each designated sample unit (child, parent, teacher, coach, administrator, etc.) is properly surveyed and that all required information is obtained, identified, and stored.
Across members of the study team, we will train the staff fielding the instruments in the conversion and avoidance of refusals, including training on distinguishing "soft" refusals from "hard" ones. Soft refusals often occur when a study participant has been reached at an inopportune time. In these cases, it is important to back off gracefully and to establish a convenient time to follow up with the study participant, rather than to persist in the moment. Hard refusals do occur and must also be accepted gracefully by the fielding staff.
The study team will closely monitor data collection and response rates by data source. Weekly meetings will address any issues that arise during preparations for data collection and data collection itself. The study team will also monitor data collection activities to ensure high response rates and no differential response rates by research conditions. The study team will send monthly progress reports to the Contracting Officer Representative (COR), which will include any issues and solutions for correcting issues. The study team will also review early files of data collected from each instrument to assess if there are any issues in the completeness or quality of the data being collected, so that issues can be quickly identified and solved early in the fielding stages of each instrument.
Further, MDRC will have a dedicated data collection coordinator who will work closely with Abt/DSET and will have oversight over all of their data collection activities. MDRC, leveraging the operational and TA/monitoring activities of MEF/MDRC operational team members, will also have direct oversight over the collection of administrator surveys, teacher and coach logs, and coach surveys. Thus, between Abt/DSET staff and MDRC/MEF operational staff, our team will be in contact with centers at regular intervals, allowing us to follow up with centers and respondents on a frequent basis to ensure high response rates.
B4. Tests of Procedures or Methods to be Undertaken
The baseline and follow-up teacher surveys will be pretested by the study team to assess timing, measurement and design issues. The pretests will be conducted on paper over a 1-week period using a specially trained group of interviewers. The pretest interviewers will complete 9 interviews. The study team will closely monitor each pretest interview to determine whether any substantial changes were needed to the questionnaire design and will conduct an interviewer debriefing after the pretest interviews are completed to discuss the flow of the interview, any questions that came up, etc. During the pretest, the study team will track the minimum, maximum, and average time to complete the interview, as well as the median times per section. The pretests have not yet been conducted, but are planned for Summer 2018. If revisions result from pretesting, the revised instruments will be submitted to OMB for review. This will be completed as a nonsubstantive change request if deemed appropriate through discussion between ACF and OMB.
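The pretest timing summaries (minimum, maximum, and average total time, plus median time per section) could be tabulated with a simple routine such as the sketch below. The inputs shown are placeholders for the times recorded during the pretest interviews, not actual study data.

```python
import statistics

def summarize_pretest_timing(total_minutes, section_minutes):
    """Summarize pretest interview timing.

    total_minutes: list of total completion times, one per pretest interview.
    section_minutes: dict mapping section name -> list of per-interview times.
    Both inputs are hypothetical placeholders for recorded pretest times.
    """
    overall = {
        "min": min(total_minutes),
        "max": max(total_minutes),
        "mean": statistics.mean(total_minutes),
    }
    median_per_section = {
        section: statistics.median(times)
        for section, times in section_minutes.items()
    }
    return overall, median_per_section
```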
B5. Individual(s) Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The following is a list of individuals involved in the design of the VIQI project, the plans for data collection, and the analysis.
JoAnn Hsueh, MDRC
Michelle Maier, MDRC
Marie-Andree Somers, MDRC
Electra Small, MDRC
Noemi Altman, MDRC
Sharon Huang, MDRC
Evan Weissman, MDRC
Frieda Molina, MDRC
Dina Israel, MDRC
Amena Sengal, MDRC
Sharon Rowser, MDRC
Ilana Blum, MDRC
Jocelyn Page, MDRC
Hiwote Getaneh, MDRC
Emily Henry, MDRC
Seth Muzzy, MDRC
Nicole Leacock, MDRC
Marissa Strassberger, MDRC
Rama Hagos, MDRC
Mervett Hefyan, MDRC
Mallory Undestad, MDRC
Margaret Burchinal, Frank Porter Graham Child Development Institute
Mike Fishman, MEF Associates
Emily Ellis, MEF Associates
Kimberly Foley, MEF Associates
Liza Rodler, MEF Associates
Carly Morrison, MEF Associates
Jan Decoursey, MEF Associates
Kerry Hofer, Abt Associates
Barbara Goodson, Abt Associates
Catherine Darrow, Abt Associates
Brenda Rodriguez, Abt Associates
Cassandra Meagher, Abt Associates
Ricki Jarmon, Abt Associates
Faith Lewis, Abt Associates
Carter Epstein, Abt Associates
Mehera Baugher, Abt Associates
Adria Gallup-Black, Abt Associates
Carolyn Layzer, Abt Associates
Ivelisse Martinez-Beck, OPRE
Amy Madigan, OPRE
Tracy Carter Clopet, OPRE
Sarah Blankenship, OPRE
Erin Cannon, OPRE
Allison Walker, OPRE