Head Start Designation Renewal System Evaluation Project
OMB Information Collection Request
New Collection
Supporting Statement
Part B
August 2013
Submitted By:
Office of Planning, Research and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
7th Floor, West Aerospace Building
370 L’Enfant Promenade, SW
Washington, D.C. 20447
Amy Madigan
Contents
B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS
B1. Respondent Universe and Sampling Methods
B2. Procedures for Collection of Information
B3. Methods to Maximize Response Rates and Deal with Nonresponse
B4. Tests of Procedures or Methods to be Undertaken
B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
B1. Respondent Universe and Sampling Methods
The Head Start Designation Renewal System (DRS) is being implemented on a rolling basis across the population of Head Start grantees. Eventually, all organizations receiving grants to implement any type of Head Start program will be subject to the DRS (1,596 grantees based on 2012 PIR data). From a research perspective, the competitive process can be divided into five stages, as shown in Figure B-1 (for grantees designated for competition). Incumbent grantees enter at Stage 1, the point at which they know they will be assessed through the DRS but do not yet know their designation status. At Stage 2, incumbent grantees learn whether they may apply for a non-competitive five-year grant or whether their grant has been designated for competition. At this stage, potential competitors also assess whether they should enter the competition. In Stage 3, the competition occurs: incumbents and new competitors complete and submit applications to serve particular communities. In Stage 4, the competitors learn which organizations have received awards to deliver Head Start services, and transition planning begins. Finally, in Stage 5, post-competition service delivery begins.
Figure B-1: DRS Implementation Stages
Due to the rolling implementation of the DRS, at any single point in time different grantees will be in different stages of DRS implementation (as indicated in Figure B-1). This evaluation has been designed to gain the perspectives of grantees and selected competitors and to target exploration of issues across the DRS implementation stages (Stages 1, 2, 3, and 4). Table B-1 shows the four initial DRS implementation cohorts and approximately when they experience the five stages. Italics are used to denote the uncertainty surrounding future events; actual time periods may vary from what is shown in this chart for a variety of reasons.
To collect information across these stages in a timely manner, we propose to gather data from grantees subject to review through monitoring in 2013-2014 (some of which will be designated for competition as part of DRS Cohort 4), grantees designated to compete in DRS Cohort 3, and applicants for new grants in DRS Cohort 3. RQ1 and RQ2 draw from grantees that are experiencing the pre-designation stage. As indicated in Table B-2, the sample for RQ1 will be drawn through Sampling Frame A, the 434 grantees with center-based Head Start classrooms subject to monitoring in Fall 2013-Spring 2014. Sampling Frames B and C, which support answering RQ2, are subsets of Frame A. This connection will facilitate linkages between data collected for RQ1 and data collected for RQ2, which, layered together, may help explain findings across the two research questions.
Table B-1: DRS Implementation Cohorts and Timing of Implementation Stages
Stage of Implementation | Cohort 1 | Cohort 2 | Cohort 3 | Cohort 4
Stage 1: Pre-Designation Status (Tracking Period for DRS Conditions) | Deficiencies only from June 2009-Nov. 2011 | Oct. 2011-Sept. 2012 | Oct. 2012-Sept. 2013 | Oct. 2013-Sept. 2014
Stage 2: Status Known/Decision to Apply or Not (Designation Notification) | Dec. 2011 | Jan. 2013 | Jan. 2014 | Jan. 2015
Stage 3: Application for Competition | April-July 2012 | Aug.-Oct. 2013 | March-May 2014 | March-May 2015
Stage 4: Award Notification | April 2013 | Dec. 2013 | Dec. 2014 | Dec. 2015
Stage 5: Post-Competition Service Delivery | July 2013 | Summer 2014 | Summer 2015 | Summer 2016
RQ3 focuses on a different element of the DRS process, represented by DRS implementation Stages 3 and 4: the levels and types of competition (Stage 3) and perceptions of competition measured after award (Stage 4). This different focus requires different sampling frames. The first part of RQ3, what competition looks like, is answered through Sampling Frame D, which comprises all applicants for Head Start grants resulting from the competition associated with DRS Cohort 3. The number of applicants is estimated to be 500 (based on the number of applicants participating in the grant competition for DRS Cohort 1). The second part of RQ3, how programs respond in communities where Head Start grantees are designated for competition, is answered through Sampling Frame E, the subset of Sampling Frame D made up of applicants that were awarded a new five-year grant.
Table B-2. Description of Sampling Frames
Sampling Frame | Population | DRS Implementation Stage | Frame Size and Description | Sample Size
RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?
Frame A | Grantees subject to monitoring in 2013-2014 | Stage 1: Pre-Designation Status | 434 grantees with center-based Head Start classrooms | 70 grantees, 300 centers, 560 classrooms
RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?
Frame B | Grantees subject to monitoring in 2013-2014 | Stage 1: Pre-Designation Status | 70 grantees (subsample of Frame A) | 35 grantees
Frame C | Grantees subject to monitoring in 2013-2014 | Stage 1: Pre-Designation Status | 35 grantees (subsample of Frame B) | 15 grantees
RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?
Frame D | DRS Cohort 3 | Stage 3: Competition | Estimated 500 applicant organizations for designated grants | Entire frame (500)
Frame E | DRS Cohort 3 | Stage 4: Award Notification (prior to new service delivery) | Applicants awarded grants (subset of Frame D) | 9 awardee organizations
1.2a. RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?
As stated in Part A, this evaluation will use independent measures of quality, primarily on-site observational assessment tools administered at the classroom, center, and grantee levels, to assess the validity of the DRS in identifying higher- and lower-performing grantees. The validity assessment part of the evaluation will focus on the 434 grantees that will receive a monitoring review in FY 2014 (Sampling Frame A). This group was identified as the sampling population because independent measures of quality can be collected around the same time that the Office of Head Start (OHS) is evaluating grantee status on the DRS conditions. Because quality can change over time, this proximity between the OHS assessment and the assessment conducted for the purposes of the study is important for reliably exploring the relationship between grantee status on the DRS conditions and independent measures of quality.
Collecting data near the time of the monitoring visit means that the designation status of the grantee will not be known at the time of data collection. The sample must ultimately include a sufficient number of grantees that have and have not been designated for competition so that the DRS identification into higher- and lower-performing grantees can be compared to the evaluation assessments on independent measures of quality. The sample design thus includes procedures for identifying which Head Start grantees are likely to be designated for competition when drawing the sample for the validity assessment portion of the evaluation.
Selecting Grantees for RQ1
Estimates below related to size of the universe and sampling frames are based on data reported by grantees in the Program Information Report (PIR) for FY 2012 and information provided by OHS. The estimates are based on the most current data sources available at the time of the development of this package. Actual sampling frames will be constructed using a combination of PIR data for FY 2013, data and information provided by OHS regarding changes to the grantee population for the 2013-2014 school year, and information provided by OHS on grantees subject to a monitoring review.
Universe:
As described in Part A, the universe for this study is Head Start grantees that offer a center-based program option to serve preschool-aged children, excluding Migrant and Seasonal Head Start (MSHS), American Indian/Alaskan Native Head Start (AIAN), stand-alone Early Head Start (EHS), and interim grantees. For FY 2012, the total number of Head Start and EHS grantees is 1,596. Excluding stand-alone EHS, Head Start grantees without center-based options, MSHS, AIAN, and interim grantees reduces the universe to approximately 1,195. The analytic unit (and thus the primary sampling unit) for this study is the Head Start grantee.
Sampling population:
Head Start grantees are divided into three monitoring cohorts so that OHS can monitor grantees on a three-year cycle. This triennial monitoring, along with monitoring visits conducted for selected grantees outside of the triennial cycle, forms the basis for the DRS “monitoring deficiency” criterion. The DRS “CLASS” criterion applies only to grantees that receive a triennial monitoring visit during the relevant time period. Because the analytic objective is to compare grantees designated for competition to those not designated for competition, the most appropriate comparison group is grantees from the same monitoring cohort.
Thus, the sampling population for RQ1 will be restricted to the set of grantees that have a monitoring review during FY 2014 (n≈434). We expect that findings from a single monitoring cohort will be generalizable to the population of grantees included in the universe defined above. Assignment to a particular monitoring cohort is not expected to be associated with grantee characteristics or any of the outcomes of interest for this study. However, the contractor will carefully compare demographic characteristics across cohorts (e.g., funded grantee enrollment, location, and program options offered) to identify whether any systematic differences exist and, if they do, to explore the implications of those differences for the generalizability of the study findings.
Sampling frame construction and sampling approach:
Sampling Grantees
The study will use a disproportionate stratified random sampling approach to select grantees. The design calls for a representative sample of 35 grantees designated for competition and a comparison group of 35 grantees determined eligible for a five-year non-competitive grant award. Because we anticipate that only 20-30 percent of grantees in the sampling population will actually be designated for competition, the sampling approach involves selecting disproportionately from the two groups. The primary sampling frame will be constructed using information provided by OHS on grantees scheduled for a monitoring review in FY 2014, and information from the PIR (confirmed by OHS) on which of those grantees are part of the universe.
A major sampling design feature involves stratification of the list of grantees and sample allocation to strata (i.e., how many grantees to sample from each stratum). We will stratify grantees by region (4 categories) and size (3 categories) in order to ensure a representative sample. We will exclude grantees in Alaska, Hawaii, and the US territories for budgetary reasons. Based on the distribution of grantee sizes, we will classify grantees as small, medium, and large (with the super-grantees as their own category).
Within the (4 x 3) = 12 cells formed by cross-classifying region by size, we will need to stratify further by a measure that reflects the propensity to be designated for competition. We will use the FY 2014 OHS monitoring data, scores from the OHS-measured Classroom Assessment Scoring System (CLASS) observations, any OHS monitoring findings that might be labeled a deficiency, and PIR information such as region, size, and composition to create a propensity measure reflecting the likelihood of being designated for competition. Findings of deficiency through monitoring and CLASS scores are the two primary criteria that lead to a designation for competition. A propensity model will be developed using the distributions of CLASS scores and deficiency findings from previous monitoring cohorts (Cohorts 2 and 3, if available) and the historical monitoring data of the grantees in the sampling frame. The propensity model will be used to predict the propensity of each grantee in the sampling population to be designated for competition. We will use the predicted propensities to develop sampling rates within each region x size cell/stratum in order to select roughly equal samples of 35 competition and 35 non-competition grantees (or to come very close to this allocation).
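For illustration, a minimal sketch of this propensity-based, disproportionate allocation is shown below. The column names (mean_class_score, prior_deficiency, funded_enrollment, region, size, designated) and the 75th-percentile cutoff used to flag likely-designated grantees are hypothetical stand-ins, not the study's actual variables or thresholds.

```python
# Illustrative sketch of the propensity-based, disproportionate stratified allocation.
# Column names are hypothetical placeholders, not the study's actual variable names.
import pandas as pd
from sklearn.linear_model import LogisticRegression

def allocate_sample(frame: pd.DataFrame, prior_cohort: pd.DataFrame,
                    n_compete: int = 35, n_noncompete: int = 35,
                    seed: int = 2014) -> pd.DataFrame:
    """Fit a propensity model on a prior cohort, score the current sampling frame,
    and draw roughly equal numbers of likely-competition and likely-non-competition
    grantees within region x size strata."""
    predictors = ["mean_class_score", "prior_deficiency", "funded_enrollment"]

    # 1. Model the probability of designation using a prior cohort's outcomes.
    model = LogisticRegression(max_iter=1000)
    model.fit(prior_cohort[predictors], prior_cohort["designated"])

    # 2. Score the current frame; flag the top quartile as "likely designated"
    #    (an assumption mirroring the expected 20-30 percent designation rate).
    frame = frame.copy()
    frame["propensity"] = model.predict_proba(frame[predictors])[:, 1]
    frame["likely_designated"] = frame["propensity"] >= frame["propensity"].quantile(0.75)

    # 3. Within each region x size stratum, sample in proportion to the stratum's
    #    share of its group so the two groups total roughly 35 grantees each.
    draws = []
    for likely, n_target in [(True, n_compete), (False, n_noncompete)]:
        group = frame[frame["likely_designated"] == likely]
        for _, stratum in group.groupby(["region", "size"]):
            n_take = max(1, round(n_target * len(stratum) / len(group)))
            draws.append(stratum.sample(n=min(n_take, len(stratum)), random_state=seed))
    return pd.concat(draws)
```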
Sampling Classrooms
Because the unit of analysis is the grantee, enough classrooms must be selected to obtain sufficiently precise grantee-level estimates on the independent measures of quality. Thus, the number of classrooms sampled per grantee will depend on the number of classrooms operated by each grantee. The total number of classrooms operated by grantees in the sampling frame ranges from 2 to 1,174, with a mean of 40 and a median of 21. The formula for selecting the number of classrooms per grantee has been constructed in a manner parallel to that used by OHS in selecting the classrooms to be sampled for CLASS observations. In addition, intra-class correlations, the design effect, statistical power, and effect sizes have been considered.
Once a grantee is selected, we will use a listing of all classrooms associated with that grantee to draw a proportionate stratified sample. We will capture the heterogeneity of the centers and classrooms by stratifying the list of classrooms by center. Specifically, we will randomly array the list of centers and, within each center, randomly array the list of classrooms, choosing every nth classroom, with n defined as the total number of classrooms operated by the grantee divided by the number of classrooms to be sampled. This approach will reduce the risk of drawing a biased sample by chance and yield a highly representative sample of classrooms.
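A minimal sketch of this systematic, center-stratified classroom draw appears below; the center and classroom identifiers are made up for illustration.

```python
# Sketch of the systematic classroom selection described above: randomly order centers,
# randomly order classrooms within each center, then take every nth classroom,
# where n = total classrooms / number of classrooms to be sampled.
import random

def sample_classrooms(classrooms_by_center: dict, n_to_sample: int, seed: int = 2014) -> list:
    rng = random.Random(seed)

    # Randomly array the centers, and the classrooms within each center.
    centers = list(classrooms_by_center)
    rng.shuffle(centers)
    ordered = []
    for center in centers:
        rooms = list(classrooms_by_center[center])
        rng.shuffle(rooms)
        ordered.extend(rooms)

    # Systematic selection with a random start inside the first interval.
    interval = len(ordered) / n_to_sample
    start = rng.uniform(0, interval)
    return [ordered[int(start + k * interval)] for k in range(n_to_sample)]

# Hypothetical grantee with three centers and eight classrooms, sampling four.
frame = {"Center A": ["A1", "A2", "A3"], "Center B": ["B1", "B2"],
         "Center C": ["C1", "C2", "C3"]}
print(sample_classrooms(frame, n_to_sample=4))
```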
Sampling Centers
Because some of the proposed measures include elements that need to be collected at the center level, it is important to maximize the heterogeneity of centers through our sampling of classrooms. Center-level data would be collected from centers in which a classroom is included in the sample.
Power to Identify DRS Status Differences on Quality Measures Collected at the Grantee Level:
The DRS conditions measured at the grantee level include debarment from receipt of federal or state funds, suspension by the Administration for Children and Families (ACF), whether the grantee had a poor audit, and whether the grantee is at risk of failing to continue as a “going concern” (i.e., at risk of financial failure). For the evaluation, only the Tuckman and Chang (1991) Financial Ratios, which measure financial vulnerability, will be collected at the grantee level. Financial vulnerability is assumed to be measured without statistical error. We will contrast the measures of the n=35 “competition designated” grantees and the n=35 “non-competition” grantees. We will have 80% power to detect effect sizes of d=.56 or larger on continuous outcomes and odds ratios of 1.86 or larger, given 35 agencies in each DRS group and alpha=.05. This provides reasonable power to detect differences, given that we anticipate large differences on DRS conditions in which there is variation, but limited power to detect policy-relevant differences related to DRS status, such as differences by agency size or region, for these outcomes.
Power to Identify DRS Status Differences on Quality Measures Collected at the Classroom or Center Level:
The classroom quality measures will be collected at the classroom level, and measures of health and safety, child development and education, family involvement, and management and supervision will be collected at the center level. The proposed study uses a two-level design for classroom measures that takes into account the lack of independence among classrooms within the same grantee. Grantee measures that are based on classroom (or center) observation are necessarily subject to sampling variability. This variation will affect the resulting minimum effect sizes when contrasting ‘competition’ and ‘non-competition’ grantees. We are controlling the extent of this variation by ensuring that the coefficient of variation of classroom- (and center-) based measures is below 20 percent for any given grantee. In consequence, we will have adequate statistical power to detect differences between ‘competition’ and ‘non-competition’ grantees. Power was computed using the Optimal Design software (Raudenbush et al., 2011). The power analysis assumed 35 grantees in each DRS status group, 8 classrooms on average per grantee, alpha=.05, and an intra-class correlation of .05. We have 80% power to detect differences of d=.26 or larger on measures of classroom quality (e.g., a difference of .20 on the ECERS Interaction score). A similar analysis indicated that we have 80% power to detect differences of d=.34 or larger on measures of center quality, assuming an average of 4.3 centers per grantee.
Table B-3: Minimum Detectable Differences and Effect Sizes for Hypothetical Classroom Observation Measures (comparing grantees designated for competition vs. those not)
Measure | Designated grantees | Classrooms per grantee | Centers per grantee | Minimum Effect Size
Classroom quality (e.g., ECERS-R) | Yes: n=35; No: n=35 | Yes: n=8; No: n=8 | n/a | d=0.26
Center quality (e.g., Health and Safety Checklist total score) | Yes: n=35; No: n=35 | n/a | Yes: n=4.28; No: n=4.28 | d=0.34
Grantee quality (e.g., Financial Ratios, Health and Safety Finding) | Yes: n=35; No: n=35 | n/a | n/a | d=0.56; odds ratio = 1.86
Note: Listed are the average anticipated numbers of classrooms and centers per grantee, with a target of 560 classrooms and 300 centers in total.
Assumes a one-tailed test, alpha=.05, statistical power=80%, and intra-class correlation=.05.
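For reference, the minimum detectable effect sizes in Table B-3 can be approximated with standard normal-approximation formulas; the short sketch below reproduces the calculations under the stated assumptions (one-tailed test, alpha=.05, 80% power, ICC=.05). It is an approximation rather than a reproduction of the Optimal Design software, so the values are close to, but not identical to, those reported above.

```python
# Approximate minimum detectable effect sizes (MDES) paralleling the power analysis above.
# Normal-approximation formulas are used instead of the Optimal Design software, so the
# results are close to, but not exactly, the Table B-3 values.
from scipy.stats import norm

def mdes_two_group(n_per_group, alpha=0.05, power=0.80):
    """One-tailed MDES (Cohen's d) for comparing two independent group means."""
    m = norm.ppf(1 - alpha) + norm.ppf(power)
    return m * (2 / n_per_group) ** 0.5

def mdes_clustered(j_per_group, n_per_cluster, icc, alpha=0.05, power=0.80):
    """One-tailed MDES for a two-level design (classrooms or centers nested in grantees)."""
    m = norm.ppf(1 - alpha) + norm.ppf(power)
    inflation = icc + (1 - icc) / n_per_cluster   # variance inflation from clustering
    return m * (2 * inflation / j_per_group) ** 0.5

print(round(mdes_two_group(35), 2))              # grantee-level measures: about 0.59
print(round(mdes_clustered(35, 8, 0.05), 2))     # classroom-level measures: about 0.24
print(round(mdes_clustered(35, 4.3, 0.05), 2))   # center-level measures: about 0.31
```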
A second power analysis was conducted for the third proposed analysis for RQ1, which concerns validating the individual DRS criteria. This analysis used the proportion of Cohort 2 agencies that were designated for competition due to either a deficiency or a low CLASS score, the two DRS criteria that accounted for over 99% of agencies designated for competition. We assumed 70 agencies in our sample, power of 80%, and alpha of .05. Based on data from DRS Cohort 2, 65% of grantees designated for competition in our sample are expected to have had a monitoring deficiency (i.e., 65% of 35 grantees). With 23 agencies in the sample expected to have a monitoring deficiency, we have good power to detect an agreement rate of .78. Similarly, we expect 40% of grantees designated for competition in our sample to have low CLASS scores (i.e., 40% of 35 grantees). With 14 grantees in the sample expected to have low CLASS scores, we have good power to detect an agreement rate of .74.
1.2b. RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?
The DRS Evaluation’s second research question involves describing grantee efforts to improve quality and perceived incentives for quality improvement. RQ2 will be addressed through qualitative data collection with purposively selected samples of grantees. As described above, the DRS is being phased in over time. To understand how the DRS relates to grantees’ quality improvement efforts at different points in time and with differing characteristics, the sampling approach for the qualitative data collection divides the universe of grantees into groups according to where they fall in the DRS process and targets those groups with questions related to their respective stage in the process.
RQ2 is designed to collect information from Head Start grantees on a set of topics that have not yet been systematically studied because the DRS is new. Additionally, the study will collect information about responses to the DRS across the diverse set of grantees, for the purpose of ensuring that requirements and guidelines, technical assistance, and training are sensitive to the vast differences in grantee operational approaches and populations served. Thus, the study design centers on an exploratory approach that can capture as much variation as possible and build understanding to support future investigation, rather than an approach that measures incidence or tests hypotheses.
Selecting Grantees for RQ2
Universe: The universe for RQ2 is the same as that for RQ1. The sample for the DRS Telephone Interview: Program Directors (Appendix F) will be purposively selected, with 17 cases drawn from the group expected to be designated for competition and 18 cases drawn from the group expected to be designated for a non-competitive five-year grant. Grantee size (funded center-based Head Start enrollment), rural/urban status, region, organizational auspice (for-profit, non-profit, school, etc.), and presence of delegate agencies will serve as additional stratification variables. The sample for the follow-up site visits for administration of the DRS In-Depth Interview: Agency Directors, DRS In-Depth Interview: Program Directors, DRS In-Depth Interview: Policy Council/Governing Body, and DRS In-Depth Interview: Program Managers (Appendices G-J) will be purposively selected to include 7 cases drawn from the group expected to be designated for competition and 8 cases drawn from the group expected to be designated for a non-competitive five-year grant. Purposive selection of grantees for the site visits will focus on identifying a sample with diverse characteristics and diverse responses to the DRS, specifically in terms of the types of actions undertaken to improve quality, the perceived amount of potential competition for the Head Start grant in their community, and the expressed level of concern about being designated for competition. Two sampling frames will be used (described below as Frames B and C).
Sampling Frame B will be made up of grantees selected for the data collection as part of RQ1 through the sampling procedures outlined in the previous section (Sampling Frame A). According to those parameters, this frame will be made up of 70 grantees that are representative of grantees that receive OHS monitoring reviews during FY 2014. From that frame, we will select a subsample of 35 grantees to participate in the 75-minute DRS Telephone Interview: Program Directors (Appendix F) in Spring 2014. These interviews will focus on grantee understanding of and responses to the DRS, prior to being notified of their designation status. Purposive selection of grantees in this frame will focus on maximizing diversity in terms of expected designation status and characteristics such as grantee size (funded enrollment), rural/urban status, auspice, region, and presence of delegate agencies.
Sampling Frame C will be made up of the 35 grantees selected to participate in the DRS Telephone Interview: Program Directors (Appendix F) in Frame B. From that frame, we will select a subsample of 15 grantees to participate in the DRS In-Depth Interview: Agency Directors, DRS In-Depth Interview: Program Directors, DRS In-Depth Interview: Policy Council/Governing Body, and DRS In-Depth Interview: Program Managers (Appendices G-J) through one-to-two day site visits. Like the telephone interviews conducted within sampling Frame B, but in greater depth, these interviews will focus on grantee understanding of and responses to the DRS, prior to being notified of their designation status. Purposive selection of grantees in this frame will focus on maximizing diversity in terms of expected designation status and characteristics such as grantee size (funded enrollment), rural/urban status, auspice, region, presence of delegate agencies, and views of and reactions to the DRS as expressed in the DRS Telephone Interview: Program Directors (Appendix F).
1.2c. RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?
The third research question will examine the level of competition generated by the DRS, the perceptions of how the competition works, and the incentives for quality improvement related to competition through administrative data and qualitative interviews.
Selecting Organizations for RQ3
Universe: The universe for RQ3 is all of the organizations, both incumbent grantees and new competitors, that participate in the competitive grant application process expected to take place between March and May 2014 for DRS Cohort 3. The size of this universe is unknown due to the newness of the competitive five-year grant process. However, 500 applications were received for the Cohort 1 competition cycle, so the universe is expected to be of similar size.
The first part of RQ3 assesses the reality of the competitive process – What does the competition look like? This type of assessment requires collection of data to describe the number and characteristics of competitors. These data will be collected through an instrument designed for the purpose of this research, the Competition Data Capture Sheet (CDCS) (Appendix N). ACF will administer the CDCS as part of the Funding Opportunity Announcement (FOA) and application process in Spring 2014, and then provide the capture sheets to the evaluation team.
Sampling Frame D will be comprised of all organizations participating in the competitive grant application for DRS Cohort 3. Sampling will not occur for this data collection because we cannot get a count of how much competition has occurred without collecting data from all applicants.
The second part of RQ3 examines how organizations respond in communities where Head Start grantees are designated for competition. This type of assessment suggests the use of exploratory interviews, similar to those described for RQ2 but with an emphasis on experiences with and responses to the competitive process. Incumbent grantees that would have been eligible to compete, but have chosen not to do so, will not be included in the sampling frame. There have been fewer than five such grantees since the DRS process began.
Sampling Frame E will be made up of the grantees receiving an award through the DRS Cohort 3 competitive process. The application process is expected to take place in March-May 2014, and awards are expected around December 2014; thus, the size of the sampling frame is not known at this time. The CDCS will capture the characteristics of the sampling frame. Purposive selection of grantees in this frame will focus on maximizing diversity in characteristics such as grantee size (funded enrollment), rural/urban status, auspice, region, reason for designation for competition, and previous relationship to Head Start. Our sample of nine will include four incumbent grantees and five new awardees. Grantees selected for this sample will participate in a one-to-two day site visit that will involve Competition In-Depth Interviews with the Head Start Program Director, the Agency Director (if different from the program director), Governing Body and Policy Council members, and Program Managers (Appendices K-M). Interviews will focus on organizational decisions to apply for competitive funding; the participation of partners in the process; whether and how the organization's relationships to Head Start and to the community changed in the process; what the organization knew about the challenges faced in the community and how it proposed to address those challenges; the barriers it expected in applying and the barriers it actually faced; how it experienced the process as a whole; and how having competed is likely to shape its thinking as it moves forward in the implementation of its Head Start grant.
B2. Procedures for Collection of Information
RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?
Recruitment
Sample recruitment for this portion of the study will begin one month after receiving OMB approval for this data collection effort. The research team will collate available information about Head Start grantees and centers included in the sample, including director name, phone number, and e-mail (if available), grantee physical and e-mail address, language preference, program hours, and grantee/delegate status.
A member of the research team will initiate the first call (see Phone Script for Contacting Head Start Grantees, Appendix O1.3) to each Head Start Program Director at the grantee level, seeking their participation and highlighting the value of their participation in the evaluation of the DRS and how the results will be used. We also will ask the grantee-level director to email center directors to introduce the study, demonstrate their approval of the project, and encourage center directors to participate (see Email from Head Start Program Director at the Grantee-Level to Notify Center Director of Study Permission, Appendix O1.5).
The research team will send each selected Head Start Center Director an informational letter about the study (see Appendix O2.1), followed by phone calls using a pre-developed Phone Script for Contacting Head Start Centers (see Appendix O2.3) highlighting that participation will aid in understanding the effectiveness of the DRS and that the data collected will be kept private, meaning that the data will be used only for the purposes of this research. Verbal consent will be obtained from center directors during this call (see the informed consent document in Appendix O2.4). If center directors agree to participate in the study, the research team will then schedule site visits (or the program director at the grantee level will be notified to begin scheduling visits, if they prefer to handle scheduling). Each site visit will last up to five days and will consist of interviews with grantee staff, the center director, and selected classroom teachers, as well as an observation in each selected classroom. This is detailed below in the on-site data collection activity section.
Data Collector Recruitment, Hiring, and Training
The data collector recruitment, hiring, and training plan minimizes the impact of error during data collection and increases the likelihood of retaining trained data collectors throughout the data collection period. We will recruit staff from within or near the selected Head Start sites to minimize travel time and cost. To avoid potential conflicts of interest, data collection staff will not be recruited from the pool of data collectors who collect data for the OHS monitoring process. Approximately half of the data collectors will be trained on the Environment Rating Scales (ECERS-R, ECERS-E) and the Child Care Health and Safety Checklist, and half will be trained on the Teacher Style Rating Scale (Adapted TSRS) and the CLASS. About three-quarters will be trained on the Program Administration Scale (PAS).
Initial training of data collectors will take place in a group meeting before data collection commences with the goals of orienting them to the purposes of the study, the procedures required of them, and ethical principles of assessment and data handling, followed by specific training on the measures. During this training, data collectors will be briefed on the provisions for the protection of human subjects approved by the IRB, including procedures for informed consent, confidentiality and privacy (including legal requirement to report abuse or neglect and applicable procedures for that), and data security. Data collectors will also be trained to omit identifying information from any notes, even if respondents use identifying information in response to questions. Senior contractor staff experienced with both primary data collection and supervision of data collectors will provide this initial training.
Each of the observational assessment instruments has a standardized training process designed, and typically facilitated, by the developers of the scales. For the ECERS-R and ECERS-E, as in previous studies, an experienced member of the research team will be trained to reliability by the authors of the measure. This staff member will then provide the training for the data collectors and serve as the anchor for reliability. Each consultant will be required to meet a reliability criterion of 80 percent agreement, calculated by dividing the number of items scored within one scale point of the gold standard score by the total number of items. For example, if the data collector is within one scale point of the experienced staff member's score for 20 of 22 items, the reliability score would be approximately 91 percent. Certification of data collectors will be based on observers' reliability scores, as well as the measure trainer's qualitative evaluations of each observer. Additional days for reliability will be scheduled for individuals who do not meet the reliability standard. If data collectors do not attain reliability after five days of field observation, they will not be kept on staff for data collection. These procedures are similar to those that the study sub-contractor has used in previous studies, such as the Quality Initiatives for Early Care and Education and the National Center for Early Development and Learning. In conjunction with this training, the consultants will also be trained on the Child Care Health and Safety Checklist. Data collectors will practice using the checklist and certify with a master coder.
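The certification criterion amounts to a simple calculation; the snippet below illustrates the within-one-point agreement rule and is not part of the study's data collection software.

```python
# Illustration of the 80 percent reliability criterion: the share of items on which
# the trainee's score falls within one scale point of the gold standard score.
def percent_within_one(trainee_scores, gold_scores):
    agreements = sum(abs(t - g) <= 1 for t, g in zip(trainee_scores, gold_scores))
    return 100 * agreements / len(gold_scores)

# Example: agreement on 20 of 22 ECERS-R items yields about 91 percent,
# which exceeds the 80 percent certification criterion.
```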
For the CLASS, a trainer from Teachstone will provide a two-day CLASS Observation Training to prepare data collectors to take the test for CLASS Observer Certification. Data collectors will review materials prior to reliability testing to increase proficiency and accuracy. Data collectors will have 30 days after training to take the test. Once data collectors take and pass the CLASS reliability test, they will be certified for one year. For data collectors who are already certified in the CLASS, the Project Coordinator will confirm their certification. Data collectors who require recertification will take and pass the recertification reliability test to be certified for another year. Training for the Adapted TSRS will follow standard procedures developed for previous studies, with reliability procedures similar to those for the CLASS observation.
Data collectors hired to conduct the PAS will attend a four-day reliability training provided by the McCormick Center for Early Childhood Leadership. This intensive training provides an overview of the reliability and validity of the PAS; rating indicators and scoring items; interview protocol for collecting data; verifying documentation; and establishing and maintaining reliability. Certification is valid for two years.
Data Collection Procedures
The sub-contractor on the research team, the Frank Porter Graham Child Development Institute at the University of North Carolina-Chapel Hill (FPG), will oversee data collection, management, and analysis. The FPG Project Coordinator will supervise data collectors while on site, coordinate observation schedules, and work with research assistants to coordinate scheduling of interviews to align with data collection in each region.
Typically, a two-member data collection team will be used, although team composition depends on the size of the grantee and the number of classrooms sampled. This team will spend up to five business days (depending on the number of sampled classrooms) with grantees and selected programs, collecting observation data, conducting interviews with directors and teachers, and completing additional ratings as needed. Each data collection team will have one member trained in the ECERS and the Child Care Health and Safety Checklist, one member trained in the CLASS and Adapted TSRS, and at least one member trained in the PAS and other project-developed ratings. This team structure will help ensure the reliability of data collection, as well as safety. For unforeseen issues with data collectors (e.g., sick leave, emergency), on-call data collectors will be available to replace a team member on short notice.
The research team will develop computer-assisted interviews (CAI) for use during interviews with key personnel and center directors, and will use tablets to record the data during observations and transmit them to the FPG database once collected. In addition to all relevant project materials (e.g., information letter, project fact sheet; see Appendix O), data collectors will also carry paper copies of measures and protocols in case of equipment malfunction.
Data collectors will complete a Center Demographic Sheet (see Appendix D) for each center prior to beginning data collection, including address, director information, and type of program. Each data collector will record when they schedule a visit and when they finish the visit. Project staff will check the data tracking website daily and stay in communication with data collectors to ensure that data are being collected in a timely manner. The measures used in data collection are described in summary in Part A and Appendices B-E.
Site Visits
A team of two data collectors will visit each of the selected sites. Site visits will last up to five days, with four classrooms completed by one data collection team in that period. In large grantees with up to eight sampled classrooms, site visits will occur over two weeks or a second team of data collectors will be used, depending on which procedure is more efficient. During the site visits, data collectors will spend up to four hours on observational measures, including an approximately 110-minute interview with the center director to complete the PAS (Appendix B2), Child Care Health and Safety Checklist (Appendix B3), Center Director Questionnaire (Appendix D), and Technical Assistance and Training Interview (Appendix E). The site visit days will follow the basic schedule below, with variation depending on the availability of the director; interviews will be scheduled at times that are convenient for Head Start staff. The observations will be conducted during normal class time, during which staff can go about their daily routine.
Day 1
Child Care Health and Safety Checklist
PAS (director interview at time convenient to director, document review, and program observation)
Center Director Questionnaire
Technical Assistance and Training Interview
Day 2
ECERS-R and ECERS-E (classroom observation + interview as needed) for Classroom 1
CLASS/Adapted TSRS (classroom observation) for Classroom 1
Day 3
ECERS-R and ECERS-E (classroom observation + interview as needed) for Classroom 2
CLASS/Adapted TSRS (classroom observation) for Classroom 2
Day 4
ECERS-R and ECERS-E (classroom observation + interview as needed) for Classroom 3
CLASS/Adapted TSRS (classroom observation) for Classroom 3
Day 5
ECERS-R and ECERS-E (classroom observation + interview as needed) for Classroom 4
CLASS/Adapted TSRS (classroom observation) for Classroom 4
For the Child Care Health and Safety Checklist and the ECERS measures, the interviewer will be able to score many items based on observation alone. If the interviewer is unable to score an item based on observation, he or she will ask the appropriate staff member about standard practices and score the item accordingly (see Quality Measures Follow-Up Interviews in Appendices C and D).
Informed Consent
Verbal informed consent will be requested from Head Start center directors and program directors. Written informed consent will be requested from teachers. One copy of each of these consent forms will be retained in the contractor’s files. Copies of the consent forms will be given to participants for their records. See each instrument for the informed consent language that will be used in the interviews and the observations in Appendix O.
Quality Control
Quality assurance will be maintained, in part, through a central tracking system, the use of electronic data collection, reliability checks for observations, and the review of interview tapes. A central, web-based tracking system will record each recruited grantee and the status of data collection for that site (ready to be scheduled, scheduled, collected, transferred to FPG). Project staff will check all data for quality assurance issues (e.g., valid IDs, range checks per variable) during data collection and will transmit the data to the FPG database once collected. Project staff will check for consistency between IDs and personal identifiers and remove the identifiers once these checks are successfully completed. In addition, a data manager will monitor the data received, check them for other potential errors, update the data, and score the data. To prevent drift and to assist in maintaining satisfactory interobserver agreement, the project's master observer will conduct reliability visits in 10% of the classrooms and 10% of the centers. The master observer and the data collector will observe and independently rate the same classrooms.
RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?
DRS Telephone Interview: Program Directors
Data will be collected through the DRS Telephone Interview: Program Directors (Appendix F) from 35 local Head Start grantees that are a subset of the grantees where the observational assessments described under RQ1 have occurred. The interview respondent will be the Head Start program director. The telephone interview will take place roughly two weeks after the on-site assessments conducted for RQ1.
Recruitment
Program directors will be introduced to the study through the recruitment processes described under RQ1. However, because not all program directors participating in the observational assessments will be selected to participate in the telephone interview, a separate outreach effort will be conducted. The program director will be contacted by phone to answer any questions they have about this component of the research and to discuss their participation (Appendix O4.2). These initial calls will conclude by scheduling the interview at a future time that is convenient for the respondent (or by establishing a procedure for scheduling with an alternative respondent).
Informed Consent
Verbal consent will be obtained from respondents at the beginning of each telephone survey (see instrument for the verbal informed consent language that will be used in the survey, in Appendix F). The same protocol will be used for all grantees.
Training
All interviewers will be trained on the protocol by senior members of the research team. During interviewer training, the interviewers will be walked through the protocol and engage in discussion of the purpose of each item, examples of desired responses, and appropriate probes to use. Interviewer training will also include interviewer responsibilities regarding ethics of data collection, informed consent requirements, and confidentiality of data. To ensure ongoing quality and consistency, senior members of the research team will periodically review completed interview notes and meet with interviewers to discuss the interview process and resolve any issues.
Data Collection
Before conducting each survey, researchers will familiarize themselves with the grantee by reviewing data on key grantee characteristics from the PIR and other information available about the grantee. To facilitate data collection, entry, and management, interview protocols will be programmed into a web application such as Checkbox that interviewers will access on a secure server. This application will prompt interviewers through the protocol and also serve as the mechanism by which telephone interviewers record responses. Each telephone survey is expected to last approximately 75 minutes.
DRS In-Depth Interview: Agency Directors, Program Directors, Policy Council/Governing Body, and Program Managers
Additional qualitative data will be collected through site visits with a subset of the grantees that participated in the DRS Telephone Interview: Program Directors (Appendix F). During the summer of 2014, the contractor will examine preliminary data from the telephone interviews and select 15 grantees for the DRS In-Depth Interview: Agency Directors, DRS In-Depth Interview: Program Directors, DRS In-Depth Interview: Policy Council/Governing Body, and DRS In-Depth Interview: Program Managers (Appendices G-J). These interviews will take place over the course of one-to-two-day site visits during which a variety of perspectives will be sought on understanding of and reactions to the DRS. It is important to seek out a variety of perspectives because Head Start grantees rely on a diffuse governance system, including several types of managers and directors as well as governing body and policy council members, to make and carry out decisions and to allocate resources. The perceptions and understanding of all of these individuals are likely to influence how the DRS is experienced by the grantee.
Recruitment
Letters will be sent to the Head Start program director in each selected grantee to invite them to participate (Appendix O4.1), and follow-up phone calls will be made to further recruit grantees and schedule the site visits for September-December 2014 (Appendix O4.2). The site visit team will work with the program director or designee to determine the best methods for recruiting participants in each of the targeted respondent groups.
Training
All interviewers will be trained on the protocol by senior members of the research team before the interviews. During interviewer training, the interviewers will be walked through the protocol and discuss the purpose of each item, examples of desired responses, and appropriate probes to use. Interviewer training will also include interviewer responsibilities regarding ethical collection of data, informed consent requirements, and confidentiality of data.
Data Collection
Before visiting each site, researchers will familiarize themselves with the grantee by reviewing data on key grantee characteristics from the PIR and other information available about the grantee. A team of two researchers will visit each of the 15 selected sites. The senior researcher will lead most of the interviews, and the junior researcher will take detailed notes as close to verbatim as possible. With the permission of the respondent, the interviews will be recorded, solely for the purpose of editing and correcting the notes and creating a targeted transcription with key responses.
During the one-to-two-day site visits, researchers will conduct a 90-minute follow-up interview through the DRS In-Depth Interview: Program Directors (Appendix H) with the staff member who responded to the DRS Telephone Interview: Program Directors (Appendix F). Researchers will conduct a one-hour interview with the agency director using the DRS In-Depth Interview: Agency Directors (Appendix G). Some managers will be interviewed in a 90-minute small group interview (e.g., education services manager, or other specialty area) using the DRS In-Depth Interview: Program Managers (Appendix J). Researchers will also conduct 90-minute group interviews of policy council and governing board members using the DRS In-Depth Interview: Policy Council/Governing Body (Appendix I).
The interviews will all use discussion guides with key topics and open-ended questions rather than close-ended questions (i.e., rigidly specified and directly quantifiable questions). This approach is the best-suited data collection method for understanding in depth how the DRS is understood, the incentives experienced, and the actions taken. The researchers will be trained to mark in their notes when a key statement is made so that quotes can later be checked for accuracy against the recording.
RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?
Data Collection Procedures for Competition Data Capture Sheet (CDCS)
The CDCS (Appendix N) will collect information from competitive applicants in DRS Cohort 3. It captures information through the FOA application process by consolidating fields of interest for the evaluation into a single response section. All competitive applicants in DRS Cohort 3 will be requested to complete the sheet as part of the FOA process. OHS will forward the CDCS documents to the evaluation team when the application process closes.
Competition In-Depth Interview: Agency and Program Directors, Policy Council and Governing Body, and Program Managers
The contractor will examine preliminary data from the CDCS and purposively select 9 awardees (four incumbent grantees and five new awardees) for site visits, maximizing diversity in characteristics such as grantee size (funded enrollment), rural/urban status, auspice, region, and reason for designation for competition.
Recruitment
Letters will be sent to the Head Start program directors in the selected grantees to invite them to participate (Appendix O5.1), and follow-up phone calls will be made to further recruit grantees and schedule the site visits for the months of January-April 2015 (Appendix O5.2). The site visit team will work with the program director or designee to determine the best methods for recruiting participants in each of the targeted respondent groups.
Training
All interviewers will be trained on the protocol by senior members of the research team before the interviews. During interviewer training, the interviewers will be walked through the protocol and discuss the purpose of each item, examples of desired responses, and appropriate probes to use. Interviewers will be given an overview of the DRS competitive process, and will learn how to select the appropriate sub-protocol based on the incumbent status and when the organization had been formed. Interviewer training will also include interviewer responsibilities regarding ethical collection of data, informed consent requirements, and confidentiality of data.
Data Collection
Before visiting each site, researchers will familiarize themselves with the grantee by reviewing data on key grantee characteristics from the PIR, data collected through the CDCS, and other information available about the grantee. Two site visitors will conduct each of the 9 site visits as a team, with the senior researcher leading most of the interviews and the junior researcher taking detailed notes as close to verbatim as possible. With the permission of the respondent, the interviews will be recorded, solely for the purpose of editing and correcting the notes and creating a targeted transcription with key responses. Site visits will last one to two days and will feature data collection through the Competition In-Depth Interview: Agency and Program Directors, Competition In-Depth Interview: Policy Council/Governing Body, and Competition In-Depth Interview: Program Managers (Appendices K-M).
During the site visits, researchers will conduct the 75-minute Competition In-Depth Interview: Agency and Program Directors (Appendix K), separately for each respondent. Researchers will also conduct the group Competition In-Depth Interview: Policy Council/ Governing Body (Appendix L), lasting about 90 minutes. Finally, in the Competition In-Depth Interview: Program Managers (Appendix M), some managers will be interviewed in a small group interview (e.g., education services manager, or other specialty area), also lasting about 90 minutes.
The interviews will use discussion guides with key topics and open-ended questions rather than close-ended questions (i.e., rigidly specified and directly quantifiable questions). This approach is the best-suited data collection method for understanding in depth how competition is experienced and how the competitive process relates to changes in partnerships, funding, and incentives for quality improvement. The researchers will be trained to mark in their notes when a key statement is made so that quotes can later be checked for accuracy against the recording.
Secondary Data Sources
Secondary data sources will be used for several purposes in the study. They will be used to develop the propensity model, and to create the propensity measure, for the sampling selection process (see B2.1). Additionally, secondary data sources will be used to inform the qualitative interviews with grantees by contributing to descriptive profiles of grantees and providing background information about where, when and from whom grantees receive information regarding the DRS. The research team will also connect the primary data collected for the study to secondary data sources to enrich the analyses of the research questions. Specifically, these data will be used to help understand the circumstances in which the DRS is working more or less well (e.g., to look at differences between grantees with different characteristics or experiences with the DRS). We intend to draw data from the following documents and data sources as part of our evaluation:
PIR data available through OHS. PIR will provide descriptive data on features and characteristics of the Head Start program, staff, and children/families served for each grantee participating in the study. The data will be used for the sampling selection process; to provide summary statistics on the population and sample of Head Start grantees; and to understand the circumstances under which DRS works more or less well by linking PIR with primary data sources.
Monitoring data available through OHS. Monitoring data will include scores for the OHS-assessed CLASS observations as well as data related to findings of non-compliance and deficiency during monitoring reviews. These data will be used for the sampling selection process; to conduct summary statistics characterizing the population and sample; and to link with primary data sources to understand how the DRS is working in relation to the monitoring condition.
Designation status data available through OHS. These data will provide information about which grantees are designated for competition (and which are not) and which conditions triggered designation for competition. The data will be used for the sampling selection process. It will also be linked to other data sources to examine the validity of the designation and to understand the circumstances in which the DRS is working more or less well.
Other materials available through federal web-sites. Other materials may include information memorandums, policy clarifications, postings about the DRS, and other materials intended to inform grantees and potential applicants about the DRS or competitive process. This information will be used to provide background on the timing of communications from OHS and the mechanisms by which information is distributed.
Data on nonprofit Head Start organizations available from the Urban Institute’s National Center for Charitable Statistics (i.e., IRS Form 990). These data include financial information about federally tax-exempt organizations reported to the IRS with Form 990. The data will be used as an independent measure of financial vulnerability to contribute to answering RQ1 by calculating the Tuckman and Chang (1991) financial ratios for nonprofit organizations participating in the study. A description and the calculation methods for the ratios are found in Table B-4:
Ratio | Computation | Form 990 Variables
Equity Ratio | |
Revenue Concentration | |
Administrative Cost Ratio | |
Operating Margin | |
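As a point of reference, the sketch below computes the four ratios from Form 990 totals using the standard Tuckman and Chang (1991) definitions; the specific Form 990 line items adopted for the study may differ from the placeholders used here.

```python
# Sketch of the Tuckman and Chang (1991) financial vulnerability ratios using their
# standard definitions; the Form 990 line items used in the study may differ.
def tuckman_chang_ratios(net_assets, total_revenue, total_expenses,
                         admin_expenses, revenue_by_source):
    shares = [r / total_revenue for r in revenue_by_source]
    return {
        # Net assets (equity) relative to the scale of operations.
        "equity_ratio": net_assets / total_revenue,
        # Herfindahl-style index of revenue shares; values near 1 indicate reliance
        # on a single revenue source.
        "revenue_concentration": sum(s ** 2 for s in shares),
        # Share of revenue absorbed by management and general (administrative) expenses.
        "administrative_cost_ratio": admin_expenses / total_revenue,
        # Surplus or deficit as a share of total revenue.
        "operating_margin": (total_revenue - total_expenses) / total_revenue,
    }
```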
B3. Methods to Maximize Response Rates and Deal with Nonresponse
Expected Response Rates
Dealing with Nonresponse
RQ1: How effective is the DRS in identifying higher and lower quality Head Start grantees?
The contractor will draw a sample of grantees that is at least 25 percent larger than needed (1/.8 = 1.25) and will hold back 20 percent of the sample from initial recruitment. Note that the sample is divided into two sampling strata (those likely to be designated for competition versus those likely not to have to compete), and those two strata may be further stratified by size and region. If a sampled grantee declines to participate in the research, the team will discuss the case and the concerns the site has about participating, and brainstorm options for addressing the site's concerns. If it is ultimately determined that a selected site cannot or will not participate in the study, the contractor will return to the stratum from which the non-respondent was selected and contact the next grantee in the randomly ordered list.
Drawing replacements from the randomized list of grantees within strata, or of classrooms within a grantee, will help reduce nonresponse bias. However, it is always possible that the grantees that refuse to participate in this study, or that cannot be scheduled, will differ from those that do participate, leading to potential bias. The size of the potential bias depends on both how much the non-participants differ and the response rate. Using administrative data, the contractor will compare the demographic characteristics of the grantees that respond with those that do not. If there are significant differences, the contractor will apply a post-stratification weighting adjustment so that the study sample has the same demographic make-up as the overall population, which will reduce the potential for nonresponse bias.
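A minimal sketch of the post-stratification weighting adjustment described above, assuming respondents and the population can be tabulated into the same demographic cells (the cell keys used here are hypothetical):

def poststratification_weights(population_counts, respondent_counts):
    # population_counts and respondent_counts are dicts keyed by demographic cell,
    # e.g., (size_category, region) -> number of grantees.
    pop_total = sum(population_counts.values())
    resp_total = sum(respondent_counts.values())
    weights = {}
    for cell, pop_n in population_counts.items():
        resp_n = respondent_counts.get(cell, 0)
        if resp_n == 0:
            continue  # cells with no respondents must be collapsed with similar cells first
        # Weight each respondent so the cell's weighted share matches its population share.
        weights[cell] = (pop_n / pop_total) / (resp_n / resp_total)
    return weights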
Multiple imputation will be used during analysis to account for missing data within recruited grantees. This approach should provide unbiased estimates that account for the inevitable failure to collect all data in all sites.
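A minimal sketch of generating multiply imputed data sets, assuming a numeric analysis matrix and scikit-learn's IterativeImputer; the study's actual imputation model and software are not specified here:

import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (enables IterativeImputer)
from sklearn.impute import IterativeImputer

def multiply_impute(X, m=5):
    # Produce m completed copies of X (an array with np.nan marking missing values).
    # Analyses are run on each completed copy and the estimates pooled (e.g., via Rubin's rules).
    completed = []
    for i in range(m):
        imputer = IterativeImputer(sample_posterior=True, random_state=i)
        completed.append(imputer.fit_transform(np.asarray(X, dtype=float)))
    return completed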
RQ2: How have Head Start grantees understood and responded to the provisions of the DRS in terms of their efforts to improve program operations and quality?
The 35 telephone interviews and 15 in-depth grantee site visits will be purposively selected to maximize diversity. During the purposive selection process, alternative grantees will be identified to replace those initially selected should interviewees refuse participation. If a sampled grantee declines to participate in the research, the team will discuss the case and the concerns the site has about participating, and brainstorm options for addressing those concerns. If it is ultimately determined that a selected site cannot or will not participate in the research, the contractor will return to the sampling stratum from which the non-respondent was selected and contact the alternative grantee that is most similar to the grantee that declined.
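Illustrative only, one way the "most similar" alternative could be operationalized, using hypothetical, standardized numeric grantee characteristics (the actual selection will also reflect qualitative judgment):

def most_similar_alternative(declined, alternatives, features):
    # Return the alternative grantee closest to the declining grantee,
    # measured by Euclidean distance over the purposive selection characteristics.
    def distance(a, b):
        return sum((a[f] - b[f]) ** 2 for f in features) ** 0.5
    return min(alternatives, key=lambda alt: distance(declined, alt))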
RQ3: What does competition look like and how do programs respond in communities where Head Start grantees are designated for competition?
The CDCS is to be completed as part of the FOA application process. Because the request to fill out the form is directed to the full population, it will not be possible to replace nonrespondents with others. However, it may be possible to capture some of the nonrespondent data directly from the FOA application. Any information learned about the level and types of competition will be informative, and caveats about what can be concluded will be provided in light of the response rate.
The 9 in-depth grantee site visits will be purposively selected to maximize diversity. During the purposive selection process, alternative grantees will be identified to replace those initially selected should interviewees refuse participation. If a sampled grantee declines to participate in the research, the team will discuss the case and the concerns the site has about participating, and brainstorm options for addressing those concerns. If it is ultimately determined that a selected site cannot or will not participate in the research, the contractor will return to the sampling stratum from which the non-respondent was selected and contact the alternative grantee that is most similar to the grantee that declined.
Maximizing Response Rates
Responses to interviews and observations will be maximized in several ways. First, interviews will be scheduled at times that are convenient for respondents and can be separated into several sections to minimize fatigue. Observers will strive to be as unobtrusive as possible while in the classrooms, and all staff can go about their regular activities while observations are underway. Further, recruitment materials inform participants that study data collection will not be used to evaluate individual Head Start grantees. Finally, incentives will be offered for participation in the assessments associated with RQ1. Teachers whose classrooms are observed will be offered $25 gift cards, and grantees that agree to participate will be offered $200 to $500, depending on the numbers of centers and classrooms sampled and participating in assessments. The grantee-level gift, offered in appreciation of participation, will be used to support designating a grantee staff member as an on-site coordinator to help schedule classrooms and centers for observation. (Incentives are discussed in more detail in Supporting Statement A.)
The majority of the independent measures of program quality have been used successfully with similar populations in studies conducted by FPG. The research team will pilot the full data collection battery with several classrooms and centers to confirm timing and scheduling prior to official data collection.
The instruments developed by the research team will be pilot-tested prior to use with grantees that are not in the sampling frames identified for this study. Testing will be conducted both before and after the OMB clearance process with testing occurring within proximity to the data collection period. For example, for instruments that will collect data in Spring 2014, testing will occur in Fall 2013. For instruments that will collect data in Spring 2015, testing will occur in Fall 2014. Proximity of testing to the time period of instrument implementation is important because it is likely that understandings of how the DRS works and the language associated with the DRS will change over time.
The study design plan and data collection protocols were developed jointly by project staff at the Urban Institute and the Frank Porter Graham (FPG) Child Development Institute at the University of North Carolina-Chapel Hill. Key project staff include:
Urban Institute:
Teresa Derrick-Mills, Project Director and Co-Principal Investigator
Elizabeth Peters, Project Senior Advisor
Monica Rohacek, Task Leader
Rob Santos, Senior Methodologist
Tim Triplett, Sampling Methodologist
FPG Child Development Institute:
Peg Burchinal, Co-Principal Investigator
Iheoma Iruka, Co-Principal Investigator
The core team of Urban Institute and FPG researchers engaged in this study possesses the skills and experience needed to carefully, thoroughly, and successfully evaluate the Head Start DRS. This leadership team brings extensive experience designing and implementing complex, mixed-methods research studies, expert knowledge of Head Start/Early Head Start and other early childhood programs, and extensive field and survey research experience and training, including overall design, instrument design, survey and qualitative interviewing, and analysis of quantitative and qualitative data. The leadership team will be directly engaged in data collection and analysis, and will be joined by additional researchers from the Urban Institute.
In developing the study design and data collection protocols, the study team convened an expert workgroup consisting of the following six members:
Greg Duncan, Distinguished Professor, School of Education, University of California at Irvine
Stephanie Jones, Marie and Max Kargman Associate Professor in Human Development and Urban Education Advancement, Graduate School of Education, Harvard University
Christine McWayne, Associate Professor of Child Development, Eliot-Pearson Department of Child Development, Tufts University
Kathryn Newcomer, Professor and Director, Trachtenberg School of Public Policy and Public Administration, George Washington University
Kathryn Tout, Co-Director, Early Childhood Research & Senior Research Scientist, Child Trends
Mildred Warner, Professor, Department of City and Regional Planning, Cornell University
These experts include individuals with technical expertise in a number of areas in addition to expertise specific to Head Start, including Measurement of Quality in Child Care Classrooms; Measurement of Management and Financial Quality; Quality Rating and Improvement System (QRIS) Research and Evaluation; Research and Evaluation of Accountability Systems; Examination of Government-Induced Competition; and General Study Design, including Econometrics for Determining Causal Inference with Quasi-Experimental Designs.
The Federal Project Officer for this project is Amy Madigan. Jennifer Brooks (OPRE) has also played an integral role in the review and approval of data-related aspects of the project. OHS has supplied grantee and administrative data necessary to construct a sampling plan and supplement primary observation data collected on-site.
References
Administration for Children and Families (ACF). (2005). Head Start Impact Study: First Year Findings. Washington, DC: U.S. Department of Health and Human Services.
West, J., Malone, L., Hulsey, L., Aikens, N., & Tarullo, L. (2010). Head Start Children Go to Kindergarten. Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Planning, Research and Evaluation.