Request for Clearance for
the Multi-Site Evaluation of Project LAUNCH
Supporting Statement B
OMB Information Collection Request
Additional Information Request under OMB #0970-0373
Submitted by:
Office of Planning, Research and Evaluation (OPRE)
Administration for Children and Families (ACF)
U.S. Department of Health and Human Services (HHS)
330 C Street, SW
Washington, DC 20201
Project Officers:
Laura Hoard (ACF/OPRE) and Kelley Smith (SAMHSA)
October 2016
Updated June 2017 and October 2017
TABLE OF CONTENTS
B1. Respondent Universe and Sampling Methods
B2. Procedures for Collection of Information
B3. Methods to Maximize Response Rates and Deal with Nonresponse
B4. Tests of Procedures or Methods to be Undertaken
B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Exhibit 1. Power to Detect Small (2) to Moderate (5) Differences in Mean Standardized DECA Scores among LAUNCH Grantees and Comparison Communities
Exhibit 2. Critical Difference in the Percentage of Vulnerable Children Detectable Across All LAUNCH Grantees and Comparison Communities
Exhibit 3. Critical Difference in the Percentage of Vulnerable Children Detectable in Each LAUNCH Grantee and Comparison Community
Exhibit 4. Members of Project LAUNCH Consultant Cadre
Exhibit 5. Members of the LAUNCH Grantee Steering Committee
B1. Respondent Universe and Sampling Methods
The Multi-Site Evaluation of Project LAUNCH (MSE) will collect data using six data collection efforts. The Part A data collection instruments will gather information directly from LAUNCH grantees; Part B instruments will target a range of respondents within locations drawn from among LAUNCH grantees and comparison communities.
Part A
This component of the MSE consists of two Web-based data collection activities, both of which use the same respondent universe and sampling methods:
Direct Services Survey: Completed semi-annually from fall 2016 through fall 2018, and
Systems Activities and Outcomes Survey: Completed annually from fall 2016 through fall 2018.
Part A Target Population. The target population for both Web-based data collection activities is the universe of all 31 Project LAUNCH grantees in Cohorts 4, 5, and 6. As the data collection relates directly to government-funded program implementation, it is appropriate to request this information from all grantees as opposed to a sample of grantees. In addition to supporting the MSE, these data will inform the government’s understanding of program implementation.
Part A Sampling Frame and Design. Because there is no sampling involved in Part A, questions related to design and sample sizes are not relevant.
Part A Response Rate. NORC has technical assistance and quality assurance programs in place to ensure detailed and accurate responses to the items in both surveys among all LAUNCH grantees. The data provided by all Project LAUNCH grantees in Part A will be collected through Liberty, a Web-based platform through which the current Cross-Site Evaluation (CSE) surveys are administered. Upon OMB approval for the MSE, the new surveys will be developed and administered through the updated Web-based data portal. To minimize burden across data reporting periods, some of the information entered at the first data collection time point will be pre-populated for grantees for subsequent reporting periods. Examples of data that can be pre-populated include the program description, the locations in which services are provided, and the types of services reported in previous reporting periods. Grantees will have the opportunity to revise the pre-populated information as needed. The design of this data collection and the use of the Liberty platform are intended to be user-friendly and reduce burden on participants. NORC has routinely achieved 100 percent response rates in previous waves of the Web-based CSE data collection.
Part B
The second component of the MSE, Part B, consists of four separate data collection efforts to be conducted with a sample of communities as described below, albeit with different groups of respondents. The content and purpose of each of these instruments were detailed at length in Supporting Statement A (SSA). In this section, we describe the methods to be used to select the communities for Part B across all four instruments, and then discuss the specific universe and sampling method associated with each instrument. The four data collection efforts to be discussed in sequence below are:
School Survey
Parent Survey
Teacher Survey (Early Development Instrument or EDI)
Key Informant Interviews on Systems Change
LAUNCH Grantees and Comparison Communities - Target Population. LAUNCH grantees may be included in Part B data collection only if they meet the following inclusion criteria:
are an actively funded grantee from Cohorts 4, 5, or 6;
are a U.S. state (as opposed to being a tribal or territorially located program); and
have achieved a level of implementation adequate to support evaluation, defined as having initiated interventions in at least three of the five core strategies.
All information required to determine eligibility will be obtained from LAUNCH’s Federal Project Officers (FPOs).
LAUNCH Grantees and Comparison Communities - Sampling Frame and Design. Each step of community selection uses a frame that fully covers the target communities of interest, including eligible LAUNCH grantees and eligible U.S. counties or county equivalents for comparison purposes. The selection design for LAUNCH communities is simple random selection, and the selection design for comparison communities is quasi-experimental using propensity score matching.
All LAUNCH grantees that meet the eligibility criteria will be assigned a recruitment number using a random number generator in Microsoft Excel, and NORC will recruit from the Excel-generated list until 10 LAUNCH grantees have agreed to participate in the MSE. Because all eligible grantees will be assigned a recruitment number, those that are not among the first 10 to be approached for recruitment will be used as the replacement sample should one or more of the 10 initially selected grantees be unable to participate. If fewer than 10 school districts agree to participate during the first year of recruitment, the approach will be revised to recruit parents only from ECEs in selected communities. During the second year, the school districts and schools in those communities will be asked to participate only in the Teacher and School Surveys. As the data from the Parent Survey are particularly critical to the success of the MSE, the final list of participating communities will be limited to those in which it will be possible to field that instrument.
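The ordering and replacement logic described above can be illustrated with a short script. This is a minimal sketch only, using placeholder grantee identifiers; the actual random ordering will be generated in Microsoft Excel as described.

    import random

    # Placeholder identifiers for eligible grantees; the real list will come
    # from the eligibility screening described above.
    eligible_grantees = ["Grantee_%02d" % i for i in range(1, 20)]

    # Assign a random recruitment order, mirroring the Excel random-number step.
    rng = random.Random(2016)  # fixed seed so the ordering is reproducible
    recruitment_order = list(eligible_grantees)
    rng.shuffle(recruitment_order)

    # The first 10 grantees are approached first; the remainder serve as the
    # replacement sample if any of the first 10 decline or cannot participate.
    primary_sample = recruitment_order[:10]
    replacement_sample = recruitment_order[10:]

    print("Approach first:", primary_sample)
    print("Replacements:", replacement_sample)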
Part B data collection will also take place in 10 comparison communities. To be eligible for inclusion, a comparison community must be a county or county equivalent located in a non-tribal area of one of the 50 U.S. states or the District of Columbia. Because Project LAUNCH does not rely on a randomized design, our evaluation uses a quasi-experimental design in which individual comparison communities are selected based on their demographic and socioeconomic similarities to a LAUNCH grantee using a propensity score and a greedy matching algorithm based on the following variables (each of which is included in the American Community Survey):
U.S. State
Population size
Population density
Percentage of children under age 6 living below 200% of the Federal Poverty Level (FPL)
Percentage of children living in a single-parent household
Percentage of households receiving Temporary Assistance for Needy Families (TANF) payments
Percentage of households receiving Supplemental Nutrition Assistance Program (SNAP) benefits
Percentage of the population that is African American
Percentage of the population that is Latino
Using logistic regression, we will create propensity scores measuring the probability that a given county contains a LAUNCH community based on the variables above. We will then match each LAUNCH grantee to at least six possible counties to be used as a comparison community. Following OMB approval, the recruitment (or likely recruitment) of each LAUNCH community will trigger selection and recruitment from the list of matched potential comparison counties, starting with the best possible match and moving sequentially by propensity score until a comparison community is recruited. As with the LAUNCH grantees, a comparison community will be considered 'recruited' when the school district containing the schools needed for data collection has agreed to participate in at least the Parent Survey.
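The matching logic can be sketched as follows. This is an illustrative outline only, using synthetic data and hypothetical column names (e.g., county_fips, is_launch); the production analysis will use the ACS-derived county file, will incorporate U.S. state (for example, as an exact-match constraint), and may use a different propensity-model specification.

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Synthetic stand-in for the county frame; the real frame is built from
    # American Community Survey county-level estimates.
    match_vars = ["pop_size", "pop_density", "pct_under6_below_200fpl",
                  "pct_single_parent", "pct_tanf", "pct_snap",
                  "pct_african_american", "pct_latino"]
    rng = np.random.default_rng(0)
    counties = pd.DataFrame({v: rng.normal(size=200) for v in match_vars})
    counties["county_fips"] = np.arange(200)          # hypothetical identifier
    counties["is_launch"] = 0
    counties.loc[:9, "is_launch"] = 1                 # 10 LAUNCH counties

    # Step 1: propensity score = modeled probability that a county contains a
    # LAUNCH community, from a logistic regression on the matching variables.
    model = LogisticRegression(max_iter=1000)
    model.fit(counties[match_vars], counties["is_launch"])
    counties["pscore"] = model.predict_proba(counties[match_vars])[:, 1]

    # Step 2: greedy matching -- for each LAUNCH county, keep the six closest
    # unmatched comparison counties by propensity score, ordered best to worst.
    launch = counties[counties["is_launch"] == 1]
    pool = counties[counties["is_launch"] == 0].copy()
    matches = {}
    for _, row in launch.iterrows():
        closest = (pool["pscore"] - row["pscore"]).abs().sort_values().index[:6]
        matches[row["county_fips"]] = pool.loc[closest, "county_fips"].tolist()
        pool = pool.drop(closest)                     # matched counties leave the pool

    for launch_fips, candidates in matches.items():
        print(launch_fips, "->", candidates)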
The sample size of 10 LAUNCH grantees and 10 comparison communities was chosen primarily based on the sample needs of the Parent Survey. The process by which this was determined is presented below, as are the specifications guiding selection for participation in the other components of MSE data collection.
LAUNCH Grantees and Comparison Communities- Response Rates. We estimate a 40 percent recruitment rate for LAUNCH grantees and a 20 percent recruitment rate for comparison communities (see Section B3 for further discussion).
School Survey - Target Population. The target population for the School Survey is public primary schools and state-licensed ECEs within LAUNCH grantee areas and comparison communities.
School Survey - Sampling Frame and Design. The sample will be selected from the schools and ECEs taking part in the Parent Survey described below. One administrative respondent per participating school or ECE will be recruited to complete the School Survey once per year for two years. The survey (included as Attachment D) will collect data of interest to SAMHSA/ACF regarding suspensions and expulsions of young children that will also supplement the information collected for the Parent Survey and Teacher Survey (EDI). It aims to impose minimal burden on respondents, requiring the reporting of only the following administrative variables:
the number of children:
enrolled in the school or ECE during the last full year;
suspended during that timeframe;
expelled or involuntarily disenrolled during that timeframe;
the general reasons why the children were suspended or expelled; and
whether there is a mental-health consultant in the school or ECE.
Assuming recruitment of four ECEs and two schools within each of the 10 LAUNCH grantees and 10 comparison community areas, the School Survey will yield a sample size of 40 schools and 80 ECEs overall, accounting for approximately 6,000 children per year, assuming an estimated average of 50 children of relevant ages in each school or ECE. A student population of this size will facilitate detection of significant and meaningfully large differences in expulsion rates.
At the same time, it is extremely challenging to estimate the precise design effect for a clustered data collection effort with 120 sampling units (in which the precise number of respondents and the proportion of expulsions in each unit are unknown). To determine the detectable difference in expulsion proportions between the LAUNCH and comparison communities, we very conservatively assumed a design effect of 2.0, for an effective total sample size of 3,000, split evenly between 1,500 in LAUNCH grantee areas and 1,500 in comparison communities. This effective sample size (assuming an alpha of 0.05) will yield approximately 80 percent power to detect a difference in proportions when the proportion within LAUNCH grantees is 0.45 and that within comparison communities is 0.50. At the lower bound of expulsion rates, our sample will give us approximately 80 percent power to detect a difference in proportions when the proportion within LAUNCH grantees is 0.03 and that within comparison communities is 0.05.
Again, a design effect of 2.0 is an extremely conservative assumption, and many schools and ECEs may contain more than 50 children of relevant ages. As a result, we can comfortably conclude that our sample will be sufficiently powered to detect differences in the proportion of children expelled or involuntarily disenrolled between LAUNCH and comparison communities.
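The back-of-the-envelope power check described above can be reproduced approximately with standard two-sample proportion power routines. This sketch assumes 3,000 children per group before the design-effect adjustment and a design effect of 2.0; small differences from the figures in the text may arise from rounding or from the software used for the original calculations.

    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    def clustered_power(p_launch, p_comparison, n_per_group,
                        design_effect=2.0, alpha=0.05):
        """Approximate power for a two-sample comparison of proportions after
        deflating the per-group sample size by an assumed design effect."""
        effective_n = n_per_group / design_effect
        effect = proportion_effectsize(p_launch, p_comparison)  # Cohen's h
        return NormalIndPower().power(effect_size=abs(effect),
                                      nobs1=effective_n, alpha=alpha, ratio=1.0)

    # Upper and lower bounds of plausible suspension/expulsion proportions from
    # the text, with 3,000 children per group before the design-effect adjustment.
    print(round(clustered_power(0.45, 0.50, 3000), 2))  # roughly 0.8
    print(round(clustered_power(0.03, 0.05, 3000), 2))  # roughly 0.8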
School Survey - Response Rate. Given the low burden associated with the small number of survey items and the relationships the team will be building with the schools and ECEs by virtue of their participation in the other components of the data collection, we estimate a response rate of 75-95 percent. The survey is estimated to take no more than one hour to complete (including the time to gather the data necessary to complete the survey) and, in most cases, will require far less time. Respondents will also be able to consult external documents such as school records to provide accurate answers to the questions.
Parent Survey - Target Population. The target population for the Parent Survey is parents of all children who live in the areas (defined by ZIP code) served by the LAUNCH program and the equivalent areas in the comparison communities.
Parent Survey - Sampling Frame and Design. The sampling frame for the Parent Survey is all parents of children eight years old or younger who attend a public primary school or a state-licensed ECE in a LAUNCH or comparison community. This frame will exclude parents in the target population whose children do not attend school, attend non-licensed child-care facilities, or attend private school. The sampling frame will include parents whose children attend ECEs that accept state child-care subsidies to ensure a wide range of income levels. This sampling frame will include parents regardless of whether their children or families have received a LAUNCH service or participated in a LAUNCH intervention. This approach will allow us to examine the community-wide effects of LAUNCH, which is a significant priority of SAMHSA/ACF and the program itself.
The Parent Survey is a clustered survey with a three-stage design in which parents are selected from schools or ECEs that are, in turn, selected from within the geographic and demographic boundaries of included LAUNCH grantees and comparison communities. In each community, we will collect data from two primary schools and four ECEs. Lists of primary schools in each selected community will be obtained from federal No Child Left Behind data, and lists of ECEs will be obtained from state lists of licensed facilities. Schools and ECEs will be selected randomly from these lists and then sorted in random order to create a replacement sample to be used if needed. Based on NORC's prior school survey experience, we anticipate that 50 percent of the ECEs approached and 75 percent of the schools approached will agree to participate in data collection efforts. In light of potential recruitment challenges, we will likely target larger ECEs to increase the yield of parents per facility for the Parent Survey.
We will attempt to recruit as many parent volunteers as possible from each school or ECE, with the goal of obtaining up to 15 parent surveys from each school or ECE. The specific content of the survey will vary as a function of the age of each parent's child, grouping children into the following age ranges: 4 weeks to 18 months, 19 months to 3 years, >3 to 5 years, and >5 years old. For parent respondents who indicate that they have multiple children, the survey will clearly indicate which of their children should be the focus of their responses. In these cases, the specific child will be selected randomly by computer in order to avoid introducing any bias due to birth order (which would occur if the youngest or oldest child were always selected) or to parents selecting the child themselves. As completed surveys are gathered, if there are specific age groups that are not fully represented in the Parent Survey dataset (relative to the age-specific targets discussed below in Section B2), NORC will purposively select children of ages that will help meet recruitment needs, rather than rely on random selection.
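As an illustration of the focal-child selection step, the sketch below randomly selects which child a multi-child parent should reference and routes that child to the age-specific survey module. The function names and the month-based cutoffs used to operationalize the age ranges are illustrative assumptions.

    import random

    def select_focal_child(children_ages_months, rng=random):
        """Randomly pick which of a parent's children the survey should reference,
        so neither birth order nor parental preference drives the selection."""
        return rng.choice(children_ages_months)

    def assign_age_module(age_months):
        """Route the selected child to the age-specific version of the survey."""
        if age_months <= 18:
            return "4 weeks to 18 months"
        if age_months <= 36:
            return "19 months to 3 years"
        if age_months <= 60:
            return ">3 to 5 years"
        return ">5 years old"

    # Example: a volunteer parent with three children (ages in months).
    ages = [7, 30, 74]
    focal = select_focal_child(ages)
    print(focal, "->", assign_age_module(focal))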
We anticipate that a sample of 1,800 parents will complete the Parent Survey. The sample size criteria for the study were selected to detect small differences in mean standardized Devereux Early Childhood Assessment (DECA) scores between LAUNCH and comparison communities, with the mean calculated for all individuals in each group regardless of age, and comparisons drawn using one year of data. The DECA was chosen as the measure on which to base sample size criteria because it is the main measure in the Parent Survey and has continuous scoring and published norms. Individual parent responses to the DECA can be converted into nationally normed percentile and standardized scores, which can in turn be used in statistical analyses to measure programmatic impact. Two other attributes of the DECA that increase its utility for study design purposes are:
nationally normed definitions of the effect size associated with a small, medium, or large programmatic effect; and
the stability of normalized scores across the age groups included in the LAUNCH evaluation, allowing scores from children of different ages to be pooled to determine program effects.
Exhibit 1 displays the design effect, the effective sample size associated with one wave of data collection, and the estimated probability (power) of detecting a difference in mean DECA scores of 2, 3, 4, and 5 in cross-sectional comparisons among the 10 LAUNCH grantees and 10 comparison communities. The exhibit also reflects different assumptions regarding the area-level and collection-location-level intra-class correlations (ICCs). In the documentation provided with the DECA, a paired-sample t-test is used to compare differences in mean standardized DECA scores between intervention and comparison communities, and the magnitude of DECA differences is categorized as no meaningful difference (<2), small (2-4), medium (5-7), or large (8 or greater).1
Power to detect differences between groups is sensitive to the ICC of measurements collected within each clustering unit and, in this analysis, the primary clustering unit is the school or ECE in which the data are collected. Previous psychometric testing of the DECA has identified ICCs below 0.10 when the DECA was implemented across 25 Head Start facilities.2 To estimate our sample requirements, our power calculations used assumptions of low (0.05), medium (0.10), and high (0.15) levels of ICC. To be conservative, we designed the study assuming a moderate to high ICC. If the actual ICC is lower, the data will be able to detect smaller effect sizes between LAUNCH and its comparison sites. The table shows the power to detect a difference of 2, 3, 4, and 5 in mean DECA scores among the areas included in the evaluation. Thus, the Parent Survey will be adequately powered to detect differences in the survey’s main measure of interest, assuming such differences exist within the data collection timeline.
Exhibit 1. Power to Detect Small (2) to Moderate (5) Differences in Mean Standardized DECA Scores among LAUNCH Grantees and Comparison Communities

Area ICC | Collection Location ICC | Design Effect | Effective Sample Size per Group | Power at Effect Size 2 | Power at Effect Size 3 | Power at Effect Size 4 | Power at Effect Size 5
0.10 | 0.15 | 14.10 | 128 | 0.255 | 0.552 | 0.821 | 0.956
0.10 | 0.10 | 9.90 | 182 | 0.367 | 0.730 | 0.941 | 0.994
0.05 | 0.10 | 9.65 | 187 | 0.379 | 0.745 | 0.948 | 0.995
0.05 | 0.05 | 5.45 | 330 | 0.628 | 0.947 | 0.998 | 1.000
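The figures in Exhibit 1 combine clustering at both the area and collection-location levels. The simplified single-level sketch below illustrates the same logic (intra-class correlation to design effect, design effect to effective sample size, effective sample size to power) but does not reproduce the exhibit's exact inputs; the cluster size, ICC, and assumed DECA standard deviation shown here are placeholders.

    from statsmodels.stats.power import TTestIndPower

    def deca_power(diff, sd, n_per_group, cluster_size, icc, alpha=0.05):
        """Single-level illustration: deflate the per-group sample by the design
        effect 1 + (m - 1) * ICC, then compute two-sample power for a mean difference."""
        design_effect = 1 + (cluster_size - 1) * icc
        effective_n = n_per_group / design_effect
        return TTestIndPower().power(effect_size=diff / sd,
                                     nobs1=effective_n, alpha=alpha, ratio=1.0)

    # Illustrative values only (not the Exhibit 1 inputs): 900 parents per group,
    # 15 completed surveys per school/ECE, an ICC of 0.10, and an SD of 10.
    for diff in (2, 3, 4, 5):
        print(diff, round(deca_power(diff, sd=10, n_per_group=900,
                                     cluster_size=15, icc=0.10), 3))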
Second Year of Data Collection. Our design calls for a second year of data collection for the Parent Survey among the same parents who participated in the first year. Collecting a second wave of data in the second year will substantially improve the study's power to detect differences relative to cross-sectional comparisons alone. For example, by collecting data a second time, a sample of only 135 individuals will be sufficient to detect a difference in DECA scores of 2.0, given a standard error of the mean standardized DECA score of 8 and a correlation in individual DECA scores over time of 0.7. This will allow for a larger number of evaluation comparisons (e.g., of the effects of LAUNCH on different age cohorts over time). Collecting a second year of data also ensures that our study will be sufficiently powered to detect differences in mean standardized DECA scores between LAUNCH grantees and comparison communities (if such differences exist) even if recruitment falls short of our objectives in the first year. This is crucially important given the significant uncertainty surrounding data collection in schools and ECEs in the current research environment.
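The gain from a second, repeated measurement follows from the fact that the variance of an individual's change score is 2(1 - rho) times the cross-sectional variance when the two waves are correlated at rho. The sketch below compares cross-sectional and change-score power at the same per-group sample size, using the standard deviation of 8 and over-time correlation of 0.7 cited above; it is illustrative and does not reproduce the exact assumptions behind the 135-respondent figure.

    import math
    from statsmodels.stats.power import TTestIndPower

    def change_score_power(diff, sd, rho, n_per_group, alpha=0.05):
        """Power to detect a between-group difference in individual change scores.
        With correlation rho between a person's two measurements, the standard
        deviation of the change score is sd * sqrt(2 * (1 - rho))."""
        sd_change = sd * math.sqrt(2 * (1 - rho))
        return TTestIndPower().power(effect_size=diff / sd_change,
                                     nobs1=n_per_group, alpha=alpha, ratio=1.0)

    # Compare one-wave (cross-sectional) and two-wave (change-score) power for a
    # difference of 2.0 at the same per-group n, using SD = 8 and rho = 0.7.
    n = 150
    cross = TTestIndPower().power(effect_size=2.0 / 8, nobs1=n, alpha=0.05, ratio=1.0)
    longitudinal = change_score_power(diff=2.0, sd=8, rho=0.7, n_per_group=n)
    print(round(cross, 3), round(longitudinal, 3))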
Parent Survey - Response Rate. Based on NORC's extensive past experience with data collection in schools, we conservatively estimate that 50 percent of parents who initially volunteer their interest will complete the survey. This will require 30 parent volunteers per location. The DECA portion of the Parent Survey immediately follows the 13 demographic questions, increasing the likelihood that parents who do not complete the entire survey will have completed at least this section. Based on previous research conducted in schools and ECEs, we anticipate that 60 percent to 70 percent of respondents recruited in the first year will complete surveys in the second year. In the event that we recruit fewer than 10 LAUNCH sites and 10 comparison communities to participate in the MSE, we will increase the number of target completes for the Parent Survey in participating sites.
It is important to note that we have a finite number of parents in each site who can be recruited to complete the Parent Survey. We are targeting 15 completed Parent Surveys in each school and ECE, and we will be surveying parents in two schools and four ECEs in each site. There will be a total of 20 sites, including 10 LAUNCH sites and 10 comparison communities. Thus, each year we are targeting 1,800 completed Parent Surveys. The longitudinal design of the study involves contacting the same group of parents who completed the Parent Survey in the initial year and asking them to complete the survey again. In total, across both years of the study, we aim to collect 3,600 completed Parent Surveys. Due to the length of the Parent Survey (i.e., approximately 30 minutes), the sensitive nature of some of the questions, and the study's longitudinal design, we expect to encounter challenges with respect to the volunteer rate as well as the completion rate among parents who do volunteer. Further, because of the time and resources required to recruit school districts, schools, and ECEs in 20 sites, we will not be able to select and recruit new sites in the event that we cannot collect 90 completed Parent Surveys in a given site.
Teacher Survey (EDI) - Target Population. The target population for the Teacher Survey (EDI) is kindergarten students, as observed by their teachers (the survey respondents), in selected schools within the individual LAUNCH grantees and comparison communities.
Teacher Survey (EDI) - Sampling Frame and Design. The Teacher Survey (EDI) calls for kindergarten teachers to provide complete responses for the universe of students enrolled in their classrooms. Therefore, the sample size for the instrument is determined based on the number of children for whom teacher responses are collected, not the number of teachers who provide responses. The sampling frame used in this study consists of the kindergarten teachers in the two primary schools recruited for data collection for the Parent Survey in each LAUNCH grantee and comparison community. Because primary schools within each community will be recruited using simple random selection from among the primary schools that are eligible for Parent Survey collection, this frame should provide adequate coverage of the target population of interest.
The EDI is constructed to detect what the developers have defined as 'critical differences' among communities in children's school readiness, as measured by either a summary measure across EDI domains or one of five subdomains: health and well-being; social competence; emotional maturity; language and cognitive development; and communication skills and general knowledge. The adequacy of the sample to detect critical differences of two percentage points or more (a value appropriate for community-wide comparisons) was determined based on prior testing of the EDI conducted by Gregory and Brinkman (2013).3 They used a four-stage method combining factor analysis of primary data with simulation to estimate the power to detect critical differences in the share of the population deemed vulnerable in terms of school readiness, given varying sample sizes for each community.
Teacher Survey (EDI) - Response Rate. We anticipate a sample of responses for 3,200 children drawn from 80 classrooms selected from 40 primary schools in 10 LAUNCH grantees and 10 comparison communities (160 children within each LAUNCH grantee and 160 in each comparison community). Exhibits 2 and 3 present the critical differences detectable for each EDI domain among all children surveyed in both the LAUNCH and comparison samples (Exhibit 2) and between the children in one LAUNCH sample and those in its matched comparison community (Exhibit 3). As evidenced by these tables, our study is powered sufficiently to detect even small critical differences between all LAUNCH and comparison communities, as well as moderate critical differences between any paired match of a LAUNCH grantee and its comparison. We also note that our design includes one additional level of clustering relative to the approach used by Gregory and Brinkman (2013). However, even if this additional clustering adds a substantial design effect of 1.5 (which is highly unlikely), our effective sample size of 1,067 would still be sufficient to detect a critical difference of at most 2.05 percentage points (vulnerable on 1 or more domains). Since critical differences of less than 2 percentage points are unlikely to be of substantial interest to policy makers, we conclude that our study is sufficiently powered to detect differences in EDI domain scores that are of interest to LAUNCH program stakeholders.
Exhibit 2. Critical Difference in the Percentage of Vulnerable Children Detectable Across All LAUNCH Grantees and Comparison Communities

Domain | Sample Size for All LAUNCH Grantees/Comparison Communities | Critical Difference Detectable with 80% Power (percentage points)
Physical Health and Well-Being | 1,600 | 1.48
Social Competence | 1,600 | 1.02
Emotional Maturity | 1,600 | 1.08
Language and Cognitive Skills | 1,600 | 1.07
Communication Skills and General Knowledge | 1,600 | 1.27
Vulnerable on 1+ Domains | 1,600 | 1.67
Vulnerable on 2+ Domains | 1,600 | 1.24
Exhibit 3. Critical Difference in the Percentage of Vulnerable Children Detectable in Each LAUNCH Grantee and Comparison Community

Domain | Sample Size for Each LAUNCH Grantee/Comparison Community | Critical Difference Detectable with 80% Power (percentage points)
Physical Health and Well-Being | 160 | 4.60
Social Competence | 160 | 3.14
Emotional Maturity | 160 | 3.52
Language and Cognitive Skills | 160 | 3.37
Communication Skills and General Knowledge | 160 | 4.03
Vulnerable on 1+ Domains | 160 | 5.31
Vulnerable on 2+ Domains | 160 | 3.86
Key Informant Interviews on Systems Change - Target Population. As described in Supporting Statement A, all of the quantitative data collection efforts described above will be supplemented with key informant interviews that will provide additional contextual information and allow the team to probe qualitatively on local community and system dynamics that may affect Project LAUNCH's implementation. The target population for these interviews will be community- and state-level leaders and officials with perspectives on the purpose, implementation, and impact of Project LAUNCH in the local context.
In LAUNCH grantee communities, interviews will be conducted with Project LAUNCH leadership to gather additional information about their systems change efforts. These key informants may include the LAUNCH Project Director, the LAUNCH Local Evaluator, or the Young Child Wellness Council Coordinator. In comparison communities, the respondents may include the Early Childhood Comprehensive Systems (ECCS) Program Administrator; the Maternal, Infant, and Early Childhood Program Administrator; or the Title V/MCH Program Administrator.
Key Informant Interviews on Systems Change - Sampling Frame and Design. Initially, one to two key informants will be identified to participate in these interviews per LAUNCH and comparison community. Once the appropriate individuals in each community are identified and contact information has been obtained, the MSE team will send a letter or e-mail describing the purpose of the study and interview. If the individual does not respond within two weeks, we will contact the individual by phone to describe the purpose of the study and interview, and ask them to participate. If an individual declines to participate or is unreachable, we will choose another potential key informant from our original list. In addition, after all interviews are complete, participants may refer one or two additional people as potential respondents for any areas of the interview protocol they were unable to complete or to provide additional insights. This approach will support the aim of getting a reasonable breadth of perspectives on the impact of Project LAUNCH and other programmatic efforts within each community.
Key Informant Interviews on Systems Change - Response Rate. The target population and sampling frame described above are designed to be sufficiently versatile to allow for the participation of a range of community- and state-level leaders with valuable perspectives to share on Project LAUNCH and the policy environment in which the program and other similar efforts (in the case of comparison communities) are being implemented. Based on NORC's experience leading other projects that involve telephone interviews with local public-health leaders, response rates are generally quite high given the enthusiasm and dedication often evident among these groups. The outreach strategy proposed here takes a rolling approach to identifying additional interviewees should those on the initial list be unable to participate. This will help ensure two to four completed interviews per LAUNCH grantee and comparison community.
B2. Procedures for Collection of Information
Part A
Part A of the MSE will consist of the Direct Services Survey (completed semi-annually from fall 2016 through fall 2018) and the Systems Activities and Outcomes Survey (completed annually from fall 2016 through fall 2018). The information collected through these surveys will be entered into a Web-based data portal and will relate to: state, tribal, and community systems development; implementation of evidence-based services in local communities; and service system outcomes for children and families. Part A of the MSE replaces a previously approved LAUNCH grantee data collection system (for the Cross-Site Evaluation, or CSE), which was tailored to provide precise and uniform responses related to LAUNCH program activities.
Part B
As noted above, Part B consists of four separate data collection efforts that will be conducted within the 10 LAUNCH grantees selected for inclusion in the MSE and the 10 comparison communities. These include:
the School Survey
the Parent Survey
the Teacher Survey (EDI)
Demographic data on kindergarten students (collected in conjunction with the Teacher Survey (EDI))
Key Informant Interviews on Systems Change
School Survey. Elementary school administrators and ECE Directors will complete a brief survey composed of basic administrative questions concerning rates of suspension and expulsion from their school or center (see Attachment D for the survey). These items were drafted for the MSE, but were informed by both published literature and close consultation with selected members of the Consultant Cadre and the SAMHSA/ACF team. The team has also worked to ensure that the denominator measure of the number of enrolled students will be collected in a manner consistent with Common Core data systems. In sum, although the survey items in the School Survey have not been used previously in this exact form, they are concise and straightforward and should not pose issues in terms of clarity or validity.
Administrators will complete the survey once per year for two years. After recruitment of the elementary school or ECE into the study, the MSE team will discuss the School Survey with the administration and school/ECE coordinator who may in turn designate a representative to complete the survey on his or her behalf. Once the respondent is identified, the team will initiate and complete informed-consent procedures (see Attachments N, O, and P for all school district, school, ECE, and school/ECE coordinator recruitment materials). The respondent will then complete the Web-based survey. See Attachment Q for emails that will be sent to School Survey respondents to facilitate responses.
Parent Survey. The Parent Survey will be administered to parents/guardians of young children (ages 0-8 years) and will cover children's health, social-emotional health, parent-child relationships, parental depression, home environment, and parental social support (see Attachments E, F, G, and H). The survey will be composed of pre-validated items and scales that vary in content as a function of the age of the child whom parent respondents will be referencing (see B1 for details on how this will be handled in the event that a parent has multiple children). The Parent Survey data will be collected via an internet-enabled data collection instrument from the same participants once per year for two years. Prior to any data collection, we will obtain informed consent from each parent respondent. Informed consent forms are included as the first page of each Parent Survey.
Using the school/ECE coordinator recruited by NORC, parents in each ECE and school will be asked to volunteer to participate. Coordinators will be offered a $100 incentive in appreciation of their time and participation. The school/ECE coordinator will provide NORC with the names, email addresses, and phone numbers for all of the parents who volunteer to participate in the study. Information collected by the school/ECE coordinator will also include the ages of each parent’s children. See Attachment R for the template that will be distributed to school/ECE coordinators to facilitate the collection of this information.
All of the parent volunteers’ names and contact information, including all those selected for recruitment or assigned to the replacement sample, will be entered into a secure NORC control system (as described above in B1). Selected respondents will receive an automated email prompting them to initiate the survey and providing them with a unique Personal Identification Number (PIN) and login (that allows automated retrieval should the original information be lost). Respondents will be able to pause their data entry and return to complete it later. The control system will load the survey directly onto NORC’s internal server upon initiation and record all responses as they are entered. See Attachment S for the materials that Parent Survey respondents will receive regarding the study.
Teacher Survey (EDI). Kindergarten teachers will complete the Teacher Survey (EDI) (see Attachment I for instrument) in selected schools (for more information, see the preceding Teacher Survey (EDI) - Target Population section). To implement the Teacher Survey (EDI), we will collaborate with researchers from the Transforming Early Childhood Community Systems (TECCS) Initiative at the University of California – Los Angeles (UCLA), which is licensed to administer a U.S. version of the EDI. The MSE team will have primary direct contact with the sites and UCLA will work with a lead contact on the MSE team to provide consultation and train-the-trainer materials that include details on how to train teachers to complete the instrument.
Prior to any data collection, the MSE team will obtain informed consent from each teacher participant, and will inform parents in participating classrooms that the Teacher Survey (EDI) will be completed by their child's teacher. See Attachment I for the informed consent form for this survey, which appears at the beginning of the instrument. The MSE team will also work with each school district to identify a district-specific Information Technology (IT) manager who will export student demographic information into a template designed by UCLA to help control for confounding factors in EDI scoring. These student demographic characteristics will be linked to student ID numbers or unique identifiers assigned to each student for the purpose of the EDI. The IT manager will then generate a hard-copy list of student names linked to each student's ID or identifier, which will be sent via a password-protected email to the school coordinator. See Attachment T for the instructions that will be provided to the IT manager to guide them in exporting the student demographic information, and Attachment U for the Excel spreadsheet into which the IT manager will export the demographic data.
Before teachers begin filling out the survey online, UCLA will also set up teacher user accounts and pre-populate those accounts with the student demographic information. At no time will the MSE team have access to the students’ names. Teachers will complete the Teacher Survey (EDI) for their current classroom of children one time only and will receive a $50 incentive. Since the teachers will complete the survey during the school day, the NORC team will reimburse schools up to $300 per teacher to cover the costs of a substitute teacher. Attachment V presents the Teacher Survey (EDI) fact sheet that will be disseminated to teachers and school coordinators to assist with recruitment, as well as the thank you email that will provide respondents with a link to their incentive.
Key Informant Interviews on Systems Change. Telephone interviews will be conducted with key informants in LAUNCH grantee areas and comparison communities to gather additional details about systems activities and outcomes and build on the information collected through the data portal in Part A of the MSE.
For comparison communities, the interviews will gather information about the local services and systems designed to promote children's social-emotional health. The team will conduct all interviews using a semi-structured interview guide that will allow both classification of information and flexibility for respondents.
We will complete the informed consent procedures via telephone before conducting the interview, which is expected to take no more than 60 minutes. The consent explains that: a) participation in the interviews is voluntary, and there are no penalties for refusing to participate or ending participation at any time during the interviews; b) the respondent can refuse to answer any question for any reason; c) data will be stored in de-identified files; and d) no names of individuals will be used in any evaluation reports. Respondents must also consent to the interview being recorded to ensure that their responses are captured accurately. See Attachments J and K for the informed consent forms, which appear at the beginning of the LAUNCH and comparison community interview guides.
We will conduct key informant interviews in both LAUNCH and comparison communities once per year for two years. In the second year of data collection, we will contact the previous key informants to request that they participate in the interview again to capture how their views have evolved over time. If previous interviewees have since left their position, we will contact the individuals who have replaced them. If an individual declines to be interviewed a second time, we will ask for recommendations for others who might participate in the interview, and will consult our original list of potential key informants for replacement participants. See Attachment W for all recruitment materials for key informants.
B3. Methods to Maximize Response Rates and Deal with Nonresponse
Expected Response Rates
Part A. As a condition of receiving Project LAUNCH funding, all grantees are required to participate in Part A of the MSE and thus are obligated to complete the Direct Services Survey and Systems Activities and Outcomes Survey in the Web-based data portal. NORC has routinely achieved 100 percent response rates in past waves of CSE data collection, which it has administered since the transfer of contract responsibilities from Abt Associates. This has been facilitated by:
providing interactive webinars in which grantees are instructed on how to respond to each item in the data portal;
assigning a NORC staff member to work as an evaluation specialist with each grantee and provide one-on-one assistance with their reporting requirements;
setting specific start and end dates for each reporting period and following up with grantees until all have submitted their data; and
conducting quality assurance tests of data after submission and following up with specific grantees that have not reported their data clearly or comprehensively.
The design proposed in this submission incorporates these same processes to ensure 100 percent response rates to the critical questions in the Part A data collection instruments.
Part B. In terms of securing participation in Part B overall, we estimate a 40 percent recruitment rate for LAUNCH grantees and a 20 percent recruitment rate for comparison communities. The 40 percent recruitment rate for LAUNCH communities is based on the assumption that having an active LAUNCH program in an area will increase the participation rate of school districts, many of which are already actively collaborating with Project LAUNCH to facilitate evidence-based activities around the LAUNCH strategy area of Mental Health Consultation in Early Childhood Education (ECE) and Schools. The lower expected rate for comparison communities is based on NORC's recent experience recruiting school districts for the Students with Disabilities Survey, the Healthy Communities Study, and the Evaluation of No Child Left Behind, each of which has experienced difficulty recruiting school districts for data collection efforts.
School Survey. The research team will secure approval from school districts to conduct this study prior to contacting selected schools. We anticipate a 75 percent to 95 percent response rate among schools that have agreed to participate in the evaluation. This is based on the assumption that schools that consent to join the data collection effort and participate in more burdensome portions of the data collection will participate in the comparatively brief School Survey.
Parent Survey. Based on past experience with data collection in schools, we conservatively estimate that 50 percent of parents who initially volunteer their interest will complete the survey. Update October 2017: Our experience to date shows that we are encountering response bias, and to improve the sample's representativeness we strongly recommend the use of an incentive for this population. It is imperative that we collect data from a population that is representative of the LAUNCH and comparison communities in order to evaluate the program's impact.
Several studies demonstrate the effectiveness of incentives in gaining survey participation among hard-to-reach populations, namely lower-income, lower-education, and minority populations. Previous research also indicates that the inclusion of incentives improves the representativeness of survey samples. Beebe et al. (2005) showed that an incentive increased response rates across the board in a survey of Medicaid recipients, and specifically among minority populations in the sample. Other studies have shown that incentives increase participation of respondents typically under-represented in surveys, such as those with low education levels (Singer, Van Hoewyk, and Maher, 2000), racial/ethnic minorities, and low-income households (Mack et al., 1998). For example, Mack et al. (1998) found that, while a $10 incentive had little effect on response rates, offering a $20 incentive (in 1996 dollars, not adjusted for inflation) boosted response rates overall, and particularly among low-income individuals and African Americans. Martinez-Ebers et al. (1997) found that incentives significantly increased the proportion of Hispanic respondents at follow-up. In another study, research in the Wisconsin Pregnancy Risk Assessment Monitoring System (PRAMS) found that, compared to a coupon or no incentive, a small cash incentive significantly improved response rates among African Americans (Dykema et al., 2012).
All Parent Survey participants will be volunteers, and the survey has been shown to take a modest amount of time to complete (approximately 30 minutes). We have derived these response estimates from NORC's own experience with school survey response rates4 (Section B1), as well as from consultations with colleagues (Section B5) who have conducted similar data collection efforts with parents.
Teacher Survey (EDI). Based on discussions with experts at UCLA concerning the EDI and the level of effort and expense we are dedicating to recruitment and maximizing response, we expect nearly 100 percent participation from teachers asked to complete the Teacher Survey (EDI), provided that schools allow them to complete the survey during school hours. UCLA noted that, when teachers are asked to complete the EDI on their own time, the response rate drops significantly, although they were unable to provide exact figures for this decrease. Our design provides school-hour opportunities to complete the survey as well as incentives to each teacher following completion, both of which are likely to ensure a response rate of close to 100 percent among teachers who report to school on the day of data collection; this will be critical in order to collect comprehensive information about each kindergarten classroom.
Key Informant Interviews on Systems Change. Our approach to the systems change interviews aims to make the experience as convenient as possible for key informants. Once an interview is scheduled, we will send a copy of the interview guide at least two days prior to the interview so that the participant can prepare for the interview, thereby reducing burden on the individual.
Dealing with Non-Response
The two forms of non-response of relevance to this study are systematic differences in participation rates between respondents and non-respondents, and missing responses to specific items among those who do respond. We will examine patterns of missingness for each item (i.e., item non-response) within each survey. We will examine the percent and number of missing responses to a variable and use correlation and logistic regression to understand respondent characteristics that may be associated with missingness for a particular survey item. Non-response related to missing values is relatively straightforward to manage using multiple imputation, if necessary. As part of our routine data-cleaning efforts on many studies, NORC creates imputed datasets for use in analyses when there is a significant proportion of missing data on one or more key analytical variables. We plan to use multiple imputation (MI), which has been shown to perform better than single-imputation techniques. MI is implemented by using a regression model to estimate responses. It takes into consideration all the relationships among variables in the model, as well as each variable's relationship to the outcome. Consistent with recommended practice, we "impute" missing responses at least five times and combine the estimates to obtain a single estimate and a single standard error. These estimates can then be interpreted in the normal fashion. Both SAS and Stata provide standard routines to implement MI and to combine estimates.
As an example, say we wish to examine the impact of being in the LAUNCH program, relative to the comparison community, for the question "In general, how would you describe your child's health?" The covariate of interest is being in the treatment group, and we adjust our analyses for differences in parent characteristics, say race/ethnicity. If 10 percent of the respondents did not answer the question on race/ethnicity, we impute race/ethnicity and report the parent characteristics, footnoting the percent missing and noting that the data were imputed. Apart from this notation, the results from MI can be interpreted in the same way as results based on complete data.
We propose to impute, or use a model-based method to replace, missing responses to key socio-demographic questions in the Parent Survey. For example, if a parent does not answer the race question, and we are interested in examining how the impact of LAUNCH varies by race, we would not be able to analyze responses from parents for whom race is missing. To avoid this deletion, we will use all other available characteristics to replace the missing race measure. Using Stata, we will apply the common and robust technique known as multiple imputation.
We will impute characteristics when at least ten percent (10%) of respondents are missing data on a demographic item and if "missingness" is non-random for a given socio-demographic group. As examples, we would impute data if 12 percent of respondents did not report the selected child's race, or if 15 percent of parents who reported being unemployed did not report the selected child's insurance status. This approach will help ensure that the responses analyzed represent the socio-demographic make-up of the community from which the surveys were collected. The imputed measures may include, among others: household size, race/ethnicity, insurance status, education, employment, and income. Imputation facilitates having more complete information for the profile of LAUNCH parents and households, conducting sub-group analyses, and examining differences in LAUNCH community schools relative to schools in comparison counties. We do not intend to impute responses to survey items on child behaviors, use of LAUNCH services, or any outcome variables. We will assess patterns of "missingness" to understand whether non-response is missing at random, and use multiple imputation conditional on all available information in the survey.
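The Stata and SAS routines mentioned above implement this workflow; for illustration, the Python sketch below performs the same steps on numeric (or numerically encoded) covariates: impute five times, fit the analysis model on each completed dataset, and pool the estimates with Rubin's rules. The variable names (child_health, in_launch, race_ethnicity) are hypothetical, the outcome and treatment indicator are assumed complete, and a linear model stands in for whatever analysis model is ultimately used.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    from sklearn.experimental import enable_iterative_imputer  # noqa: F401
    from sklearn.impute import IterativeImputer

    def multiply_impute_and_pool(df, outcome, predictors, m=5, seed=0):
        """Impute the predictor block m times, fit the analysis model on each
        completed dataset, and pool results with Rubin's rules."""
        coefs, variances = [], []
        for i in range(m):
            imputer = IterativeImputer(sample_posterior=True, random_state=seed + i)
            completed = df.copy()
            completed[predictors] = imputer.fit_transform(df[predictors])
            fit = sm.OLS(completed[outcome], sm.add_constant(completed[predictors])).fit()
            coefs.append(fit.params.values)
            variances.append(fit.bse.values ** 2)
        coefs, variances = np.array(coefs), np.array(variances)
        pooled = coefs.mean(axis=0)                          # pooled point estimates
        within = variances.mean(axis=0)                      # within-imputation variance
        between = coefs.var(axis=0, ddof=1)                  # between-imputation variance
        pooled_se = np.sqrt(within + (1 + 1 / m) * between)  # Rubin's total variance
        return pd.DataFrame({"estimate": pooled, "se": pooled_se},
                            index=["const"] + list(predictors))

    # Hypothetical use: child health rating regressed on LAUNCH status, adjusting
    # for a partially missing, numerically coded race/ethnicity covariate.
    # pooled = multiply_impute_and_pool(parent_df, "child_health",
    #                                   ["in_launch", "race_ethnicity"])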
The county-level measures from the American Community Survey and other sources we use for the characterization and matching of school districts will have no missing data or have already been imputed.
In addition, we estimate about 50 percent of parents who initially volunteer their interest in the study will complete the survey. To ensure these parents are representative of the community, as we proceed through the data collection period, we will assess respondents’ characteristics relative to the socio-demographic profile of the school district. Update October 2017: To date, we have compared information from the first 224 parent respondents on their race/ethnicity, education level, employment status, and income range to the averages for these variables using the American Community Survey (ACS) data on these communities. Thus far, Parent Survey respondents are more likely to be white, college-educated, employed full-time, and from higher-income categories than are the parents in their communities at large. This response bias will pose significant risks to the validity of the survey results unless we take measures to increase the representativeness of the sample. As noted above, we strongly recommend the use of an incentive for this population given the importance of collecting data from a population that is representative of the LAUNCH and comparison communities.
In terms of challenges with participation, because all LAUNCH grantees are required to enter services and systems data into the grantee portal as a condition of receiving funding, we do not expect to have to deal with non-response for Part A of the MSE. For Part B, however, low participation rates will be most problematic in the context of recruited LAUNCH communities. If only certain types of school districts and ECEs agree to participate in data collection, this will limit the generalizability of findings from the evaluation to the overall LAUNCH program. In some respects, this issue is not addressable other than to be aware of the risk from the outset, allocate additional resources to the recruitment of a diverse set of LAUNCH grantees, measure differences between areas that agree to participate in the evaluation and those that refuse, and calibrate the conclusions to emerge from the evaluation accordingly. In an attempt to mitigate these risks, we have devoted significant resources and attention to the recruitment of LAUNCH communities. Also of note, while we expect a lower rate of participation from comparison communities, the risk of systematic bias in the comparison group, independent from that of LAUNCH community participation, is much lower. While we initially anticipate choosing up to six matching counties for each LAUNCH community, we can expand this list if additional replacement sample is necessary.
To address non-response, we may engage in more intensive data collection efforts as needed for each instrument or group of respondents. These additional steps may include:
School Survey. In cases where schools or ECEs from our initial randomly selected list refuse to participate, we will randomly select additional schools or ECEs from those remaining on the list. In areas where a school refuses to participate and there are no other schools, we will investigate other community resources for recruitment, such as the YMCA or community centers. Due to the very high expected response rate for this particular data collection effort, participation bias is not expected to be a substantial problem affecting our ability to reliably analyze the results.
Parent Survey. All participants in the Parent Survey will volunteer to participate and will be offered a $25 incentive for their participation. In each school and ECE, we will maintain a list of additional interested parents/guardians, organized by the ages of their children, from which to choose replacement participants if necessary. The $25 incentive for parents will help ensure that we obtain completed surveys from parents who are representative of the population in selected LAUNCH and comparison communities, as the goal of the MSE is to evaluate the impact of Project LAUNCH interventions on families in the communities where the interventions took place. These communities are primarily lower-income and less-educated and, in some states, include a higher percentage of racial and ethnic minorities than does the general population. Previous research indicates that the inclusion of incentives improves the representativeness of survey samples. Several studies demonstrate the effectiveness of incentives in hard-to-reach populations similar to those served by Project LAUNCH and therefore critical to the MSE (e.g., Beebe et al., 2005; Singer, Van Hoewyk, and Maher, 2000; Mack et al., 1998; Dykema et al., 2012).
Update October 2017: Based on our data collection thus far, we are seeing that response bias does appear to be a challenge in the absence of incentives. Based on data from the first 224 completed Parent Surveys, as compared to averages for selected demographic variables using the American Community Survey (ACS) data on these communities, parent respondents to date are more likely to be white, college-educated, employed full-time, and from higher-income categories than the parents in the communities at large. To the extent that some participation bias in the Parent Survey persists despite the use of incentives and other efforts, due to our large sample size, we can use weighting to address any lack of balance between the characteristics of respondents and the characteristics of the schools, ECEs, or communities from which they were drawn. Additionally, analyses of these survey data will be conducted using regression techniques that allow for the statistical control of residual sample imbalances.
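A minimal sketch of the cell-weighting approach referenced above is shown below: each respondent receives a weight equal to the ACS population share of their demographic cell divided by that cell's share of the respondent sample. The category labels and shares are hypothetical; in practice the adjustment could also be done with raking across several variables simultaneously.

    import pandas as pd

    def poststratification_weights(respondents, population_shares, cell_var):
        """Weight each respondent by (population share / sample share) of their
        demographic cell so weighted totals line up with ACS benchmarks."""
        sample_shares = respondents[cell_var].value_counts(normalize=True)
        return respondents[cell_var].map(lambda c: population_shares[c] / sample_shares[c])

    # Hypothetical example: education mix among respondents versus the community (ACS).
    resp = pd.DataFrame({"education": ["college"] * 70 + ["no_college"] * 30})
    acs_shares = {"college": 0.40, "no_college": 0.60}
    resp["weight"] = poststratification_weights(resp, acs_shares, "education")
    print(resp.groupby("education")["weight"].first())  # about 0.57 for college, 2.0 for no_college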
Teacher Survey and Key Informant Interviews. We may experience some initial refusals to participate from teachers, school administrators, ECE Directors, and key informants in both LAUNCH and comparison communities. However, once teachers, school administrators, and key informants agree to participate, we will utilize the methods below to maximize response rates. According to UCLA, school administrator and district buy-in is integral to ensuring high response rates from teachers. We plan to begin forging strong relationships with school districts and administrators as early as possible in the data collection process to stimulate buy-in and thus increase teacher participation in the EDI. Due to the very high expected response rates for these particular data collection efforts, participation bias is not expected to be a substantial problem affecting our ability to reliably analyze the results.
Maximizing Response Rates
Across data collection efforts, we use a combination of four methods to maximize response rates: 1) provision of technical support to facilitate response; 2) use of multiple, repeated, and mixed-mode recruitment methods to remind individuals to volunteer and respond; 3) provision of incentives to motivate participation; and 4) use of Web-based surveys to lower respondent burden. We detail the use of each of these strategies across data collection efforts below.
Part A. Multi-site evaluation specialists (ESs) will provide support and technical assistance to local evaluators to ensure that the Part A data collection is completed as thoroughly as possible. Grantees recognize they are involved in an innovative and critically important initiative that promises to help improve children's social-emotional well-being. Additionally, grantees can export, every six months, Excel files containing the cumulative data they have reported into the Web-based data portal. As a result, they can use these data in their own local evaluations to track their efforts over time and keep SAMHSA apprised of their progress. Our Web-based data collection platform will also help reduce burden for grantees with respect to the Direct Services Survey and the Systems Activities and Outcomes Survey.
Part B. Gaining cooperation and buy-in from individual parents, teachers, school administrators, ECE Directors, and key informant interview participants will be essential to our Part B data collection efforts. The MSE team will employ the methods described below to maximize response rates from these participants (with recruitment materials for all participant types located in Attachments N, O, P, and W).
Recruitment Flyers and Posters: Recruitment posters describing the MSE and inviting parents to participate in the Parent Survey will be posted in each school and ECE where parents are most likely to see them (e.g., child pick-up areas). In addition, flyers describing the survey will be sent home in students’ backpacks. NORC will print and ship all study materials to school and ECE coordinators. See Attachment S for these recruitment flyers.
School/ECE Representatives: A staff member at each school and ECE will be identified as a site-based representative for the MSE. This representative will help build support for the MSE data collection in his or her school or ECE.
Aggregation of Data: All participants will be assured that reported data will be aggregated and not attributable to individual respondents. In addition to aggregating data, we will ensure privacy to the fullest extent of the law.
Pre-Interview Preparation: Prior to the Key Informant Interviews on Systems Change, participants will be provided with the topics to be covered in the interview. This will both boost participation and reduce burden during the interview.
Contact Information: We will obtain telephone numbers and email addresses for parents, which will help increase response rates for follow-up data collection efforts. We will also send parents an email six months after they complete the Parent Survey in the first year of data collection to check in and remind them of the upcoming survey to be administered in the second year. See Attachment S for the check-in email.
To offset some of the burden of participation, we have developed a structure for respondents to receive incentives, based on their effective use in prior studies and our desire to acknowledge respondents' efforts in a respectful way. (For justification of these incentive amounts, please see Supporting Statement A.) We will offer incentives to the following individuals:
Teachers who participate in the Teacher Survey (EDI) will receive $50;
School/ECE coordinators who assist with the coordination of the Parent Survey and Teacher Survey (EDI) as well as completion of the School Survey will receive $100 per school year, up to two times; and
Parents who complete the Parent Survey will receive $25.
We also recognize that schools will need to hire substitute teachers to cover kindergarten teachers’ classes on the day they receive training and complete the Teacher Survey (EDI). The MSE will offer schools up to $300 per teacher to cover this cost.
Grantees will not receive incentives because their participation in the MSE is a contractual requirement. Key informant interview participants will not receive any incentive given their professional capacity as public-health leaders and/or childhood education program directors and their associated vested interest in supporting programs that promote child health and wellness and, by extension, the overall success of the study.
B4. Tests of Procedures or Methods to Be Undertaken
Wherever possible, the MSE relies on measures that have been previously developed and tested, with their validity and reliability demonstrated among a range of populations. The Direct Services Survey and Systems Activities and Outcomes Survey were developed specifically for this evaluation to capture Project LAUNCH-specific information. These instruments were adapted from the ones previously cleared by OMB for the purposes of the CSE, as detailed above. The revised instruments were pilot-tested with representatives from three grantee locations to assess their interpretability and usability, and were revised accordingly.
As noted above, the items in the School Survey were drafted for the MSE, but were informed by both published literature and close consultation with selected members of the Consultant Cadre and the SAMHSA/ACF team. The Parent Survey is based almost entirely on measures and scales from validated instruments selected for their goodness of fit for the Project LAUNCH MSE. This prevalidation notwithstanding, the MSE team pilot-tested the Parent Survey with nine parents of young children to assess its clarity and ensure that it does not pose any undue burden on respondents.
The Teacher Survey consists of the EDI, a psychometrically validated measure of child well-being. It was selected after careful deliberation and the determination that, compared with other similar instruments (e.g., the Kansas Early Learning Inventory [KELI], Pennsylvania Kindergarten Entry Inventory [KEI]), the EDI captures multiple domains of child well-being that are congruent with the goals of Project LAUNCH. The EDI has been tested and found to have good reliability5 and predictive validity.6 Although it has not been tested in a nationally representative sample of classrooms in the United States, it has been tested among several different populations of children (e.g., male/female, native English speakers and those speaking English as a second language (ESL), aboriginal groups) and a growing body of research indicates that there is no systematic bias across different groups.7
B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
Many individuals and organizations, including the Project LAUNCH grantees and the Consultant Cadre, were consulted on aspects of the evaluation design and data collection instruments. Members of the Project LAUNCH Consultant Cadre (listed in Exhibit 5) have experience in the fields of child development, child health and wellness, mental health, tribal health, health policy, school readiness, early childhood programs, systems change, program implementation, and evaluation research.
The study design and sample size requirements were created in partnership with senior staff in NORC's Statistics and Methodology Department. Senior staff in NORC's Education and Child Development Department provided expertise on the recruitment methodologies and on the level of effort and time required to recruit school districts, schools, and Parent Survey participants. NORC's Information Technology Services Department collaborated on the data collection methodologies and the design of the Web-based tools. Methods for recruitment and implementation of the EDI were developed based on the input and direction of UCLA's Center for Healthier Children, Families, & Communities.
All of the final instruments and materials prepared for this submission have been reviewed by staff at SAMHSA, ACF, and the LAUNCH Grantee Steering Committee (listed in Exhibit 6). The specific individuals consulted include: Ingrid Donato, Yanique Edmond, Anne Mathews-Younes, Jennifer Oppenheim, and Kelley Smith, SAMHSA; and Laura Hoard, ACF.
Exhibit 5. Members of the Project LAUNCH Consultant Cadre
Consultant | Title and Affiliation
Peg Burchinal, PhD | Senior Scientist, Frank Porter Graham Child Development Institute; Adjunct Professor, Department of Education, University of California, Irvine
Christina Bethell, PhD | Professor, Department of Pediatrics, School of Medicine, Oregon Health and Science University
Catherine Walsh, MPH | Owner/Founder, Results for Children
Nancy Whitesell, PhD | Associate Professor, Community and Behavioral Health Department, Colorado School of Public Health, University of Colorado at Denver
Bob Goerge, PhD | Senior Research Fellow, Chapin Hall at the University of Chicago
Katherine E. Grimes, MD, MPH | Associate Clinical Professor of Psychiatry and Child Psychiatrist, Department of Psychiatry, Harvard University Medical School
Stephanie M. Jones, PhD | Assistant Professor, Center on the Developing Child, Harvard University
Michelle Christensen Sarche, PhD | Associate Professor, Community and Behavioral Health Department, Colorado School of Public Health, University of Colorado at Denver
David M. Chavis, PhD | Principal Associate/CEO, Community Science
Ruth Perou, PhD | Child Development Studies Team Leader, National Center on Birth Defects and Developmental Disabilities (NCBDDD), Centers for Disease Control and Prevention (CDC)
Aleta Meyer, PhD | Senior Social Science Research Analyst, Office of Planning, Research, and Evaluation (OPRE), Administration for Children & Families (ACF)
Robin Harwood, PhD | Health Scientist, Maternal and Child Health Research Program, Maternal and Child Health Bureau (MCHB), Health Resources and Services Administration (HRSA)
Mary Kay Kenney, PhD | Health Statistician, Maternal and Child Health Bureau (MCHB), Health Resources and Services Administration (HRSA)
Lara Robinson, PhD, MPH | Behavioral Scientist, Child Development Studies Team, National Center for Birth Defects and Developmental Disabilities (NCBDDD), Centers for Disease Control and Prevention (CDC)
Exhibit 6. Members of the LAUNCH Grantee Steering Committee
Member | Grantee | Cohort
Cathy Ayoub | New Mexico/Pueblo Laguna; Michigan/Bodewadmi Consortium; Red Cliff (Cohort 1) | 4
Lesli Johnson and Aimee Collins | Ohio | 2
Miriam McGaugh | Rogers County Oklahoma | 5
Cathy Sowell and Svetlana Yampolskaya | Florida | 4
Anne Duggan | New Jersey | 5
Christina Christopoulos | North Carolina | 2
Miles McNall | Michigan – Saginaw County | 2
Cecile C. Guin | Louisiana – Lafayette Parish | 5
Yumiko Aratani | New York City | 3
Mhora Lorentson and Jeana Bracey | Connecticut | 3
Vivian Hayashi | Iowa | 2
Deborah Perry | Maryland | 4
Jill Shinkle | California | 2
Naomi Clemmons | Vermont | 4
1 Devereux Center for Child Resilience. Calculating DECA Change Scores. Accessed 1/18/2016. http://www.centerforresilientchildren.org/infants/calculating-deca-it-change-scores/
2 Ogg, JA, Brinkman TM, Dedrick RF, Carlson JS (2010). Factor Structure and Invariance Across Gender of the Devereux Early Childhood Assessment Protective Factors Scale. School Psychology Quarterly. 25(2). 107-118.
3 Gregory, T. and Brinkman, S. 2013. Methodological Approach to Exploring Change in the Australia Early Development Instrument (AEDI): The Estimation of a Critical Difference. Telethon Institute for Child Health Research, Western Australia.
4 These estimates were based on NORC’s experience with school survey response rates when an incentive was included.
5 Janus, M., & Offord, D. (2007). Development and Psychometric Properties of the Early Development Instrument (EDI): A Measure of Children's School Readiness. Canadian Journal of Behavioral Sciences, 39(1), 1-22.
6 Forget-Dubois, N., Lemelin, J., Boivin, M., & Dionne, G. (2007). Predicting Early School Achievement with the EDI: A Longitudinal Population-Based Study. Early Education and Development, 18(3), 405-426.
7 Guhn, M., Gadermann, A., & Zumbo, B. (2007). Does the EDI Measure School Readiness in the Same Way Across Different Groups of Children? Early Childhood Education Journal, 18(3), 453-472.