Appendix H. Outcomes Data Tables in End of Year Reports
The data for the CSE will come from three sources: the CSE portal (implementation and systems outcomes data), annual stakeholder interviews (implementation and systems outcomes data), and findings from the annual End of Year Evaluation Reports.
Procedure for Data Extraction and Coding
We propose to extract both implementation and outcome findings from the End of Year Evaluation Reports using systematic procedures. This Procedures Guide describes which data elements will be extracted from the evaluation reports on provider, parent, and child outcomes and how they will be coded. One component of the data extraction involves rating the strength of evidence of each study finding. Effects generated from quasi-experimental designs will be rated using the What Works Clearinghouse (WWC) Procedures and Standards Handbook.1 Outcomes generated from pre-post studies will be rated using the R-SEED (Review of Studies with Emergent Evidence Designs) Procedures and Handbook (in preparation).
The CSE final report will include data on effects on providers, families/parents, and children in two ways. First, for each service strand, there will be tables of outcome findings for each sample (providers, parents, and children). Although individual findings will be shown, they will not be identified with the name of the LAUNCH site. For each finding, we will attach a rating of the strength of its evidence. Second, if we have multiple findings for any of these three samples within a service strand, we will calculate a weighted average effect. This average effect will be calculated separately for findings within each of the major levels of strength of evidence: findings rated as strong evidence (QEDs with baseline equivalence and a measure of known psychometric adequacy), findings rated as intermediate strength of evidence, and findings rated as limited strength of evidence. Findings that do not meet the standards for limited strength of evidence will not be used in the calculation of overall LAUNCH effects.
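To make that calculation concrete, the sketch below shows one way a weighted average effect could be computed within each strength-of-evidence level. The record layout, the sample values, and the choice to weight each finding by its analysis sample size are illustrative assumptions only, not the CSE's prescribed method.

# Illustrative sketch (assumed data and weighting scheme): average the
# extracted effect sizes within each strength-of-evidence level,
# weighting each finding by its analysis sample size.
from collections import defaultdict

findings = [
    {"rating": "strong", "effect_size": 0.25, "n": 120},
    {"rating": "strong", "effect_size": 0.10, "n": 80},
    {"rating": "intermediate", "effect_size": 0.40, "n": 45},
    {"rating": "limited", "effect_size": 0.15, "n": 30},
]

def weighted_average_effects(records):
    """Return the weighted mean effect size for each evidence level."""
    totals = defaultdict(lambda: [0.0, 0.0])  # rating -> [sum of weight*effect, sum of weight]
    for r in records:
        weight = r["n"]
        totals[r["rating"]][0] += weight * r["effect_size"]
        totals[r["rating"]][1] += weight
    return {rating: s / w for rating, (s, w) in totals.items()}

print(weighted_average_effects(findings))
# {'strong': 0.19, 'intermediate': 0.4, 'limited': 0.15}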
The coding guide follows the seven sections (tabs) of the LAUNCH CSE Outcome Findings Data Extraction Guide:
• Tab 1: Services description
• Tab 2: Provider outcomes (includes assessment of strength of evidence)
• Tab 3: Provider outcome data
• Tab 4: Parent outcomes (includes assessment of strength of evidence)
• Tab 5: Parent outcome data
• Tab 6: Child outcomes (includes assessment of strength of evidence)
• Tab 7: Child outcome data
Surveys to Obtain Incomplete Data
Following the data extraction from the End of Year Evaluation Reports, surveys will be sent to the local evaluators to request any data elements that were not in the reports. These surveys will match the data elements in the seven tabs of the Extraction Guide. The surveys will be pre-populated with the available data, and evaluators will be asked to provide only the elements shown as missing on the survey. Each evaluator's survey will therefore be individually tailored to request only the missing data elements.
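As a rough illustration of that tailoring step, the sketch below compares the data elements expected by the Extraction Guide against those extracted from a report and lists only the gaps to be included in that evaluator's survey. The element names and the flat-dictionary representation are hypothetical, not the actual Extraction Guide fields.

# Illustrative sketch (hypothetical element names): list the Extraction
# Guide elements that are absent or blank in a report's extracted record,
# so the evaluator's survey requests only those items.
REQUIRED_ELEMENTS = {
    "tab2_measure_name", "tab2_reliability", "tab3_n_time1", "tab3_mean_time2",
}

def missing_elements(extracted: dict) -> list:
    """Return the required elements that are absent or blank."""
    return sorted(
        name for name in REQUIRED_ELEMENTS
        if extracted.get(name) in (None, "")
    )

extracted_from_report = {
    "tab2_measure_name": "Provider Stress Checklist",
    "tab2_reliability": "",
    "tab3_n_time1": 42,
}
print(missing_elements(extracted_from_report))
# ['tab2_reliability', 'tab3_mean_time2']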
The Paperwork Reduction Act Burden Statement: This collection of information is voluntary and will be used to evaluate implementation and outcomes of the Project LAUNCH program. Public reporting burden for this collection of information is estimated to average 480 minutes per response, including the time for reviewing instructions, gathering and maintaining the data needed, and reviewing the collection of information. An agency may not conduct or sponsor, and a person is not required to respond to, a collection of information unless it displays a currently valid OMB control number. The OMB control number for this collection is 0970-XXXX and it expires XX/XX/XXXX.
TAB 1: SERVICE (intervention/program/service being evaluated) |
|||
Row |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated on remaining tabs |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated on remaining tabs |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated on remaining tabs |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated on remaining tabs |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated on remaining tabs |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated on remaining tabs |
9 |
Name of service/model |
Specify |
Pre-populated on remaining tabs |
10 |
Initiated or enhanced |
Enter “yes” if initiated by LAUNCH; enter “no” if LAUNCH is enhancing an existing service |
|
12 |
LAUNCH-supported enhancements |
|
|
13 |
Program expansion (additional staff, slots) |
Enter yes or no |
|
14 |
Workforce enhancement |
|
|
15 |
training on mental health & development |
Enter yes or no |
|
16 |
training on screening/assessment |
Enter yes or no |
|
17 |
Mental health consultation |
Enter yes or no |
|
18 |
New component added to model (specify component) |
Enter yes or no |
|
19 |
Other |
Enter yes or no |
|
TAB 2: PROVIDER OUTCOMES |
|||
Row |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
10 |
Provider outcome measures |
|
|
12A |
Measure name |
Specify |
E.g., “LAUNCH Provider Survey,” Provider Stress Checklist |
12B |
Measure # |
|
|
12C |
Domain |
Specify overall domain being addressed |
E.g., provider knowledge, provider attitudes, provider practices |
12D |
Construct |
Indicate construct being measured |
E.g., provider stress, provider use of standardized assessments |
12E |
Reference |
Citation for measure |
Full published name or reference in which measure was described. Enter NA if measure is site-developed. |
12F |
Type of measure |
Specify type of measure |
E.g., interview, survey, observation, test |
12G |
Scoring |
Specify how measure is scored |
E.g., binary, ordinal (ordered), categorical/nominal (unordered), continuous |
12H |
Outcome is valid for implementation (matches program objectives, enhancements) |
Enter yes or no |
Outcome must appear to measure the domain into which it is classified; outcome must align with program theory of change |
12I |
Over-aligned |
Enter yes or no |
Measure must not be designed or administered in ways that are specifically aligned with the program intervention model |
12J |
Measure reliability: standardized tests |
If test is standardized, enter “yes”; otherwise, enter “no” |
Outcomes should have test-retest reliability = .40 or higher (for scale measures based on survey items) or inter-rater reliability = .50 or higher for data based on observation measures. Standardized tests are assumed to satisfy the reliability criterion. Other measures exempt from the reliability criterion include health indicators such as immunizations. |
12K |
Reliability of non-standardized measure |
Enter test-retest reliability, internal consistency, or inter-rater reliability |
Enter reliability statistic appropriate to type of measure |
13A-13K |
Repeat 12A-12K for 2nd provider outcome |
|
|
14A-14K |
Repeat 12A-12K for 3rd provider outcome |
|
|
15A-15K |
Repeat 12A-12K for 4th provider outcome |
|
|
|
|
|
|
16 |
Design for measurement of outcome |
|
|
18A |
Measure name |
|
Pre-populated from 12a |
18B |
Measure # |
|
Pre-populated from 12b |
18C |
Type of design: QED |
Enter “yes” if design includes comparison group; otherwise, enter “no” |
|
18D |
Type of design: ITS |
Enter “yes” if design is ITS with longitudinal pre and/or post data; otherwise, enter “no” |
|
18E |
ITS: # of baseline data points |
Enter # data points |
|
18F |
ITS: # of data points during program implementation |
Enter # data points |
|
18G |
Type of design: pre-post |
Enter “yes” if design includes one pre- and one post-test data point; otherwise enter “no” |
2 measurement points |
18H |
Type of design: pre vs. norm |
Enter “yes” if design includes one pre- test data point and test is normed so that standardization sample can be used as comparison; otherwise enter “no” |
|
18I |
Type of design: retrospective pre-post |
Enter “yes” if design includes post-test that measures change since an assumed baseline; otherwise, enter “no” |
|
18J |
Type of design: post-only |
Enter “yes” if design includes post-test measurement only and no pre-test; otherwise, enter “no” |
|
19A-J |
Repeat 18A-18J for 2nd provider outcome |
|
|
20A-J |
Repeat 18A-18J for 3rd provider outcome |
|
|
21A-J |
Repeat 18A-18J for 4th provider outcome |
|
|
|
|
|
|
|
Counterfactual explanation |
|
|
23A |
Measure name |
|
Pre-populated from 12a |
23B |
Measure # |
|
Pre-populated from 12b |
23C |
Data on outcome collected in comparable ways: same timing of data collection for all sample members (e.g., data should be collected at the same time point in a family’s participation for all families), and the same data collection procedures for all sample members (e.g., if measure is to be implemented as a self-administered questionnaire, this same procedure should be used with all sample members). |
Enter “yes” if data collection is comparable; otherwise, enter “no” |
Data for outcome defined and collected in a way that ensures the outcome measure is comparable for all groups being compared |
23D |
Measure defined consistently |
Enter “yes” if measure is defined consistently; otherwise, enter “no” |
Outcome is defined in same way for all groups being compared |
23E |
Low likelihood of growth in intervention time period in absence of intervention |
Enter “yes” if similar growth is unlikely in absence of the intervention; otherwise, enter “no” |
|
23F |
Lack of other interventions/interruptions as likely causes of growth |
Enter “yes” if there are no other interventions or interruptions likely to create similar growth in absence of the intervention; otherwise, enter “no” |
|
|
|
|
|
24A-F |
Repeat 23A-23F for 2nd provider outcome |
|
|
25A-F |
Repeat 23A-23F for 3rd provider outcome |
|
|
26A-F |
Repeat 23A-23F for 4th provider outcome |
|
|
|
|
|
|
27 |
Evidence rating |
|
|
29A |
Measure name |
|
Pre-populated from 12a |
29B |
Measure # |
|
Pre-populated from 12b |
29C |
Design level |
Enter design number from R-SEED rating system |
Based on R-SEED design rating system |
29D |
Outcome meets standards |
Enter “yes” if outcome meets all WWC standards; otherwise enter “no” |
Must meet standards for validity, over-alignment, and reliability. Measures will pass the screen for over-alignment unless there is clear evidence otherwise. Over-alignment arises when an outcome measures concepts or is administered in ways that are specifically aligned to the content of the intervention. For example, if an intervention includes repeatedly having parents practice reading specific texts and responding to questions from the interventionist and the outcome is an assessment that asks parents to read the same text and respond to the same (or very similar) questions, that assessment would be over-aligned. |
29E |
Study does not have a serious confound |
If study is a QED, enter “yes” if there is no serious confound (i.e., an n=1 confound); otherwise enter “no.” If study is a pre-post, enter NA. |
A serious confound occurs when there is a factor that has a separate effect on the outcome that cannot be eliminated by the study design, and which will bias the estimated effect of the intervention. An n=1 confound is a serious confound that occurs when an effect is estimated on the basis of comparing one unit (e.g., one teacher, one class, one school) to one or more such entities. |
29F |
Study can establish baseline equivalence of the analytic sample |
If study is a QED, enter “yes” if baseline equivalence has been established; otherwise enter “no.” If study has no comparison group, enter NA. |
The study must provide evidence that the groups being contrasted are equivalent at baseline on a pre-intervention measure of the outcome. |
29G |
The study design has a well-justified counterfactual explanation |
If study is a pre-post, enter “yes” if study has a well-justified counterfactual explanation (rows 23C-23F); otherwise, enter “no.” If study is a QED, enter NA. |
|
30A-G |
Repeat 29A-29G for 2nd provider outcome |
|
|
31A-G |
Repeat 29A-29G for 3rd provider outcome |
|
|
32A-G |
Repeat 29A-29G for 4th provider outcome |
|
|
TAB 3: PROVIDER OUTCOME DATA |
|||
ROW |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
|
Eligible outcomes |
|
|
19A |
1st eligible provider outcome |
Specify name of outcome from Provider Outcomes tab (rows 29A-32A) |
Eligible outcomes are those whose evidence rating is Limited, Intermediate, or Strong |
20A |
Repeat 19A for 2nd eligible outcome |
|
|
21A |
Repeat 19A for 3rd eligible outcome |
|
|
22A |
Repeat 19A for 4th eligible outcome |
|
|
|
Samples |
|
|
19C |
Baseline sample (time 1) |
Describe sample participants at time 1 |
|
19D |
Posttest sample (time 2) |
Describe sample participants at time 2 |
|
19E |
Matched sample |
Enter “yes” if samples for two timepoints are matched; otherwise enter “no.” |
|
19F |
Pre-post time period |
Indicate number of months between time 1 and 2 and amount of exposure |
|
20C-F |
Repeat 19C-F for 2nd eligible outcome |
|
|
21C-F |
Repeat 19C-F for 3rd eligible outcome |
|
|
22C-F |
Repeat 19C-F for 4th eligible outcome |
|
|
|
Sample sizes |
|
|
19W |
Eligible sample time 1 |
Indicate # of units in baseline or time 1 eligible sample |
# of all eligible units in sample at time 1 |
19X |
Analysis sample time 1 |
Indicate # of units in baseline or time 1 analysis sample |
# of units with data in analysis sample at time 1 |
19Z |
Eligible sample time 2 |
Indicate # of units in posttest or time 2 eligible sample |
# of all eligible units in sample at time 2 |
19AA |
Analysis sample time 2 |
Indicate # of units in posttest or time 2 analysis sample |
# of units with data in analysis sample at time 2 |
20W-AA |
Repeat 19W-AA for 2nd eligible outcome |
|
|
21W-AA |
Repeat 19W-AA for 3rd eligible outcome |
|
|
22W-AA |
Repeat 19W-AA for 4th eligible outcome |
|
|
|
Findings |
|
|
19AV |
Method used to calculate significance of pre-post difference |
Select from drop-down menu |
|
19AW |
Mean outcome for analysis sample at time 2 |
Enter mean or proportion |
|
19AY |
Standard deviation of outcome for analysis sample at time 2 |
Enter standard deviation (if applicable) |
|
19BC |
Mean outcome for analysis sample at time 1 |
Enter mean or proportion |
|
19BE |
Standard deviation of outcome for analysis sample at time 1 |
Enter standard deviation (if applicable) |
|
19BL |
T statistic |
Enter t-statistic from t-test of pre-post difference on outcome |
|
19BO |
P-value |
Enter p-value for t-statistic |
|
19BP |
Significance |
Enter “yes” if finding is statistically significant; otherwise, enter “no” |
|
20AV-BP |
Repeat 19AV-BP for 2nd eligible provider outcome |
|
|
21AV-BP |
Repeat 19AV-BP for 3rd eligible provider outcome |
|
|
22AV-BP |
Repeat 19AV-BP for 4th eligible provider outcome |
|
|
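For reference, the Findings rows above (19AV-19BP, repeated in the parent and child outcome data tabs) record the statistics behind each pre-post comparison. The sketch below, which assumes a matched pre-post sample and uses SciPy's paired t-test, shows how the values requested in rows 19BL (t statistic), 19BO (p-value), and 19BP (significance) might be produced; the scores are invented for illustration.

# Illustrative sketch (assumed data): paired t-test on a matched pre-post
# sample, yielding the t statistic, p-value, and significance flag asked
# for in rows 19BL, 19BO, and 19BP of the outcome data tabs.
from scipy import stats

pre = [2.1, 2.4, 1.9, 2.8, 2.2, 2.5, 2.0, 2.6]   # time 1 scores
post = [2.6, 2.9, 2.1, 3.0, 2.8, 2.9, 2.3, 2.7]  # time 2 scores, same providers

result = stats.ttest_rel(post, pre)
significant = result.pvalue < 0.05  # two-sided test at alpha = .05

print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}, significant: {significant}")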
TAB 4: PARENT OUTCOMES |
|||
Row |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
10 |
Parent outcome measures |
|
|
12A |
Measure name |
Specify |
E.g., Parenting Stress Index |
12B |
Measure # |
|
|
12C |
Domain |
Specify overall domain being addressed |
E.g., parent knowledge, parent attitudes, parent practices |
12D |
Construct |
Indicate construct being measured |
E.g., parent stress, parenting practices |
12E |
Reference |
Citation for measure |
Full published name or reference in which measure was described. Enter NA if measure is site-developed. |
12F |
Type of measure |
Specify type of measure |
E.g., interview, survey, observation, test |
12G |
Scoring |
Specify how measure is scored |
E.g., binary, ordinal (ordered), categorical/nominal (unordered), continuous |
12H |
Outcome is valid for implementation (matches program objectives, enhancements) |
Enter yes or no |
Outcome must appear to measure the domain into which it is classified; outcome must align with program theory of change |
12I |
Over-aligned |
Enter yes or no |
Measure must not be designed or administered in ways that are specifically aligned with the program intervention model |
12J |
Measure reliability: standardized tests |
If test is standardized, enter “yes”; otherwise, enter “no” |
Outcomes should have test-retest reliability = .40 or higher (for scale measures based on survey items) or inter-rater reliability = .50 or higher for data based on observation measures. Standardized tests are assumed to satisfy the reliability criterion. Other measures exempt from the reliability criterion include health indicators such as immunizations. |
12K |
Reliability of non-standardized measure |
Enter test-retest reliability, internal consistency, or inter-rater reliability |
Enter reliability statistic appropriate to type of measure |
13A-13K |
Repeat 12A-12K for 2nd parent outcome |
|
|
14A-14K |
Repeat 12A-12K for 3rd parent outcome |
|
|
15A-15K |
Repeat 12A-12K for 4th parent outcome |
|
|
|
|
|
|
16 |
Design for measurement of outcome |
|
|
18A |
Measure name |
|
Pre-populated from 12a |
18B |
Measure # |
|
Pre-populated from 12b |
18C |
Type of design: QED |
Enter “yes” if design includes comparison group; otherwise, enter “no” |
|
18D |
Type of design: ITS |
Enter “yes” if design is ITS with longitudinal pre and/or post data; otherwise, enter “no” |
|
18E |
ITS: # of baseline data points |
Enter # data points |
|
18F |
ITS: # of data points during program implementation |
Enter # data points |
|
18G |
Type of design: pre-post |
Enter “yes” if design includes one pre- and one post-test data point; otherwise enter “no” |
2 measurement points |
18H |
Type of design: pre vs. norm |
Enter “yes” if design includes one pre- test data point and test is normed so that standardization sample can be used as comparison; otherwise enter “no” |
|
18I |
Type of design: retrospective pre-post |
Enter “yes” if design includes post-test that measures change since an assumed baseline; otherwise, enter “no” |
|
18J |
Type of design: post-only |
Enter “yes” if design includes post-test measurement only and no pre-test; otherwise, enter “no” |
|
19A-J |
Repeat 18A-18J for 2nd parent outcome |
|
|
20A-J |
Repeat 18A-18J for 3rd parent outcome |
|
|
21A-J |
Repeat 18A-18J for 4th parent outcome |
|
|
|
|
|
|
|
Counterfactual explanation |
|
|
23A |
Measure name |
|
Pre-populated from 12a |
23B |
Measure # |
|
Pre-populated from 12b |
23C |
Data on outcome collected in comparable ways |
Enter “yes” if data collection is comparable; otherwise, enter “no” |
Data for outcome defined and collected in a way that ensures the outcome measure is comparable for all groups being compared |
23D |
Measure defined consistently |
Enter “yes” if measure is defined consistently; otherwise, enter “no” |
Outcome is defined in same way for all groups being compared |
23E |
Low likelihood of growth in intervention time period in absence of intervention |
Enter “yes” if similar growth is unlikely in absence of the intervention; otherwise, enter “no” |
|
23F |
Lack of other interventions/interruptions as likely causes of growth |
Enter “yes” if there are no other interventions or interruptions likely to create similar growth in absence of the intervention; otherwise, enter “no” |
|
|
|
|
|
24A-F |
Repeat 23A-23F for 2nd parent outcome |
|
|
25A-F |
Repeat 23A-23F for 3rd parent outcome |
|
|
26A-F |
Repeat 23A-23F for 4th parent outcome |
|
|
|
|
|
|
27 |
Evidence rating |
|
|
29A |
Measure name |
|
Pre-populated from 12a |
29B |
Measure # |
|
Pre-populated from 12b |
29C |
Design level |
Enter design number from R-SEED rating system |
Based on R-SEED design rating system |
29D |
Outcome meets standards |
Enter “yes” if outcome meets all WWC standards; otherwise enter “no” |
Must meet standards for validity, over-alignment, reliability |
29E |
Study does not have a serious confound |
If study is a QED, enter “yes” if there is no serious confound (i.e., an n=1 confound); otherwise enter “no.” If study is a pre-post, enter NA. |
A serious confound occurs when there is a factor that has a separate effect on the outcome that cannot be eliminated by the study design, and which will bias the estimated effect of the intervention. An n=1 confound is a serious confound that occurs when an effect is estimated on the basis of comparing one unit (e.g., one teacher, one class, one school) to one or more such entities. |
29F |
Study can establish baseline equivalence of the analytic sample |
If study is a QED, enter “yes” if baseline equivalence has been established; otherwise enter “no.” If study has no comparison group, enter NA. |
The study must provide evidence that the groups being contrasted are equivalent at baseline on a pre-intervention measure of the outcome. |
29G |
The study design has a well-justified counterfactual explanation |
If study is a pre-post, enter “yes” if study has a well-justified counterfactual explanation (rows 23C-23F); otherwise, enter “no.” If study is a QED, enter NA. |
|
30A-G |
Repeat 29A-29G for 2nd parent outcome |
|
|
31A-G |
Repeat 29A-29G for 3rd parent outcome |
|
|
32A-G |
Repeat 29A-29G for 4th parent outcome |
|
|
TAB 5: PARENT OUTCOME DATA |
|||
ROW |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
|
Eligible outcomes |
|
|
19A |
1st eligible parent outcome |
Specify name of outcome from Parent Outcomes tab (rows 29A-32A) |
Eligible outcomes are those whose evidence rating is Limited, Intermediate, or Strong |
20A |
Repeat 19A for 2nd eligible outcome |
|
|
21A |
Repeat 19A for 3rd eligible outcome |
|
|
22A |
Repeat 19A for 4th eligible outcome |
|
|
|
Samples |
|
|
19C |
Baseline sample (time 1) |
Describe sample participants at time 1 |
E.g., “new mothers at time of entry into program” |
19D |
Posttest sample (time 2) |
Describe sample participants at time 2 |
E.g., “mothers at one year of participation in program” |
19E |
Matched sample |
Enter “yes” if samples for two timepoints are matched; otherwise enter “no.” |
|
19F |
Pre-post time period |
Indicate number of months between time 1 and 2 and amount of exposure |
E.g., average of 9 months pre-post during mothers’ 1st year of participation |
20C-F |
Repeat 19C-F for 2nd eligible outcome |
|
|
21C-F |
Repeat 19C-F for 3rd eligible outcome |
|
|
22C-F |
Repeat 19C-F for 4th eligible outcome |
|
|
|
Sample sizes |
|
|
19W |
Eligible sample time 1 |
Indicate # of units in baseline or time 1 eligible sample |
# of all eligible units in sample at time 1 |
19X |
Analysis sample time 1 |
Indicate # of units in baseline or time 1 analysis sample |
# of units with data in analysis sample at time 1 |
19Z |
Eligible sample time 2 |
Indicate # of units in posttest or time 2 eligible sample |
# of all eligible units in sample at time 2 |
19AA |
Analysis sample time 2 |
Indicate # of units in posttest or time 2 analysis sample |
# of units with data in analysis sample at time 2 |
20W-AA |
Repeat 19W-AA for 2nd eligible outcome |
|
|
21W-AA |
Repeat 19W-AA for 3rd eligible outcome |
|
|
22W-AA |
Repeat 19W-AA for 4th eligible outcome |
|
|
|
Findings |
|
|
19AV |
Method used to calculate significance of pre-post difference |
Select from drop-down menu |
|
19AW |
Mean outcome for analysis sample at time 2 |
Enter mean or proportion |
|
19AY |
Standard deviation of outcome for analysis sample at time 2 |
Enter standard deviation (if applicable) |
|
19BC |
Mean outcome for analysis sample at time 1 |
Enter mean or proportion |
|
19BE |
Standard deviation of outcome for analysis sample at time 1 |
Enter standard deviation (if applicable) |
|
19BL |
T statistic |
Enter t-statistic from t-test of pre-post difference on outcome |
|
19BO |
P-value |
Enter p-value for t-statistic |
|
19BP |
Significance |
Enter “yes” if finding is statistically significant; otherwise, enter “no” |
|
20AV-BP |
Repeat 19AV-BP for 2nd eligible parent outcome |
|
|
21AV-BP |
Repeat 19AV-BP for 3rd eligible parent outcome |
|
|
22AV-BP |
Repeat 19AV-BP for 4th eligible parent outcome |
|
|
TAB 6: CHILD OUTCOMES |
|||
Row |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
10 |
Child outcome measures |
|
|
12A |
Measure name |
Specify |
E.g., CBCL |
12B |
Measure # |
|
|
12C |
Domain |
Specify overall domain being addressed |
E.g., child knowledge, child attitudes, child practices |
12D |
Construct |
Indicate construct being measured |
E.g., child stress, child behavior problems |
12E |
Reference |
Citation for measure |
Full published name or reference in which measure was described. Enter NA if measure is site-developed. |
12F |
Type of measure |
Specify type of measure |
E.g., interview, survey, observation, test |
12G |
Scoring |
Specify how measure is scored |
E.g., binary, ordinal (ordered), categorical/nominal (unordered), continuous |
12H |
Outcome is valid for implementation (matches program objectives, enhancements) |
Enter yes or no |
Outcome must appear to measure the domain into which it is classified; outcome must align with program theory of change |
12I |
Over-aligned |
Enter yes or no |
Measure must not be designed or administered in ways that are specifically aligned with the program intervention model |
12J |
Measure reliability: standardized tests |
If test is standardized, enter “yes”; otherwise, enter “no” |
Outcomes should have test-retest reliability = .40 or higher (for scale measures based on survey items) or inter-rater reliability = .50 or higher for data based on observation measures. Standardized tests are assumed to satisfy the reliability criterion. Other measures exempt from the reliability criterion include health indicators such as immunizations. |
12K |
Reliability of non-standardized measure |
Enter test-retest reliability, internal consistency, or inter-rater reliability |
Enter reliability statistic appropriate to type of measure |
13A-13K |
Repeat 12A-12K for 2nd child outcome |
|
|
14A-14K |
Repeat 12A-12K for 3rd child outcome |
|
|
15A-15K |
Repeat 12A-12K for 4th child outcome |
|
|
|
|
|
|
16 |
Design for measurement of outcome |
|
|
18A |
Measure name |
|
Pre-populated from 12a |
18B |
Measure # |
|
Pre-populated from 12b |
18C |
Type of design: QED |
Enter “yes” if design includes comparison group; otherwise, enter “no” |
|
18D |
Type of design: ITS |
Enter “yes” if design is ITS with longitudinal pre and/or post data; otherwise, enter “no” |
|
18E |
ITS: # of baseline data points |
Enter # data points |
|
18F |
ITS: # of data points during program implementation |
Enter # data points |
|
18G |
Type of design: pre-post |
Enter “yes” if design includes one pre- and one post-test data point; otherwise enter “no” |
2 measurement points |
18H |
Type of design: pre vs. norm |
Enter “yes” if design includes one pre- test data point and test is normed so that standardization sample can be used as comparison; otherwise enter “no” |
|
18I |
Type of design: retrospective pre-post |
Enter “yes” if design includes post-test that measures change since an assumed baseline; otherwise, enter “no” |
|
18J |
Type of design: post-only |
Enter “yes” if design includes post-test measurement only and no pre-test; otherwise, enter “no” |
|
19A-J |
Repeat 18A-18J for 2nd child outcome |
|
|
20A-J |
Repeat 18A-18J for 3rd child outcome |
|
|
21A-J |
Repeat 18A-18J for 4th child outcome |
|
|
|
|
|
|
|
Counterfactual explanation |
|
|
23A |
Measure name |
|
Pre-populated from 12a |
23B |
Measure # |
|
Pre-populated from 12b |
23C |
Data on outcome collected in comparable ways |
Enter “yes” if data collection is comparable; otherwise, enter “no” |
Data for outcome defined and collected in a way that ensures the outcome measure is comparable for all groups being compared |
23D |
Measure defined consistently |
Enter “yes” if measure is defined consistently; otherwise, enter “no” |
Outcome is defined in same way for all groups being compared |
23E |
Low likelihood of growth in intervention time period in absence of intervention |
Enter “yes” if similar growth is unlikely in absence of the intervention; otherwise, enter “no” |
|
23F |
Lack of other interventions/interruptions as likely causes of growth |
Enter “yes” if there are no other interventions or interruptions likely to create similar growth in absence of the intervention; otherwise, enter “no” |
|
|
|
|
|
24A-F |
Repeat 23A-23F for 2nd child outcome |
|
|
25A-F |
Repeat 23A-23F for 3rd child outcome |
|
|
26A-F |
Repeat 23A-23F for 4th child outcome |
|
|
|
|
|
|
27 |
Evidence rating |
|
|
29A |
Measure name |
|
Pre-populated from 12a |
29B |
Measure # |
|
Pre-populated from 12b |
29C |
Design level |
Enter design number from R-SEED rating system |
Based on R-SEED design rating system |
29D |
Outcome meets standards |
Enter “yes” if outcome meets all WWC standards; otherwise enter “no” |
Must meet standards for validity, over-alignment, reliability |
29E |
Study does not have a serious confound |
If study is a QED, enter “yes” if there is no serious confound (i.e., an n=1 confound); otherwise enter “no.” If study is a pre-post, enter NA. |
A serious confound occurs when there is a factor that has a separate effect on the outcome that cannot be eliminated by the study design, and which will bias the estimated effect of the intervention. An n=1 confound is a serious confound that occurs when an effect is estimated on the basis of comparing one unit (e.g., one teacher, one class, one school) to one or more such entities. |
29F |
Study can establish baseline equivalence of the analytic sample |
If study is a QED, enter “yes” if baseline equivalence has been established; otherwise enter “no.” If study has no comparison group, enter NA. |
The study must provide evidence that the groups being contrasted are equivalent at baseline on a pre-intervention measure of the outcome. |
29G |
The study design has a well-justified counterfactual explanation |
If study is a pre-post, enter “yes” if study has a well-justified counterfactual explanation (rows 23C-23F); otherwise, enter “no.” If study is a QED, enter NA. |
|
30A-G |
Repeat 29A-29G for 2nd child outcome |
|
|
31A-G |
Repeat 29A-29G for 3rd child outcome |
|
|
32A-G |
Repeat 29A-29G for 4th child outcome |
|
|
TAB 7: CHILD OUTCOME DATA |
|||
ROW |
Data element |
Definition |
Comments |
1 |
Project LAUNCH Grantee |
LAUNCH Grantee (Cohorts 1,2,4= State/tribe; Cohort 3 = community) |
Pre-populated from Service tab |
2 |
Local Evaluator |
Last name of lead evaluator |
Pre-populated from Service tab |
3 |
Grant Year of EOY Evaluation Report (One-Five) |
Year of grant represented in report (1 – 5) |
Pre-populated from Service tab |
4 |
Date of EOY Evaluation Report |
Report date |
Pre-populated from Service tab |
|
|
|
|
6 |
Description of LAUNCH-supported Service |
|
|
7 |
Strand (Home visiting (HV), Family support (FS), mental health consultation in preK (MHC-ECE), mental health consultation in school (MHC-ELEM), integration of behavioral health in primary care (IBH-PC), developmental screening (DS)) |
Enter abbreviation for SAMHSA strand |
Pre-populated from Service tab |
8 |
Other strands/types of services (mental health consultation in other settings (MHC-OTH), early childhood education (ECE)) |
Enter abbreviation or specify type of service if not listed |
Pre-populated from Service tab |
9 |
Name of service/model |
Specify |
Pre-populated from Service tab |
|
|
|
|
|
Eligible outcomes |
|
|
19A |
1st eligible child outcome |
Specify name of outcome from Child Outcomes tab (rows 29A-32A) |
Eligible outcomes are those whose evidence rating is Limited, Intermediate, or Strong |
20A |
Repeat 19A for 2nd eligible outcome |
|
|
21A |
Repeat 19A for 3rd eligible outcome |
|
|
22A |
Repeat 19A for 4th eligible outcome |
|
|
|
Samples |
|
|
19C |
Baseline sample (time 1) |
Describe sample participants at time 1 |
E.g., “Children at time of entry into program” |
19D |
Posttest sample (time 2) |
Describe sample participants at time 2 |
E.g., “children at one year of participation in program” |
19E |
Matched sample |
Enter “yes” if samples for two timepoints are matched; otherwise enter “no.” |
|
19F |
Pre-post time period |
Indicate number of months between time 1 and 2 and amount of exposure |
E.g., average of 9 months pre-post during children’s 1st year of participation |
20C-F |
Repeat 19C-F for 2nd eligible outcome |
|
|
21C-F |
Repeat 19C-F for 3rd eligible outcome |
|
|
22C-F |
Repeat 19C-F for 4th eligible outcome |
|
|
|
Sample sizes |
|
|
19W |
Eligible sample time 1 |
Indicate # of units in baseline or time 1 eligible sample |
# of all eligible units in sample at time 1 |
19X |
Analysis sample time 1 |
Indicate # of units in baseline or time 1 analysis sample |
# of units with data in analysis sample at time 1 |
19Z |
Eligible sample time 2 |
Indicate # of units in posttest or time 2 eligible sample |
# of all eligible units in sample at time 2 |
19AA |
Analysis sample time 2 |
Indicate # of units in posttest or time 2 analysis sample |
# of units with data in analysis sample at time 2 |
20W-AA |
Repeat 19W-AA for 2nd eligible outcome |
|
|
21W-AA |
Repeat 19W-AA for 3rd eligible outcome |
|
|
22W-AA |
Repeat 19W-AA for 4th eligible outcome |
|
|
|
Findings |
|
|
19AV |
Method used to calculate significance of pre-post difference |
Select from drop-down menu |
|
19AW |
Mean outcome for analysis sample at time 2 |
Enter mean or proportion |
|
19AY |
Standard deviation of outcome for analysis sample at time 2 |
Enter standard deviation (if applicable) |
|
19BC |
Mean outcome for analysis sample at time 1 |
Enter mean or proportion |
|
19BE |
Standard deviation of outcome for analysis sample at time 1 |
Enter standard deviation (if applicable) |
|
19BL |
T statistic |
Enter t-statistic from t-test of pre-post difference on outcome |
|
19BO |
P-value |
Enter p-value for t-statistic |
|
19BP |
Significance |
Enter “yes” if finding is statistically significant; otherwise, enter “no” |
|
20AV-BP |
Repeat 19AV-BP for 2nd eligible child outcome |
|
|
21AV-BP |
Repeat 19AV-BP for 3rd eligible child outcome |
|
|
22AV-BP |
Repeat 19AV-BP for 4th eligible child outcome |
|
|
1 The current version of the WWC standards is contained in the September 2011 What Works Clearinghouse Procedures and Standards Handbook (Version 2.1), U.S. Department of Education, Institute of Education Sciences. Version 3.0 (February 2013) is the most recent update and is currently available for public comment.