Appendix A - FaMLE Impact Report Instructions

Formative Data Collections for ACF Research

OMB: 0970-0356

Instructions for the Final Impact Evaluation Report for Healthy Marriage and Responsible Fatherhood Grantees

Purpose of this document

The Administration for Children and Families (ACF), Office of Family Assistance (OFA) requires all Healthy Marriage and Responsible Fatherhood (HMRF) grantees with local impact evaluations funded by OFA to submit a final evaluation report. The purpose of this document is to provide instructions on how to structure your final report so it is comprehensive and accessible. Many of the report sections draw directly from your analysis plans. When feasible, the instructions below reference the relevant sections in the analysis plan template. You may use text from your impact and implementation analysis plans, updated as appropriate, to simplify report writing. Below, we provide an annotated outline that contains guidance for each section. This outline describes: (1) the purpose of the section and what should be discussed, (2) things to keep in mind when writing the section, (3) existing documents that may be potential sources for the section, and (4) whether non-text elements (tables or figures) could be useful.

You can write the report directly in the accompanying template (FaMLE Impact Report Template.docx), which will facilitate review by ACF and your Evaluation Team Technical Assistance (ETTA) liaison. Alternatively, you can provide the specified information in a separate document, ensuring all sections of the template are included in that document. The template provides an outline of the report with places for you to fill in each section. There is a separate table shell file (FaMLE Impact Report Table Shells.docx) that provides some required and optional table shells for you to use and paste into the outline file as you write the report. Using these shells will help you complete the report more quickly and include key details that will help readers understand your findings.

Here are some additional suggestions for your impact report:

  • Organize the final report so that it is approximately 30-40 pages, double-spaced (15-20 pages single-spaced), excluding tables, figures, references, and appendices.

  • The report should be written for all audiences. Write as if the audience has not been involved in the grant and knows nothing about the program or the evaluation. The report should provide enough detail for readers to understand the program and its evaluation, and it should be free of project- or program-specific jargon and abbreviations.

  • Reach out to your ETTA with questions about the report, the report guidance, or your approach as you begin to work on your report. They are available and eager to address your questions quickly. Getting questions resolved early in the process will simplify the review process at the end of the grant period.

  • Please submit a report that you believe is ready for review by your ETTA and OFA. Ideally it will have been edited and read by multiple people before submission to minimize the editorial comments your Family Program Specialist (FPS) and ETTA will need to provide. Their goal is to focus on content and technical details rather than presentation.

Please email your final report to your FPS and ETTA liaison when it is ready, but no later than [due date]. For consistency, please use this common naming convention when submitting your report: [short version of Grantee name]_Impact Report_[MM.DD.YYYY - report draft date]. Please send a Word version of the document, not a PDF. Your FPS and ETTA liaison will review the final report, provide comments and suggested edits, and return it to you for revisions. Your final report must be approved by your FPS by the end of your grant.



Instructions for completing the impact report template

Cover Page and Front Matter

The report cover page should include the title of the report, all authors, and author affiliation(s).


On page ii, specify the recommended citation for this report, list acknowledgements, and disclose any conflicts of interest—financial or otherwise. For an example of how to identify a conflict of interest, please see the International Committee of Medical Journal Editors. Please note, if the evaluation team is not completely independent from the program team (that is, if they are not different organizations with completely separate leadership and oversight), this is a conflict of interest that must be documented.


Finally, include the attribution to ACF:


This publication was prepared under Grant Number [Insert Grant Number] from the Office of Family Assistance (OFA) within the Administration for Children and Families (ACF), U.S. Department of Health & Human Services (HHS). The views expressed in this report are those of the authors and do not necessarily represent the views or policies of HHS, ACF, or the OFA.


Structured Abstract

Purpose. Provide an executive summary of the final report.

Instructions and Reminders. In no more than 350 words, summarize the objective of the intervention and impact study, study design, main result(s), and key conclusion(s).

The summary of the study design should briefly note whether the study was a randomized controlled trial (RCT) or quasi-experimental design (QED), the number of sites that participated in the study, the final analytic sample size, and the timing of the follow-up relative to the baseline.

The summary of the main results should include only findings from the primary impact analyses and not the implementation analysis, unless the implementation findings are essential for drawing the key conclusion(s).

Potential Sources. Impact Analysis Plan

Non-Text Elements. None

I. Introduction

A. Introduction and Study Overview

Purpose. Orient the reader to the study.

Instructions and Reminders. In this section, (1) explain the motivation for this intervention and why responsible fatherhood and/or marriage and relationship education is important for the local community studied, (2) briefly describe the program studied, (3) explain the motivation for conducting this impact evaluation, and, if applicable, (4) briefly summarize previous research describing the effects of the program, or of similar programs, and how this current study compares to prior research. Note if this is the first study of this program. The reader should understand why the intervention was targeted to the population studied, the motivation for selecting the chosen intervention, and the motivation for the research questions examined.

At the end of the introduction please add a paragraph outlining the structure for the remainder of the report.

Potential Sources. Grant application

Non-Text Elements. None

B. Primary Research Question(s)



Purpose. Articulate the main hypotheses about the impacts of the intervention on healthy relationship or responsible fatherhood outcomes.

Instructions and Reminders. This section should present the primary research question(s) from Section A.1 of your analysis plan. Reminder: The primary research question(s) focus on the impact of the intervention (that is, the specific aspects of your program that you are testing in the evaluation). Each question focuses on a specific outcome and time point to help connect the outcome(s) and time point(s) to the intervention’s theory of change. Please include information about where your study was registered (for many, this was clinicaltrials.gov).

Potential Sources. Impact Analysis Plan (Section A.1)

Non-Text Elements. None

C. Additional Research Question(s) (if applicable)

Purpose. Outline additional (non-primary) important hypotheses about your program. In your analysis plan, these may be listed as secondary research questions or additional analyses.

Instructions and Reminders. This section should present all non-primary research question(s). This includes the secondary research question(s) and proposed additional research question(s) from Sections A.2 and E, respectively, in your analysis plan. Reminder from the analysis plan instructions: The secondary research question(s) examine, for example, (1) outcomes specified in the primary research question but at a different point in time than in the primary analysis; (2) other outcomes that might be influenced by the intervention, such as precursors to the healthy relationship or responsible fatherhood outcomes of primary interest; (3) the relationship among mediating variables and outcomes, such as the relationship between dosage or participation and outcomes; (4) moderator analyses, including the relationship between baseline characteristics and impacts; and (5) impact analyses of primary outcomes using methods that impute outcomes for individuals missing a survey.

Please combine secondary and additional analyses, listing the research questions in this section, and describing the analyses and findings below in Section V.D. Additional Analyses.

Potential Sources. Impact Analysis Plan (Sections A.2 and E)

Non-Text Elements. None

II. Intervention and Counterfactual Conditions

Provide a one- or two-paragraph overview of this section, highlighting that it will describe the intended program and counterfactual conditions.

A. Description of Program as Intended

Purpose. Provide a summary of the program being tested.

Instructions and Reminders. This section should describe the intervention condition as intended (or what the intervention group was supposed to receive). You can draw from the description already detailed in Section B.1 of your analysis plan. Later, you will describe how you assessed implementation fidelity (in Section III.B.1 of this report) and what the intervention group actually received in this study (the implementation findings discussed in Section V.A of this report).

Discuss (1) program components; (2) program content; (3) intended implementation, including the location or setting, duration and dosage, and staffing (including the education and training of staff); and (4) target population.

Optional: If you had a graphical representation of your logic model in the Impact Analysis Plan, please include it in Appendix A of this report and briefly discuss it here.

Potential Sources. Impact Analysis Plan (Section B.1)

Non-Text Elements. It is often useful to summarize the program description for the reader in a table (for example, with rows for target population, program components, program content, planned mode of delivery, and so on).

Please refer to Tables II.1 and II.2 in the table shells document to summarize the intervention and counterfactual components in tables. These are the same as sample Tables 2 and 3 in the Impact Analysis Plan template.

B. Description of Counterfactual Condition as Intended

Purpose. Provide a summary of the counterfactual being examined.

Instructions and Reminders. This section should describe the control/comparison condition as intended, that is, business as usual or services (if any) those in the counterfactual condition are expected to receive. Later, in Section V.A on implementation findings, you will discuss the actual comparison group experience for the study.

If the control/comparison group was intended to receive usual services, briefly describe what services or programs were available to individuals in your service area that you expected the control/comparison group might access.

If the control/comparison group received a defined program, describe what was offered and how it was intended to be implemented. Similar to the description of the intervention condition above, this discussion should include the counterfactual components, dosage, content, and delivery. It does not need to discuss theory of change.

If the intervention and comparison conditions share common elements, please highlight the similarities and differences in services received in the text, and consider adding a table to help the reader compare the content.

Potential Sources. Impact Analysis Plan (Section B.2)

Non-Text Elements. Please refer to Tables II.1 (Description of intended intervention and counterfactual components and target populations) and II.2 (Staff training and development to support intervention and counterfactual components) in the table shells document to summarize intervention and counterfactual components in tables. These are the same Tables 2 and 3 from your Impact Analysis Plan template.

C. Research Questions about the Intervention and Counterfactual Conditions as Implemented

Purpose. Outline the main research questions about your implementation analysis.


Instructions and Reminders. This section should briefly describe what you want to know about the implementation of the intervention (for example, aspects of engagement, dosage, or fidelity). In other words, this section should present the research questions you addressed in your implementation analysis. Implementation analysis methods and findings will be discussed later in Sections III and V.


Potential Sources. Implementation Analysis Plan (Section A)


Non-Text Elements. You may use Table 1 from the Implementation Analysis Plan to list the implementation research questions.


III. Study Design

Provide a one-paragraph introduction to this section, highlighting that this section will provide an overview of the study design, sample, and data collection.

A. Sample Formation and Research Design

Purpose. Describe the study eligibility criteria, research group formation, research design, and data collection.


Instructions and Reminders. This section should describe the ways members of the target population became part of the impact study, including the sample eligibility criteria, purposeful sampling, and consent process. Clearly identify the research design used to test program effectiveness (RCT or QED). If the study is an RCT, describe the construction of the intervention and control groups, the unit of randomization, when and how randomization occurred, the method of randomization, and, if applicable, stratification/blocking. Report the intended probability of assignment to the intervention group. If the study used a QED, describe the process used to construct the intervention and comparison groups. Readers should also be clear about when baseline data were collected relative to sample formation.

If your study was originally an RCT but had to construct equivalent groups using a QED approach, such as propensity score matching, because of high attrition and lack of baseline equivalence (or some other issue with random assignment), please state that and explain why. Also indicate that Section IV.C of this report will discuss how baseline equivalence was assessed for the final QED analytic sample used to estimate the main findings and that Appendix C will discuss details of the original RCT design, including attrition rates and baseline equivalence.
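
For grantees in this situation, it may help readers to sketch the matching step itself. The following is a minimal, illustrative Python sketch only; the DataFrame df, the treated indicator, and the covariate names are hypothetical placeholders, and your actual procedure should follow your approved analysis plan.

    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.neighbors import NearestNeighbors

    # Assumes a pandas DataFrame `df` with a 0/1 `treated` column and
    # baseline covariates; all names here are hypothetical placeholders.
    baseline_cols = ["baseline_outcome", "age", "income"]

    # Step 1: estimate each person's propensity score (probability of
    # being in the intervention group given baseline characteristics).
    ps = LogisticRegression(max_iter=1000).fit(df[baseline_cols], df["treated"])
    df["pscore"] = ps.predict_proba(df[baseline_cols])[:, 1]

    # Step 2: match each intervention member to the comparison member
    # with the nearest propensity score (with replacement, for brevity).
    treat = df[df["treated"] == 1]
    comp = df[df["treated"] == 0]
    nn = NearestNeighbors(n_neighbors=1).fit(comp[["pscore"]])
    _, idx = nn.kneighbors(treat[["pscore"]])
    matched = pd.concat([treat, comp.iloc[idx.ravel()]])

Matching with replacement is shown for brevity; matching without replacement, calipers, or propensity score weighting are common alternatives, and whichever you used should be named and described.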


Potential Sources. Impact Analysis Plan (Sections C.1 and D.4)


Non-Text Elements. None


B. Data Collection

Provide a brief introduction to this section on data collection, indicating you will first discuss data collection for the implementation evaluation and then the impact evaluation.

1. Implementation Analysis

Purpose. Document the sources that were used to conduct the implementation analysis.

Instructions and Reminders. Describe the data collected for the implementation study. Using your research questions as a guide (from Section II.C), discuss the data sources for each aspect of implementation and research question being examined. What data were collected for each aspect of implementation and how? Who was responsible for collecting the data? This information can come directly from your Implementation Analysis Plan.

Potential Sources. Implementation Analysis Plan (Section B)

Non-Text Elements. If you collected data on multiple components and to address multiple research questions through a variety of sources, a table may be helpful for organizing the presentation of this section. If you include a table, please mention the table in the main body of the report, and include it in the appendix. See Appendix B, Table B.1 in the table shells document as an example (and you can use Table 2 from your implementation analysis plan).

2. Impact Analysis

Purpose. Indicate how and when data on outcomes of interest (as well as key explanatory variables, including baseline assessments and demographics) were obtained from sample members.

Instructions and Reminders. Describe the data collections conducted, including who collected the data (for example, program staff or evaluation staff), timing, mode of administration, and overall process. Provide details on differences between intervention and comparison conditions in who collected the data, timing, mode, and processes used for data collection, as appropriate.

Potential Sources. Impact Analysis Plan (Section C.2)

Non-Text Elements. A table can complement the text discussion to help succinctly summarize the features of the data collection process. If you include a table on the data collection process, mention the table in the main body of the report when you describe data collection and include it in the appendix. See Appendix B, Table B.2 in the table shells document (and you can use Table 3 from your impact analysis plan).

IV. Analysis methods

Provide a brief introduction to this section, indicating you will describe the construction of the sample used for analysis, the outcome measures, and the baseline equivalence of the treatment and comparison/control groups.

A. Analytic Sample

Purpose. Describe the flow of participants into the analytic sample that is used to estimate impacts of the intervention.

Instructions and Reminders. This section should clearly state how many participants (for example, individuals or couples) are in your analytic sample for each research question, and how the analytic sample was created. The reader should understand the flow of sample members from the point of random assignment (or consent for QEDs) through the follow-up assessments used to estimate impacts, factoring in nonconsent, attrition, item nonresponse, and any additional exclusions the authors made. Use your final CONSORT diagram as a guide in preparing this description.

This section should clearly report overall and differential (between the intervention and comparison/control groups) attrition from the initial assigned sample. It should also note what percent of individuals in the analytic sample are crossovers (individuals assigned to the RCT control group but who actually received intervention services).
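
For reference, the conventional calculations can be written as follows, where superscripts denote the counts at assignment and in the analytic sample for the intervention (I) and comparison (C) groups (a sketch of the standard definitions; defer to your analysis plan if it defines these differently):

    \[
    \text{Overall attrition} = 1 - \frac{n_I^{\text{analytic}} + n_C^{\text{analytic}}}{n_I^{\text{assigned}} + n_C^{\text{assigned}}},
    \qquad
    \text{Differential attrition} = \left| \frac{n_I^{\text{analytic}}}{n_I^{\text{assigned}}} - \frac{n_C^{\text{analytic}}}{n_C^{\text{assigned}}} \right|
    \]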

Potential Sources. Impact Analysis Plan (Section D.3)

Impact analysis Plan CONSORT diagram (Section C.3 and/or Appendix), updated with final sample numbers.

Non-Text Elements. To support the discussion above, include (1) the final CONSORT diagram(s) from your Analysis Plan (updated accordingly) with details of the final sample in the Appendix (these can be included in Appendix B) and (2) one of the sample flow tables referenced below. Use either the individual-level (Table IV.1a) or cluster-level (Table IV.1b) design table, whichever is appropriate for the study design. The next two pages include more detailed guidance for completing these tables.

Please refer to the table shells document for two versions of table shells for reporting sample flow for either individual-level (Table IV.1a) or cluster-level (Table IV.1b) assignment designs. Complete only one table for this section, using the table appropriate for the study design.

Instructions for completing Table IV.1a. Individual sample sizes by intervention status

  • The purpose of this table is to clearly present the sample sizes and response rates for participants in individual-level assignment studies.

  • Italicized text highlights how total sample sizes and response rates can be calculated given other information in the table. Italicized text in the first column also provides guidance. Please clearly indicate the “timing” of follow-up surveys relative to the administration of the baseline survey.

  • To describe the sample from more than two follow-up periods, please add rows as needed.

  • In the columns “Total sample size,” “Intervention sample size,” and “Comparison sample size,” enter the number of individuals who were assigned to condition in the “Assigned to condition” row. In the following rows, enter the number of individuals who completed the survey.

  • In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please conduct the calculations indicated by the italicized formula (a worked example follows this list).

    • Note: the denominator for the response rate calculations will be the numbers entered in sample size columns in the “Assigned to condition” row.

  • For the row “Contributed to first follow-up (accounts for item non-response and any other analysis restrictions)” and, if applicable, the corresponding row for a second follow-up, you may have very different sample sizes for two outcomes of interest because of very different rates of missing data for the outcomes. If this is the case, please add a row for each outcome in each time period, as needed. Indicate in a table note to which outcome the sample sizes apply. For example, if you have two primary outcomes and there were very different response rates on the items needed to construct these outcomes, you should include two rows for “Contributed to first follow-up (accounts for item non-response and other analysis restrictions)”: one for the analysis sample for outcome one and one for the analysis sample for outcome two.
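
To make the response rate arithmetic concrete, here is a minimal Python sketch with purely illustrative counts (all numbers are hypothetical):

    # Response rate = completers / assigned, within each study group.
    assigned = {"intervention": 250, "comparison": 250}    # illustrative counts
    completed = {"intervention": 210, "comparison": 195}   # first follow-up

    for group in assigned:
        print(f"{group}: {100 * completed[group] / assigned[group]:.1f}%")
    # intervention: 84.0%, comparison: 78.0%

    total = 100 * sum(completed.values()) / sum(assigned.values())
    print(f"total: {total:.1f}%")  # total: 81.0%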


Instructions for completing Table IV.1b. Cluster and individual sample sizes by intervention status (Please use this version if your study enrolled couples or family groups, or if groups such as schools or classrooms were randomly assigned.)

  • The purpose of this table is to clearly present the sample sizes and response rates for both individual and cluster-level assignment studies.

  • The table is split into two sections. The top section focuses on cluster sample sizes and response rates. The bottom section focuses on individual sample sizes and response rates.

  • In the columns “Total sample size,” “Intervention sample size,” and “Comparison sample size”:

    • In the top section, enter the number of clusters at the beginning of the study that were assigned to condition in the “Clusters: at beginning of study” row. In the next four rows, enter the number of clusters in which at least one individual completed the relevant survey.

    • In the bottom section, start with the number of individuals from non-attriting clusters. For all rows in this section, exclude individuals that were in clusters that dropped (attrited) from the study. For example, if you randomly assigned 10 clusters (5 to each condition), and one intervention group cluster dropped from the study, you would only include individuals in this row from the 9 clusters that did not drop from the study (exclude individuals from the dropped cluster). List how many of these individuals were present in the clusters at the time of assignment in the “Individual: At time that clusters were assigned to condition” row, and then list how many of these individuals completed the relevant surveys in the rows beneath.

  • For each row, the value entered in the “Total sample size” column should be the sum of the “Intervention sample size” and “Comparison sample size” (this calculation is shown in italics). In the columns “Total response rate,” “Intervention response rate,” and “Comparison response rate,” please conduct the calculations indicated by the italicized formula.

    • Note that for the top section, the denominator for the response rate calculations will be the numbers entered in sample size columns in the “Clusters: At beginning of study” row. For the bottom section, the denominator for the response rate calculations will be the numbers entered in the sample size columns in the “Individual: At the time that clusters were assigned to condition” row.

  • In the row for “Individual: who consented”, if consent occurred prior to assignment, delete this row and insert the number of consented individuals in the non-attriting clusters in row 6. Add a note at the bottom of the table indicating that consent occurred prior to random assignment.


B. Outcome Measures

Purpose. Describe how the outcomes of interest in the primary and additional research questions were operationalized using survey data or other data elements.

Instructions and Reminders. Define the outcomes being examined in the primary and additional research questions. Briefly explain how each outcome measure was operationalized and constructed. If a measure was constructed from multiple items, please document the source items and explain how the items were combined to create an outcome that was analyzed.

Potential Sources. Impact Analysis Plan (Section D.1 and Tables 4 and 5)

Non-Text Elements. Please present this information in a table. Please refer to the table shells document, which includes table shells for Tables IV.2 (outcomes for primary research questions) and IV.3 (outcomes for additional research questions). These are the same tables you presented in your analysis plan, with some new information (see guidelines below). Do not include Table IV.3 if there are no additional research questions. The templates include examples for you in italics.

  • In the “Outcome measure” column, please include the name of the outcome that will be used throughout the report, in both text and tables.

  • In the “Description of outcome measure” column, please provide a description of the outcome and details on any items used to construct it. Important: If an outcome is a composite of multiple items (for example, a scaled measure that is the average of five survey items), please report its Cronbach’s alpha, a measure of internal consistency (a computation sketch follows this list). See the example in Tables IV.2 and IV.3. If the outcome is a published measure or scale, please provide the name of the measure.

  • In the “Source” column, document the source for each measure. If all measures in the table are from the same source, please delete this column and add the source as a note at the bottom of the table.

  • In the “Timing of measure” column, please indicate the amount of time that has passed since the baseline survey was administered.
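
As referenced above, Cronbach’s alpha can be computed directly from item-level data. The following minimal Python sketch assumes a pandas DataFrame items with one column per survey item and one row per respondent (a hypothetical setup, not a prescribed tool):

    import pandas as pd

    def cronbach_alpha(items: pd.DataFrame) -> float:
        """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance)."""
        items = items.dropna()                      # complete cases only
        k = items.shape[1]                          # number of items in the scale
        item_var = items.var(axis=0, ddof=1).sum()  # sum of individual item variances
        total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
        return (k / (k - 1)) * (1 - item_var / total_var)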



C. Baseline Equivalence and Sample Characteristics

Purpose. Provide information on how baseline equivalence was assessed for the final analytic sample used to estimate the main findings, and present the results of the assessment. Describe the sample characteristics for the reader.

Instructions and Reminders. Section D.3 of your impact analysis plan described the methods you would use to test the significance of differences between the study groups at baseline (for example, treating differences with p < 0.05 as statistically significant). Briefly describe the analytic methods used to assess the equivalence of the analytic sample(s) used to answer the primary research questions. Reminder: the analytic method used to show baseline equivalence should account for the study design (for example, clustering, stratification, propensity score weighting).
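
As one illustration of a design-aware equivalence test, the following minimal Python sketch regresses a baseline measure on the treatment indicator with cluster-robust standard errors using statsmodels; df, treated, baseline_outcome, and site_id are hypothetical names, and the error structure should match your own design:

    import statsmodels.formula.api as smf

    # Regress a baseline measure on the treatment indicator; the p-value on
    # `treated` tests the intervention-comparison difference at baseline.
    # Cluster-robust standard errors account for clustered assignment.
    result = smf.ols("baseline_outcome ~ treated", data=df).fit(
        cov_type="cluster", cov_kwds={"groups": df["site_id"]})
    print(result.params["treated"], result.pvalues["treated"])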

Present an equivalence table for each analytic sample (sample on which impacts are estimated) being used to answer the primary research questions. Discuss the key takeaways from your baseline equivalence analysis, including why covariates were or were not included in the impact model.

Your impact analysis plan described the measures you would use to demonstrate baseline equivalence. The equivalence table(s) must include these measures (for example, baseline measures of the outcomes of interest and key demographic characteristics such as race/ethnicity and socioeconomic status). These baseline measures should also be consistent with the covariates included in the impact estimation models discussed in section V.B below. If the covariates included in your impact analyses differ from those used to assess baseline equivalence, please note why.

Important: if your study was originally an RCT but you had to construct equivalent groups using a QED, the discussion in this section should focus only on the QED baseline equivalence and sample. Details on the baseline equivalence, samples, and attrition rates for the original RCT should be described in Appendix C and referenced here.

See guidelines for completing the baseline equivalence table below.

Potential Sources. Impact Analysis Plan (Section D.3)

Non-Text Elements. Please refer to Table IV.4 in the table shells document that can be used to demonstrate baseline equivalence.

Guidelines for completing Table IV.4

  • The purpose of this table is to demonstrate equivalence between study groups on key baseline characteristics and present useful summary statistics on these measures.

  • Copy and paste this table so there is one table for each analytic sample used to address primary research questions in the report.

  • Replace the “[Survey follow-up period]” text in the header with the time point of the survey. For example, “Table IV.4 Summary statistics of key baseline measures and baseline equivalence across study groups, for couples completing the first year follow-up”

  • The template table includes example rows for you in italics. Please edit accordingly and add additional rows as needed.

  • In columns 2 and 3 (“Intervention mean (standard deviation)” and “Comparison mean (standard deviation)”), enter the mean value and standard deviation of each measure. If a measure is binary (for example, male/female), report it as a percentage. For example, if 50% of the sample is female, enter a mean of 50 and denote that this measure is a percentage by adding a “%” next to the measure name (see the example in Table IV.4). If the measure is a scaled variable, please note the range next to the measure name (for example, “range: 1 to 5”) (see the example in Table IV.4).

  • In column 4 (“Intervention versus comparison mean difference (p-value of difference)”), enter the mean difference and, in parentheses under the difference, the p-value for the difference.

  • Converting impact estimates into effect size units facilitates interpreting the size of the difference across outcomes using a common threshold. If this is helpful for your study, we recommend including the effect size for the difference in column 5. A common way to calculate an effect size is to divide the difference in means by the standard deviation of the comparison group; effect sizes are then measured in standard deviation units (see the formula after this list).

  • In the final row, enter the sample size in columns 2 and 3. These numbers should align with the analytic sample presented in Section IV.A.
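
The effect size calculation referenced above can be written as:

    \[ \text{Effect size} = \frac{\bar{y}_I - \bar{y}_C}{s_C} \]

where \(\bar{y}_I\) and \(\bar{y}_C\) are the intervention and comparison group means and \(s_C\) is the comparison group standard deviation.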



V. Findings and Estimation Approach

This section describes the impact and implementation analysis results and the estimation approach(es) used to estimate these findings. For your study, if it makes more sense to structure this section differently, please discuss with your ETTA. The sections below are organized as follows: implementation analysis, impact analysis to address primary research questions, sensitivity analysis, and additional analysis. Begin each section by summarizing the key finding(s) in the Key findings box (see the report template).

A. Implementation Evaluation

Purpose. Describe the program as actually implemented (actual services received) and provide context for the impact findings.

Instructions and Reminders. Begin the section by briefly summarizing the key findings, in the Key findings box.

The findings should be written concisely and grounded in numeric findings (for example, “The program was implemented with fidelity, and the program achieved its goals for attendance. Ninety-five percent of all program sessions were delivered, and 82 percent of the sample attended at least 75 percent of program sessions.”).

This section should describe what was actually received by people in the intervention and control/comparison groups. Describe how you measured services received by each study group by following the instructions in the Implementation Analysis Plan.

This section should also describe the types of analyses conducted. What measures were created? How were implementation elements quantified? What methods were used to analyze the data?

Important: Discuss the key limitations of the implementation data.

We encourage the use of subheadings in this section to discuss the findings related to fidelity, dosage, quality, engagement, context, and experiences of the intervention and comparison groups, to the extent they are available for your evaluation. Use this section to tell the story of the implementation that provides both context for the impacts and also the key lessons learned from implementation.

Potential Sources. Implementation Analysis Plan (Section C)

Non-Text Elements. A table may be helpful if many implementation data elements are being described or quantified.

B. Primary Impact Evaluation

Purpose. Present the impact results for the primary research questions and describe the methods used to estimate the program effectiveness, which includes methods to address concerns about underlying differences in the analytic samples.

Instructions and Reminders. Begin the section by briefly summarizing the key findings, in the Key findings box.

Present impacts of the intervention in tables, then discuss the findings in the text. Make sure each finding aligns with a given research question. Briefly elaborate on the findings and patterns of findings in this section, but save the broader discussion for the conclusion (for example, discussing how strong adherence to the program model during implementation may partly explain positive findings of the program’s effectiveness).

Please present the findings in a metric (for example, percentage point difference) that is easy for readers to interpret. For example, if the method was logistic regression, do not present results as odds ratios in the body of the report; rather, transform them into something that will make sense to a lay reader, such as a predicted probability or an adjusted mean.
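
One common way to make this transformation is to compute average predicted probabilities with the treatment indicator set to each condition. A minimal Python sketch using statsmodels follows (df and the variable names are hypothetical, and the covariates should match your impact model):

    import statsmodels.formula.api as smf

    # Fit the logistic impact model, then average the predicted probabilities
    # with everyone set to the intervention condition and to the comparison
    # condition; the difference is a percentage-point impact estimate.
    fit = smf.logit("outcome ~ treated + baseline_outcome + age", data=df).fit()
    p_treat = fit.predict(df.assign(treated=1)).mean()
    p_comp = fit.predict(df.assign(treated=0)).mean()
    print(f"Impact: {100 * (p_treat - p_comp):.1f} percentage points")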

Next, describe the analytic method(s) used to answer the primary research question(s). This discussion of methods should summarize the following:

  1. The model specification, including covariates included. Note whether covariates differed across models used to answer research questions.

  2. Criteria used to assess statistical significance.

  3. How you handled missing data.

  4. If applicable, information on clustering, stratification, or propensity score weighting and other items related to the study design.


Any details about data cleaning can be described in Appendix D. In addition, if alternate model specifications were tested, include that information in Appendix F and reference the appendix in this section.

Equations for estimating impacts should also be included in Appendix E for transparency (along with any additional details about the model specification, and other details, not described in the body of the text).
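
For example, a typical individual-level impact model takes the following form (an illustration of the kind of equation to include, not a prescription):

    \[ y_i = \beta_0 + \beta_1 T_i + \boldsymbol{\gamma}'\mathbf{x}_i + \varepsilon_i \]

where \(y_i\) is the outcome for sample member \(i\), \(T_i\) indicates assignment to the intervention group (so \(\beta_1\) is the impact estimate), \(\mathbf{x}_i\) is a vector of baseline covariates, and \(\varepsilon_i\) is an error term.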

Please note that if the study is an RCT with high attrition at the unit of assignment, or a QED, ACF guidelines dictate that the primary analysis use only individuals with complete data (that is, a complete case analysis, with no imputation of missing data). In these situations, you may include the analyses using imputed data as additional analyses (see Section V.D).

If your analysis plan included additional (non-primary) analyses, you can include them in the Additional Analyses section below in Section V.D.

Potential Sources. Impact Analysis Plan (Sections D.3 and D.4)

Non-Text Elements. Please refer to the table shells document, which includes table shells for Tables V.1 and V.2.

Please include a table that lists and describes the covariates included in your impact analyses, Table V.1 in the table shells document.

Instructions for Completing Table V.2

  • The purpose of this table is to summarize the estimated effects.

  • Edit and add rows as needed to represent all outcomes for which estimated effects will be reported.

  • In columns 2 and 3, enter the model-based (adjusted) mean and standard deviation. The model-based mean should adjust for baseline covariates. If the measure is binary, report the means as a percentage.

  • In column 4, enter the mean difference across groups and the p-value of this difference. The ETTA team recommends estimating a regression model to assess the impact of the intervention, both to adjust for baseline differences and to improve the precision of the impact estimate.


C. Sensitivity Analyses

Provide a one-paragraph overview of this section, highlighting that this section presents findings from sensitivity analyses conducted to check the robustness of your primary impact findings to alternative assumptions. Provide the key finding(s) from the sensitivity analyses in the Key findings box. For example, if the impact results are relatively unchanged regardless of the alternative assumptions, clearly state that.

Purpose. Briefly describe any analyses conducted to test the robustness of the main impact results to alternative assumptions that reflect important research decisions, and document that the observed main impact results are not driven by researcher decisions about how data were cleaned and analyzed.

Instructions and Reminders. Describe the methods you used to test the robustness of results or the appropriateness of the analytic model discussed above. For example, sensitivity analyses might adjust for alternative sets of covariates. Briefly summarize the results if the sensitivity analysis findings are similar to the findings from the benchmark approach.

This section should provide more discussion of sensitivity analyses that show results that are different from the benchmark approach, and thus challenge key research decisions. If findings differ for some sensitivity analyses, briefly discuss the similarities and differences (both in terms of magnitude and statistical significance) in the impact findings across the sensitivity analyses, relative to the benchmark approach. If the results from the sensitivity analyses differ substantially from the main results presented in the report, please provide commentary on which set of results is more appropriate.

Please refer to the table shells document, Table V.3. If the sensitivity analysis findings are similar to the main findings for the benchmark approach, include Table V.3 in Appendix F. If sensitivity analysis findings differ from the main findings, present Table V.3 in the text with a brief discussion of the findings.

Equations for estimating the sensitivity analyses can be included in Appendix F for transparency.

Potential Sources. None

Non-Text Elements. Please refer to the table shells document, which includes table shells for Table V.3.

Guidelines for Completing Table V.3

  • The purpose of this table is to summarize the sensitivity of estimated impacts to methodological decisions.

  • Only present the impact estimates (the difference in means between the two study groups).

  • Denote statistical significance next to the estimated impact using asterisks.

  • Add rows as needed for additional outcomes or comparisons. Add columns as needed for the sensitivity analyses presented.

  • Replace the column headings in italics with a descriptive name for each sensitivity analysis conducted, such as “No covariate adjustment”. These headings should match the headings under which each approach is described, either in the text or in Appendix F.


D. Additional Analyses

Purpose. Describe the analyses and findings for non-primary (additional) research questions, which may examine other outcomes, other time periods, subgroup analyses, mediated analyses, moderator analyses, or impact analyses using imputed outcome data.

Instructions and Reminders. Begin the section by briefly summarizing the key findings, in the Key findings box.

Briefly describe any additional (non-primary) analyses addressed in the study, including the motivation for doing the analyses, the measures explored, and the analytic approach used. Make sure each finding aligns with a given research question.

Present the findings. Findings from exploratory analyses are not considered impact findings, but can complement impact findings nicely and provide context for the study’s conclusions. Please present the findings in a metric (for example, percentage point difference or average scale value) that is easy for readers to interpret.

Briefly describe the analytic method(s) used to answer the research questions. If the methods are identical to those used to answer the primary research questions, state that. You can include more detailed information in Appendix G and refer readers to the appendix. For example, if you conducted analysis on imputed data (optional), you may want to describe how you did the imputation in Appendix G. When describing your analytic approach in detail in Appendix G, follow the same guidance as in Section V.B.
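
If you did impute, a short sketch of the procedure can anchor the Appendix G description. For illustration only, the following Python sketch uses the chained-equations (MICE) implementation in statsmodels; the DataFrame df and the variable names are hypothetical placeholders, and other imputation approaches are equally defensible:

    import statsmodels.api as sm
    from statsmodels.imputation.mice import MICE, MICEData

    # Chained-equations imputation of missing values, with the impact model
    # estimated on each completed data set and the results pooled
    # (column names are illustrative).
    imp = MICEData(df[["outcome", "treated", "baseline_outcome", "age"]])
    mice = MICE("outcome ~ treated + baseline_outcome + age", sm.OLS, imp)
    results = mice.fit(n_burnin=10, n_imputations=20)
    print(results.summary())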

Potential Sources. Impact Analysis Plan (Sections A.2 and E)

Non-Text Elements. Please refer to the table shells document for Table V.4 to show details of impact findings.

VI. Discussion

Purpose. Summarize the impact and implementation findings and describe lessons learned and limitations.

Instructions and Reminders. Restate the main impact and implementation findings, and weave the impact and implementation findings together to create a coherent story about any observed impacts (or lack thereof) and program implementation. For example, explain how program adherence or implementation quality might help explain the observed impacts. Discuss important lessons learned that explain the impacts or that could help others replicate the program or serve the same target population. State directions for future research that this study suggests.

Describe limitations of the study (for example, issues with randomization, study power, or implementation).

Potential Sources. Earlier sections of the report

Non-Text Elements. None

VII. References

Purpose. Provide the full reference for any work cited in the report.

Instructions and Reminders. Please use the American Psychological Association (APA) style guide for citing works in the report. This section should include the full reference for any work cited.

Potential Sources. None

Non-Text Elements. None

VIII. Appendices

Based on our guidance for the report sections, the report may include the following appendices (note: it may not be necessary to include appendices for all of these items):


  A. Logic model (or theory of change) for the program.

  B. Data and Study Sample. First, provide details about data collection for the implementation study (see Table B.1 in the table shells document). Italicized text in the table provides examples for expository purposes. The purpose of this table is to enable the reader to understand the data collected for the implementation analysis. Consider adding benchmarks for sufficient attendance or quality. Second, provide detailed descriptions of methods used to analyze the implementation data. Third, describe the data sources and data collection processes for the impact study (see Table B.2 in the table shells document). Finally, include the final CONSORT diagram, which is referenced in Section IV.A of your report.

  C. Attrition Rates and Baseline Equivalence of the RCT Design. For studies that were originally an RCT but had to construct equivalent groups using a QED, describe the original RCT design, attrition rates, and baseline equivalence (see Appendix C in the table shells document). For QEDs and RCTs that had to construct equivalent groups, please describe the approach to constructing equivalent groups in this appendix.

  D. Data Preparation. Methods used to clean and prepare data (including descriptions of how missing and inconsistent data were handled). Refer to the Impact Analysis Plan, Section D.2, for your plan; add any details as appropriate.

  E. Impact Estimation. Include model specifications (equations) used in the assessment of baseline equivalence and program impacts.

  F. Sensitivity Analyses and Alternative Model Specifications.

  G. Additional Analyses. Include additional impact analyses, exploratory analyses, alternative approaches to missing data, subgroup and moderator analyses, and mediated analyses.

