Supporting Statement B - R3-Impact 02.22.24_clean


Replication of Recovery and Reunification Interventions for Families-Impact Study (R3-Impact)

OMB: 0970-0616


Alternative Supporting Statement for Information Collections Designed for

Research, Public Health Surveillance, and Program Evaluation Purposes





Replication of Recovery and Reunification Interventions for

Families-Impact Study (R3-Impact)



OMB Information Collection Request

New Collection





Supporting Statement

Part B



February 2024








Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officers: Calonie Gray and Kelly Jedd McKenzie


Part B


B1. Objectives

Study Objectives

The objective of the Replication of Recovery and Reunification Interventions for Families–Impact Study (R3-Impact) is to conduct a rigorous evaluation of two promising and potentially replicable programs that use recovery coaches in child welfare settings: the Parent Mentor Program (PMP) and Sobriety Treatment and Recovery Teams (START). The impact evaluations will be supplemented by implementation evaluations of the PMP and START sites to document local contexts, fidelity, and frontline practices and to understand the ways these may intersect with program impacts.


The R3-Impact Study meets the requirements of the 2018 Substance Use Disorder Prevention that Promotes Opioid Recovery and Treatment for Patients and Communities Act (SUPPORT Act), wherein the U.S. Congress called for the replication and evaluation of a recovery coaching intervention with the goal of understanding its effectiveness and informing future potential efforts to scale effective interventions. This study aims to help close the gaps in knowledge and practice by providing the public with information from a rigorous examination of recovery coaching programs designed for parents engaged in the child welfare system due to parental substance use disorder (SUD). The R3-Impact Study aims to produce evidence that is reviewable by the Title IV-E Prevention Services Clearinghouse, with the ultimate goal of identifying evidence-based programs that states can implement as part of their Family First programming.1


Generalizability of Results

Impact Evaluation

The R3-Impact Study will help us understand the impact of recovery coaching programs on families engaged in the child welfare system due to parental SUD. The impact evaluation of PMP will use a randomized controlled trial (RCT) design in four states (referred to as “sites”) to estimate the impact on study participants’ well-being and child welfare outcomes.2


By conducting the evaluation across multiple sites, we are generating more robust evidence on the effectiveness of the interventions than would be the case if conducting the evaluation in only one site. The main findings of the evaluation will be based on pooled data from across the four sites. The PMP evaluation is intended to produce internally valid estimates of the intervention’s impact in chosen sites, not to promote statistical generalization to other sites or service populations.


Implementation Evaluation

The implementation evaluation is intended to present descriptions of PMP and START in the chosen sites, not to promote statistical generalization to other sites or service populations.


Appropriateness of Study Design and Methods for Planned Uses

Impact Evaluation

The R3-Impact Study uses a rigorous, multisite design that includes existing and new program sites across multiple states and policy contexts. Using this design allows ACF to satisfy the legislative requirements called for by the 2018 SUPPORT Act and fill gaps in the evidence base. Because PMP and START are both established programs with an existing or promising evidence base, they are well suited to be evaluated using rigorous impact methods.3


The impact evaluation of PMP will use an RCT design. In each of the sites (Michigan, Minnesota, Virginia, and Oregon), PMP is being newly implemented for families who are receiving in-home child welfare services. Assignment to the control group in those locations is consistent with the standard of care; the treatment (the offer of PMP) is an enhancement.


As noted in Supporting Statement A, section A2, the PMP impact evaluation has two primary limitations: (1) point-in-time measurement of well-being outcomes that may not reflect average well-being in the post-intervention period, and (2) lack of generalizability to prospective sites whose characteristics differ from those of the four PMP evaluation sites. We will note these limitations in R3-Impact Study publications.


Implementation Evaluation

The implementation evaluation will collect information about program design and implementation as well as local context directly from those involved in program implementation. This information will not be representative of all experiences, but it will be critical for interpretation of the findings from the impact evaluation as well as documenting the program, facilitators, and barriers for the purpose of replication.


As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.


B2. Methods and Design

Target Population

Impact Evaluation

The impact evaluation of PMP, across all sites, will target parents involved in the child welfare system for whom parental substance use has been identified as a child safety concern. Each of the four PMP sites plans to target the program to parents with open child welfare cases who are receiving in-home services, with the aim of preventing foster care placement. PMP has few other eligibility criteria, apart from considerations related to participants’ ability to engage in the program based on mental health and incarceration status.4


For the analysis of PMP impacts on parent well-being outcomes, the unit of analysis is the focal parent (with one focal parent per family). For the analysis of PMP impacts on placement in foster care, family reunification, and subsequent maltreatment, the unit of analysis is the child (with multiple children per family allowed).


Implementation Evaluation

For the implementation studies of PMP and START, the study team (staffed by Abt Associates and its subcontractor) will conduct interviews during two rounds of site visits to all four states implementing PMP and with up to two START states. During the first site visit (in-person; beginning about 12 months after OMB approval), the study team will interview county child welfare staff, program managers, system partners, and recovery coaches about early implementation experiences related to adoption and installation of the program and alternative services available in the community. The study team will also conduct interviews with parents who have participated in the program. During the second site visit (virtual; beginning about 24 months after OMB approval), site visitors will interview county child welfare staff, provider agencies, and recovery coaches about changes in and evolution of program implementation, and interview system partners about any changes to their involvement in the program and reflections on the evolution of their role.


Sampling and Site Selection

The study team selected four sites to participate in the PMP impact evaluation: Oregon, Michigan, Minnesota, and Virginia. Sites were selected based on several factors, including (1) their level of interest in implementing PMP, (2) their capacity to implement the program, (3) the number of prospective subsites (i.e., counties) and eligible participants, and (4) the sites’ interest and willingness to participate in the evaluation. The study team selected subsites (counties) in collaboration with the sites based on their willingness and capacity to refer eligible individuals to PMP and participate in the evaluation, the number of prospective program participants, and the strength of the counterfactual.


Impact Evaluation

Parents will be eligible for the study if they have an open child welfare case indicating parental substance use as a concern and their child(ren) have not been placed in foster care at the time of random assignment but are at high risk for future foster care placement.


Implementation Evaluation

For the implementation evaluation, we will gather information during two site visits to two to four subsites (i.e., counties) in each site. The study team will select subsites based on implementation progress, urbanicity, local child welfare system conditions and outcomes, local demographics, recommendations from state-level partners and organizations providing the intervention across multiple counties, and willingness to participate in data collection activities. Within each subsite, the study team will work with county staff to identify respondents for recruitment. The study team will use non-probability, purposive sampling to identify potential respondents who can collectively provide the required breadth of perspective on the study’s research questions. Because participants will be purposively selected, they will not be representative of the population of child welfare lead and frontline staff, program managers, mentor supervisors, parent/family mentors, and parents. The study team will, however, consider parent experiences and demographics when developing the respondent pool.

B3. Design of Data Collection Instruments

Development of Data Collection Instruments

Impact Evaluation

The study team developed the baseline survey in coordination with experts with lived experience as well as those with in-depth research and policy knowledge in the areas of child welfare and substance use. Our starting points for survey development and these expert consultations were the program’s logic model; theoretical models of adult well-being, substance use, and behavior change; and considerations about the experiences and circumstances of individuals involved in the child welfare system as parents. From these, we developed a set of outcome measures and background characteristics essential to measure at baseline to support the impact analysis that will answer the study’s research questions (see Appendix D for a table linking each survey item to the relevant research question(s)). We vetted these outcomes with experts and in some cases eliminated measures to narrow the survey’s scope or added outcomes based on expert input.


To ensure a reliable and valid instrument and to minimize measurement error, we identified potential measures via a comprehensive scan and literature search, with particular attention to validated measures used in similar studies (e.g., HUD’s Family Options Study and ACF’s Building Evidence on Employment Strategies (BEES) project) or with similar populations. This produced a catalog with several options per outcome, from which we selected the strongest candidates. In doing so, we considered the burden on respondents (number of items per scale) and the face validity of scales for measuring the constructs of interest. We then sought feedback from our panel of lived experience experts (see Supporting Statement A, section A8) on the wording, understandability, and relative appropriateness of competing scales.5 We incorporated their feedback into our final selection and adjusted certain items to improve comprehension and to ensure equity and non-judgment in the framing of questions. This is an important step to guard against measurement error, and we plan to measure the reliability of any scale we have modified.
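The reliability check mentioned above is commonly operationalized as Cronbach’s alpha for internal consistency. A minimal sketch of the computation (the function name and data layout are illustrative, not drawn from the study’s analysis plan):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one row per respondent; each row holds that respondent's
    numeric answers to the k items of the scale.
    """
    k = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Variance of each item across respondents, and of total scores.
    item_vars = [var([row[j] for row in items]) for j in range(k)]
    total_var = var([sum(row) for row in items])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)
```

Values near 1 indicate high internal consistency; a modified scale whose alpha falls well below the published value for the original scale would warrant review.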


Implementation Evaluation

The study team developed the implementation evaluation topic guides for R3 based on the program logic models, the Consolidated Framework for Implementation Research (CFIR), and the study team’s expertise from past OPRE implementation studies. The instruments will be used to guide semi-structured discussions with county-level child welfare staff, program managers and supervisors, staff implementing the program, other providers in the community, and parents. Because each subsite may be structured differently, each instrument will need to be tailored by subsite; the project team will not ask all questions of the same type of respondent in every subsite and will not pretest the instruments, but will instead use them to guide conversations.


B4. Collection of Data and Quality Control

The study team will collect all evaluation data.

PMP Impact Evaluation

Given that information on key aspects of adult well-being and SUD recovery cannot be collected from administrative data, survey data collection will be an essential part of the evaluation. This, in turn, points the design toward an enrollment process where all participants provide informed consent at study enrollment, complete a baseline survey, and complete a follow-up survey (to be discussed in a subsequent information collection request).


Participants will complete the baseline survey at study enrollment. The following describes the procedures for data collection at study enrollment and how study enrollment will be combined with program operations.


Before recruiting participants into the study, the child welfare agency will, following its existing procedures, check information to determine whether the participant is eligible for the program’s services. Child welfare staff will inform eligible parents about the opportunity to participate in the study and request their permission to be contacted. Child welfare staff may also conduct the study’s informed consent process.


If parents provide permission to be contacted, a field interviewer from the study team will contact the parent to schedule an enrollment meeting. The field interviewer will aim to complete the enrollment meeting in person; where the participant is unable to meet in person, the field interviewer will conduct the meeting virtually. The study enrollment meeting will include the following steps.

  1. Introduce the study to the participant. Provide a commitment to privacy, explain random assignment, and answer questions just prior to going through the consent form to ensure an understanding of the implications of the study.

  2. Attempt to obtain informed consent to participate in the R3-Impact Study using the informed consent form (Appendix B). Field interviewers and site staff will be trained to explain the issues in the consent form and to answer questions. If the applicant would like to participate, they will provide verbal consent (the IRB approved a waiver of written consent).

  3. Administer the Baseline Parent Survey (Instrument 1), asking the participant questions from the baseline survey and recording their responses using a computerized survey instrument. For in-person enrollment, a portion of the survey that asks sensitive questions will be self-administered using audio computer-assisted self-interview (ACASI) technology; during this portion of the survey, the interviewer will provide the participant with a tablet to complete the self-administered section. For surveys not administered in person, the field interviewer will use computer-assisted telephone interviewing (CATI).

  4. Indicate in the random assignment system that the participant is ready to be randomly assigned. The system will immediately retrieve the random assignment result and the field interviewer will inform the participant of the result. Participants will be assigned to either a group that may receive program services (i.e., PMP) in addition to currently available services, or a group that may not receive program services but can access the services available to both treatment and control groups (i.e., the “services as usual” or “SAU” group).
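The random assignment step above can be sketched as a deterministic per-participant draw, so that re-querying the system for the same participant always returns the same group. The seeding scheme and 50/50 allocation here are illustrative assumptions, not a description of the study’s actual system:

```python
import random

def assign(participant_id, site, pmp_share=0.5):
    """Illustrative random assignment to PMP or services-as-usual (SAU).

    Seeding on the site and participant makes the result reproducible:
    the same participant always maps to the same group.
    pmp_share is the assumed probability of assignment to PMP.
    """
    rng = random.Random(f"r3-demo:{site}:{participant_id}")
    return "PMP" if rng.random() < pmp_share else "SAU"
```

A production system would also log each assignment with a timestamp so the allocation sequence can be audited.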


The study team will undertake the following activities to enhance data quality:

  • Interviewer training: Prior to data collection, field interviewers will participate in a multi-day training that covers interviewing best practices, the survey instrument content, the informed consent process, and the equipment they will use to conduct the interviews. Interviewers will also receive training on best practices for working effectively with the target population, how to handle sensitive questions, background information on the study, and FAQs to use when addressing participant questions. At the conclusion of the training, field interviewers will complete a certification wherein they demonstrate their ability to administer the survey and informed consent according to study protocols.

  • Data spot checks: Upon survey launch, the study team will do an initial review of data collected on the first 15 participants to ensure that the survey instrument is operating as expected. After that, the study team will do monthly spot checks to ensure that data are being collected as intended and to flag any problems with item nonresponse or responses occurring with unusual frequency, which may signal a need to refine response options.

  • Validation interviews: Throughout data collection, the study team will conduct validation interviews for 10 percent of baseline survey responses. Interview supervisors will conduct these short interviews with study participants by phone, confirming that the interview took place, asking about characteristics of the interaction with the interviewer, and collecting factual responses that can be compared to the information in the baseline survey to validate that the interviewer spoke with the participant.

  • Tokens of appreciation: Securing complete and accurate data from participants throughout the study period is vital for the impact and implementation evaluations. Study participants will receive tokens of appreciation at enrollment, after completing the baseline survey, and after completing each quarterly contact form to support their participation in the study over time.
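The item-nonresponse spot checks described above could be automated along the following lines. This is a sketch; the 15 percent threshold and the data layout are illustrative assumptions, not study specifications:

```python
def flag_high_nonresponse(responses, threshold=0.15):
    """Flag survey items whose nonresponse rate exceeds a threshold.

    responses: dict mapping item name -> list of recorded values,
    with None marking a missing (unanswered) response.
    Returns {item: nonresponse_rate} for each flagged item.
    """
    flags = {}
    for item, values in responses.items():
        rate = sum(v is None for v in values) / len(values)
        if rate > threshold:
            flags[item] = rate
    return flags
```

Running such a check monthly against the accumulated survey file would surface items whose wording or response options may need refinement.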


Implementation Evaluation

The study team will collect data on program implementation through in-person and virtual discussions. Prior to starting data collection, the study team will conduct a training for all data collection staff that will cover the purpose of the data collection, how to tailor the data collection instruments to specific respondents, and how to summarize findings following the discussion using a site reflection summary template. During the site visits, one study team member will ask questions and lead the discussion. A second team member will take notes and ask for clarification where necessary. With respondent permission, the study team will audio record all interviews. The team member who leads the discussion will be responsible for reviewing the notes and transcripts to check for accuracy, identify any missing information, and follow-up as needed.


B5. Response Rates and Potential Nonresponse Bias

As noted in section A9 of Supporting Statement A, the R3-Impact Study panel for the PMP evaluation is small (2,750 parents), and a high response rate across this longitudinal study is necessary to maintain statistical power to detect meaningful effects when measuring participant outcomes. In addition, the integrity of the study’s estimates requires maintaining similar response rates for the treatment and control or comparison groups and across demographic groups of central interest to the research study. We expect that maintaining high response rates for follow-up data collection will be difficult because the study’s target population, parents facing SUD, may be particularly hard to keep in contact with over time. Because of the complex design and study population, it is important to build respondent buy-in early in the study and retain as much of the sample as possible over time; we have designed the data collection plans described above with that goal in mind. The information in this current request focuses on response rates for initial activities; the future request covering follow-up activities will provide updated response rates from these initial activities along with expected response rates for follow-up activities.



Response Rates



Impact Evaluation

Consented study participants are required to complete the Baseline Parent Survey (Instrument 1) as part of study enrollment. As such, we expect 100 percent participation for this data collection.


Implementation Evaluation

The interviews are not designed to produce statistically generalizable findings and participation is wholly at the participant’s discretion. Response rates will not be calculated or reported.


Non-Response

Impact Evaluation

We do not have concerns about nonresponse bias in the analysis of the baseline survey data. The baseline survey will be completed by all study participants after informed consent is given. Further, because the survey is interviewer-administered, item nonresponse will be minimized: interviewers will be trained to probe for completion and to address study participants’ questions about the forms and survey items. Items left unanswered will be so at the participant’s discretion.


Implementation Evaluation

As the study team will purposefully select respondents for the implementation evaluation and findings are not intended to be representative, we will not calculate non-response bias. The study team will document and report parent demographics in written materials associated with the data collection.


B6. Production of Estimates and Projections

Impact Evaluation

The evaluation will use the baseline survey data to describe the study sample. The evaluation will present means and standard deviations of variables collected at baseline. The evaluation will also use baseline survey data to demonstrate baseline equivalence between the treatment (PMP) and control/comparison (SAU) groups.


When demonstrating baseline equivalence, we will use the methods described by the Title IV-E Prevention Services Clearinghouse to estimate the effect size of differences in mean values of “pre-tests” between the treatment and control/comparison groups. A “pre-test” is the baseline measure of an outcome that will be examined at follow-up for the impact of the intervention.
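For concreteness, a standardized mean difference of this kind is often computed as Hedges’ g with a small-sample correction. The sketch below shows the generic formula; the Clearinghouse’s handbook should be consulted for its exact procedures, which vary by outcome type:

```python
from math import sqrt

def hedges_g(treat, control):
    """Hedges' g: difference of group means scaled by the pooled
    standard deviation, with the usual small-sample correction."""
    n1, n2 = len(treat), len(control)
    m1, m2 = sum(treat) / n1, sum(control) / n2
    v1 = sum((x - m1) ** 2 for x in treat) / (n1 - 1)   # sample variance
    v2 = sum((x - m2) ** 2 for x in control) / (n2 - 1)
    pooled_sd = sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    correction = 1 - 3 / (4 * (n1 + n2) - 9)
    return correction * (m1 - m2) / pooled_sd
```

Under review standards of this kind, pre-test differences near zero generally satisfy baseline equivalence, while larger differences may require statistical adjustment in the impact models.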


Implementation Evaluation

The qualitative data collected through interviews will not be used to generate population estimates, either for internal use or dissemination.


B7. Data Handling and Analysis

Data Handling

Impact Evaluation

The study team will undertake several steps to enhance data quality and minimize errors. These include:

  • Mode-Specific Routing: The data collection platform can accommodate mode-specific logic, text fills, within-survey skip routing, and similar features to ensure that we ask the right questions of the right people and that our interviewers have appropriate probes and instructions at the ready.

  • Real-time Data Entry and the Inclusion of Consistency Checks: The data collection platform can limit responses to acceptable ranges and require consistency across questions.

  • Testing: Once the survey is programmed, but prior to deployment, the study team will conduct a thorough test of the data collection platform. Multiple study team staff will test the program independently at each step in the process to identify problems so that the program is error-free. The team will also use Autopilot to generate data that automatically populates all paths through the questionnaire to check all skip patterns. The study team will then review the file to ensure that data are being stored according to study specifications.

  • Comprehensive Quality Assurance and Monitoring: The study team will run reports in real time to monitor data collection and track survey progress. Reports contain vital statistics used to review project status and identify interviewers who need additional assistance. The study team will also review the data frequencies to confirm that there are no unusual outliers or errors in the logic.


Implementation Evaluation

Study team staff will take notes during the interviews and, with permission, record the conversations either using a portable audio recorder (for in-person interviews) or Microsoft Teams transcription feature (for virtual interviews). Final versions of the notes, transcripts, and site reflection summaries will be stored on Microsoft Teams in a folder only accessible to members of the research team. The study team project director and implementation evaluation task lead will review all site reflection summaries for clarity and completeness and will request additional information from the team that facilitated the discussion as needed.


Data Analysis

Impact Evaluation

Please see Section B6 for analytic methods that will be used with the baseline survey data collected under this information collection request.


The study team will pre-register the impact evaluation design plan using the AEA RCT Registry, developed and operated by the American Economic Association.


Implementation Evaluation

All the implementation evaluation data collected during the visits will be coded in the NVivo software application using a hierarchical coding scheme mapped to the domains of the CFIR. Interview transcripts, and any other materials collected on-site, will be coded using this mapped coding scheme by a team of site visitors who will be overseen by the implementation evaluation task lead. The coding team will meet weekly to ensure consistency in application of codes.


Data Use

The survey data collected through this information collection request will be used to establish a baseline measure of parent well-being, substance use treatment, and recovery support services engagement. These data will also be used to describe the demographic characteristics of the study population and to test baseline equivalency. The study team will use these data to produce annual reports on study execution that will describe challenges encountered or anticipated and their solutions, information on site-level performance, and, where appropriate, noteworthy findings or insights. Interview data will be used in two reports (interim and final) describing the progress and results of the implementation evaluation. A future information collection request will seek approval for the follow-up survey. The study team will use those data, along with the data collected under this information collection request, to compare outcomes for families offered the treatment intervention with outcomes for control group families receiving services as usual. Progress and results from the impact evaluation will also be presented in an interim and final report. The study team may also produce a special topics report on a topic selected by ACF. The goal of these products and their dissemination is to advance the field’s understanding of the research evidence generated by R3-Impact on recovery coaching interventions in child welfare.


B8. Contact Persons

Name

Organization

Role on Contract

Phone/email

Calonie Gray

OPRE, ACF, HHS

Co-Contracting Officer’s Representative

Calonie.Gray@acf.hhs.gov

(202) 565-0205


Kelly Jedd McKenzie

OPRE, ACF, HHS

Co-Contracting Officer’s Representative

Kelly.McKenzie@acf.hhs.gov

(202) 245-0976


Kimberly Francis

Abt Associates

Project Director / Principal Investigator

Kimberly_Francis@abtassoc.com

(617) 520-2502


Attachments

  • Instruments

    • Instrument 1: Baseline Parent Survey

    • Instrument 2: Contact Form

    • Instrument 3: Validation Interview Script

    • Instrument 4: Topic Guide – Child Welfare Lead Staff

    • Instrument 5: Topic Guide – Child Welfare Frontline Staff

    • Instrument 6: Topic Guide – Partners

    • Instrument 7: Topic Guide – Program Managers

    • Instrument 8: Topic Guide – Mentor Supervisors

    • Instrument 9: Topic Guide – Parent/Family Mentors

    • Instrument 10: Topic Guide – Parents

    • Instrument 11: Participant Interview Information Form

  • Appendices

    • Appendix A: Text from SUPPORT Act Section C.8082

    • Appendix B: Informed Consent Form – Impact Evaluation

    • Appendix C: Informed Consent Form – Parent Interview

    • Appendix D: Crosswalk of Survey Items to Research Questions

    • Appendix E: Crosswalk of Implementation Topic Guides to Research Questions



1 The Family First Prevention Services Act, signed into law in 2018, provides funds to states for programming focused on keeping families together when it is safe to do so.

2 The impact study of START will rely on quasi-experimental (QE) methods to estimate the program’s impact on participants’ child welfare outcomes. The START impact study will rely solely on administrative data and is not included in this information collection request.

3 Based on impact evidence from Kentucky, the Title IV-E Prevention Services Clearinghouse has rated START as “supported.” An initial impact and implementation evaluation of PMP in Oregon did not find impacts on child welfare outcomes, though it did find that most participating parents responded positively to PMP’s self-directed change model, developed recovery skills, and engaged in treatment and recovery activities.

4 On a case-by-case basis, individuals with severe mental health conditions that would prevent them from safely engaging in services may be referred to other appropriate mental health services to gain stability prior to participating in PMP. In addition, PMP is not designed to support individuals who are incarcerated.

5 Through these activities we did not request the same information from more than 9 individuals and therefore the feedback requests were not subject to the Paperwork Reduction Act.


