Building Evidence on Employment Strategies
OMB Information Collection Request
0970-0537
Supporting Statement
Part B
September 2022
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officer:
Megan Reid
B1. Respondent Universe and Sampling Methods
The Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) seeks approval for an extension to complete data collection for the Building Evidence on Employment Strategies (BEES) study. This document has been updated to reflect the current state of the study, but no changes are proposed to the currently approved materials1.
Sampling and Target Population
All BEES sites focus on providing employment services to low-income populations, particularly individuals overcoming substance use disorder, mental health issues, and other complex barriers to employment. BEES includes 20 programs, approximately 11 of which will include an impact and implementation study, while the remaining programs – where an impact study is not feasible – will involve implementation-only studies.
Impact and Implementation Studies
Among the programs with an impact study, we anticipate sample sizes of 650 to 800 in each program.
Within each program participating in the impact and implementation analyses, participants who are eligible for the offered services will be enrolled in the BEES study. Not participating in the study will not affect access to program services. Program staff identify eligible applicants using their usual procedures. Then, a staff member will explain the study and obtain informed consent. We anticipate enrolling all eligible participants who provide consent over the enrollment period, with an estimated sample size of 5,000 across all BEES sites. Thus far, 1,700 individuals have been enrolled as BEES study participants.
Implementation-Only Studies
Although BEES prioritized mature programs ready for rigorous evaluation, such as randomized controlled trials, the research team identified programs where an impact study is not possible at this time. New programs tend to be small and are still developing their program approach and services, making them unsuitable for rigorous evaluations. Designed to provide information to the field in a timely and accessible manner, implementation studies complement the rigorous evaluations of larger, and perhaps more established, programs. Eight programs identified as unsuitable for random assignment at this time have been selected for implementation-only studies. Within the set of substance use disorder (SUD) programs, we selected sites that represent a variety of approaches to designing and operating programs that provide employment services to individuals struggling with or in recovery from SUD, particularly opioid use disorder. Programs using a whole family approach have been selected to represent different approaches to multi-generational work in different locations. For these studies, site visitors use a semi-structured interview protocol during two-day visits to each site; the protocol, developed for the BEES implementation study to guide interviews with program managers, staff, and partners, is tailored to the type of program being visited. When it is not possible to do an in-person visit, interviews have been conducted virtually via phone calls or video conference.
Research Design and Statistical Power
Impact Analysis
This section briefly describes some principles underlying the impact analysis for each program in BEES.
Preference for random assignment. Randomized controlled trials (RCTs) are the preferred method for estimating impacts in the BEES evaluations. However, an RCT might not be feasible, for example, if there are not enough eligible individuals to provide a control group or if a program serves such a high-risk population that it would be problematic to assign any of its members to a control group. In that case, the team will propose strong quasi-experimental designs, such as regression discontinuity designs (Imbens and Lemieux, 2008), comparative interrupted time series (Somers, Zhu, Jacob, and Bloom, 2013), and single case designs (Horner and Spaulding, 2010). All of these designs can provide reliable estimates of impacts under the right circumstances, and we would use procedures that meet best practices for these designs, such as those proposed by the What Works Clearinghouse.
Intent-to-treat impact estimates. The starting point for the impact analysis is to compare outcomes for all program group members and control group members. In an RCT, random assignment ensures that this comparison provides an unbiased estimate of the effect of offering the program group access to the intervention. To increase precision, impact estimates are regression adjusted, controlling for baseline characteristics.
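As a simple illustration, a regression-adjusted intent-to-treat estimate can be obtained by regressing the outcome on the random assignment indicator and baseline covariates; the sketch below uses hypothetical file and variable names rather than the study's actual data layout.

```python
# Minimal sketch of a regression-adjusted intent-to-treat (ITT) estimate.
# The file name and variables (earnings, program_group, age,
# employed_at_baseline) are illustrative, not the study's actual layout.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analysis file

# Outcome regressed on the random assignment indicator plus baseline
# covariates; the coefficient on program_group is the regression-adjusted ITT.
model = smf.ols("earnings ~ program_group + age + employed_at_baseline", data=df)
result = model.fit(cov_type="HC2")  # heteroskedasticity-robust standard errors
print(result.params["program_group"], result.bse["program_group"])
```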
Following the recommendations of Schochet (2008), the impact analysis would reduce the likelihood of a false positive by focusing on a short list of confirmatory outcomes that would be specified before the analysis begins. To further reduce the chance of a false positive finding, results for any confirmatory outcomes would be adjusted for having multiple outcomes, for example, by using the method of Westfall and Young (1993).
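For illustration, one resampling-based implementation in the spirit of Westfall and Young's single-step maxT adjustment is sketched below; it permutes the random assignment indicator and compares each outcome's observed t-statistic with the permutation distribution of the maximum |t| across the confirmatory outcomes. The data are simulated and the exact procedure used in the analysis may differ.

```python
# Sketch of a single-step maxT adjustment in the spirit of Westfall and Young
# (1993): permute the assignment labels and compare each outcome's observed
# |t| with the permutation distribution of the maximum |t| across outcomes.
import numpy as np

rng = np.random.default_rng(seed=2022)

def t_stats(y_matrix, treat):
    """Difference-in-means t-statistics for each column of y_matrix."""
    y1, y0 = y_matrix[treat == 1], y_matrix[treat == 0]
    diff = y1.mean(axis=0) - y0.mean(axis=0)
    se = np.sqrt(y1.var(axis=0, ddof=1) / len(y1) + y0.var(axis=0, ddof=1) / len(y0))
    return diff / se

def westfall_young_maxt(y_matrix, treat, n_perm=5000):
    observed = np.abs(t_stats(y_matrix, treat))
    max_null = np.empty(n_perm)
    for b in range(n_perm):
        permuted = rng.permutation(treat)  # re-randomize assignment labels
        max_null[b] = np.abs(t_stats(y_matrix, permuted)).max()
    # Adjusted p-value: share of permutations whose maximum |t| is at least
    # as large as the observed |t| for that outcome.
    return np.array([(max_null >= t).mean() for t in observed])

# Example with simulated data: 800 people, two confirmatory outcomes.
treat = rng.integers(0, 2, size=800)
outcomes = rng.normal(size=(800, 2))
print(westfall_young_maxt(outcomes, treat, n_perm=2000))
```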
Exploratory analyses might also be proposed, depending on what is found in the primary analysis. For example, if the primary impact analysis finds that an intervention increases cumulative earnings, secondary analyses could investigate whether earnings gains were sustained at the end of the follow-up period.
Sample size and statistical power. The ability of the study to find statistically significant impacts depends in part on the number of families or individuals for which data are collected. Exhibit 1 presents minimum detectable effect (MDE) estimates for several scenarios (all assuming that 50 percent of study participants are assigned to the program group and 50 percent to the control group). An MDE is the smallest true effect that would be detected as statistically significant in 80 percent of studies with the same design. Since our assumed sample sizes and data sources vary by type of intervention, results are shown separately for behavioral health and non-behavioral health interventions. Results are expressed both as effect sizes – that is, as a number of standard deviations of the outcome – and for illustrative outcomes. The sample sizes are as follows: (1) administrative data matched to 90 percent of the full sample (assumed to be 800), and (2) a 12- to 24-month survey with 640 respondents. Key points from Exhibit 1 include:
With administrative data, the study would be able to detect impacts of 0.166 standard deviations in non-behavioral health sites and 0.185 in the other sites. This translates into impacts on employment of 7.6 and 8.5 percentage points, respectively, assuming 70 percent of the control group works.
The 12- to 24-month survey would be able to detect impacts of 0.197 standard deviations, which would translate into a reduction in having moderate or worse depression of 9.6 percentage points (using results from the Rhode Island depression study).
Exhibit 1. Minimum Detectable Effects (MDEs)

| Data Source and Outcome | Control Group Level | MDE |
| Administrative records (effect size) | | 0.185 |
| Employed (%) | 70 | 8.5 |
| 12-month survey (effect size) | | 0.197 |
| Moderate or worse depression (%) | 60 | 9.6 |
NOTES: Results are the smallest true impact that would generate statistically significant impact estimates in 80 percent of studies with a similar design using two-tailed t-tests with a 10 percent significance level. No adjustment for multiple comparisons is assumed.
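As a rough check, the effect-size MDEs above can be approximated with the standard formula MDE ≈ (z for the two-tailed 10 percent test + z for 80 percent power) × 2/√N for an evenly split sample. The sketch below reproduces the Exhibit 1 figures under those simplifying assumptions; it ignores precision gains from regression adjustment, and the sample sizes follow the text.

```python
# Approximate MDE calculation for a two-arm design with a 50/50 split,
# 80 percent power, and a two-tailed test at the 10 percent level.
# Ignores gains from regression adjustment; sample sizes follow the text.
from math import sqrt
from scipy.stats import norm

def mde_effect_size(n_total, alpha=0.10, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # 1.645 for a 10% two-tailed test
    z_power = norm.ppf(power)           # 0.842 for 80% power
    # Standard error of a difference in means, in effect-size units,
    # with half the sample in each group: 2 / sqrt(N).
    return (z_alpha + z_power) * 2 / sqrt(n_total)

def mde_percentage_points(n_total, control_rate):
    # Convert the effect-size MDE to percentage points for a binary outcome.
    return mde_effect_size(n_total) * sqrt(control_rate * (1 - control_rate)) * 100

print(round(mde_effect_size(720), 3))              # ~0.185 (admin data, 90% of 800)
print(round(mde_percentage_points(720, 0.70), 1))  # ~8.5 points on employment
print(round(mde_effect_size(640), 3))              # ~0.197 (640 survey respondents)
print(round(mde_percentage_points(640, 0.60), 1))  # ~9.6 points on depression
```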
Estimated effects for participants. If a substantial portion of the program group receives no program services, or a substantial portion of the control group receives services similar to those offered to the program group, the study may provide estimates of the effect of the intervention among those for whom being assigned to the program group changed their receipt of program services. These so-called local average treatment effects (Angrist and Imbens, 1995) can be estimated as simply as dividing intent-to-treat impacts by the difference in program participation rates between the program and control groups. Such estimates could be made more precise if there are substantial differences in use of program services across subgroups of participants or across sites offering similar services (Reardon and Raudenbush, 2013).
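As a minimal illustration of the simple ratio described above (sometimes called a Bloom or Wald estimator), with made-up numbers:

```python
# Minimal sketch of a local average treatment effect estimate: the ITT impact
# divided by the program-control difference in service receipt.
# All numbers are made up for illustration.
itt_impact = 500.0        # estimated ITT impact on annual earnings ($)
takeup_program = 0.75     # share of program group receiving services
takeup_control = 0.10     # share of control group receiving similar services

late = itt_impact / (takeup_program - takeup_control)
print(f"Estimated effect per participant affected by assignment: ${late:,.0f}")
```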
Subgroup estimates. The analysis would also investigate whether the interventions have larger effects for some groups of participants (Bloom and Michalopoulos, 2011). In the main subgroup analysis, subgroups have been chosen using baseline characteristics, based on each evaluation’s target population and any aspects of the theory of change that suggest impacts might be stronger for some groups. This type of subgroup analysis is standard in MDRC studies and has been used in analyzing welfare-to-work programs, including for subgroups at risk for mental health issues, such as depression (e.g., Michalopoulos, 2004; Michalopoulos and Schwartz, 2000).
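As an illustration, subgroup impacts of this kind are often estimated by adding a treatment-by-subgroup interaction to the regression-adjusted ITT model. The sketch below assumes hypothetical variable names (a baseline depression-risk indicator) and is not the study's actual specification.

```python
# Sketch of a subgroup impact estimate via a treatment-by-subgroup interaction
# in the regression-adjusted ITT model. File and variable names are
# hypothetical; "depression_risk" stands in for a baseline characteristic.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")

model = smf.ols(
    "earnings ~ program_group * depression_risk + age + employed_at_baseline",
    data=df,
).fit(cov_type="HC2")

# The interaction coefficient tests whether the impact differs between the
# subgroups defined at baseline.
print(model.params["program_group:depression_risk"])
print(model.pvalues["program_group:depression_risk"])
```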
B2. Procedures for Collection of Information
Impact Study Data Sources
Data Collected at Study Enrollment.
BEES study enrollment has built upon each participating program’s existing enrollment and data collection processes. The following describes the procedures for data collection at study enrollment and how study enrollment has been combined with program operations.
Before recruiting participants into the study, the program follows its existing procedures to determine whether an applicant is eligible for the program's services. This information is logged into the program's existing management information system (MIS) or data locating system.
For study enrollment, the program staff then conducts the following procedures. Some steps might be conducted virtually, as necessary.
Step 1. Introduce the study to the participant. Describe the commitment to privacy, explain random assignment, and answer questions just before going through the consent form to ensure the applicant understands the implications of the study.
Step 2. Attempt to obtain informed consent to participate in BEES using the informed consent form (Attachment H). Staff members are trained to explain the issues in the consent/assent form and to be able to answer questions. If the applicant would like to participate, they sign the consent form – either electronically or on a paper form. The participant is also given a copy of the consent form for their records.
Step 3. A staff person provides the Baseline Information Form (Attachment D) for the participant to complete on paper or electronically.
Step 4. Indicate in the random assignment system that the participant is ready to be randomly assigned, if random assignment is occurring on an individual basis. The result of random assignment is immediate. Random assignment may also occur on a cohort-basis, depending on the site’s usual enrollment processes.
Step 5. Inform the participant of assignment to receive the services available to the program group or the alternative services that are available to the control group.
Data Collected After Study Enrollment
This section describes procedures related to data collection after study enrollment.
Interviewer Staffing. An experienced, trained staff of interviewers will conduct the 12- to 24-month participant surveys. The training includes didactic presentations, numerous hands-on practice exercises, and role-play interviews. The evaluator’s training materials place special emphasis on project knowledge and sample complexity, gaining cooperation, refusal avoidance, and refusal conversion. The study team maintains a roster of approximately 1,700 experienced interviewers across the country. To the extent possible, the BEES study recruits interviewers who worked successfully on prior career pathways studies for ACF (such as the first-round Health Profession Opportunity Grants (HPOG) Impact and Pathways for Advancing Careers and Education (PACE) 15-month, 36-month, and 72-month surveys under OMB control numbers 0970-0394 and 0970-0397, respectively). These interviewers are familiar with employment programs, and they have valuable experience locating difficult-to-reach respondents. We will also continue to recruit other interviewers with expert locating skills to ensure that the needs of all studies are met successfully.
All potential interviewers are carefully screened for their overall suitability and ability to work within the project’s schedule, as well as the specific skill sets and experience needed for this study (e.g., previous data collection experience, strong test scores for accurately recording information, attention to detail, reliability, and self-discipline).
Participant Contact Update Request and Survey Reminders. A variety of tools are utilized to maintain current contact information for participants and remind them of the opportunity for study participation, in the interest of maximizing response rates.
Impact Study Instruments
Contact Update Request Letter and Form (Attachment E). All study participants are included in the effort to maintain updated contact information. The participant contact update form is self-administered. The letter and accompanying form (Attachment E) are mailed to sample members beginning three months after random assignment. Participants are encouraged to respond by returning the form by mail or through a secure online portal, or they can update their contact information by phone. Participants can indicate that the information is correct, or they can make any necessary changes to their contact information.
Locating Materials. The following supplementary materials are used throughout the study to maintain contact with participants and remind them of their participation in the study for successful survey completion.
Supplementary Materials:
Welcome Letter (Attachment I). The welcome letter is mailed to all participants soon after enrollment in the study. Its intention is to remind the participant of their agreement to participate, what it means to participate, and what future contacts to expect from the research team.
12- to 24-Month Survey Advance Letters (Attachment P). To further support the data collection effort, an advance letter is mailed to study participants approximately one and a half weeks before interviewers begin the data collection. The advance letter serves as a way to re-engage the participant in the study and alert them to the upcoming effort so that they are more prepared for the interviewer’s outreach. The evaluators update the sample database prior to mailing advance letters to help ensure that the letter is mailed to the most up-to-date address. The advance letter reminds each participant about the timing of the upcoming data collection effort, the voluntary nature of participation, and that researchers keep all answers private. The letter provides each study participant with a toll-free number they can call to set up an interview or a link to complete the survey online. Interviewers may send an email version of the advance letter to participants for whom we have email addresses.
12- to 24-Month Survey Email Reminders (Attachment Q). In many cases, interviewers attempt to contact participants by telephone first. If initial phone attempts are unsuccessful and a participant email address is available, interviewers can use their project-specific email accounts to follow up. They often send this email about halfway through the time during which they are working the cases. In some cases, the participant might also be able to complete the survey online. If so, a reminder about that option and the appropriate link is included in this email.
12- to 24-Month Survey Flyers (Attachment R). This flyer, to be sent by mail, is another attempt to contact participants who have not responded to 12- to 24-month survey attempts. Interviewers may leave specially designed project flyers with family members or friends.
Implementation Study Data Sources
An implementation study is being conducted in all BEES sites that use a random assignment research design. The instruments for the implementation study include the Program Managers, Staff, and Partners Interview Guide (Attachment L), the In-depth Case Study of Participant-Staff Perspectives Interview Guides (Attachments M and N), and the Program Staff Survey (Attachment O). Staff interviews are conducted during two rounds of implementation study visits for each impact study site; the in-depth case studies and program staff survey are conducted during the second round of site visits so that those instruments can be tailored based on data collected in the first round. If it is not possible to do an in-person site visit, interviews are done virtually via phone calls or video conference.
Additional programs recommended for implementation-only studies were identified through phone interviews conducted as part of the scan to identify BEES sites conducted under OMB #0970-0356. These sites were determined to be inappropriate for more rigorous study due to a range of factors, including limited scale or lack of excess demand to create a control group. Additional sites were added to this pool after the start of the COVID-19 pandemic as some programs’ circumstances changed.
Data collection procedures for the implementation studies are outlined below.
Previously Approved Instruments:
Program Managers, Staff, and Partners Interview Guide – Substance Use Disorder Treatment Programs (Attachment F). Staff interviews are conducted during a site visit (done either in-person or, if not possible, virtually) to each of the programs. During each visit, 90-minute semi-structured interviews of program staff and partners explore implementation dimensions, including program context and environment; program goals and structure; partnerships; recruitment, target populations, and program eligibility; program services; and lessons learned. The number of interviewees varies by site depending on the organization’s staffing structure; however, the goal is to include all program managers, program staff (i.e., case managers), and key partners. If the program is large and interviews with all staff cannot be completed in the time available, we interview at least one (and likely more) in each job category. Approximately 10 staff total will be interviewed at each program. Topics for the interviews include: the choice of target groups; participant outreach strategies; employment, training, and support services provided; SUD treatment and recovery services; development or refinement of existing employment-related activities and curricula to serve the target group; how and why partnerships were established; strategies for engaging employers with a SUD population; and promising practices and challenges. All protocols begin with a brief introductory script that summarizes the overall evaluation, the focus of each interview, how respondent privacy will be protected, and how data will be aggregated.
Program Managers, Staff, and Partners Interview Guide – Whole Family Approach Programs (Attachment G). Staff interviews will be conducted over two rounds of visits to two locations within these sites (done either in-person or, if not possible, virtually). The number of interviewees will vary by location depending on the organization’s staffing structure; however, the goal is to include all program managers, program staff (i.e., case managers), and key partners. If the program is large and interviews with all staff cannot be completed in the time available, we will interview at least one (and likely more) in each job category. Approximately 10 staff total will be interviewed at each program. Topics for the interviews include: program model and structure; program start-up; staffing; program implementation; program components, strategies, and staff experiences; participant knowledge, awareness, participation, and views of the program; interactions with partners; and eligibility criteria for participants.
Program Managers, Staff, and Partners Interview Guide (Attachment L). Site visits (done either in-person or, if not possible, virtually) are led by senior evaluation team members with expertise in employment programs and in implementation research. All protocols (Attachment L) begin with a brief introductory script that summarizes the overall evaluation, the focus of each interview, how respondent privacy will be protected, and how data will be aggregated. During each visit, 90-minute semi-structured interviews of program staff and partners explore staff roles and responsibilities, program services, and implementation experiences. The number of interviewees varies by site depending on the organization’s staffing structure; however, the goal is to include all program managers, program staff (i.e., case managers), and key partners. If the program is large and interviews with all staff cannot be completed in the time available, we interview at least one (and likely more) in each job category. Approximately 10 staff total will be interviewed at each program. Topics for the interviews include: program model and structure; staffing; program implementation; program components, strategies, and staff experiences; participant knowledge, awareness, participation, and views of the program; use of services and tokens of appreciation; and the counterfactual environment.
In-depth Case Study of Participant-Staff Perspective Program Staff Interview Guides (Attachments M and N). In-depth case studies will be conducted at select evaluation sites, examining selected participants and their corresponding case managers to understand how program staff addressed a specific case, how the participant viewed the specific services and assistance received, and the extent to which program services addressed participant needs and circumstances. For each site, one-on-one interviews will be conducted with approximately six participants (each 90 minutes in length), with separate one-on-one interviews with their respective case managers (60 minutes in length). We will work with case managers who express an interest in this study and select participants to represent a range of situations, such as different barriers to employment, different completion and drop-out experiences, and different background characteristics. If a participant declines to participate, we will select another participant with similar experiences to ensure a sample of six. Staff interviews differ from those described above because they will focus on how the staff member handled a specific case, in contrast to how the program works overall. The case studies will provide examples of how the program worked for specific cases and will enhance the overall understanding of program operations, successes, and challenges.
Program Staff Survey (Attachment O). Online staff surveys will be fielded to all line staff (excluding supervisors and managers) at each site (we estimate 20 per site) and will cover background and demographics, staff responsibilities, types of services provided by the organization, barriers to employment, program participation, and organizational and program performance. Each survey will take 30 minutes to complete. The online staff survey (Attachment O) will be accessed via a live secure web link. This approach is particularly well suited to the needs of these surveys in that respondents can easily start and stop if they are interrupted or cannot complete the entire survey in one sitting, and can review and/or modify responses in a previous section. Respondents will be emailed a link to the web-based survey.
B3. Methods to Maximize Response Rates and Deal with Nonresponse
Expected Response Rates
Study participants are required to complete the consent form (Attachment H) and baseline information form (Attachment D) as part of study enrollment. As such, we expect nearly 100 percent completion of the baseline information form (Attachment D).
For the 12- to 24-month follow-up participant surveys (Attachment K), we expect a response rate of 80 percent. This response rate provides the sample size used in the MDE calculations (shown in Exhibit 1), which will allow us to detect differences across research groups on key outcomes. Numerous MDRC studies with similar populations have achieved response rates of at least 80 percent. For example, the Work Advancement and Support Center demonstration achieved an 81 percent response rate for the 12-month follow-up interview for a sample which included ex-offenders (Miller et al., 2012). The Parents’ Fair Share study, which included non-custodial parents, achieved a response rate of 78 percent (Miller and Knox, 2001). The Philadelphia Hard-to-Employ study (a transitional jobs program for TANF recipients) achieved a 79 percent response rate (Jacobs and Bloom, 2011). Several sites in the Employment Retention and Advancement evaluation achieved 80 percent response rates as well (Hendra et al., 2010).
Program staff will be asked to complete a survey (Attachment O). Based on the response rates for similarly fielded surveys in recent MDRC projects, we expect at least 80 percent of staff to complete the survey.
A subset of staff members will also be asked to participate in semi-structured interviews (Attachment L). Based on past experience, we expect response rates to be nearly 100 percent for these interviews, with the only nonrespondents being those staff members who are not available on the days the interviews are occurring. Similarly, we expect response rates to be high for the case study interviews (Attachments M and N). These interviews are not intended to be representative, so program managers will select participants, taking interest in participating into account.
Dealing with Nonresponse
We try to obtain information on a high proportion of study participants through the methods discussed in the section below on maximizing response rates as well as elsewhere in the Supporting Statement. Further, the study team monitors response rates for the program and control groups throughout data collection. Per the American Association for Public Opinion Research (AAPOR) guidelines, we pay specific attention to minimizing differences in response rates across the research groups.
In addition to monitoring response rates across research groups, the study team minimizes nonresponse levels and the risk of nonresponse bias through strong sample control protocols, implemented by:
Using trained interviewers with experience working on studies with similar populations and who are skilled in building and maintaining rapport with respondents, to minimize the number of break-offs and incidence of nonresponse bias.
If appropriate, providing a Spanish language version of the instrument to help achieve a high response rate among study participants for whom Spanish is their first language.
Sending email reminders to non-respondents (for whom we have an email address) informing them of the study and allowing them the opportunity to schedule an interview.
Providing a toll-free study hotline number—which is included in all communications to study participants—to help them ask questions about the interview, update their contact information, and indicate a preferred time to be called for the interview.
For the mixed mode efforts, taking additional locating steps in the field, as needed, when the interviewers do not find sample members at the phone numbers or email addresses previously collected.
Reallocating cases to different interviewers or more senior interviewers to conduct soft refusal conversions.
Requiring the interview supervisors to manage the sample to ensure that a relatively equal response rate for treatment and control groups is achieved.
Using a hybrid approach for tracking respondents. First, our telephone interviewers look for confirmation of the correct number (e.g., a request to call back later, the respondent not being home, or a voicemail message that confirms the respondent’s name). With contact confirmation, interviewers continue with up to 10 attempts by phone. If interviewers are not able to confirm the correct telephone number, they make fewer attempts before sending the case to locating. Interviewers send cases with incorrect or incomplete numbers to batch locating with a vendor such as Accurint/LexisNexis. Email addresses are used to follow up and to field the survey, when possible. Following locating, interviewers reach out again if a number or email address is supplied or begin field locating, if appropriate. Once a respondent is located, “Sorry I Missed You” cards are left with field interviewers’ phone numbers so the cases can be completed by phone.
Through these methods, the research team anticipates being able to achieve the targeted response rate.
To assess non-response bias, several tests are conducted:
The proportions of program group and control group respondents are compared throughout data collection to make sure the response rate is not significantly higher for one research group.
A logistic regression is conducted among respondents. The “left-hand side” variable is research group assignment (program group or control group), while the explanatory variables include a range of baseline characteristics. An omnibus test such as a likelihood ratio test is used to test the hypothesis that the set of baseline characteristics is not significantly related to whether a respondent is in the program group. Not rejecting this null hypothesis provides evidence that program group and control group respondents are similar.
Baseline characteristics of respondents are compared to baseline characteristics of non-respondents. This is done using a logistic regression where the outcome variable is whether someone is a respondent, and the explanatory variables are baseline characteristics. An omnibus test such as a likelihood ratio test is used to test the hypothesis that the set of baseline characteristics is not significantly related to whether someone responded to the follow-up interview. Not rejecting this null hypothesis provides evidence that non-respondents and respondents are similar (a sketch of this check appears after this list).
Impacts from administrative records sources – which are available for the full sample – are compared for the full sample and for respondents to determine whether there are substantial differences between the two.
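As an illustration of the respondent versus nonrespondent check described above, the sketch below runs a logistic regression of survey response on baseline characteristics and reports an omnibus likelihood ratio test; the data file and covariate names are hypothetical.

```python
# Sketch of a nonresponse check: logistic regression of survey response on
# baseline characteristics, with an omnibus likelihood ratio test of whether
# the characteristics jointly predict response. File and variable names are
# hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fielded_sample.csv")  # one row per fielded sample member

model = smf.logit(
    "responded ~ age + female + employed_at_baseline + has_hs_diploma", data=df
)
result = model.fit(disp=False)

# llr_pvalue is the p-value for the likelihood ratio test against an
# intercept-only model; a large p-value is consistent with respondents and
# nonrespondents being similar on these observed characteristics.
print(result.llr_pvalue)
```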
The main analysis will report results for individuals who respond to data collection efforts. If any of these tests indicate that non-response is providing biased impact estimates, the main analysis will be supplemented by a standard technique for dealing with nonresponse to determine the sensitivity of impact estimates to non-response. Two typical methods – inverse probability weighting and multiple imputation – are described below:
Inverse probability weighting: A commonly used method for adjusting for survey response bias is to weight survey responses so they reflect the composition of the fielded survey sample. A method such as logistic regression produces a predicted probability of response for each sample member. The outcomes for respondents are weighted by the inverse of their predicted probability of response so that the weighted results reflect the observed characteristics of the original study sample.
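A minimal sketch of this weighting approach, assuming hypothetical variable names and a simple response model, might look like the following:

```python
# Minimal inverse probability weighting sketch: model response on baseline
# characteristics, weight respondents by 1 / predicted probability of
# response, and re-estimate the impact with weighted least squares.
# File and variable names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fielded_sample.csv")

# Step 1: response propensity model on the full fielded sample.
response_model = smf.logit(
    "responded ~ age + female + employed_at_baseline", data=df
).fit(disp=False)
df["p_respond"] = response_model.predict(df)

# Step 2: weighted impact regression among respondents only.
respondents = df[df["responded"] == 1].copy()
respondents["ipw"] = 1.0 / respondents["p_respond"]

impact_model = smf.wls(
    "earnings ~ program_group + age + female + employed_at_baseline",
    data=respondents,
    weights=respondents["ipw"],
).fit(cov_type="HC2")
print(impact_model.params["program_group"])
```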
Multiple imputation: With multiple imputation, information for respondents is used to develop a model that predicts outcome levels.2 The model is used to generate outcome data for nonrespondents. This is done multiple times, with each iteration generating one dataset with imputed values that vary randomly from iteration to iteration. Multiple imputation accounts for missing data by restoring the natural variability in the missing data as well as the uncertainty caused by imputing missing data.
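The sketch below illustrates the basic mechanics with a deliberately simplified imputation model and Rubin's combining rules; file and variable names are hypothetical, and a production analysis would use a richer imputation model (see Little and Rubin, 2002).

```python
# Simplified multiple imputation sketch: fit an outcome model among
# respondents, draw imputed outcomes for nonrespondents with added residual
# noise, estimate the impact in each completed dataset, and combine the
# estimates with Rubin's rules. File and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(seed=2022)
df = pd.read_csv("fielded_sample.csv")  # earnings missing for nonrespondents
n_imputations = 20

estimates, variances = [], []
for m in range(n_imputations):
    completed = df.copy()
    respondents = completed[completed["responded"] == 1]

    # Outcome model estimated among respondents.
    outcome_model = smf.ols(
        "earnings ~ program_group + age + employed_at_baseline", data=respondents
    ).fit()

    # Impute nonrespondent outcomes: model prediction plus residual noise.
    missing = completed["responded"] == 0
    noise = rng.normal(0, np.sqrt(outcome_model.mse_resid), size=missing.sum())
    completed.loc[missing, "earnings"] = outcome_model.predict(completed[missing]) + noise

    # Impact estimate in the completed dataset.
    fit = smf.ols(
        "earnings ~ program_group + age + employed_at_baseline", data=completed
    ).fit()
    estimates.append(fit.params["program_group"])
    variances.append(fit.bse["program_group"] ** 2)

# Rubin's rules: total variance = within-imputation + between-imputation parts.
point = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_var = within + (1 + 1 / n_imputations) * between
print(point, np.sqrt(total_var))
```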
As discussed in Puma, Olsen, Bell, and Price (2009), both methods reduce the bias of the estimated effects if survey nonresponse is due to observable factors. However, neither is guaranteed to reduce bias if nonresponse is associated with unobserved factors, which is why these methods would be used as a sensitivity check rather than for the main analysis.
Maximizing Response Rates
Impact Study
For the 12- to 24-month surveys, the study team’s approach couples tokens of appreciation with active locating conducted by letter or online, as well as passive locating (access to public-use databases). This combination of strategies is a powerful tool for maintaining low attrition rates in longitudinal studies, especially for control group members, who are not receiving the program services. The three-month contact update interval and the token of appreciation strategy are based on the study team’s experience as well as the literature on the high rates of mobility among low-income individuals and families (Phinney, 2013) and on the effectiveness of pre-payments for increasing response rates (Cantor, O’Hare, and O’Connor, 2008).
Another important objective of active locating is to build a connection between the study and the respondents. Written materials remind the respondents that they are part of an important study, and that their participation is valuable because their experiences are unique. Written materials also stress the importance of participation for those who may be part of control group, if random assignment is used. At the same time, locating methods must minimize intrusiveness.
Active locating starts at baseline, with the collection of the participant’s address, telephone number, email address, and contact information of two people who do not live with the respondent but who will know how to reach him or her (Attachment D).
Following study enrollment, participants receive a “welcome letter” (Attachment I), which reminds the participant that they are part of a study, what participation in the study entails, and why their experiences are important.
After that, all locating letters include a contact information update form (Attachment E), and a promise of a $5 gift card for all sample members who respond to the letter (either by updating contact information or letting the study team know that there have been no changes).
The passive methods we use require no contact with the sample member. For example, the study team automatically runs the last known address for each respondent through the National Change of Address (NCOA) database, as well as LexisNexis.
Implementation Study
Maximizing response rates for the data collection efforts targeted towards staff members is also important. When a site enters the study, the research team explains the importance of the data collection efforts for advancing the evidence base. In addition:
For the Program Managers, Staff, and Partners Interview Guide (Attachments F, G, and L), it is important to plan the visits well in advance with the assistance of program management and to schedule interviews at the most convenient times for staff.
For the In-Depth Participant/Program Staff Case Study (Attachments M and N), we work to gain the cooperation of six participants and their corresponding case managers. We select staff who want to participate, and then participants from their caseloads. If a participant does not want to participate or does not show up, we identify another participant to include in this study. As described above, we are seeking participants who provide examples of a range of experiences in the program.
For the web-based Program Staff Survey (Attachment O), we maximize response rates primarily through good design and monitoring of completion reports. It is important to 1) keep the survey invitation attractive, short, and easy to read, 2) make accessing the survey clear and easy, and 3) communicate to the respondent that the completed survey is saved and thank them for completing the survey. Research staff closely monitors data completion reports for the survey. If a site’s surveys are not completed within one week of the targeted timeframe, the site liaison follows up with the site point of contact to remind their staff that survey responses are due and send out reminder e-mails to staff.
B4. Tests of Procedures or Methods to be Undertaken
Where possible, the survey instruments contain measures from published, validated tools. As needed, the study team conducts pretests for the tailored 12- to 24-month surveys to test the instrument wording and timing. The study team recruits 9 diverse individuals for these pretests. The pretest includes a debriefing after the interview is completed, through which we can explore individuals’ understanding of questions, ease or difficulty of responding, and any concerns. For many sites, the basic instrument is similar, with minor adaptations to local terminology or the site’s specific intervention model. Through the pretests, the same question is asked of no more than 9 people.
B5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The following is a list of individuals involved in the design of the BEES project, the plans for data collection, and the analysis.
Dan Bloom Senior Vice President, MDRC
Emily Brennan Research Associate, MDRC
Lauren Cates Senior Associate, MDRC
Clare DiSalvo Contract Social Science Research Analyst, OPRE/ACF
Mary Farrell Subcontractor, MEF Associates
Mike Fishman Subcontractor, MEF Associates
Kimberly Foley Subcontractor, MEF Associates
Lily Freedman Research Associate, MDRC
Robin Koralek Subcontractor, Abt Associates
Caroline Mage Research Associate, MDRC
Karin Martinson Co-Principal Investigator, Subcontractor, Abt Associates
Doug McDonald Subcontractor, Abt Associates
Charles Michalopoulos Co-Principal Investigator, Chief Economist and Director, MDRC
Megan Millenky Project Director, Senior Associate, MDRC
Megan Reid Social Science Research Analyst, OPRE/ACF
Sue Scrivener Senior Associate, MDRC
Johanna Walter Senior Associate, MDRC
Attachments
Previously Approved Instruments Currently in Use:
Baseline Information Form for Participants (Attachment D)
Baseline Information Form for Participants (CCC) (Attachment D-1)
Baseline Information Form for Participants (2Gen-Chicago) (Attachment D-2)
Baseline Information Form for Participants (IPS FQHC) (Attachment D-3)
Baseline Information Form for Participants (IPS SUD) (Attachment D-4)
Baseline Information Form for Participants (IPS TANF-SNAP) (Attachment D-5)
Contact Update Request Form (Attachment E)
Program Managers, Staff, and Partners Interview Guide – SUD Programs (Attachment F)
Program Managers, Staff, and Partners Interview Guide – Whole Family Approach Programs (Attachment G)
12- to 24-Month Follow-Up Participant Survey (Attachment K)
12- to 24-Month Follow-Up Participant Survey (IPS SUD/FQHC) (Attachment K-1)
12- to 24-Month Follow-Up Participant Survey (CCC) (Attachment K-1)
Program Managers, Staff, and Partners Interview Guide (Attachment L)
In-Depth Case Study of Staff-Participant Perspectives
Participant Case Study Interview Guide (Attachment M)
Program Staff Case Study Interview Guide (Attachment N)
Program Staff Survey (Attachment O)
Previously Approved Instruments No Longer in Use:
Discussion Guide for National Policy Experts and Researchers (Attachment A)
Discussion Guide for State and Local Administrators (Attachment B)
Discussion Guide for Program Staff at Potential Sites (Attachment C)
6-Month Follow-Up Participant Survey (Attachment J)
Supplementary Materials Currently in Use:
Informed Consent Form for Participants (Attachment H)
Welcome Letter (Attachment I)
12- to 24-Month Survey Advance Letters (Attachment P)
12- to 24-Month Survey Email Reminders (Attachment Q)
12- to 24-Month Survey Flyer (Attachment R)
References
Allison, P.D. (2002). Missing Data. Thousand Oaks, CA: Sage University Paper No. 136.
Angrist, Joshua D., and Guido W. Imbens. 1995. Identification and estimation of local average treatment
effects. NBER Technical Working Paper No. 118.
Bloom, Howard S., and Charles Michalopoulos. 2011. “When is the Story in the Subgroups? Strategies
for Interpreting and Reporting Intervention Effects on Subgroups.” Prevention Science, 14, 2: 179-188.
Bloom, Dan, and Charles Michalopoulos. 2001. How Welfare and Work Policies Affect Employment and
Income: A Synthesis of Research. New York: Manpower Demonstration Research Corporation.
Cantor, David, Barbara C. O'Hare, and Kathleen S. O'Connor. 2008. "The use of monetary incentives to
reduce nonresponse in random digit dial telephone surveys." Advances in telephone survey
methodology, 471-498.
Gennetian, L. A., Morris, P. A., Bos, J. M., and Bloom, H. S. 2005. Constructing Instrumental Variables
from Experimental Data to Explore How Treatments Produce Effects. New York: MDRC.
Hendra, Richard, Keri-Nicole Dillman, Gayle Hamilton, Erika Lundquist, Karin Martinson, and Melissa
Wavelet. 2010. The Employment Retention and Advancement Project: How Effective Are Different Approaches Aiming to Increase Employment Retention and Advancement? Final Impacts for Twelve Models. New York, NY: MDRC.
Horner, R., and Spaulding, S. 2010. “Single-case research designs” (pp. 1386–1394). In N. J. Salkind
(Ed.), Encyclopedia of Research Design. Thousand Oaks, CA: Sage Publications.
Imbens, Guido W., and Thomas Lemieux. 2008. "Regression discontinuity designs: A guide to practice."
Journal of Econometrics 142(2): 615-635.
Jacobs, Erin, and Dan Bloom. 2011. Alternative Employment Strategies for Hard-to-Employ TANF
Recipients: Final Results from a Test of Transitional Jobs and Preemployment Services in Philadelphia. New York: MDRC.
Little, R.J.A., and D.B. Rubin (2002). Statistical Analysis with Missing Data. New York: John Wiley and Sons.
Michalopoulos, Charles. 2004. What Works Best for Whom: Effects of Welfare and Work Policies by
Subgroup. MDRC: New York.
Michalopoulos, Charles and Christine Schwartz. 2000. What Works Best for Whom: Impacts of 20
Welfare-to-Work Programs by Subgroup. Washington: U.S. Department of Health and Human Services, Office of the Assistant Secretary for Planning and Evaluation and Administration for Children and Families, and U.S. Department of Education.
Miller, Cynthia, and Virginia Knox. 2001. The Challenge of Helping Low-Income Fathers Support Their
Children: Final Lessons from Parents’ Fair Share. New York, NY: MDRC.
Miller, Cynthia, Mark Van Dok, Betsy Tessler, and Alex Pennington. 2012. Strategies to Help Low-Wage
Workers Advance: Implementation and Final Impacts of the Work Advancement and Support
Center (WASC) Demonstration. New York, NY: MDRC.
Phinney, Robin. 2013. “Exploring residential mobility among low-income families.” Social Service
Review, 87(4), 780-815.
Puma, Michael J., Robert B. Olsen, Stephen H. Bell, and Cristofer Price. 2009. What to Do When Data Are Missing in Group Randomized Controlled Trials. (NCEE 2009-0049). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Reardon, Sean F., and Stephen W. Raudenbush. 2013. "Under what assumptions do site-by-treatment
instruments identify average causal effects?" Sociological Methods & Research 42.2: 143-163.
Schochet, Peter Z. 2008. Guidelines for multiple testing in impact evaluations of educational
interventions. Final report. Princeton, NJ: Mathematica Policy Research, Inc. Retrieved from http://www.eric.ed.gov/ERICWebPortal/detail?accno=ED502199
Somers, Marie-Andree, Pei Zhu, Robin Jacob, and Howard Bloom. 2013. The validity and precision of
the comparative interrupted time series design and the difference-in-difference design in educational evaluation. New York, NY: MDRC working paper in research methodology.
Westfall, Peter H., and S. Stanley Young. 1993. Resampling-based multiple testing: Examples and
methods for p-value adjustment. Vol. 279. Hoboken, NJ: John Wiley & Sons.
1 As noted in Supporting Statement A, the timing of the follow up surveys has been updated slightly. As such, the titles of those surveys and related recruitment materials have been updated to reflect a fielding period of 12- to 24- months. The content of those materials has not changed.
2 For more discussions on imputation model specification, see Little and Rubin (2002), and Allison (2002).