
Supporting Statement for OMB Clearance Request


Part B


National and Tribal Evaluation of the 2nd Generation of the Health Profession Opportunity Grants


0970-0462


Revised April 2019

Revised July 2019


Submitted by:

Office of Planning,
Research & Evaluation

Administration for Children & Families

U.S. Department of Health
and Human Services


Federal Project Officers:

Hilary Bruck

Nicole Constance

Amelia Popham

Instruments

Previously Approved Instruments

  • Instrument 1: PAGES Grantee- and Participant-Level Data Items List

  • Instrument 2: HPOG 2.0 National Evaluation Screening Interview

  • Instrument 3: HPOG 2.0 National Evaluation first-round telephone interview Protocol

  • Instrument 4: HPOG 2.0 National Evaluation in-person implementation interviews

    • Instrument 4A HPOG 2.0 National Evaluation In-Person Implementation Interview

    • Instrument 4B HPOG 2.0 National Evaluation In-Person Implementation Interviews Basic Skills Training

    • Instrument 4C HPOG 2.0 National Evaluation In-Person Implementation Interviews Career Pathways

    • Instrument 4D HPOG 2.0 National Evaluation In-Person Implementation Interviews Work-Readiness

    • Instrument 4E HPOG 2.0 National Evaluation In-Person Implementation Interviews Sustainability

  • Instrument 5: HPOG 2.0 National Evaluation welcome packet and participant contact update forms

  • Instrument 5a: HPOG 2.0 National Evaluation welcome packet and contact update form

  • Instrument 5b: HPOG 2.0 National Evaluation participant contact update letter and form

  • Instrument 6: HPOG 2.0 Tribal Evaluation grantee and partner administrative staff interviews

  • Instrument 7: HPOG 2.0 Tribal Evaluation program implementation staff interviews

  • Instrument 8: HPOG 2.0 Tribal Evaluation employer interviews

  • Instrument 9: HPOG 2.0 Tribal Evaluation program participant focus groups

  • Instrument 10: HPOG 2.0 Tribal Evaluation program participant completer interviews

  • Instrument 11: HPOG 2.0 Tribal Evaluation program participant non-completer interviews

  • Instrument 12: HPOG 2.0 National Evaluation Short-term Follow-up Survey

New Instruments Included in this Request

  • Instrument 13: HPOG 2.0 Screening Interview Second Round

  • Instrument 14: HPOG 2.0 Second Round Telephone Interview Guide

  • Instrument 15: HPOG 2.0 Program Operator Interview Guide for Systems Study

  • Instrument 16: HPOG 2.0 Partner Interview Guide for Systems Study

  • Instrument 17: HPOG 2.0 Participant In-depth Interview Guide

  • Instrument 18: HPOG 2.0 Intermediate Follow-up Survey

  • Instrument 19: HPOG 2.0 Phone-based Skills Assessment Pilot Study Instrument

  • Instrument 20: HPOG 2.0 Program Cost Survey



Attachments

Previously Approved Attachments

  • Attachment A: References

  • Attachment B: Previously Approved Informed Consent Forms

    • Attachment B: National Evaluation informed consent form A (Lottery Required)

    • Attachment B: National Evaluation informed consent form B (Lottery Not Required)

    • Attachment B2: Tribal Evaluation informed consent form A (SSNs)

    • Attachment B3: Tribal Evaluation informed consent form B (Unique identifiers)

  • Attachment C: 60 Day Federal Register Notice

  • Attachment D: Previously Approved Sources and Justification for PAGES Grantee- and Participant-Level Data Items

  • Attachment E: Previously Approved Final Updated Attachment E PPR Data List and Mockup

  • Attachment F: First Round of HPOG Grantees Research Portfolio

  • Attachment G: Previously Approved Participant Contact Information Update Letter and Form (Obsolete, replaced by Instrument 5a and 5b)

  • Attachment H: HPOG Logic Model

  • Attachment I: Previously Approved Focus group participant consent form

  • Attachment J: Previously Approved Interview Verbal Informed Consent Form

  • Attachment K: HPOG 2.0 National Evaluation Short-term Follow-up Survey Advance Letter

  • Attachment L: HPOG 2.0 National Evaluation Short-term Follow-up Survey Sources

  • Attachment M: HPOG 2.0 National Evaluation Short-term Follow-up Survey Trying to Reach You Flyer

  • Attachment N: HPOG 2.0 National Evaluation Short-term Follow-up Survey Email Reminder

  • Attachment O: Research Questions for Previously Approved Data Collection Efforts (National Evaluation and Tribal Evaluation)

New Attachments

  • Attachment P: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Advance Letter

  • Attachment Q: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Sources

  • Attachment R: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Trying to Reach You Flyer

  • Attachment S: HPOG 2.0 National Evaluation Intermediate Follow-up Survey Email Reminder

  • Attachment T: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot flyer

  • Attachment U: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot grantee letter

  • Attachment V: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot participant letter

  • Attachment W: HPOG 2.0 National Evaluation phone-based Skills Assessment Pilot recruitment script

  • Attachment X: Complete list of previously approved data collection instruments

  • Attachment Y: 60-day Federal Register Notice

  • Attachment Z: Participant Interview Recruitment Materials











Part B: Statistical Methods

This document serves as Part B of the Supporting Statement for the third revision of data collection for the Health Profession Opportunity Grants 2.0 (HPOG 2.0) National and Tribal Evaluation (OMB Control No. 0970-0462). The HPOG 2.0 National and Tribal Evaluation is sponsored by the Administration for Children and Families (ACF) in the U.S. Department of Health and Human Services (HHS). The federal evaluations of the HPOG 2.0 National and Tribal grantees will evaluate postsecondary career pathway programs focused on the healthcare sector that target Temporary Assistance for Needy Families (TANF) recipients and other low-income individuals. The intended use of the resulting data is to improve ACF’s research, evaluation, and program support of the HPOG 2.0 program and others like it. This current request will help answer research questions about how grantees implement HPOG in their communities, what the program’s impacts are on participants, and whether the program is cost-effective.

Exhibit B-1 reviews the submission and approval dates for the original information collection request and the two prior revisions, and summarizes the eight new instruments for which approval is currently sought.

Exhibit B-1: Clearance Requests and Instruments for HPOG 2.0 (OMB Clearance No. 0970-0462)

  • Original: Participant Accomplishment and Grant Evaluation System (PAGES) (Instrument #1). Requested 5/13/15; approved 8/6/15. Supporting statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201505-0970-002

  • 1st Rev.: Various baseline, process, and contact update forms (Instruments #2-5b for the National Evaluation; #6-11 for the Tribal Evaluation). Requested 10/26/16; approved 6/27/17. Supporting statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201610-0970-012

  • 2nd Rev.: National Evaluation Short-term Follow-up Survey (Instrument #12). Requested 2/5/18; approved 6/8/18. Supporting statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201802-0970-001

  • 3rd Rev. (this submission): Additional National Evaluation data collection tools: descriptive evaluation protocols (Instruments #13-17); Intermediate Follow-up Survey (Instrument #18); Phone-based Skills Assessment Pilot (Instrument #19); and Program Cost Survey (Instrument #20). Requested 4/23/2019; approval TBD. Supporting statement: https://www.reginfo.gov/public/do/PRAViewDocument?ref_nbr=201904-0970-006




All of the new information collections are discussed in Part A of this supporting package. Part B of this supporting statement focuses on the five proposed collections of information that involve complex sampling and/or analysis procedures.1 Three of the five instruments are for the descriptive evaluation:

  1. Program Operator Interview Guide for the systems study (Instrument 15);

  2. Partner Interview Guide for the systems study (Instrument 16); and

  3. Participant In-depth Interview Guide (Instrument 17).



The remaining two instruments are part of the impact evaluation:

  1. Intermediate Follow-up Survey (Instrument 18); and

  2. Phone-based Skills Assessment Pilot (Instrument 19).



B.1 Respondent Universe and Sampling Methods

Thirty-two HPOG 2.0 grants were awarded to government agencies, community-based organizations, post-secondary educational institutions, and tribal-affiliated organizations in September 2015. Of these, 27 were awarded to non-tribal entities and five were awarded to tribal organizations. The 27 non-tribal grantees operate 38 unique HPOG 2.0 programs. The new instruments in this request concern only the 27 non-tribal grantees participating in the National Evaluation. Sampling procedures for the three instruments to support the National Evaluation descriptive evaluation are described below, followed by a discussion of the sampling procedures for the two National Evaluation impact evaluation instruments.

Descriptive evaluation. This section describes the sampling methods for the three information collection requests under the National Evaluation descriptive evaluation that involve complex sampling and/or analysis procedures: Program Operator Interview Guide, Partner Interview Guide, and Participant In-Depth Interview Guide.

Program Operator and Partner Organization Interview Guides. The systems study component of the descriptive evaluation will include interviews with two respondent groups: Program Operators (Instrument 15) and Partner Organizations (Instrument 16). The evaluation team will purposively select 12 to 16 HPOG 2.0 programs (out of 38 programs) and 3 to 7 partner organizations from each selected program for inclusion in the HPOG 2.0 Systems Study. Selection will focus on programs’ experiences and perspectives on the local service delivery system over the course of the HPOG grant, with the goal of identifying programs that vary in the types and intensity of systems activities that could influence how the system works, rather than exploring collaboration across all HPOG programs. Purposive sampling will also allow for the exploration of a range of experiences and perspectives on activities and partnerships that may contribute to or hinder systems development and improvement. It will also provide opportunities to understand variations in service delivery systems across HPOG. Because selected programs will offer a range of types and intensity of systems activities, the research team expects to gain perspectives on both positive and negative experiences with conducting systems activities.

As part of the selection process, the evaluation team will review PAGES data to identify the prevalence of training in various healthcare occupations (e.g., nursing assistant versus health care information technology). This will allow the evaluation team to better understand variation in networks of partners and experiences with those partners across types of training programs. During the process of selecting programs for the systems study the evaluation team will take into consideration the degree to which selected programs overlap with those selected for the previously approved focus area site visits and with other data collection activities to minimize burden on any one program.

Program Selection

The evaluation team will draw from information collected during the first-round telephone interviews (previously approved in June 2017 under this OMB Control Number) and from information available in other documents (such as grant applications, evaluation design documents, and the PAGES system) to help with program selection. To select programs, the evaluation team will use a purposive selection strategy designed to ensure the sample includes variation in experiences and perspectives across different types of programs. Selection will draw on information about the types and intensity of systems activities under the local service delivery systems and HPOG 2.0, geographic area, lead organization type, whether or not the grantee was an HPOG 1.0 grantee/program operator, occupation(s) of training, new or enhanced programs, program enrollment, and target population. Up to 128 respondents will participate in the systems study: 16 program operators (one operator per program, with up to 16 programs selected) and 112 partner organization staff (up to 7 partner organizations per program, with up to 16 programs selected).

Partner Organization Selection

Purposive sampling will also be used to select partner organizations. The strategy will allow the evaluation team to examine a range of experiences and perspectives on systems activities and partnerships. Partner organizations that did not engage at all in the HPOG program will be excluded from the sample as respondents should have some knowledge of the program. The evaluation team will use several sources of information to select partners.

  • First, for each selected program, the team will use data from the First-Round Telephone Interviews to develop a list of partners and their involvement in the HPOG program operations.

  • Second, during the program operator interview, the team will ask respondents to discuss partners that are highly involved and those that are less involved. Program operators will be asked to recommend a mix of both highly and less involved partners for interviews.

Three to seven partners per program will be selected based on program operators’ recommendations as to which partners represent different partner organization types (e.g., nonprofit organization, government agency, employer, and education and training provider) and are best suited to answer questions. For each program, the evaluation team will create a matrix of partners that groups partners by whether they are highly or less involved in HPOG operations and by organization type. The team will select a range of organization types, typically avoiding the same organization type as the program operator unless, in the program operator’s opinion, the partner has a useful perspective on systems activities. The evaluation team will seek to include employers and employer representatives, such as industry associations, to ensure we gather perspectives on employer and industry engagement, an important component of the HPOG 2.0 Program.
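To illustrate how such a selection matrix might be assembled, the following minimal Python sketch groups a hypothetical partner list by involvement level and organization type. The partner names, organization types, involvement levels, and the one-per-cell selection rule are placeholders for illustration and are not part of the study design.

```python
from collections import defaultdict

# Hypothetical partner list for one selected program; names, organization
# types, and involvement levels are placeholders, not actual HPOG partners.
partners = [
    {"name": "County Workforce Board",    "type": "government", "involvement": "high"},
    {"name": "Regional Hospital",         "type": "employer",   "involvement": "high"},
    {"name": "Community College",         "type": "education",  "involvement": "low"},
    {"name": "Family Services Nonprofit", "type": "nonprofit",  "involvement": "low"},
]

# Selection matrix: partners grouped by involvement level and organization
# type, mirroring the grouping described above.
matrix = defaultdict(list)
for p in partners:
    matrix[(p["involvement"], p["type"])].append(p["name"])

# Illustrative selection rule: take at most one partner per cell so that the
# 3-7 selections per program spread across types and involvement levels.
selected = [names[0] for names in matrix.values()][:7]
print(selected)
```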

Participant In-depth Interviews. The study team also plans to conduct in-depth interviews with 140 participants across 14 programs using the Participant Interview Guide (Instrument 17). Researchers will first select programs and then participants. Researchers will use data from the first-round telephone interviews with programs to select 14 programs for inclusion in the participant interviews. In consultation with ACF, the evaluation team will select programs that represent a range of locations, program size and structure, grantee organizational types, and program characteristics. The purposive sampling strategy will seek to maximize variation in participant and program characteristics as much as possible. Interviewers will travel to conduct the interviews with selected participants over a four-day period. The interviews will be conducted in a central area—ideally at the program offices or another centrally located quiet place such as a local library or community center. If those are not feasible, interviews may take place at the respondent’s home. The purposive sampling strategy will also take into account where program participants reside—to look at how geographically dispersed they are and ensure that program participants’ geographic locations are practical for conducting site visits. For example, some programs may not have sufficient participants located in a geographically central location to facilitate a successful data collection site visit.

Once the 14 programs are selected, the evaluator will select participants. The goal in sampling is to recruit roughly equal numbers of participants who have completed their training and who are still in the training program, as well as some who have dropped out before completing training. The evaluation team will select an equal number of participants to attempt to interview across the selected programs. Researchers will review the participant data available in PAGES to select an initial pool of 45 treatment group members in each program according to the following criteria:

  1. Participant Stage in the Training Program to ensure a mixture of participants who have successfully completed their training (approximately 40 percent), participants who are still in a training program (approximately 40 percent), and participants who have dropped out of a training program (approximately 20 percent).

  2. Demographic and Socio-Economic Characteristics to interview a sample representative of the demographic and socio-economic characteristics of that particular program’s participant population.

To select the 45 treatment group members, the evaluation team will choose the most recent 25 participants who have successfully completed their training; 25 participants who are currently at least four months into their training program but have not yet completed it; and 12 participants who have dropped out of the training program within the last six months. Participants will be selected randomly within each group.2 From this selection of participants, the evaluation team will look at demographic and socio-economic characteristics of the group and select participants to create a sample with variation similar to the demographic and socio-economic characteristics of the program’s overall participant population.

The evaluation team will use that pool of 45 participants per program to select 15 participants in each program, using stratified sampling to ensure representation from each group of interest. Evaluation team members will attempt to recruit these 15 participants for an interview. The expected overall response rate is 67 percent, which would result in 140 completed interviews across all selected programs (10 completed interviews at each of the 14 programs).3 If the evaluation team is unable to complete interviews with at least 10 of the 15 selected from each program, they will identify alternate participants from those remaining in the original pool of 45 participants per program.
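As a rough illustration of the stratified selection step described above, the Python sketch below draws 15 interview invitees from a pool built from the three stage groups. The 6/6/3 allocation across strata and the record identifiers are illustrative assumptions only; the actual allocation will follow the 40/40/20 mix and program-specific characteristics.

```python
import random

random.seed(0)

# Hypothetical pool for one program, built from the three stage groups
# described above (counts follow the text; ids are placeholders).
pool = (
    [{"id": i, "stage": "completer"} for i in range(25)]
    + [{"id": i, "stage": "in_training"} for i in range(25, 50)]
    + [{"id": i, "stage": "dropout"} for i in range(50, 62)]
)

# Target of 15 interview invitations, allocated across strata roughly in
# line with the 40/40/20 mix; the 6/6/3 split is illustrative only.
targets = {"completer": 6, "in_training": 6, "dropout": 3}

invitees = []
for stage, n in targets.items():
    stratum = [p for p in pool if p["stage"] == stage]
    invitees.extend(random.sample(stratum, min(n, len(stratum))))

print(f"Selected {len(invitees)} participants to invite for interviews")
```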

Impact evaluation. This section describes the sampling methods for the two information collection requests under the National Evaluation impact evaluation: the Intermediate Follow-up Survey and the Phone-based Skills Assessment Pilot.

Intermediate Follow-up Survey (Instrument 18). The evaluation team, in collaboration with ACF, selected 13,118 study participants—all of the participants enrolled between March 2017 and February 2018—for inclusion in the Short-term Follow-up Survey sample (previously approved under this OMB Control Number in June 2018). A subset of up to 5,000 of those participants, drawn from a compact set of randomization cohorts, will be included in the Intermediate Follow-up Survey sample. The evaluation team estimates an 80 percent completion rate (4,000 completed interviews).

Several aspects of this sampling plan deserve attention: (1) How was the subsample size chosen?; (2) Why do we want to select a subsample of those interviewed in the Short-term Follow-up Survey?; and (3) Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? Each of these questions is answered below.

  1. How was the subsample size chosen? The subsample size of 5,000 was chosen because it allows reasonable power to detect national pooled impacts (a rough illustration of the implied precision appears immediately after this list). The much larger sample size for the Short-term Follow-up Survey was chosen because of the need to measure variation in program implementation from the student perspective and to measure variation in effects on education outcomes. These activities are not planned for the Intermediate Follow-up Survey.

  2. Why do we want to select a subsample of those selected for participation in the Short-term Follow-up Survey? We want to select a subsample of those selected for the Short-term Follow-up Survey for several reasons. First, selecting from those who participated in the Short-term Follow-up Survey will allow the construction of longer case histories, as we will have thirty-six months of employment and training history instead of just fifteen months. Second, it will reduce nonresponse and cost because the continuous updating of contact information will provide the evaluation team with a more robust history of contact information over the 36-month follow-up period than would be available if a new sample were selected. Drawing from the Short-term Follow-up Survey sample also allows the evaluation team to build upon the rapport established with study participants during the follow-up period. Finally, using a subsample of the Short-term Follow-up Survey sample will allow more powerful adjustments for nonresponse to the Intermediate Follow-up Survey, since the Short-term Follow-up information can be used both to study the potential for nonresponse bias and to make adjustments in the event that evidence of nonresponse bias in unadjusted statistics is found. However, in the selected randomization cohorts we will attempt to interview all participants selected for the short-term follow-up as part of the Intermediate Follow-up Survey. That is, we will not exclude participants who were included in the Short-term Follow-up Survey sample but not interviewed.

  3. Given that a subsample is to be selected, why a compact set of randomization cohorts rather than a random sample? The Short-term Follow-up Survey sample included participants enrolled over 12 monthly cohorts—March 2017-February 2018. We want to select a compact set—or subset—of cohorts because of the substantial time and cost efficiencies associated with larger workloads for interviewers over a compressed field period. We plan to select four or five of the 12 monthly cohorts included in the Short-term Follow-up Survey for inclusion in the Intermediate Follow-up Survey data collection.
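As referenced in item 1 above, the sketch below is a back-of-the-envelope check on the precision implied by roughly 4,000 completed interviews. All inputs are illustrative assumptions rather than design parameters: a 2:1 treatment-control split, a binary outcome with a control-group mean of 0.5, 80 percent power, and a two-sided 5 percent significance level.

```python
from math import sqrt

# Back-of-the-envelope minimum detectable effect (MDE) for a pooled impact
# estimate. All inputs are illustrative assumptions, not design parameters:
# 4,000 completes, a 2:1 treatment-control split, a binary outcome with a
# control-group mean of 0.5, 80 percent power, two-sided alpha of 0.05.
z_alpha, z_power = 1.96, 0.84
n_treatment, n_control = 2667, 1333
p = 0.5  # worst-case variance for a binary outcome

se = sqrt(p * (1 - p) * (1 / n_treatment + 1 / n_control))
mde = (z_alpha + z_power) * se
print(f"Illustrative MDE: {mde:.3f} ({100 * mde:.1f} percentage points)")
```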

At the conclusion of the Short-Term Follow-up Survey, all study respondents will be asked to update their contact information to aid in future data collection efforts. Study participants selected for the Intermediate Follow-up Survey will also continue to receive periodic contact update requests via the previously approved contact update form (Instrument 5b) every three months between the Short-Term and Intermediate Follow-up Survey efforts.

Phone-based Skills Assessment Pilot (Instrument 19). This assessment is a pilot study. Results from it will not be published as a formal part of the evaluation of HPOG 2.0.4 Rather, the results from this effort will be used to identify a narrow set of survey questions that will be incorporated into a ten-minute module within the Intermediate Follow-up Survey.5 Given the intended usage, the evaluation team will identify a volunteer sample of 500 HPOG 2.0 participants randomized outside the window for the Short-term Follow-up Survey. The team estimates 500 volunteers are needed to produce 300 completed pilot assessments.6 Most grantees will be asked to recruit and refer potential volunteers to the evaluation contractor. Ideal candidates are HPOG 2.0 study participants who meet three key criteria:

  1. They are from cohorts that are not part of our short-term survey sample pool (enrolled prior to March 1, 2017 OR after May 31, 2018);

  2. They are nearly ready to start occupational classes or currently taking lower level occupational classes; and

  3. They have complete contact information (address, phone number, and email) in PAGES.

A sample of volunteers is adequate for the purpose of psychometric testing of the draft skills assessment. Thus, the pilot design targets a particular number of completed interviews as opposed to a certain response rate. The evaluator estimates that 300 completed pilot assessments are needed to yield useful results on the reliability and validity of the items. The purpose of the pilot is to sort the relative difficulties of the assessment items. By having grantees recruit participants who meet the above criteria and want to participate, the evaluation team will be able to meet these objectives.
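As an illustration of the kind of classical item analysis the pilot data would support, the sketch below computes each item's difficulty (share answered correctly) and a simple item-total discrimination index. The response matrix is simulated and the analysis is a simplification of standard psychometric practice, not the evaluation team's specified method.

```python
import random
import statistics

random.seed(0)

# Simulated data for illustration only: 300 hypothetical respondents answering
# 45 items, each item with its own (invented) probability of a correct answer.
n_respondents, n_items = 300, 45
item_p = [random.uniform(0.2, 0.9) for _ in range(n_items)]
responses = [[int(random.random() < p) for p in item_p] for _ in range(n_respondents)]

totals = [sum(row) for row in responses]


def item_stats(j):
    """Classical item analysis: difficulty (share correct) and discrimination
    (correlation between the item and the rest-of-test score)."""
    col = [row[j] for row in responses]
    rest = [t - c for t, c in zip(totals, col)]
    # statistics.correlation requires Python 3.10 or later.
    return statistics.mean(col), statistics.correlation(col, rest)


# Rank items from hardest (lowest share correct) to easiest.
ranked = sorted(range(n_items), key=lambda j: item_stats(j)[0])
print("Hardest item index:", ranked[0], "Easiest item index:", ranked[-1])
```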

Several national and international surveys have been developed to assess adult numeracy and literacy, but almost all of these rely on face-to-face interviewing (a mode too expensive for most OPRE evaluations) or online administration (a mode infeasible for many OPRE evaluations because computer access is less common among low-income populations). Since most OPRE evaluations use a mix of methodologies, identifying a short battery of questions that could be administered by phone in about 10 minutes would offer four benefits: (1) it would be more cost-effective than in-person or online administration; (2) it would be easily adaptable for in-person or online administration, reducing burden on administrators and respondents; (3) the short duration of the module would reduce burden on respondents, potentially increasing response rates or at least minimizing break-offs; and (4) it could be easily shared across other studies.

Exhibit B-2 presents the sampling methods and target response rates for each of the HPOG 2.0 National and Tribal Evaluation respondent subgroups. The instruments labeled as “new” are the subjects of this information collection request. All other instruments and their corresponding subgroups were previously approved under this OMB control number.

Exhibit B-2: HPOG 2.0 National and Tribal Evaluation Respondents

Respondent Universe

Respondent Subgroup

Sampling Methods and Target Response Rates

Data Collection Strategies

National HPOG 2.0 Evaluation


Grantees, partners, and employers

Grantees

Evaluation team members review the topics of interest with grantees using the HPOG 2.0 Screening Interview to identify appropriate respondent(s) based on who is most knowledgeable about the topics of interest. (See Instrument 2).

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to date and expects a 100 percent response rate going forward.

Semi-structured telephone interviews

(Previously approved Instruments 2, 3 and 4)

(NEW Instruments 13-16)

Program Cost Survey (NEW Instrument 20)



Managers and staff

A very high response rate (at least 80 percent) is expected among grantee managers and staff.

All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instruments 2, 3 and 4)

(NEW Instruments 14-15)

Program Cost Survey (NEW Instrument 20)


Partners

A very high response rate (at least 80 percent) is expected among grantee partners. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instruments 2, 3 and 4)

(NEW Instrument 16)


Employers

A very high response rate (at least 80 percent) is expected among employers. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instruments 2, 3 and 4)

(NEW Instrument 16)

Descriptive evaluation participants

Selected treatment group participants

A pool of 45 participants in each of 14 sites will be identified to recruit for the participant interviews.

Up to 15 participants per site will be interviewed; the team expects a 67 percent response rate resulting in 10 completed interviews per site (140 in all)

Semi-structured participant interview guide administered in-person

(NEW Instrument 17)

Impact evaluation participants selected for the Contact Update Sample

A sample of participants (treatment and control groups)

Up to 13,118 study participants, beginning with those enrolled in March 2017, will be part of the participant contact update efforts.

The team estimated that 35 percent of the sample will respond to each quarterly participant contact update effort. The contact updates are ongoing. The current return rate is 24 percent.

Contact updates by mail, online portal, or telephone

(Previously approved Instruments 5a and 5b)

Impact evaluation participants selected for Short-term Follow-up Survey sample

A sample of participants (treatment and control groups)

13,118 study participants, beginning with those enrolled in March 2017, will be part of the Short-term Follow-up Survey.

The team expects that 80 percent of the participants selected will complete this survey effort, resulting in 10,494 completes. Data collection is ongoing. The team is 7 months into a 12-month data collection period. The current response rate—inclusive of cases that interviewers are still working—is 58.0 percent. Some cells (a single monthly enrollment cohort in a single site) are completely closed; almost all of these have closed with response rates at or above 80 percent. The target response rate remains 80 percent.

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(Previously approved Instrument 12)

Impact evaluation participants selected for Intermediate Follow-up Survey sample

A sample of participants (treatment and control groups)

Up to 5,000 study participants, from select cohorts of participants randomized between March 2017 and February 2018, will be part of the Intermediate Follow-up Survey.

The team expects that 80 percent of the participants selected will complete this survey effort, resulting in 4,000 completes.

Telephone or in-person interviews conducted by local interviewers with CAPI technology

(NEW Instrument 18)

Impact evaluation participants selected for the phone-based Skills Assessment Pilot

Treatment group participant volunteers

Up to 500 participants will volunteer to be part of the phone-based Skills Assessment Pilot.

The team expects to interview 300 participants, but will seek 500 volunteers to ensure 300 are interviewed.

Telephone interviews conducted by local interviewers with CAPI technology

(NEW Instrument 19)


Tribal HPOG 2.0 Evaluation

Grantees, partners, and employers

Grantees

Grantees have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. The team has achieved a 100 percent response rate to date and expects a 100 percent response rate going forward.

Semi-structured in-person interviews

(Previously approved Instruments 6 and 7)


Management and Staff

A very high response rate (at least 80 percent) is expected among grantee staff. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instruments 6 and 7)


Partners

Partners have agreed to participate in the evaluation as a condition of receiving HPOG grant funding. Therefore, the team expects a 100 percent response rate. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instruments 6 and 7)


Employers

A very high response rate (at least 80 percent) is expected among HPOG employers. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instrument 8)

Participants

Program participants (current)

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team achieved response rates ranging from 25-50 percent from current program participants across sites to date, and expects the same trend to continue.

In-person focus groups

(Previously approved Instrument 9)


Program completers

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team expects a 25-50 percent response rate from program completers. All interviews using the previously approved instruments that were attempted were completed.

Semi-structured in-person interviews

(Previously approved Instrument 10)


Program non-completers

The tribal evaluation team will work with the grantees to recruit participants during the annual site visit planning period. The team has experienced difficulty recruiting participants for this information collection—achieving closer to 10 percent response in prior rounds. The team still expects a 10-25 percent response rate from program non-completers for the upcoming information collection.

Semi-structured in-person interviews

(Previously approved Instrument 11)

HPOG National and Tribal Evaluation Participant Accomplishment and Grantee Evaluation System (PAGES)

Participants

National Evaluation (Non-Tribal) HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment: 41,500

Baseline and ongoing participant level data

(Previously approved Instrument 1)


Tribal HPOG Participants

No sampling techniques will be employed for PAGES data collection.

A 100 percent response rate is expected.

Estimated enrollment: 2,663

Baseline and ongoing participant level data

(Previously approved Instrument 1)



B.2 Procedures for Collection of Information

This section describes the procedures for conducting the participant interviews data collection under the descriptive evaluation, and the Intermediate Follow-up Survey and skills assessment pilot data collection under the impact evaluation.

        1. HPOG 2.0 National Evaluation Descriptive Evaluation Data Collection Procedures

The primary data collection approach for the descriptive evaluation is two rounds of semi-structured interviews conducted by telephone and one round of in-person site visits with program directors, case managers and other relevant grantee staff. The first round of telephone interviews (now complete) focused on early implementation efforts. The second round will update the earlier round and will collect information to help lay the groundwork for the systems and cost studies. Site visits (now complete) were to programs implementing promising approaches to program components of specific interest to ACF. Telephone and site visit data collection are supplemented with data from PAGES and other existing site-specific materials developed earlier by the National Evaluation team.

The data collection procedures for the previously approved descriptive evaluation instruments can be found in the first revision to OMB Control # 0970-0462, approved in June 2017. The same procedures used in the first-round interviews will be followed for the second-round telephone interviews (Instruments 13 and 14). The procedures for conducting the new descriptive evaluation data collection components—the systems study interviews and the participant in-depth interviews—are discussed here.

The descriptive evaluation systems study will describe how local service delivery systems (i.e., the economic and service delivery environment in which specific HPOG programs operate) may have influenced HPOG program design and implementation and how HPOG implementation may have influenced these local systems, based on the perspectives of program operators (i.e., the lead organization directly responsible for the administration of an HPOG program) and partners engaged in systems activities. The systems study partner interviews will also include interviews with partners outside of HPOG. Telephone interviews for the systems study will focus on coordination of grantees and their partners within the local service delivery system (Instruments 15 and 16). Two-person teams will administer the semi-structured interviews—one acting as the lead interviewer and the other as the note taker. Each interview will take approximately 60 minutes, depending on the depth of knowledge of the respondent. The interviewers will spend an additional 15 minutes with the program operator respondent to identify partners for interviews and obtain contact information. The primary mode for the interviews will be telephone, but the interview team will also offer videoconferencing (via Skype, Zoom, GoToMeeting, or other technology) should the respondent prefer a more visual interaction.

The descriptive evaluation will also include one round of in-person interviews with HPOG program participants. The National Evaluation team will conduct in-depth interviews with HPOG 2.0 program participants to gain insight into their motivations, decision making, expectations, and experiences (Instrument 17). The team will work with program directors and study site liaisons to identify up to 45 treatment group study participants in each of the 14 selected programs for recruitment. Interviewers will send a letter to the selected sample to explain the participant interview requirements (see Attachment Z).

One interviewer will conduct all of the interviews at a given site during a five-day visit. (No interviewer will travel to more than three sites.) Interviews will be completed in person either at the program office or at another agreed-upon location. With the participant’s permission, the interviewer will record the interviews for later transcription and analysis.

        2. HPOG 2.0 National Evaluation Impact Evaluation Data Collection Procedures

The impact evaluation participant-level data collection efforts include the previously approved Welcome to the Study packet (Instrument 5a, approved in June 2017—and now complete), the collection of quarterly contact updates (also previously approved under this OMB Control Number in June 2017 and still ongoing), the Short-term Follow-up Survey (previously approved in June 2018, and ongoing), as well as the data collection procedures for the Intermediate Follow-up Survey and phone-based Skills Assessment Pilot (Instruments 18 and 19 of this request for clearance). The procedures for conducting the ongoing data collection using previously approved contact update forms (Instrument 5b) and the Short-term Follow-up Survey (Instrument 12) are described in the first and second revisions to OMB Control Number 0970-0462 approved in June 2017 and June 2018 respectively.

The data collection procedures for the Intermediate Follow-up Survey (Instrument 18) will be identical to those approved for use in the Short-term Follow-up Survey (Instrument 12)—local interviewers attempt to interview respondents first by telephone and then in-person, using computer assisted personal interviewing (CAPI) technology. Since the procedures are the same, the specific details of how the data collection will be done are not repeated here. Please refer to the second revision to OMB Control No. 0970-0462, approved June 8, 2018, for a full description of the survey procedures. This information collection request focuses on the procedures for the phone-based Skills Assessment Pilot because the procedures were not covered by earlier OMB approvals.

The purpose of the phone-based Skills Assessment Pilot is to narrow a set of 45 potential survey questions intended to assess literacy and numeracy skills down to a set that can be used in a short module within the Intermediate Follow-up Survey. Because the follow-up survey can be conducted either by phone or in person, administration of the assessment module has to “work” in either mode. There is a long history of successful skills assessments for in-person data collection, but very little history of skills assessment administration over the phone. For this reason, all of the pilot assessments will be conducted by phone.

The phone interviewers will be drawn from the same staff of local interviewers used for the Short-term Follow-up Survey. This will ensure that the interviewers are fully trained on the HPOG 2.0 program and the goals of the evaluation, and have experience working with the HPOG 2.0 participant population. Evaluation site team liaisons will work with the grantees to identify a pool of HPOG 2.0 participants who want to volunteer to complete the skills assessment pilot. Once identified, interviewers will reach out to volunteer participants to explain more about the pilot, obtain their consent, conduct the pilot, and capture respondent feedback on the process. The evaluation team estimates 500 volunteers may be needed to complete 300 interviews. The approach to data collection does not include a specific response rate target; rather, the plans for this pilot are based on a target number of completed interviews. No estimates about skill levels for any population will be published based on this pilot.7 Furthermore, the goals for the pilot do not include a demonstration of what response rate is achievable—only an assessment of whether it is possible to conduct a brief skills assessment by phone. The evaluation team expects that, since the sample will consist of volunteers recruited by grantees, respondents will be easier to locate and still interested in participating. Assessment interviews will be completed using CAPI technology. Interviewers will be encouraged to quickly close out cases that are difficult to contact and move on to the next case in order to complete this assessment pilot process expediently.

        3. HPOG 2.0 National Evaluation Cost-Benefit Analysis Study Data Collection Procedures

The Program Cost Survey (Instrument 20) will be administered to staff at all 27 non-tribal grantees to capture cost data for each of the 38 HPOG 2.0 programs. The survey will capture data on costs for staff, overhead, direct provision of training, and provision of support services. The evaluation team will ask grant managers from each of the 38 HPOG 2.0 programs to determine which staff members are the most knowledgeable about cost information. Selected staff members will attend an informational webinar to introduce the cost-benefit analysis (CBA), learn about the concepts used in the survey, and ask preliminary questions.8 Upon request, CBA staff will also call individual programs to discuss any questions before the survey. Such guidance may be necessary to improve accuracy because each program has its own structure and service offerings and so may need specific information on different survey components. Program staff will complete the survey using web-based software. The evaluation team will review the submitted documents and follow up on missing data items as needed.

        4. Tribal Evaluation Implementation Study Data Collection Procedures

The data collection procedures for the Tribal Evaluation data collection instruments are described in the first revision to OMB Control Number 0970-0462, approved in June 2017.

        5. HPOG Program Performance Report Based on Grantee-Level and Ongoing Participant-Level Data

The data collection procedures for the previously approved grantee-level and ongoing participant-level data collection done under the PAGES system are described in the original submission of OMB Control Number 0970-0462, approved in August 2015.

        6. Procedures with Special Populations

The study documents—including the recruitment materials, advance letters, and flyers developed for the National Evaluation participant-level data collection efforts—were designed at an 8th-grade readability level. This ensures that the materials can be understood by most study participants. The Intermediate Follow-up Survey will be administered in both English and Spanish.

The procedures used to ensure that special populations can understand the various instruments that were previously approved are described in the information collection requests approved in June 2017 and June 2018.

B.3 Methods to Maximize Response Rates and Deal with Nonresponse

This section first describes the methods used to maximize response rates for the Intermediate Follow-up Survey and then for the descriptive study participant interviews.

        1. National Evaluation impact study Intermediate Follow-up Survey

The methods used for the Intermediate Follow-up Survey will be nearly identical to those approved for use in the Short-term Follow-up Survey. Specifically, the evaluation team will use the following methods to maximize response to the Intermediate Follow-up Survey effort:

  • Participant contact updates and locating;

  • Incentives; and

  • Sample control during the data collection period.

(See the second revision to OMB Control No. 0970-0462, approved in June 2018, for more details on these methods.) Using those same procedures, the evaluation team anticipates being able to achieve the targeted 80 percent response rate for the Intermediate Follow-up Survey.

          1. Participant Contact Updates

The HPOG 2.0 National Evaluation impact evaluation team will continue participant contact update efforts (previously approved in June 2017) between the Short-term and Intermediate Follow-up Survey efforts, but only for those participants who will be part of the Intermediate Follow-up Survey data collection. The evaluation team intends to include in the target sample all study participants within the enrollment cohorts selected for the Intermediate Follow-up Survey, regardless of whether or not they responded to the Short-term Follow-up Survey.9 This is consistent with how the evaluation team has handled non-respondents on similar Career Pathways studies (PACE, OMB Control No. 0970-0397, and HPOG 1.0 Impact, OMB Control No. 0970-0394). The evaluation team plans to maintain the same incentive structure—gift certificates provided via an email link or a physical gift card—for the Intermediate Follow-up Survey as was used for the Short-term Follow-up Survey. As described further in Supporting Statement A, Section A9, this request for clearance includes a modest increase in the incentive amount—from $40 to $45.

As discussed above, because the Phone-based Skills Assessment is a pilot, the intent is not to maximize the response rate, but rather to complete a target number of interviews (300). The evaluation team plans to offer incentives to participants who complete the skills assessment as well.

          2. Incentives

The evaluation team will offer an incentive valued at $25 to each participant who responds to the phone-based Skills Assessment Pilot. The incentive is a way to thank participants for their help in ensuring that the assessment instrument is feasible to administer by phone and in identifying which items are most useful in assessing literacy and numeracy skills. The incentive also helps to offset any costs incurred as a result of participation, such as cell phone minutes or child care costs. The proposed incentive for this pilot is smaller than the incentives for some other instruments in the HPOG 2.0 impact evaluation because of both the lower burden on respondents and the fact that this is a single administration: that is, we do not repeat the skills assessment pilot data collection with respondents at a later date.

The evaluation team will also offer an incentive for completion of the Intermediate Follow-up Survey. Respondents will receive a $45 gift certificate. The following factors helped determine the amount of the incentive: the target response rate of 80 percent, the projected 60-minute length of the survey, the smaller sample size (only 5,000 of the 13,000 selected for the Short-term Follow-up Survey), and the duration of the follow-up period. The team also took into account the incentive amounts approved for previous rounds of data collection on OPRE’s prior Career Pathways studies (PACE and HPOG 1.0 Impact, OMB control numbers 0970-0397 and 0970-0394, respectively), to ensure that the planned amount is comparable. As with the contact update forms and Short-term Follow-up Survey, respondents will receive an email with customized instructions showing them how to log in to a secure study portal where they can redeem a $45 gift card from their choice of approved vendors.

Without an incentive of this magnitude, the impact evaluation study is unlikely to meet the quality targets defined by OMB and the Information Quality Act10 (see Supporting Statement A, Section A9 for more information).

Incentives at one or more phases of data collection have been used successfully on a number of similar federally sponsored surveys, such as PACE (OMB control number 0970-0397) and the HPOG 1.0 Impact Study (OMB control number 0970-0394). These two studies are similar in nature to HPOG 2.0 both programmatically and in terms of respondent characteristics. We cite these two previously approved studies not to justify the use of incentives, but rather to justify our choice of the proposed incentive amount. The planned incentive amount is comparable to what was offered for the follow-up survey efforts for both of those studies.

          3. Sample Control

Finally, the team does not rely solely on the contact updates or the use of incentives to maximize response rates and reduce nonresponse bias. The evaluation team will use the same sample control procedures for monitoring survey production—well-trained interviewers, clear disposition codes, Spanish-language options, and a variety of communication tools for interviewers—that were approved for the Short-term Follow-up Survey. (See the ICR approved under OMB Control No. 0970-0462 in June 2018 for more details on these sample control procedures.)



        2. National Evaluation descriptive study participant interviews

The descriptive study participant interview data are not intended to be representative in a statistical sense, in that they will not be used to make statements about the prevalence of experiences for the full HPOG 2.0 population or the broader TANF-eligible population. However, it is important to secure participants from a wide range of programs, with a range of background characteristics, to capture as diverse a set of experiences with HPOG 2.0 as possible. Given the time required for an in-person interview, incentives for participation will be a useful tool in helping to maximize response rates. Those who complete the in-depth participant interview will receive a non-cash honorarium valued at $40, delivered via email. Participants will receive an email with instructions to log in to a secure study portal where they can redeem the gift certificate with one of the approved vendors (see the redemption procedures described under the Intermediate Follow-up Survey). OMB previously approved similar use of incentives for the HPOG 2.0 Tribal Evaluation (participant focus groups and interviews) in June 2017 under this OMB Control Number (0970-0462) and for the Pathways for Advancing Careers and Education (PACE) study (OMB Control Number 0970-0397). Without the use of incentives, we are concerned that we will not reach the target number of completed interviews in each category, which could jeopardize the utility of the participant interview data.

        3. Nonresponse Bias Analysis and Nonresponse Weighting Adjustment

If interviewers achieve a response rate below 80 percent for the Intermediate Follow-up Survey, the research team will conduct a nonresponse bias analysis and, if necessary, create nonresponse weighting adjustments using the same protocols approved for the Short-term Follow-up Survey.
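One common form such an adjustment can take is a weighting-class adjustment, sketched below in Python on invented records. This is illustrative only; the actual protocols are those approved for the Short-term Follow-up Survey, and the characteristics used to define weighting classes here are placeholders.

```python
from collections import defaultdict

# Invented records, one per sampled participant: a couple of baseline
# characteristics used to form weighting classes plus a response indicator.
sample = [
    {"id": 1, "site": "A", "education": "hs",  "responded": True},
    {"id": 2, "site": "A", "education": "hs",  "responded": False},
    {"id": 3, "site": "A", "education": "ged", "responded": True},
    {"id": 4, "site": "B", "education": "hs",  "responded": True},
    {"id": 5, "site": "B", "education": "ged", "responded": False},
    {"id": 6, "site": "B", "education": "ged", "responded": True},
]

# Weighting-class adjustment: within each class (here, site by education),
# respondents' weights are inflated by the inverse of the class response rate.
classes = defaultdict(list)
for record in sample:
    classes[(record["site"], record["education"])].append(record)

for members in classes.values():
    rate = sum(m["responded"] for m in members) / len(members)
    for m in members:
        m["nr_weight"] = 1 / rate if m["responded"] and rate > 0 else 0.0

for record in sample:
    print(record["id"], record["nr_weight"])
```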

The phone-based Skills Assessment Pilot effort is not subject to a nonresponse bias analysis. The effort targets a certain number of completes—as opposed to the standard 80 percent response rate—with no restrictions on factors such as site or intervention group. As described in section B.1 above, the target sample for this effort is comprised of volunteers. If the target number of completed assessments is not reached with the first batch of volunteer sample, the evaluation team can work with grantees to obtain additional volunteers.

B.4 Tests of Procedures

This section discusses the tests of procedures planned or already conducted for the instruments that are included in this information collection request. The section first addresses the HPOG 2.0 National Evaluation descriptive evaluation, then the impact evaluation, followed by the cost-benefit analysis study. The section then provides references to the previously approved information collection requests under this OMB Control Number (0970-0462) should reviewers want to learn more about the tests of procedures for previously approved instruments.

        1. HPOG 2.0 National Evaluation Descriptive Evaluation

The evaluation team conducted pretest interviews with fewer than ten study participants to ensure that the participant interview guide was working as intended. The participant interview guide (Instrument 17) that is part of this information collection request reflects changes needed based on that pretest feedback.

The other new descriptive study instruments are similar in content and structure to the previously approved descriptive study instruments, so we are relying on the pretests conducted for those original instruments. See the previously approved information collection request under this OMB Control Number, approved in 2017, for more on the tests of procedures for the other descriptive study instruments.

        2. HPOG 2.0 National Evaluation Impact Evaluation

This section discusses the two instruments for the impact evaluation that are the subject of this information collection request.

          1. Intermediate Follow-up Survey

In designing the Intermediate Follow-up Survey, the evaluation team included items used successfully in other national surveys, particularly the HPOG 2.0 Short-term Follow-up Survey (OMB Control No. 0970-0462) and the PACE and first-round HPOG impact follow-up surveys (OMB control numbers 0970-0397 and 0970-0394, respectively). Consequently, many of the survey questions have been thoroughly tested on large samples.

If time allows after OMB approval of this data collection, the instrument will be programmed prior to pretesting, and a sample of 15 to 20 participants will be used to ensure that the length of the instrument is consistent with the burden estimate. Otherwise, the evaluator will pretest the survey with up to nine participants, using paper forms rather than CAPI. During internal pretesting, all instruments are closely examined to eliminate unnecessary respondent burden, and questions deemed unnecessary are eliminated.

          2. Phone-based Skills Assessment Pilot

The phone-based Skills Assessment Pilot is itself a test. The pilot is being used to sort items by difficulty. Assuming that the pilot is successful, the intent is to make the assessment module for the intermediate survey adaptive. That is, the CAPI program being used to conduct the surveys will dynamically vary the set of items presented to the respondent based on prior responses. One possible plan is to have three item difficulty groups for the vocabulary items and three for the math items. The software might present four medium difficulty items and then follow up with a set of four easy or four hard items based on the respondent’s performance on the four medium difficulty items. The pilot will provide sufficient information to refine these broad plans. Critical information that will be provided by the pilot includes the time required for each item. Because the instrument also includes information about earned credentials and use of basic skills in everyday life, the evaluator will also be able to select the items that correlate better with these measures. The draft Intermediate Follow-up Survey includes all of the items from the pilot assessment. Based on the findings from the pilot, we will drop the questions that do not prove successful in the pilot and will submit the final instrument to OMB as a nonsubstantive change request.
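The sketch below illustrates one possible routing rule of the kind described above, for a single domain. The item identifiers and the threshold of three correct medium items are hypothetical choices made for illustration; the actual routing would be settled only after the pilot results are available.

```python
# Illustrative adaptive routing for one domain; item identifiers and the
# threshold of three correct medium items are placeholders.
MEDIUM = ["m1", "m2", "m3", "m4"]
EASY = ["e1", "e2", "e3", "e4"]
HARD = ["h1", "h2", "h3", "h4"]


def next_block(medium_correct: int, threshold: int = 3) -> list:
    """Route to the harder block if the respondent answered at least
    `threshold` of the four medium items correctly; otherwise go easier."""
    return HARD if medium_correct >= threshold else EASY


def administer(ask) -> list:
    """`ask` is a callable that presents one item and returns True if the
    respondent answers it correctly (a stand-in for the CAPI instrument)."""
    asked = list(MEDIUM)
    medium_correct = sum(bool(ask(item)) for item in MEDIUM)
    asked.extend(next_block(medium_correct))
    return asked


# A respondent who answers every medium item correctly is routed to HARD.
print(administer(lambda item: True))
```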

        3. HPOG 2.0 National Evaluation Cost-Benefit Analysis Study

The cost-benefit analysis study requires detailed information from grantees and stakeholders. It is possible that multiple people at each organization will need to provide the information. The evaluation team reached out to one individual at five grantee programs to ask them to review the data collection items of interest to the evaluation team and provide an assessment of the feasibility of collecting that information and the level of effort required to do so. The evaluation team reviewed the feedback from grantees and adjusted protocols accordingly. The instruments in this information collection request reflect the changes that resulted from that feedback.

        4. HPOG 2.0 Tribal Evaluation

See the previously approved information collection request under this OMB Control Number, approved in 2017, for more on the tests of procedures for the Tribal Evaluation.

        5. PAGES

See the previously approved information collection request under this OMB Control Number, approved in 2015, for more on the tests of procedures for the PAGES system.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

With ACF oversight, Abt Associates and its partners (MEF Associates, the Urban Institute, and Insight Policy Research) are responsible for conducting the HPOG 2.0 National Evaluation. This team has drafted an Impact Evaluation Design Plan with considerable detail on planned analytic procedures; it will be published in 2019. Prior to analysis, an even more detailed analysis plan will be prepared and published.

The individuals listed in Exhibit B-3 below contributed to this information collection request.




Exhibit B-3: Contributors

  • Gretchen Locke, National Evaluation Project Director, Abt Associates

  • Jacob Klerman, National Evaluation Co-Principal Investigator, Abt Associates

  • Bob Konrad, National Evaluation Co-Principal Investigator, Abt Associates

  • Robin Koralek, National Evaluation Deputy Project Director, Abt Associates

  • Larry Buron, National Evaluation Project Quality Advisor, Abt Associates

  • David Judkins, National Evaluation Director of Impact Analysis, Abt Associates

  • Debi McInnis, National Evaluation Site Coordinator, Abt Associates


Inquiries regarding the statistical aspects of the HPOG 2.0 National Evaluation design should be directed to:

Gretchen Locke, Project Director

Abt Associates

10 Fawcett Street, Suite 5

Cambridge, MA 02138

(617) 349-2373


The following HHS staff—including the HHS project officers Hilary Bruck, Nicole Constance, and Amelia Popham—have overseen the design process and can be contacted at:

Hilary Bruck

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 619-1790


Nicole Constance

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 401-7260


Amelia Popham

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

330 C Street S.W., 4th Floor, Washington, D.C. 20201

(202) 401-5322

1 The two descriptive study instruments associated with the second round of telephone interviews (Instruments 13 and 14) and the Program Cost Survey (Instrument 20) for the cost-benefit analysis study do not require any sampling.

2 Where there are insufficient participants who have dropped out of the training program within the last six months, we may extend the time period to 12 months since dropping out of the program.

3 We have assumed a 67 percent response rate because we need to ensure that we recruit sufficient participants into the study to produce the target number of completed interviews. We will select a pool of up to 45 participants, representative of the program, from which we will select 15 participants using stratified sampling to ensure we get some representation from each group of interest. We expect to complete interviews with 10 of the 15 participants selected—a 67 percent response rate. If the response rate is higher at the first two sites, we will adjust the recruitment strategy accordingly.



4 The evaluation team will prepare a short methods report on the pilot assessment study that might be published as a white paper or serve as the basis for a journal paper—explaining the process followed to develop the short skills pilot and incorporate it into the Intermediate Follow-up Survey. The results will not be analyzed as part of the impact study findings.

5 The draft Intermediate Follow-up Survey includes all of the items from the pilot assessment, to ensure that we had OMB approval for each item. Based on the findings from the pilot, we will drop the questions that do not prove successful in the pilot.

6 In the event that fewer than 300 volunteers respond to the initial assessment pilot outreach effort, the team will reach out to grantees to identify additional volunteers. Since the sample is based on volunteers, we do not expect a second recruitment effort will be necessary.

7 If there are any papers published based on the pilot, they would only concern the psychometric properties of the assessment.

8 Four HPOG 2.0 grantees provided the evaluation team with feedback on an earlier draft of the instrument. The team introduced the CBA and the program cost survey at the annual meeting for HPOG grantees in August 2018.

9 The only exceptions will be those who were confirmed deceased or asked to withdraw from future data collection efforts.

