Strengthening Community Colleges Training Grants Program Round 4 (SCC4) Evaluation



B. COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS

The Chief Evaluation Office (CEO) and the Employment and Training Administration (ETA) in the U.S. Department of Labor (DOL) are partnering to commission an evaluation of the fourth round of the Strengthening Community Colleges Training Grants Program (SCC4). Rounds One to Three provided grants to build the capacity of community colleges to collaborate with employers and the public workforce development system, as well as to help students overcome barriers to career and technical education programs. Across the three rounds, DOL awarded $135 million in grants to 39 community colleges, including consortium grants. Round Four, which awarded approximately $65 million in grants to 41 community colleges, including 16 lead community college grantees and an additional 25 consortia members, aims to enable community colleges to adopt evidence-based career pathways programs that lead to jobs offering living wages and opportunities for advancement in locally in-demand industries such as advanced manufacturing, healthcare, and IT. The SCC4 evaluation will examine the impact of providing student success coaching and enhanced supports on participant outcomes, which services are effective for which types of students and why, and the core components of the SCC4 programs that support success. In addition, SCC4 will build on existing evidence regarding successful programs in community college settings and advance the understanding of how career pathways programs promote outcomes. CEO contracted with Mathematica and its subcontractors, the Community College Research Center and Social Policy Research Associates, from spring 2024 through summer 2029 to conduct impact, outcomes, and implementation studies. This information collection request seeks Office of Management and Budget (OMB) clearance for new data collection for the SCC4 evaluation. This package requests clearance for seven data collection instruments as part of the evaluation:

  1. Participant baseline survey and consent form

  2. Participant follow-up survey

  3. Contact information update request

  4. Service receipt log

  5. College survey

  6. Semi-structured interview topic guide for college administrators, program directors, program staff, college instructors and faculty, and partners

  7. Semi-structured interview topic guide for SCC4 participants

The SCC4 evaluation will include three components: (1) an impact study of student success coaching and enhanced supports, (2) an outcomes study, and (3) an implementation study. We will use the first four data collection instruments for the impact study and draw on the college survey for the outcomes study. The implementation study will draw on the college survey and the remaining data collection instruments. Once approved by OMB, these data collection instruments will be submitted for approval by the Health Media Lab institutional review board.1

B.1. Respondent universe and sampling methods

The universe of potential sites for the data collection efforts for which we are requesting OMB clearance approval includes all of the SCC4 grantees that received grants in 2024. The SCC4 evaluation will include three components: (1) an impact study of student success coaching and enhanced supports, (2) an outcomes study, and (3) an implementation study.

The impact study will be a randomized controlled trial (RCT) to estimate the impacts of student success coaching and enhanced supports on participant outcomes. We will conduct the impact study at five community colleges across four of the SCC4 grantees, using the first four data collection instruments. The study team will collect participant consent and administer baseline and follow-up surveys to impact study participants. We will also send participants brief requests to update their contact information. Finally, we will collect logs of meetings with coaches at the impact study colleges.

Both the outcomes and implementation studies will include all 16 grantees. For the implementation study, we will administer a college survey to program directors and conduct phone interviews with administrators, program leaders, and program staff at each grantee. We will also conduct in-person site visits at the five colleges purposively selected for the impact study. These visits will include interviews with SCC4 program staff and participants, faculty and instructors, and consortium partners (including employers and supportive service providers). The results of the college survey will also inform the outcomes study.

1. Selection of sites

Impact study. The impact study will include approximately 6,000 participants from a purposive sample of five community colleges associated with four of the grantees. We selected these colleges because each expected to enroll at least 500 participants in its SCC4 program, did not already have a comprehensive coaching program at the college or through a partner, and was open to randomly selecting some program participants to receive more intensive supports than others. To identify community colleges that meet these criteria, we conducted phone calls with all grantees and conducted site visits to grantees that we identified as likely to meet the criteria for the impact study. All participants enrolling in the SCC4 program during the study period at impact study community colleges will be asked to participate in the study.

Outcomes study. The outcomes study will include all 16 grantees, but the study's use of administrative data outcomes will vary across grantees. We will collect service receipt data for all grantees and administrative earnings data for 12 of the grantees. Specifically, we identified 12 grantees to receive supplemental funding for an outcomes study; these grantees were identified based on their suitability for a quasi-experimental study that we later opted not to conduct. To limit the additional data collection to grantees that receive supplemental funding, we will restrict the administrative data collection to those grantees. We will analyze results from the outcomes analysis by program characteristics, as measured by the college survey.

Implementation study. The implementation study will include all colleges associated with the SCC4 grantees. This includes 41 colleges, comprising the 16 grantees and an additional 25 consortia members. We will include the 16 grantees in a document review, including a review of grant applications and quarterly progress reports. We will include all 41 colleges in a college survey and the five impact study colleges (associated with four of the grantees) in two rounds of site visits. We will include the remaining 12 grantees in up to three phone interviews per grantee.

2. Selection of respondents

See Table B.1 for estimates of the universe and sample sizes for each data collection instrument.

Table B.1. Sampling and response rate assumptions, by data collection instrument and respondent type

Respondent type | Sampling method | Universe of potential respondents^a | Estimated selected sample^b | Estimated response rate^c | Estimated responses per respondent^d | Estimated responses (across sites) | Average responses per site^e

Impact study data collection

Baseline survey of study participants | Census | 6,000 | 6,000 | 100% | 1 | 6,000 | 1,200
Contact information update request from study participants | Census | 5,400 | 5,400 | 50% | 4 | 10,800 | 2,160
Follow-up survey of study participants | Census | 6,000 | 6,000 | 30% | 1 | 1,800 | 360
Service receipt logs | Census | 15 | 15 | 85% | 3,315 | 49,725 | 9,925

Implementation study data collection

College survey | Census | 41 | 41 | 95% | 1 | 39 | 1
Interviews with college administrators | Purposive | n.a. | 36 | 100% | 1.3 | 46 | 0.8
Interviews with program directors | Purposive | 18 | 18 | 100% | 1.3 | 23 | 0.5
Interviews with instructors and faculty | Purposive | n.a. | 15 | 100% | 2 | 30 | 0.7
Interviews with program and college staff | Purposive | n.a. | 15 | 100% | 2 | 30 | 0.7
Interviews with partners | Purposive | n.a. | 32 | 100% | 1 | 32 | 0.8
Participant interviews | Purposive | 6,000 | 40 | 75% | 1 | 30 | 0.75

a The universe of potential respondents is estimated as the total number of individuals eligible for the data collection. This value is marked n.a. when we do not have a clear estimate of the number of potential respondents, for example, the number of administrators at each college.

b The estimated selected sample represents the number of people who will be contacted for participation in the data collection.

c The estimated response rate is calculated at the response level. For example, in the case of service logs, we estimate that all 15 coaches will fill out logs, but each coach will only fill out an average of 85 percent of them.

d Estimated responses per respondent is estimated as the average number of responses among respondents who complete at least one data collection. For service receipt logs, this is estimated as 25 meetings per week * 52 weeks per year * 3 years * 85 percent response rate = 3,315. For interviews with college administrators and program directors, this is calculated as (2 * 5 impact study sites + 1 * 13 non-impact study sites) / 18 total sites.

e For impact study data collection, the estimated responses per site are limited to the sites participating in the impact study. The anticipated number of colleges to be included in the impact study is five. For the implementation study, this includes the 18 colleges that will be included in the study: the five impact study colleges, the 12 non-impact study lead grantees, and the lead grantee from the Western consortium, which includes two impact study sites.

Participant baseline survey and consent form. The sampling unit for this survey is study participants at colleges selected for the impact study who meet the program eligibility requirements and consent to be part of the study.2 The universe is the estimated 6,000 SCC4 impact study participants (which will include both treatment and comparison group members). The study team anticipates that the survey will take an average of 15 minutes to complete. Given that completing the baseline survey will be a requirement of enrolling in the study, the team anticipates collecting information on study participants’ characteristics for 100 percent of impact study participants.

Contact information update requests. We will send the contact information update requests via text every six months to all impact study participants who provided a phone number and permission to text in the baseline survey. We assume that 90 percent of the 6,000 study participants will provide their phone number and permission to text, giving us a sample of 5,400. We will conduct outreach to all potential respondents; therefore, this will be a census and will not require sampling. The study team anticipates that the contact information update request will take an average of two minutes to complete, with a 50 percent completion rate.

Participant follow-up survey. We will administer the follow-up survey to impact study participants approximately six months to two years following study enrollment and completion of the baseline survey. We will not use any statistical methods to select respondents for this survey because we will conduct outreach to all participants enrolled at impact study colleges who consented to the study. The study team anticipates that the survey will take an average of 15 minutes to complete. Recent surveys of community college students highlight the challenges of obtaining high response rates.3,4 The study team anticipates a 30 percent response rate for the survey.

Service receipt logs. We will ask all SCC4 coaches at the five impact study colleges, an average of three coaches per college, to record information on all participant meetings in service receipt logs. The study team anticipates that each log will take an average of three minutes to complete. The team expects all coaches to complete logs, with each coach filling out logs for approximately 85 percent of their coaching sessions. We expect that each coach will meet with approximately 25 learners each week for three years, which amounts to 3,315 logs per coach. Although this is a meaningful burden on coaches, for most coaches it will fit into their standard tracking of interactions with participants. Further, the coaching positions are funded by the SCC4 grants, including the supplemental funding for this study, so coaches will consider completing the logs to be part of their jobs. The service receipt logs are a crucial part of this study because they will allow us to measure whether the random assignment was effective.

College survey. We will field the college survey to program directors at all SCC4 colleges (n = 41). This will be a census of program directors at all colleges in SCC4. The study team expects that the survey will take up to 40 minutes to complete and anticipates a 95 percent response rate.5

Semi-structured interview topic guide for college administrators, program directors, program staff, instructors and faculty, and partners. We will use this topic guide during two three-day site visits to each of the five colleges participating in the impact study and also to guide up to three phone interviews with college staff and an employer partner at each of the 12 grantees not participating in the impact study.

  • College administrators. The study team will interview up to two college administrators during each of two site visits to the five colleges (four grantees) participating in the impact study and will also conduct a phone interview with college administrators at the grantees not participating. The study team anticipates that interviews with college administrators will last 60 minutes. The study team anticipates conducting up to 32 total interviews with college administrators.

  • Program directors. The study team will interview program directors during each of two site visits to the five colleges (four grantees) participating in the impact study and will also conduct a phone interview with program directors at the 12 grantees not participating. The team anticipates that interviews with program directors will last 60 minutes.  The study team anticipates conducting up to 22 total interviews with program directors.

  •  Instructors and faculty. The study team will conduct individual or small group interviews with up to three instructors and faculty who provide instruction to SCC4 participants during each visit to the five colleges (four grantees) participating in the impact study. The team anticipates that interviews with instructors and faculty will last 60 minutes. The study team anticipates conducting up to 30 total interviews with instructors and faculty.

  • Program and college staff. The study team will conduct individual or small group interviews with three student services staff, including coaches and student support staff providing services to the treatment group, during each visit to the five colleges (four grantees) participating in the impact study. The study team anticipates that interviews with student services staff will last 60 minutes.  The study team anticipates conducting up to 30 total interviews with program and college staff.

  • Partners. The study team will interview up to four partners, including employers, during one visit to the five colleges (four grantees) participating in the impact study. We will also conduct one phone interview with partners at non-impact study grantees. The study team anticipates that interviews with partners will last 60 minutes. The study team anticipates conducting up to 32 total interviews with partners.

Semi-structured interview topic guide for SCC4 participants. The goal of the semi-structured interviews with treatment participants is to gather information on the education, training, coaching, and supportive and job placement services offered to students receiving student success coaching and enhanced supports. These interviews will also be used to understand more about how participants in the impact study experienced the student success coaching and enhanced supports. The study team will conduct interviews with up to six participants during each visit to the five colleges (four grantees) participating in the impact study. The study team anticipates that interviews with participants will last 60 minutes.

B.2. Procedures for collection of information

Understanding the effectiveness of the SCC4 programs requires collecting data from multiple sources.

Participant baseline survey and consent form. The goal of the baseline survey is to collect the following from study participants: baseline demographic information, employment history, participant interest in receiving supportive services, and contact information for the follow-up survey. Program staff will administer the 15-minute baseline survey to all eligible individuals at the five impact study colleges after they give their consent. Staff will enter individuals into Salesforce or a similar existing case management platform used at a college to create a participant profile at the beginning of the intake process. Once the participant is entered into the system, program staff will either hand over the electronic device to participants so they can read through the consent form and begin the baseline survey or send them a link to complete the consent form and baseline survey on their own device. To fully ensure informed consent, the study team will train grantee program staff on the intake process. At the end of the baseline survey, the web-based system will assign participants to the treatment or comparison group using random assignment. The baseline survey and consent form will be translated into Spanish.

Contact information update requests. The goal of the contact information update requests is to maintain up-to-date contact information for impact study participants to support more effective outreach for the follow-up survey. The study team will send a two-minute contact information update request via text to all impact study participants who agreed to receive text messages. The text will contain a link to complete the short survey. Texts will be sent every six months throughout the course of the evaluation or until participants have been contacted for the follow-up survey. Some study participants will have a shorter follow-up period and will receive fewer notifications. The update requests will ask participants to confirm that their contact information (phone, email, and mailing address) is up to date. If it is not, we will ask them to provide current information, which we will use to reach them for future surveys. The contact information update requests will be translated into Spanish.

Participant follow-up survey. The goal of the participant follow-up survey is to gather detailed information about participants' employment and education outcomes to complement outcomes data collected through administrative data sources. The survey will also be used to assess participant experiences with student success coaching and enhanced supports. The study team will field this 15-minute web survey with all impact study participants in fall 2027, between six months and two years after they enroll in the study. Participants will be contacted to complete the survey by email and text, and the survey will be optimized for completion on mobile devices. We will administer the survey at a single point in time toward the end of the evaluation period. As a result, the length of time between the baseline and follow-up surveys will vary across participants, and we will capture participant experiences at a range of points relative to program entrance. Doing so will provide a more comprehensive picture of participant experiences following enrollment; however, it will also require some analytic adjustments, which we discuss below. The follow-up survey will be translated into Spanish.

Service receipt logs. The goal of the service receipt logs is to track the services that treatment and comparison group participants receive in the impact study. The logs will focus on the essential information required for the study, including the goals of the meeting, the content covered, and the duration of the interaction. The study team will work with impact study colleges to integrate the electronic service receipt logs into their existing case management systems or into the study-created Salesforce system used for enrollment so that coaches can create a record of each interaction with impact study participants. We will determine which platform to use based on which one the college expects to impose the lowest burden. The team will train coaches on either system so that, after locating a learner's record, coaches can efficiently click on multiple-choice response options to complete the log entry. We will also closely monitor the service receipt log data to ensure that the number of meetings logged is consistent with our expectations and will discuss any discrepancies we see with college staff. The study team expects all coaches to complete logs and expects that each log will take two minutes to complete.

College survey. The goal of the college survey is to gather systematic information on SCC4 programs, including the targeted sector and strategy elements being implemented, planned and established partnerships, services offered, strategies to reach the intended population, and early successes and challenges. The survey is intended to collect information about grant implementation that will not be captured through site visits and interviews. The study team will field a 40-minute web survey to all colleges receiving SCC4 funds. We will ask the program director, the main grantee contact, to complete the survey, with the option to designate another contact as the respondent if necessary. One program director per college will take the survey. We will send program directors a live secure link to the survey instrument and send reminder emails over the course of 6 weeks. If they do not complete the survey in that time frame, a study team member will conduct follow-up phone calls to encourage completion.

1. Site visits and interviews

As part of the implementation study, the study team will conduct two rounds of three-day site visits to each of the five colleges (four grantees) participating in the impact study and up to three phone interviews with college staff and an employer partner at each of the grantees not participating in the impact study. When scheduling site visits, the study team will explain the nature of the visits, negotiate a time that is convenient for college staff, share a suggested schedule, and work with college staff to finalize a list of respondents and a site-specific schedule before the visit.

To ensure cooperation, the study team will be flexible in scheduling interviews and activities to accommodate respondents’ particular needs. Although the team will try to arrange in-person interviews with all respondents, sometimes a respondent might be unable to meet. If so, the study team will arrange for a virtual interview at a more convenient time. Site visitors will explain the purpose of the study and receive consent from each respondent before beginning each interview.

We will use well-established strategies to ensure the reliability of interview data collected during site visits and interviews. First, site visitors, all of whom will already have extensive experience with this data collection method in a college setting, will be thoroughly trained on this study, its research questions, and the data collection instruments and goals. They will also be trained on how to consistently probe for additional details to help interpret responses to interview questions. Each site visit team will include a lead site visitor with subject matter expertise in workforce development and community college programming, supported by a junior site visitor who will take detailed notes throughout each interview. Second, this training, the use of consistent topic guides, and a template for summarizing data will ensure that data are collected in a standardized way across sites. Third, a senior team member will review all site visit summaries to ensure they are sufficiently complete, collect consistent data across colleges, and identify areas for potential follow-up. Finally, site visitors will assure all interview respondents that their responses will remain private, reports will never identify respondents by name, and any quotations will be devoid of identifying information.

Semi-structured interview topic guide for college administrators, program directors, program staff, instructors and faculty, and partners. Interview procedures will be the same for college administrators and program directors; instructors and faculty; student services staff; and partners. The study team will work with a liaison at the college to identify key respondents, schedule interviews in advance of the site visit, and make sure there is a quiet and private space to conduct the interview. Each respondent will review a consent form and provide verbal consent before participating in the interview; the consent form will make clear that they do not need to answer every question and that their responses will remain private. Researchers will record and transcribe interviews with respondents' consent. In addition, researchers will take notes in an interview summary sheet.

Semi-structured interview topic guide for SCC4 participants. We will choose participants based on their enrollment in a program affected by SCC4. The study team will work with college staff to develop a purposeful sample of students to participate in interviews. We will seek to interview students who have engaged in SCC4 services enough to provide insights on their experiences and will work to identify a sample of students with different characteristics. Although this will not be a representative sample, we will seek to learn from students with varying perspectives; for example, we will work to speak with students enrolled in different types of SCC4 trainings within each college. The team will ask college liaisons for a list of potential interviewees and their contact information, recruit from among this list, and send interview invitations. The team will work with the liaison before the site visit to ensure there is a quiet and private space to conduct the interviews. The study team will invite interview participants to participate in a 45-minute interview.

Each student will review and sign a consent form before participating in the interview; the form will make clear that they do not need to answer every question and that their participation is voluntary. We will ask interview participants for permission to audio-record the interviews as part of our informed consent procedures and will record and transcribe interviews with respondents' consent. If interview respondents do not want to be recorded, study team members will take handwritten or typed notes during the interview as an alternative form of data collection; we will ask participants if this alternative is acceptable before beginning the interview. Students will receive a $45 gift card for participation and will receive their full compensation even if they opt not to be audio-recorded.

The main type of data collected from the interviews will be qualitative information about staff's experiences and insights implementing the SCC4 grant or, in the case of students, their motivations for participating in SCC4 programs and their experiences while doing so. Thus, no statistical methodology (such as sample stratification) or estimation will be needed in the analysis of the interview data. We will develop a study-specific coding scheme to guide analysis of interview data to identify common themes related to implementation context, planning, partnership development, ongoing implementation, and implementation challenges and solutions.

Analysis of interview data will involve coding and triangulating across data sources. The evaluation team will begin by using a template to write up detailed field notes from in-person and telephone interviews in a structured format. To code the qualitative data for key themes and topics, the team will develop a coding scheme organized according to key research questions and topics and guided by the conceptual framework. Each segment of coded data will be assigned a negative or positive flag to identify barriers to and facilitators of implementation. This process will reduce the data into a manageable number of topics and themes for analysis (Ritchie and Spencer 2002). The evaluation team will then code the data using qualitative analysis software, such as NVivo or ATLAS.ti. To ensure reliability across team staff, all coders will code an initial set of documents and compare codes to identify and resolve discrepancies. Because the implementation study is examining grant implementation, study findings will apply only to the SCC4 grantees and will not be more broadly generalizable.

2. Statistical methodology, estimation, and degree of accuracy

a. Impact study

To estimate the effect of student success coaching and enhanced supports in our experiment, we will use a treatment effects framework. Table B.2 presents the impact study outcomes we will examine and the corresponding data sources for each data type. First, we will use descriptive methods (including simple frequencies, cross-tabulations, and means, when appropriate) to provide contextual information about the characteristics of participants in the impact study using data from the participant baseline survey. We will also calculate descriptive statistics on baseline characteristics available from grantee administrative data to assess differences between the population of program enrollees and the full set of SCC4 participants across all grantees. Because we will use random assignment, there should be no systematic observable or unobservable differences between research groups across key indicators, including sex, race/ethnicity, education level, and employment history, except for the services offered after random assignment. However, differences between the treatment and comparison groups could still emerge by chance. Therefore, we will use the baseline survey data to compare the treatment and comparison groups and identify any differences between them. We will use statistical tests, such as t-tests or chi-squared tests, to assess whether statistically significant differences exist between the treatment and comparison groups on these baseline measures. If we find meaningful differences between the groups, we will consider developing analytical weights to use in impact analyses to ensure that the groups are similar.
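As an illustration, the sketch below shows how such baseline balance checks could be implemented; the data frame and column names (treatment, age, race_eth) are hypothetical placeholders for the baseline analysis file.

```python
# Illustrative baseline equivalence checks; all column names are hypothetical.
import pandas as pd
from scipy import stats

def baseline_balance(df: pd.DataFrame) -> dict:
    """Compare treatment and comparison groups on selected baseline measures."""
    treat = df[df["treatment"] == 1]
    comp = df[df["treatment"] == 0]

    # t-test for a continuous baseline characteristic (e.g., age)
    _, p_age = stats.ttest_ind(treat["age"], comp["age"], nan_policy="omit")

    # chi-squared test for a categorical characteristic (e.g., race/ethnicity)
    _, p_race, _, _ = stats.chi2_contingency(pd.crosstab(df["treatment"], df["race_eth"]))

    return {"age_p_value": p_age, "race_eth_p_value": p_race}
```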

Table B.2. Impact study outcomes and data sources

Variable | Domain | Data source

Primary outcomes

Completed SCC4 program or on track to complete | Program completion and credentialing | Follow-up survey, WIPS data
Received credential | Program completion and credentialing | NSC, community college data
Enrolled in additional postsecondary education | Continued education | NSC
Employment in 4th quarter following SCC4 enrollment | Employment and earnings | NDNH
Earnings in 4th quarter following SCC4 enrollment | Employment and earnings | NDNH

Secondary outcomes

Number of SCC4 courses completed | SCC4 service receipt | Community college data
Completed SCC4 program | Program completion and credentialing | Follow-up survey, WIPS data
Enrolled in an associate's or bachelor's degree program | Continued education | NSC
Enrolled in a bachelor's degree program | Continued education | NSC
Quarterly employment and earnings | Employment and earnings | NDNH
Earnings in the second year following enrollment | Employment and earnings | NDNH
Overall job satisfaction | Job quality and satisfaction | Follow-up survey
Earnings above the poverty line in the second year following enrollment | Employment and earnings | NDNH
Working in training industry | Job quality and satisfaction | Follow-up survey
Hourly wage | Job quality and satisfaction | Follow-up survey
Benefit receipt through job | Job quality and satisfaction | Follow-up survey
Advancement opportunities through job | Job quality and satisfaction | Follow-up survey
Overall job quality index | Job quality and satisfaction | Follow-up survey
Receipt of government benefits | Job quality and satisfaction | Follow-up survey



To estimate impacts on services received, the study team will regress individual outcomes on whether the individual was randomly selected to receive the student success coaching and enhanced supports offered through SCC4. To improve the precision of impact estimates, we will include covariates, such as baseline demographics from the participant baseline survey and pre-study employment and earnings. Our primary estimate will be an intent-to-treat (ITT) estimate that shows the impact of being assigned to the treatment group on outcomes. Specifically, we will estimate the following equation:

(1)   y_{igt} = α + β T_{igt} + γ′X_{igt} + θ_g + δ_t + ε_{igt}

where y_{igt} is the outcome (for example, earnings or postsecondary school enrollment) for individual i participating in an SCC4 grantee program g who enrolled at time t; T_{igt} is an indicator of whether individual i was assigned to treatment; X_{igt} is a set of baseline covariates at the individual or grantee level measured using the participant baseline survey; θ_g are grantee program-specific fixed effects; δ_t are time-specific fixed effects; and ε_{igt} is a participant-level error term. In this equation, β is the intention-to-treat parameter: the impact of having access to student success coaching and enhanced supports. We will estimate this equation using ordinary least squares for continuous outcomes and linear probability models for binary outcomes. The primary impact analysis will pool impacts across all grantees.
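A minimal sketch of estimating equation (1), assuming a statsmodels-based workflow; the variable names (outcome, treatment, the baseline covariates, and the grantee and enrollment-cohort identifiers) are hypothetical placeholders for the analysis file.

```python
# Sketch of the ITT regression in equation (1); variable names are hypothetical.
import statsmodels.formula.api as smf

def estimate_itt(df):
    """OLS for continuous outcomes (or a linear probability model for binary
    outcomes) with grantee and enrollment-cohort fixed effects."""
    model = smf.ols(
        "outcome ~ treatment + age + female + employed_at_baseline"
        " + C(grantee) + C(enroll_quarter)",
        data=df,
    )
    results = model.fit(cov_type="HC1")  # heteroskedasticity-robust standard errors
    return results.params["treatment"], results.bse["treatment"]
```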

To estimate impacts on binary outcome variables that are measured over the full follow-up period, we will use a Cox proportional hazards model. This will allow us to consider the full follow-up period for study participants with varying amounts of data available. For participants who enroll in the study in fall 2025, we will have at least two years of outcomes data; in contrast, for participants who enroll in the study in January 2027, we will have up to only a year of outcomes data. Constructing a single homogeneous measure across all participants would require either limiting the data to the shortest follow-up period available for all participants, thereby losing significant data, or creating a single variable that pools varying follow-up periods, which would be difficult to interpret. To address this issue, we will estimate the treatment effect for these variables using a Cox proportional hazards regression model. This model, a type of survival analysis, will allow us to use quarterly data to estimate the probability of a positive outcome in a given quarter, conditional on not having previously had a positive outcome. For example, we will estimate the probability that a participant receives a credential in quarter 4, conditional on not having previously received a credential. These regressions will control for the same covariates as regressions of outcomes estimated at a single point in time, as well as calendar quarter fixed effects. We will present results as the regression-adjusted difference in the probability of a positive outcome in a fixed period following study enrollment. Across outcomes, we consider an impact to be substantively meaningful if it represents an effect size of at least .1. This represents an approximately five percentage point increase in employment and an approximately $434 increase in quarterly earnings.
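The sketch below illustrates this type of survival model, assuming the lifelines package and hypothetical column names for the elapsed quarters to credential receipt and the event indicator.

```python
# Illustrative Cox proportional hazards estimation for a time-to-event outcome
# (e.g., quarters until credential receipt); column names are hypothetical.
import pandas as pd
from lifelines import CoxPHFitter

def estimate_cox(df: pd.DataFrame) -> CoxPHFitter:
    """Fit a Cox model where quarters_to_credential is the number of quarters
    until credential receipt (or censoring) and received_credential indicates
    whether the event was observed during follow-up."""
    covariates = ["treatment", "age", "female", "employed_at_baseline"]
    cph = CoxPHFitter()
    cph.fit(
        df[covariates + ["quarters_to_credential", "received_credential"]],
        duration_col="quarters_to_credential",
        event_col="received_credential",
    )
    cph.print_summary()  # hazard ratios, confidence intervals, and p-values
    return cph
```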

We will also include an additional regression analysis estimating the impact of receiving student success coaching and enhanced supports on individuals who received the treatment as a result of the study (local average treatment effect). To do so, we will use an instrumental variables framework in which we will consider treatment assignment to be an instrument for service receipt. We will use the service receipt logs to create a measure of service receipt. This variable will serve as the outcome variable in the first stage of the analysis. For example, we will run a first-stage regression of the number of times an individual met with a coach on treatment assignment. Next, we will run a second-stage regression of outcomes on the number of times an individual met with a coach to ascertain a causal impact of the returns to each meeting with a coach.
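The local average treatment effect analysis described above could be sketched as a two-stage regression, with treatment assignment instrumenting for the number of coach meetings. The variable names below are hypothetical, and a dedicated instrumental variables estimator would be used in practice to obtain correct standard errors; the sketch shows the estimation logic only.

```python
# Manual two-stage least squares sketch of the LATE analysis; variable names
# are hypothetical. Standard errors from this manual approach are not valid,
# so production analysis would use a dedicated IV estimator.
import statsmodels.formula.api as smf

def estimate_late(df):
    # First stage: number of coach meetings regressed on random treatment assignment
    first = smf.ols("coach_meetings ~ treatment + age + female", data=df).fit()
    df = df.assign(predicted_meetings=first.fittedvalues)

    # Second stage: outcome regressed on predicted coach meetings (same covariates)
    second = smf.ols("outcome ~ predicted_meetings + age + female", data=df).fit()
    return second.params["predicted_meetings"]
```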

To understand the circumstances under which the treatment may be more or less effective, we will also conduct subgroup analyses. First, we will assess whether the treatment is more effective for individuals with certain demographic characteristics. Next, we will assess whether treatment impact is different among individuals with different service needs, based on the services in which individuals were interested, taken from the participant baseline survey. Finally, we will estimate impacts by grantee, although we anticipate these estimates will be imprecise due to limited power. To estimate impacts by subgroup, we will run regression analyses with interactions between treatment and individual characteristics.

We will analyze participant follow-up survey data to estimate impacts of the randomly assigned SCC4 program components on services received, job characteristics, and overall well-being. To assess the possibility that treatment and comparison group members could respond to the participant follow-up survey at different rates, which in turn might affect impact estimates, we will compare response rates by treatment group as well as key demographic characteristics among the respondents in each research group. We will use a regression model and hypothesis tests of joint significance, such as F-tests, to assess whether differences in response rates and characteristics are likely to reflect systematic differences between respondent groups. To address low response rates, we will generate nonresponse weights for analyses using the participant follow-up survey, based on individual characteristics from the baseline survey. Because of the high anticipated nonresponse to the follow-up survey, we will treat outcomes from the follow-up survey as exploratory. We will also adjust for the fact that the follow-up survey will be fielded at different times relative to study enrollment for different participants. For outcomes that are only applicable to participants with a longer follow-up period, we will restrict analyses to the subsample of participants with sufficient time between enrollment and the follow-up survey. For analyses of outcomes applicable across all times since enrollment, we will control for time since enrollment.
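One way to construct such nonresponse weights is a response-propensity model estimated on baseline characteristics, as in the sketch below; the variable names are hypothetical placeholders.

```python
# Sketch of nonresponse weights for the follow-up survey based on a
# response-propensity model; variable names are hypothetical.
import statsmodels.formula.api as smf

def nonresponse_weights(df):
    """Model the probability of responding to the follow-up survey and weight
    respondents by the inverse of their predicted response propensity."""
    propensity_model = smf.logit(
        "responded ~ age + female + employed_at_baseline + C(grantee)", data=df
    ).fit(disp=False)
    df = df.assign(response_propensity=propensity_model.predict(df))

    respondents = df[df["responded"] == 1].copy()
    respondents["nr_weight"] = 1.0 / respondents["response_propensity"]
    # Normalize the weights to sum to the number of respondents
    respondents["nr_weight"] *= len(respondents) / respondents["nr_weight"].sum()
    return respondents
```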

Table B.3 below presents the evaluation's estimated minimum detectable impacts (MDIs). To avoid multiple hypothesis testing issues, we will focus on four primary outcomes for the confirmatory analyses. The first three are administrative data outcomes: program completion, employment in the fourth quarter following SCC4 enrollment, and earnings in the fourth quarter following SCC4 enrollment. The administrative outcomes will be collected for all participants; however, we assume that only 90 percent of participants will provide an SSN, which is required for collecting administrative outcomes data. In contrast, the fourth outcome, job satisfaction, relies on follow-up survey data and will have a substantially smaller sample size due to the survey's assumed 30 percent response rate. We find that the study will have MDIs of 3.3 percentage points for program completion, 2.8 percentage points for employment rate, $297 for quarterly earnings, and 5.8 percentage points for job satisfaction.

Table B.3. MDIs for selected outcomes under two sample size scenarios

Outcome | MDI

Administrative data outcomes (analysis sample size: 5,400)
Program completion | 3.3 percentage points
Employment in the 4th quarter following SCC4 enrollment | 2.8 percentage points
Earnings in the 4th quarter following SCC4 enrollment | $297

Survey outcomes (analysis sample size: 1,800)
Job satisfaction | 5.8 percentage points

Note: We estimate MDIs based on two-sided hypothesis testing, 80 percent power, and a .05 cutoff for statistical significance. We assume that 50 percent of the sample will be assigned to treatment, and therefore the comparison sample will be equal in size to the treatment group. We also assume that 20 percent of the variation in outcomes is explained by pre-program covariates. We use the following assumptions to estimate the baseline standard deviation for each outcome: (1) the program completion rate for the comparison group is 61 percent, based on the share of postsecondary subbaccalaureate students obtaining a certificate (ED, NCES, 2025); (2) the employment rate for the comparison group is 79 percent (based on the Workforce Investment Act Gold Standard Evaluation, using the pooled sample of adults and dislocated workers; Fortson et al., 2017); (3) the standard deviation of quarterly earnings in the comparison group is $4,349 (based on individuals in the Workforce Investment Act Gold Standard Evaluation's core services group in the 9th and 10th quarters after random assignment); and (4) the job satisfaction rate of the comparison group is 40 percent, based on the rate of associate's degree holders finding "good jobs" as defined by the Georgetown Center on Education and the Workforce (Fain 2017).

MDI = minimum detectable impact
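As a check on these figures, the short sketch below reproduces the MDIs in Table B.3 from the assumptions stated in the note; the standard deviations are derived from the comparison group rates listed there.

```python
# Reproduces the Table B.3 MDIs under the stated assumptions: two-sided
# alpha = .05, 80 percent power, a 50/50 treatment/comparison split, and
# covariates explaining 20 percent of outcome variance.
import math
from scipy.stats import norm

def mdi(sd, n_total, alpha=0.05, power=0.80, r_squared=0.20):
    """Minimum detectable impact for a two-arm design with equal-sized groups."""
    n_per_arm = n_total / 2
    multiplier = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # approximately 2.80
    return multiplier * math.sqrt(1 - r_squared) * sd * math.sqrt(2 / n_per_arm)

print(mdi(math.sqrt(0.61 * 0.39), 5400))  # program completion: ~0.033 (3.3 p.p.)
print(mdi(math.sqrt(0.79 * 0.21), 5400))  # employment, 4th quarter: ~0.028 (2.8 p.p.)
print(mdi(4349, 5400))                    # quarterly earnings: ~$297
print(mdi(math.sqrt(0.40 * 0.60), 1800))  # job satisfaction: ~0.058 (5.8 p.p.)
```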

b. Outcomes study

The outcomes study will provide important evidence on the extent to which colleges were able to accomplish their goals of providing participants with valued credentials and matching them with high-quality jobs. It will primarily rely on descriptive analyses of administrative data to describe the program progression (WIPS data), educational outcomes (NSC data), and employment outcomes (NDNH data) of SCC4 participants across all 16 grantees. We will use basic summary statistics, such as means, medians, and percentiles, to describe each outcome. For example, we will present the share of SCC4 participants employed in each quarter following program enrollment. We will also present outcomes for subgroups of participants by participant and grantee characteristics, using WIPS data to identify participant characteristics and results from the college survey to describe grantee characteristics. Where relevant, we will provide benchmarks for outcomes from other populations to contextualize results and demonstrate how outcomes for SCC4 participants compare to those of other populations.
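The descriptive tabulations could take a form like the sketch below; the data frame and column names (employed, quarter_since_enrollment, quarterly_earnings, sector) are hypothetical placeholders for the merged administrative analysis file.

```python
# Illustrative descriptive tabulations for the outcomes study; column names
# are hypothetical.
import pandas as pd

def employment_by_quarter(df: pd.DataFrame) -> pd.Series:
    """Share of SCC4 participants employed in each quarter after enrollment."""
    return df.groupby("quarter_since_enrollment")["employed"].mean()

def earnings_by_sector(df: pd.DataFrame) -> pd.DataFrame:
    """Median and mean quarterly earnings by grantee sector (from the college survey)."""
    return df.groupby("sector")["quarterly_earnings"].agg(["median", "mean"])
```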

c. Implementation study

The implementation study will use qualitative data sources to describe how grantees implemented their SCC4 programs. To describe the program plans, services offered, and regional partnerships, we will first analyze data from the college survey. No complex statistical methodology (such as sample stratification) or estimation will be necessary to analyze data from the college survey. We will analyze the data using simple descriptive measures to generate aggregated counts of responses. We will also cross-tabulate survey responses to explore variation based on key grant features, such as single-entity grantees versus consortium grantees and selected sectors. We can also cross-tabulate responses to explore variation based on selected strategy options, as described in the FOA. For consortia, we will examine responses by grantee to understand responses among member colleges. We will code responses to open-ended questions to identify key themes across colleges and enable the study team to describe program and facility characteristics and experiences.

The college survey will be fielded prior to site visits and interviews. Through site visits and interviews, we will validate survey responses by asking respondents to confirm if their survey responses are still accurate and will use site visits and interviews to better understand and describe responses to the college survey.

To provide more in-depth descriptions of treatment contrast, how colleges implemented their SCC4 programs, what worked well, and what challenges they faced, we also will analyze findings from site visits and interviews. The data collected from interview responses will be qualitative information about the perspectives, insights, and experiences of college administrators, program directors, instructors and faculty, program and college staff, participants, and partners at SCC4 grantees. The study team will prepare preliminary findings memos after each round of data collection to document emerging themes and insights. No statistical methods of estimation will be needed in the analysis of the interview data. We will analyze the qualitative data collected using qualitative data analysis software, such as NVivo. This coding will facilitate identification of key themes across colleges and enable the study team to describe SCC4 implementation.

3. Unusual problems requiring specialized sampling procedures

We will not need to use specialized sampling procedures for administering the baseline or college surveys. The study team will attempt to collect baseline survey data from every impact study sample member at the colleges that will be part of the impact study. The team will also try to collect follow-up survey data from every impact study sample member enrolled in the study at baseline. Additionally, the team will attempt to collect survey data from all grantees awarded grants in 2024 and their respective partners (for example, employers, workforce system partners, and employer agencies). Similarly, the study team will plan to have coaches complete service receipt logs for interactions with each impact study participant.

As mentioned previously, the interview data will be used to describe the SCC4 grants, including the perspectives of community college staff and faculty, partner program managers and staff, select employer partners, and students. The study team plans to use purposive sampling methods. For the grantee and partner administrators and staff interviews, team members will develop draft site visit agendas that identify potential respondents to ensure consistency across colleges. We will then meet with each college and grantee point of contact to discuss the agenda and identify the specific respondents, such as administrators, college staff and faculty, and partners, for each interview. The SCC4 grants require partnerships with workforce development boards and employers. We will work to include both groups in all site visits. Although the study team will aim to include all administrators and staff who were involved with the program in these discussions, the interview discussions might not be representative of all administrators and staff. For the student interviews, the study team will ask grantee staff to recommend and help invite students who were engaged in program services and can provide a range of perspectives; however, the final sample of invitees might not represent the participant population present at each college.

4. Periodic data collection cycles to reduce burden

The data collection survey instruments for the impact and implementation studies will be administered only once for any individual respondent, with the exception of the contact information update request. For interviews, the study team will ensure that second round site visit interviews will address different topics than first round interviews and the college survey. Similarly, the interview guides will be crosswalked against the college survey to minimize duplicative data collection. Prior to interviews, study team members will tailor interview guides with available information, drawn from grant applications and quarterly progress reports, as well as college survey responses. To further minimize respondent burden, the study team will review pertinent data available from SCC4 grant applications, grantee staffing and implementation plans, and any other reporting information to reduce such burden whenever possible. Thus, the study team will be able to focus the discussions with respondents on topics for which information is not otherwise available.

B.3. Methods to maximize response rates and minimize nonresponse

As their grant agreements indicate, SCC4 grantees included in the evaluation are expected to participate in the data collection activities as a condition of their grant awards. The study team will employ a variety of methods to maximize response rates and deal with nonresponse. This section describes these methods for survey and qualitative data collection efforts.

Across all aspects of survey data collection, the study team will use certain survey methods and best practices to encourage high response rates while minimizing burden and nonresponse, including the following:

  • Web administration. We anticipate most respondents will prefer to complete the survey online. This choice allows respondents to complete it on their own schedule and pace, as well as over multiple sessions. The web survey system the data collection team will use also supports mobile browsers, such as tablets or cellular phones.

  • Technology to reduce burden. To reduce burden, the surveys will employ drop-down response categories so respondents can quickly select from a list, dynamic questions, automated skip patterns so respondents see only those questions that apply to them (including those based on answers provided previously in the survey), and logical rules for responses so their answers are restricted to those intended by the question. These features should minimize participants’ data entry burden and facilitate high-quality responses.

  • Use-tested questionnaires. Although the collection of survey data has been tailored to the specific circumstances of this evaluation, it is based closely on prior surveys. Where possible, we used prior instruments that were extensively tested using cognitive interviews or debrief sessions with populations similar to those in this study.

Participant baseline survey and consent form. Program staff will administer the baseline survey to all eligible individuals at the colleges selected for the impact study via a web-based system as they go through an intake process. Individuals will also have the option of receiving a link to complete the consent form and participant baseline survey on their own device during intake. Completion of the survey will be a condition of random assignment. Therefore, participants who do not complete the survey will not be randomly assigned. The methods to maximize response for the intake forms will be based on approaches used successfully in many other RCTs, including ensuring that staff explain the study clearly to study participants and staff, designing forms that are easy to understand and complete, and offering a $10 incentive for baseline survey completion. Staff will be trained thoroughly on how to address study participants’ questions about the forms and be equipped with a Frequently Asked Questions (FAQ) guide.

Participant follow-up survey and contact information update requests. The study team will field the participant follow-up survey to all participants enrolled in the impact study in fall 2027, six months to two years after intake. This timing will allow us to capture the experiences of impact study participants at a range of points relative to enrollment in the program; it is also a function of the constraints on the timing of this study. Where relevant, we will control for time relative to program entrance or limit the sample to the relevant population based on timing. The study team expects a 30 percent response rate for the follow-up survey and a 50 percent response rate for the contact information update requests. The 30 percent response rate is based on prior studies conducted with similar populations that used similar lower-touch outreach strategies to increase response rates. The study team assumes a higher response rate for the contact update requests because they will be sent closer to completion of the baseline survey and require minimal participant effort to complete. However, because the data from these contact information update requests will solely be used to increase response rates for the follow-up survey, the study team will not put intensive effort into ensuring high response rates for them. To account for potential selection into who responds to the survey, we will develop nonresponse weights and use them in analysis of the follow-up survey, as described in section B.2.

Several aspects of the data collection design, as well as additional efforts undertaken by the team administering the follow-up survey, will help ensure success in gaining participants' cooperation and meeting our target response rate. The study team will maintain ongoing communication with participants through text messages sent every six months from baseline with requests to confirm or update their contact information; the team will send these texts up to four times. The study team will provide a $5 incentive for responses to the second and fourth contact information update requests. For the follow-up survey, the team will contact respondents by email and text with a link to complete a mobile-optimized online instrument, and respondents will also receive email and text reminders. The follow-up survey is designed to take 15 minutes to complete and will be administered as a web survey. Respondents will be offered a $30 incentive (in the form of a gift card) to complete the survey. Previous studies have demonstrated that providing incentives can help increase response rates in full-scale data collection efforts, reduce nonresponse bias, and improve population representativeness.6

Service receipt logs. Coaches will complete service receipt logs to create a record of their contact with SCC4 impact study participants. The study team expects all coaches to complete logs; the target is an 85 percent response rate. The team will work with grantees to set up this log in existing case management systems or in Salesforce, which we will use for study intake. Using such a system will support coaches in finding learners’ records efficiently. The team will also train coaches on navigating the log in the chosen interface to complete the log entry efficiently. Last, the team will focus on the essential information required for the study. To further maximize coaches’ compliance with completing the logs, the study team will monitor log entries, especially during the early months of intake, using real-time access to the data. We will follow up with coaches if we notice that completion levels are lower than expected. Lower completion levels could reflect either that coaches are seeing fewer students than expected or are not filling out logs. In either case, we will discuss what challenges the coaches are facing and strategize ways of adjusting.

College survey. The study team expects to achieve a response rate of 95 percent for the survey of SCC4 project directors. The surveys will be designed to be as brief as possible, with clear, easy-to-answer questions (mostly closed-ended, with a few open-ended questions). We will email an official advance letter with log-in information to grantee sample members to legitimize the study and encourage participation. The research team will monitor survey completion and send a follow-up email two weeks after the advance letter to those colleges that have not yet completed the survey.

Semi-structured interview topic guide for college administrators, program directors, program and college staff, instructors and faculty, and partners. The study team expects to achieve a response rate of 90 percent for interviews with program and partner administrators and staff. To ensure full cooperation, the team will be flexible in scheduling interviews and activities to accommodate respondents’ particular needs. Although the study team will try to arrange interviews that accommodate respondents’ scheduling needs, there might be instances when a respondent is unable to meet while the team is on site; if so, a member of the study team will request to meet with the respondent’s designee or schedule a follow-up call at a more convenient time. Using these approaches, the study team anticipates a 90 percent response rate for administrator interviews—they have achieved a response rate of 100 percent on similar qualitative data collection efforts, such as those for the America’s Promise Job-Driven Grant Program Evaluation and the Navigator Evidence-Building Portfolio.

Semi-structured interview topic guide for SCC4 participants. The study team expects to achieve a response rate of 90 percent for interviews with SCC4 participants. To encourage participation in the interviews, the study team will use methods that have been successful for numerous other Mathematica studies, including enlisting program staff in outreach to participants, providing easy-to-understand outreach materials, strategically scheduling interviews at convenient times and locations, and offering a $45 incentive to interview participants.

Outreach materials will be designed to help colleges recruit participants for interviews. These materials will (1) describe the study, its purpose, and how the data collected will be used; (2) highlight DOL as the study sponsor; (3) explain the voluntary nature of participation in interviews; and (4) provide a phone number and email address for questions respondents might have. Outreach materials will be clear and succinct, and convey the importance of the data collection.

Methods to ensure data reliability. We will use several well-proven strategies to ensure the reliability of data collected from surveys and interviews.

  • Baseline, college, and follow-up surveys. We will use the same surveys across all study colleges to ensure consistency in the collected data. The study team has reviewed the forms extensively and thoroughly tested them in a pretest involving up to nine individuals who will not be part of the study sample. To ensure that we capture complete and accurate data, the web-based platform will flag missing data or data outside of a valid range. Once 100 surveys are completed, we will conduct another data quality check to ensure that the survey is functioning as expected. At the analysis stage, the study team will create binary variable flags for all items with missing values and include those flags in the analyses (see the sketch after this list). We will also train staff on project security, including using safeguards to protect personally identifiable information while collecting and storing information on sample members. In addition, each participating college will have access to the web-based system for entering the information from the baseline survey.

  • Interview topic guides. We will use several well-proven strategies to ensure the reliability of the data from interviews collected during the site visits and phone interviews. First, we will thoroughly train qualitative data collectors, all of whom already have extensive experience with this data collection method, in aspects specific to this study, including how to probe for additional details to help interpret responses to interview questions. Second, to ensure that data collection is standardized across colleges, the study team will conduct periodic reliability checks on the use of the topic guides. These checks will involve reviewing interview notes and recordings. Finally, staff will assure all interview respondents that their responses will remain anonymous; reports will never identify respondents by name, and any quotations will be devoid of identifying information, including college name.
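
As a companion to the first bullet above, the following is a minimal sketch in Python (pandas) of the analysis-stage range check and binary missing-value flags; the data file, variable names, and valid ranges are illustrative assumptions rather than the study’s actual specifications.

import numpy as np
import pandas as pd

# Hypothetical analytic extract of baseline survey responses.
survey = pd.read_csv("baseline_survey.csv")

# Treat out-of-range values as missing (for example, weekly work hours outside 0-80).
survey.loc[~survey["hours_worked"].between(0, 80), "hours_worked"] = np.nan

# Create a binary flag for each item with any missing values and keep the flags in the
# analytic file so they can be included in the analyses alongside the items themselves.
for col in ["hours_worked", "age", "prior_credits"]:
    if survey[col].isna().any():
        survey[f"{col}_missing"] = survey[col].isna().astype(int)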

B.4. Tests of procedures or methods to be undertaken

To inform data collection activities for which clearance is being requested in this submission, the study team pretested the participant baseline survey, participant follow-up survey, and college survey. The purpose of the pretests was to confirm burden and identify questions that were unclear to study respondents or for which respondents might have difficulty providing the requested information. For all the instruments that were pretested, average respondent burden differed from what we expected and was adjusted accordingly. Across all instruments, pretest findings were used to revise and improve the wording of specific instructions and items. A summary of procedures and findings for each respondent type is provided below:

Participant baseline survey and consent form and participant follow-up survey. The participant baseline and follow-up surveys were each tested with three individuals who are enrolled in or recently graduated from a community college. Only three pretests were conducted per survey because many questions in the surveys were drawn from existing OMB-approved survey instruments, and the first round of pretesting yielded minimal feedback and revisions from participants. Had the study team found major issues with the survey questions or format, we would have conducted additional pretests.

All pretests were conducted virtually on Webex. During the pretest, participants completed the online survey via a link shared during the meeting. After the online survey was completed, the study team debriefed with the participant to review any issues they may have encountered and gathered additional feedback on the survey. Interviewers followed a protocol to probe certain items to ensure they were phrased clearly and collected accurate information. Participants averaged 14 minutes on the baseline survey, exceeding the 10-minute target. Therefore, the burden was updated to 15 minutes. The average time it took participants to complete the follow-up survey was 10 minutes, within the 20-minute target. The burden for the follow-up survey was updated to 15 minutes, and the incentive adjusted to $30. Updated baseline and follow-up instruments are included as supporting attachments.

The study team revised the baseline and follow-up surveys in response to the pretest feedback. Revisions included simplifying language in the consent form of the baseline survey and clarifying time frames and probes in the employment section of both surveys. The study team also adjusted the burden and incentive amounts for the participant baseline and follow-up surveys to reflect the pretest results. The updated instruments are included in this package.

College survey. During the pretest period, the study team tested the college survey with three respondents from nonparticipating community colleges: one from a college that received a grant in a prior round of SCC and two from colleges that have received grants similar to SCC4 in the past. The study team conducted the pretest in two phases.

In the first phase, the study team facilitated two meetings with administrators from the workforce divisions of two nonparticipating community colleges. During the meetings, the study team presented instructions, questions, and response options from the college survey. For each item, the study team asked the respondents to read the item aloud and then respond to a series of prompts probing the clarity of the item. The study team revised items according to the feedback given during these meetings. Revisions included (1) modifying response options to describe stakeholders in terms that resonate with respondents and to add job functions, (2) expanding response options for select questions to include employers, unions, and other community organizations, and (3) more clearly differentiating the activities listed in response options for questions about student recruitment. Respondents also provided useful feedback on formatting and structure; in response, the study team amended some response types to allow for short answers. The study team then sent a hardcopy version of the full survey via email to each of the administrators who participated in the meetings and asked them to complete it, scan it, and return it by email.

In the second phase of pretesting, the study team first sent a hardcopy version of the survey via email to administrators from the workforce division of a third nonparticipating community college. The administrators completed the survey and returned it via email and then participated in a virtual debrief meeting. Their feedback was used to further clarify certain questions and response options and to eliminate questions that respondents found overly burdensome.

On average, across both phases, administrators reported that it took 40 minutes to complete the survey, longer than the original 30-minute target. Therefore, some questions were removed from the survey and the burden estimate was updated to 40 minutes. The updated college survey is included as a supporting document. The study team did not pretest the site visit and interview topic guides because they are closely modeled on guides previously approved by OMB for similar studies.

After finalizing instrument revisions, the study team will program the survey instruments for administration via computer-assisted web interviewing methods. Before deployment, the team will test the instruments to ensure they function as designed, including extensive manual testing of skip patterns, fills, and other logic. To reduce data entry errors, the team will check numerical entries against an acceptable range and, where appropriate, present prompts for valid but unlikely values. This testing will increase the accuracy of the data collected while minimizing respondent burden.
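
The following is a minimal sketch in Python of the kind of hard range check and soft prompt for valid but unlikely values described above; the item, thresholds, and messages are illustrative assumptions, and the actual survey will implement comparable logic within the web survey platform.

def check_hourly_wage(value):
    """Return (accept, message) for a numeric hourly wage entry."""
    if not 0 <= value <= 500:
        # Hard check: reject entries outside the acceptable range.
        return False, "Please enter an hourly wage between $0 and $500."
    if value > 100:
        # Soft check: accept a valid but unlikely value after asking the respondent to confirm.
        return True, "This wage is higher than most responses. Please confirm it is correct."
    return True, ""

print(check_hourly_wage(35))   # accepted with no message
print(check_hourly_wage(150))  # accepted with a confirmation prompt
print(check_hourly_wage(900))  # rejected with an error message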

B.5. Individuals consulted on statistical aspects of design and on collecting and/or analyzing data

The study team will convene a technical working group (TWG) to provide substantive feedback throughout the project period, especially regarding the impact evaluation. The TWG members have expertise in research methodology, as well as in programs and populations similar to those served in the SCC4 grant programs. Table B.4 lists the TWG members. The study team also will convene a student advisory group of current and former students in SCC-funded programs to ensure the research design, instruments, and findings are grounded in the experiences of people with direct experience of similar career pathways programs and strategies. Table B.5 lists the people who will oversee data collection and analysis for the SCC4 evaluation and identifies who is responsible for methodological components.

Table B.4. TWG Members

Researchers with expertise in evaluation design

  • Dr. Matthew Giani, Research Associate Professor, Department of Sociology, and Assistant Professor of Practice, Department of Educational Leadership and Policy, University of Texas-Austin. Qualifications: randomized controlled trials (RCTs) in community colleges; experimental and quasi-experimental research methods.

  • Dr. LaShawn Richburg-Hayes, Former Vice President for Education Studies, Westat. Qualifications: RCTs in community colleges; experimental and quasi-experimental research methods; rigorous studies of programs to improve academic outcomes.

Researchers with expertise in community colleges and workforce programming

  • Dr. Cecilia Rios-Aguilar, Professor of Education and Associate Dean, Graduate School of Education and Information Studies, University of California-Los Angeles. Qualifications: educational and occupational trajectories of community college students.

  • Dr. Peter Riley Bahr, Associate Professor, Marsal Family School of Education, University of Michigan. Qualifications: educational and economic opportunities and outcomes among socioeconomically disadvantaged students and older/adult-age students.

Community college and workforce program leaders and practitioners

  • Mr. Mark Potter, Provost and Chief Academic Officer, City Colleges of Chicago. Qualifications: community college management, with a strong focus on workforce preparation and industry partnerships.

Table B.5. People who will oversee data collection and analysis for the SCC4 evaluation

Mathematica, P.O. Box 2393, Princeton, NJ 08543-2393, (609) 799-3535

  • Ms. Jeanne Bellotti, Project director: responsible for overseeing all project components and overall study design. (609) 275-2243

  • Dr. Ann Person, Co-principal investigator. (510) 285-4608

  • Ms. Brittany English, Deputy project director. (202) 484-3094

  • Dr. Ariella Spitzer, Senior researcher. Impact study task lead: responsible for impact study design, modeling, and analysis. (617) 588-6744

  • Dr. Lisbeth Goble, Principal survey researcher. Survey director: responsible for all facets of survey data collection. (312) 994-1016

Community College Research Center, Box 174, 525 West 120th St., New York, NY 10027, (212) 678-3091

  • Dr. Thomas Brock, Co-principal investigator: oversees the implementation study. (212) 678-3091

  • Dr. Maria Cormier, Senior research associate. Implementation study task lead: responsible for overseeing site visits and interviews and associated analysis, including developing the analysis and coding framework. (212) 678-3091




1 We will submit an institutional review board application with draft instruments while the OMB package is being finalized, then update our submission with OMB-approved instruments once available.

2 The Funding Opportunity Announcement describes participants eligible to receive training as “students enrolled in career pathways programs that are being enhanced using SCC4 Program Grant funds.” This might include “new entrants to the workforce and those seeking their first job, dislocated workers who have lost employment, and those currently working but seeking additional skills.”

U.S. Department of Labor. Notice of Availability of Funds and Funding Opportunity Announcement For: Strengthening Community Colleges Training Grants. https://www.dol.gov/sites/dolgov/files/ETA/grants/FOA-ETA-23-15_.pdf. Accessed on April 30, 2025.

3 Sax, L.J., S. Gilmartin, J.J. Lee, and L.S. Hagedorn. “Using Web Surveys to Reach Community College Students: An Analysis of Response Rates and Response Bias.” Community College Journal of Research and Practice, vol. 32, 2008, pp. 712–729. https://philpapers.org/rec/SAXUWS. Accessed on May 9, 2025.

4 Community College Survey for Student Engagement. “Community College Survey for Student Engagement, 2023 Appendix Table 7, 2023 CCSSE Cohort Survey Response Rates.” 2023. https://www.ccsse.org/members/reports/2023/appendix/table7.pdf. Accessed on May 9, 2025.

5 The Funding Opportunity Announcement stipulates that grantees selected for an evaluation must agree to participate in evaluation procedures as specified by the evaluation contract under the direction of DOL as a condition of award. Thus, we expect a high response rate for data collection with colleges.

U.S. Department of Labor. Notice of Availability of Funds and Funding Opportunity Announcement For: Strengthening Community Colleges Training Grants. https://www.dol.gov/sites/dolgov/files/ETA/grants/FOA-ETA-23-15_.pdf. Accessed on April 30, 2025.

6 Singer, E., and C. Ye. “The Use and Effects of Incentives in Surveys.” Annals of the American Academy of Political and Social Science, vol. 645, no. 1, 2013, pp. 112–141. https://doi.org/10.1177/0002716212458082. Accessed on May 9, 2025.

