CUI//PROPIN
Louis Stokes Alliances for Minority Participation (LSAMP)
Request for OMB Approval, Part B
The potential respondent universe includes LSAMP-affiliated school staff, such as administrators, support staff, and faculty, as well as students. Any staff member currently involved with the LSAMP program at the selected sites will be eligible for inclusion. Students must be currently enrolled in the LSAMP program at the selected sites, either at the graduate or undergraduate level, and referred by a current LSAMP staff member.
Sources of data and information for this study include administrative data (including enrollment data), alliance documents (including proposals, annual reports, and evaluation reports), interviews, and student focus groups. We will analyze these data in two main ways: a landscape analysis and a comparative case study. The main goal of the landscape analysis is to determine, site by site, what PSPMs are in place, the duration of each PSPM, and, where relevant, the intensity (e.g., frequency of offering, components) of the PSPM(s). We will draw largely on document review for this analysis. For the document review, we will code LSAMP artifacts for information such as LSAMP-related policies, staffing, and practices. We will organize these codes by site-level characteristics (e.g., age of program) to draw connections between alliance activities and institutional (site-level) characteristics.
These analyses, drawing on the various data sources, will seek to identify connections between, for instance, alliance type, specific PSPMs or bundles of PSPMs, and organizational factors (e.g., ways of supporting PSPMs). Then, based on these initial results, we may generate insights into alliance activities and the institutionalization, sustainability, and influence of PSPMs. Existing research has identified best practices in the field, and the document review presents a rich opportunity to compare what we know with what we observe at each institutional site. We can identify where practices converge, overlap, or diverge, providing the opportunity to analyze successful or less successful PSPMs and to understand what is working and why.
The main goal of the comparative case study is to understand the ways that meanings and practices (or bundles of practices) emerge, take hold, and become self-reproducing. In other words, we draw on artifacts, staff interviews, and student focus group data to distinguish what supports and what hinders the institutionalization and sustainability of LSAMP-related activities. For instance, artifacts such as evaluation or final reports can communicate negotiated meanings and support or legitimize particular forms of practice (Vaara, 2014). Staff interviews can reveal what these negotiated meanings mean to the individual and the ways they inform their practice, thereby impacting themselves and their organizations. Student focus groups can provide deep insight into the ways that these negotiated meanings are embedded in organizational processes and communicated by project staff to students within the institutions. Analysis of these data will then inform a greater understanding of the influence of alliance-related activities on participating institutions and their students.
Comparative case study analysis will occur in two phases. In the first phase, we will develop a typology of existing PSPMs. This typology will be compared against landscape analysis results and best practices, enabling us to catalog similarities and differences across case study sites. While developing the typology, we will focus on contextual factors that surround the PSPM such as administrator duties, funding streams, and organizational routines.
In the second phase, we will conduct qualitative coding of the participant interview and student focus group data. Data analysis will be iterative (Miles & Huberman, 1994), relying on an initial set of codes that expands as themes emerge. The initial set will include uniform codes applied to both interview and focus group data as well as codes tailored to each type of participant. The uniform codes will cover alliance-specific practices, meanings ascribed to alliance-specific practices, the people who engage in LSAMP projects, the perceived effects or influence of alliance-specific practices, and the organizational factors that support or hinder these practices. For staff interview data, we will also include codes for decision-making processes around the definition and enactment of LSAMP-specific practices. For student focus group data, we will include codes for participants' thoughts on the relationship between LSAMP and their overall postsecondary experience, and codes specifically about how LSAMP relates to their perceptions of themselves in relation to the STEM field. We will develop additional codes as cross-cutting themes emerge.
Based on the typology and these sets of codes, we will delineate the different processes of institutionalization of PSPMs at each site. We will specify which LSAMP-related practices were integrated into an existing set of practices, which were entirely new sets of practices, and which combined both. We will then develop a matrix that allows for easier comparison across sites, specific PSPMs, and processes of institutionalization. Using this matrix, we will draw connections between the institutionalization process, sustainability, and the effects of specific PSPMs. We will gain an understanding of how different practices or sets of practices may facilitate or deter alliance/institutional success. In addition, the rich qualitative data will enable us to contextualize the institutionalization and sustainability processes, how meanings and related practices are generated, and how they take hold and reproduce over time.
B.3 Methods to Maximize Response Rate and Minimize Non-Response
All contact will be completed using official email addresses or work phone numbers associated with LSAMP roles and participation. A prenotification to administrators was sent out in spring 2024, alerting them to the general plan and the upcoming cognitive interviews (resulting from a fast-track clearance).
When the main data collection begins, selected participants will receive a prenotification email and up to three reminder emails requesting participation and scheduling. If email is not successful, we will contact them through their provided phone numbers up to three times. Should these first six contacts go unanswered, we will presume they are implicitly refusing to participate and cease outreach.
However, if they are the director or lead of a program, we will require their cooperation, at least in terms of scheduling site visits and gaining access to other individuals. While we do not anticipate this issue arising, if we cannot reach these individuals through the contacts above, we will send certified letters requesting cooperation until we receive an affirmative response.
Cognitive testing recruitment and interviews, conducted under a fast-track clearance (discussed in greater detail below; the full report is attached), occurred from May 2024 to July 2024. NSF provided the recruitment list for administrators and faculty, and students were referred by some of these faculty (upon their own participation in a cognitive testing interview). Recruitment occurred using three methods: email, phone calls, and NSF outreach. From the initial NSF-provided list of 18 individuals from across alliance types (none of whom are at a selected case-study site), nine administrators and faculty completed a cognitive interview. In addition, seven students completed a cognitive interview, for a total of 16 unique participants.
Interviews were completed virtually on Microsoft Teams and recorded for later review if participants consented (all consented to recording). Interviews lasted approximately one hour, and no incentive was provided. The interviews were conducted by a member of the NORC team, with no note-taker or observer present. In the interest of covering as much material as possible, staff participants were asked to provide feedback on each of the instruments in order of importance (first the staff and administrator interview, followed by the student focus group protocol). Students only saw and provided feedback on the student focus group protocol (i.e., did not provide feedback on the staff and administrator protocol).
The administrator interview was tested with eight individuals, and the student focus group protocol was tested with 14 individuals (seven administrators and seven students). The preliminary findings from these tests are presented below, by individual instrument. Of note, not all interviewees participated in cognitive testing of all protocols.
A note on reviewing findings: each initial set of questions (taken directly from the NORC-developed, NSF-approved protocol) is presented, followed by key findings and suggestions resulting from cognitive testing. Cognitive testing focused on respondent understanding of the questions asked (comprehension), ability to recall information, and topical relevancy. Suggestions from testing focused on clarifying question language and recommending key topics for us to consider for inclusion (e.g., respondents sharing valuable information we had not already included). Overall, comprehension and recall were adequate and required minimal revisions (e.g., defining a term, removing a redundant topic). Necessary changes have been included in the attached instruments.
In addition to findings directly related to the instruments, cognitive testing yielded several key lessons for participant recruitment. These lessons, specifically around respondent referral and access, informed the final recruitment design in the data collection plan. First, respondents reported that much of the information about program operations is centrally held, meaning it may rest with a single individual or position, particularly at smaller institutions. Recruitment must therefore include the institution's LSAMP administrative lead, who can answer such questions, in addition to others who can provide a breadth of experience. Relying on the lead alone, however, would exclude the experiences of faculty and students and of interactions beyond the department (e.g., with other departments across the institution). As such, we will recruit multiple individuals (e.g., anyone named in proposals or annual reports) and request that they refer other individuals for participation.
This approach of broader invitations, with referral or collaboration, also helps alleviate the second main challenge observed during cognitive testing: reaching the intended individuals. For many individuals, we were unable to obtain a response via email or phone outreach (and many phone lines were unavailable or out of service). Because people switch roles within an institution or alliance, or move to another location entirely, a contact strategy focused on a single individual is riskier and more challenging. We will therefore work with NSF program staff to ensure we have the most up-to-date contact information, along with secondary contacts if necessary. The final recruitment lesson is the high frequency and long duration of contact needed to complete interviews with these individuals. While we anticipate the case study approach will alleviate some of these concerns, these potential respondents are very busy and their schedules change frequently. We will account for this with multiple contacts, reminders, and scheduling flexibility in order to respectfully garner their participation.
Resulting protocol revisions incorporate specific modifications identified through cognitive testing, include NSF requests and suggestions, and offer procedural updates that will best facilitate interview recruitment and completion. Resulting recommendations, including recommendations around future recruitment efforts, have been incorporated into the final instruments (attached as Appendix 1 and Appendix 2).
Jessica Stewart, Project Director; (312) 759-4000
Debbie Kim, Research Scientist and Principal Investigator; (312) 201-4470
Justine Bulgar-Medina, Research Methodologist; (773) 256-6094
Author: Plimpton, Suzanne H.
File Created: 2025-05-19