OMB Number: (XXXX) XXXX-XXXX
Revised: XX/XX/XXXX
RIN Number: XXXX-XXXX (if applicable)
REL Northwest Toolkit Efficacy Evaluation
PART B: Collection of Information Employing Statistical Methods
May 2023
Submitted to:
Institute of Education Sciences
U.S. Department of Education
Submitted by:
WestEd and Community College Research Center
SUPPORTING STATEMENT
FOR PAPERWORK REDUCTION ACT SUBMISSION
OMB Number: XXXX-XXXX
Revised XX/XX/XXXX
RIN Number: XXXX-XXXX (if applicable)
Overview
The U.S. Department of Education (ED), through its Institute of Education Sciences (IES), requests clearance for the recruitment materials and data collection protocols under the OMB clearance agreement (OMB Number (XX) XXXX-XXXX) for activities related to the Regional Educational Laboratory Northwest Program (REL NW).
Community colleges are increasingly using technology to improve the quality of student learning, to make active and engaging learning more accessible, and to help students become more successful learners. Instructors need professional learning about incorporating technology into their teaching to support students. The REL NW toolkit development team has developed a toolkit to support instructors in implementing evidence-based instructional strategies to improve student success. The toolkit is based on the What Works Clearinghouse (WWC) Practice Guide Using Technology to Support Postsecondary Student Learning.
The toolkit will comprise the professional learning course, Using Technology to Support Postsecondary Student Learning. The toolkit will address the five recommendations from the WWC Practice Guide:
Use communication and collaboration tools to increase interaction among students and between students and instructors.
Use varied, personalized, and readily available digital resources to design and deliver instructional content.
Incorporate technology that models and fosters self-regulated learning strategies.
Use technology to provide timely and targeted feedback on student performance.
Use simulation technologies that help students engage in complex problem-solving (Dabbagh et al., 2019, p. 1).
The toolkit is completely manualized and contains all of the information needed to implement it. The toolkit has three main components:
Diagnostic and ongoing monitoring instruments. The instruments will include an Instructor Technology Use Survey and an Institutional Instructional Technology Readiness and Support Survey that will assess baseline capacity and enable progress monitoring at both the individual-instructor level and the institutional level.
Professional learning resources. The professional learning resources will address all five Practice Guide recommendations. They will be available both as freestanding learning resources and organized into an online professional learning course accompanied by resources supporting facilitation of the series.
Institutional guidance on supporting implementation of the recommendations. The guidance will include a checklist of steps that institutions can use to support instructors’ implementation of the recommendations, as well as resources that provide guidance on how to implement the steps, including examples from the literature.
The professional learning course will involve both short synchronous sessions, which will allow a cohort of instructors from a single community college to engage collaboratively in authentic activities relevant to the Practice Guide recommendations, and asynchronous sessions, which will support independent learning opportunities that give participants agency to focus on specific areas of improvement that are of interest to them. The professional learning course will consist of four modules that together address the five recommendations from the Practice Guide:
Module 1: Course overview and Practice Guide recommendations 1 (collaboration tools) and 5 (simulation technologies)
Module 2: Practice Guide recommendation 2 (varied, personalized, and available digital resources)
Module 3: Practice Guide recommendation 3 (self-regulated learning strategies)
Module 4: Practice Guide recommendation 4 (timely and targeted feedback)
Each module follows a similar format and leverages a variety of resources created for the Toolkit, including videos, PDFs, and slide decks. The professional learning course is designed to be facilitated by instructional support staff from community colleges who will use a facilitator guide that provides recommendations for organizing the professional learning course and establishing the culture of inquiry that is required for this type of professional learning. The guide will also provide step-by-step instructions needed to execute the logistics of the professional learning course and recommendations for how to facilitate instructor engagement in both the synchronous and asynchronous sessions.
The REL NW toolkit evaluation team is requesting clearance to conduct an independent evaluation that will assess the efficacy and implementation of the professional learning course. First, using a random assignment design, researchers will examine the impact of the professional learning course on instructor knowledge, teaching practices, and student outcomes. Second, researchers will collect implementation data to understand fidelity of implementation, treatment contrast, and how the professional learning course influences instructor and student outcomes. The evaluation will take place in approximately four community colleges in Oregon. Community College Research Center (CCRC) will conduct the program evaluation on behalf of REL NW.
B1. Collection of Information Employing Statistical Methods
B1.1. Data Collection and Analysis Summary:
The efficacy study will address the following research questions:
What is the impact of the professional learning course on instructors’ awareness of technology tools for learning, knowledge of how to use technology for learning, and comfort using education technologies to support student learning?
What is the impact of the professional learning course on instructors’ use of technology to support student learning?
What is the effect of the professional learning course on student engagement, interaction, course completion, and persistence to the next quarter?
To interpret the impact findings accurately and ensure that the study provides useful information for policymakers and practitioners, the evaluation team will also conduct an implementation study that addresses the following research questions:
How is the professional learning course structured and delivered? Was the course implemented with fidelity? For how many hours do instructors participate, and how many instructors complete the training?
What implementation challenges do facilitators and treatment instructors identify? How might the toolkit be improved to address implementation challenges?
How and why do instructors in the treatment and comparison groups select and implement technology? What do instructors in the treatment group learn in the course?
How does the professional learning course differ from other professional learning programs that instructors in the treatment and comparison groups access?
Instructors will apply to participate in the study in spring 2024. Applicants will identify an academic course they will teach in fall 2024 that will serve as the “focal course” for the professional learning course and the evaluation. During the professional learning course, offered to the treatment group in the summer of 2024, instructors in the treatment group will apply their learning to this focal academic course. In order to both reduce the burden on instructors and ensure that efforts to make changes are not diluted across multiple courses, participants will be instructed to direct their attention and efforts to a single focal course. Across both treatment and comparison groups, focal courses will provide the sample for the instructor- and student-level analyses described below. Students enrolled in a focal course during fall 2024 will comprise the student sample for the evaluation.
To address the efficacy research questions, the research team will rely on four data sources:
Application package. First, all instructor applicants for the professional learning course will complete an online application package that will include the Instructor Technology Use Survey. This instrument uses a series of Likert scales to measure applicants’ uses of technology in course design and content delivery, awareness of technology tools, knowledge of and comfort with technology to support learning, and use of education technologies in their focal course. In the application package, instructors will also specify their focal course section. If participating colleges do not have the capacity to provide administrative records that include instructors’ characteristics (e.g., age, race, and gender), this information will be requested in the application package. The evaluation team will use instructor background characteristics to assess baseline equivalency after randomization.
End-of-term instructor survey. At the conclusion of the fall 2024 term, instructors will take a second survey that will also include the Instructor Technology Use Survey. During this second administration, the survey will include two constructs that were not included in the application package. The first will capture faculty perceptions of student engagement in their focal course, including frequency and quality of student-to-student interactions and instructor-to-student interactions. The second will ask respondents to identify any professional learning, training programs, or resources that they used to get information about education technologies.
Administrative data. From each participating community college, CCRC will request a file containing individual-level data on faculty characteristics as well as a file containing data on students taught by instructors in the treatment and comparison groups during the fall 2024 quarter. In addition to demographic information such as gender, race/ethnicity, age, and financial-aid status, the file will include credits attempted and completed and grades earned by students in these courses, as well as pass, fail, or withdrawal status. It will also include student enrollment at the college in the winter 2025 quarter.
Student engagement survey. Finally, all students enrolled in focal course sections taught by instructors in treatment and comparison groups during fall 2024 quarter (approximately 2,880 students) will receive a survey designed to measure engagement. The toolkit is expected to affect three short-term student-level outcomes: student engagement with learning activities, student-to-student interaction, and student-to-instructor interaction. The survey will measure these constructs:
Emotional engagement
Self-disciplined engagement
Interactive engagement
Social engagement with peers
Social engagement with teachers
Data analysis for the efficacy study: To determine treatment effects of the professional learning course on instructor-level outcomes, the evaluation team will rely on a null-hypothesis framework in which researchers compare average outcomes measured by the Instructor Technology Use Survey for instructors assigned to the treatment group and instructors assigned to the comparison group, with adjustments for baseline covariates, clustering, and differential probabilities of treatment assignment when needed. Primary outcomes of interest include instructors’ awareness of technology tools, knowledge of how to use technology, and comfort using education technologies to support student learning. The model will generate estimates of the impact of participation in the professional learning course using an intent-to-treat analysis.
To determine the relationship between instructor participation in the professional learning course and student-level outcomes, data will be analyzed using a two-level model specification, which accounts for the nesting of students within instructors with random assignment occurring at the instructor level. The research team is primarily interested in modeling the relationship between instructors participating in this study and students’ outcomes (Schochet et al., 2014). The research team will employ logistic regression as a robustness check to account for the distribution of binary outcomes as appropriate. Primary student-level outcomes of interest include course completion and withdrawal, persistence into the winter 2025 quarter, and student engagement and interaction.
To address the implementation research questions, the research team will rely on five data sources:
Instructor focus groups. First, the research team will randomly select 30 instructors (half in the treatment group and half in the comparison group) from across the colleges to participate in focus groups at the end of fall 2024. Each one-hour focus group will consist of three or four instructors from the same college and in the same condition. The purpose of these focus groups is to gather perspectives and experiences related to the expected outcomes of the professional learning course and experiences with professional learning training and resources.
Structured course review. Second, the research team will use a structured protocol to review courses taught by the 30 randomly selected instructors. For each focal course, researchers will review the materials in the LMS (e.g., syllabi, course assignments, assessments) to document the presence of technology tools and resources aligned with the Practice Guide recommendations and evidence of instructional practices promoted by the Practice Guide (e.g., reflection, communicating purposes and expectations for technology use). This analysis will provide insights into the range of technology tools used in focal courses and the stated purposes of those tools.
Interviews. The research team will conduct interviews with a purposive sample of college stakeholders to understand the landscape of professional learning opportunities that are typically provided to instructors. These stakeholders include administrators with knowledge of faculty development opportunities and resources as well as the professional learning staff who facilitate the professional learning course.
Professional learning course participation data. Fourth, the research team will access information from the learning management system that will house the professional learning course for participating instructors. This system will provide information on each instructor’s participation patterns, including how many hours they spend on the platform and their completion of assigned activities and tasks.
Professional learning course observations. Finally, to understand fidelity of implementation, researchers will observe synchronous professional learning course sessions using a structured protocol.
Analytic strategy for the implementation study: The implementation study has three analytic goals. First, researchers will use participation data and observations to document the content of the professional learning course, the way the course is implemented, and the treatment dosage to understand fidelity of implementation. These data will be triangulated with professional learning course facilitator interviews. Next, researchers will use focal course review data and instructor focus groups to provide further insight into how the professional learning course is related to instructor- and student-level outcomes. Finally, the implementation study will document treatment contrast between the professional learning course and other professional learning resources, drawing on stakeholder interviews, instructor focus groups, and the professional learning course participation data.
Consent Considerations: The Teachers College Institutional Review Board (IRB) provides guidance on the informed consent procedure and forms for all data collection activities. Instructors and students will provide consent prior to participating in surveys. Instructors who are randomly selected to participate in focus groups and the structured course review will provide consent for those activities. Stakeholders will provide consent before participating in an interview. The consent forms will be provided electronically using the Qualtrics survey platform. If an in-person interviewee has not completed the consent form before an interview begins, the researcher will provide a link to the electronic consent form, or if needed, will provide a paper copy.
B1.2. Potential Respondent Universe
The evaluation is designed as a randomized controlled trial to obtain unbiased estimates of the impact of the professional learning course on instructor-level and student-level outcomes of interest. To this end, the research team will recruit from medium-to-large Oregon community colleges that enroll approximately 3,000 students and employ approximately 200 instructors. From this subset of participating Oregon community colleges, CCRC will randomly assign 120 instructors to the treatment group or the comparison group (60 instructors in each group). To reduce bias and achieve balance in the allocation of participants to treatment conditions, randomization will occur within blocks comprising instructors from the same broad academic discipline (e.g., humanities, social science, natural science). Analyses of student-level outcomes will rely on data collected from approximately 2,880 students enrolled in courses taught by participating instructors. Drawing on experience from similar recent studies, the CCRC research team assumes an average class size of 24 students. Class sizes vary dramatically by discipline and course type, but most community college courses enroll between 15 and 40 students. This estimate accounts for recent enrollment declines at many community colleges. The sample sizes and expected response rates for each level of data collection are shown in Table 1.
Table 1 - Sample sizes and expected response rates for each level of data collection
| Level of Sample | Universe | Sample Size | Response rate |
| --- | --- | --- | --- |
| Community College Administrators | 16 | 8 | 100% |
| Community College Instructional Design Staff (facilitators) | 36 | 8 | 100% |
| Community College Instructors | 2,415 | 120 | 85% |
| Community College Students | 36,761 | 2,880 | 85% |
B2. Procedures for Collection of Information
B2.1. Recruitment Process
The research team will recruit approximately 4 medium-to-large Oregon community colleges that meet the size criteria listed above in B1.2. To recruit each college, CCRC will conduct email outreach to the college president and then host a follow-up informational call with administrators to explain the requirements and benefits of study participation. Should recruitment be unsuccessful in our target population, we would consider expanding to smaller colleges in Oregon or to large colleges in other states in the REL NW region, such as Washington, while not changing the number of respondents or the expected burden.
CCRC will recruit 120 instructors to participate in the study. To achieve this sample, CCRC will work with participating colleges to determine which of their instructors are eligible for the study. Eligible instructors include all full- and part-time instructors who are scheduled to teach a credit-bearing course in fall 2024. Non-eligible instructors are those who ONLY teach non-credit and nonacademic courses (i.e., continuing education) and instructors who ONLY teach course sections reserved exclusively for dual enrollment students.
The research team will request a list of eligible instructors from each participating college in spring 2024. Researchers will send each instructor an email that invites them to participate. The email will describe what instructors can expect to learn from the professional learning course, when it will be held, and how much time it will entail. The message will also explain that the professional learning course is part of a research study and that a lottery will be used to determine who will be invited to attend the professional learning course in the summer of 2024 and who may attend the course in 2025 after the study data collection has concluded. Instructors in both the treatment group and the comparison group will be notified by email about their assignment in the study and the steps and timing for their participation.
Students enrolled in focal course sections taught by treatment and comparison instructors will be recruited to participate in the student engagement survey at the end of the fall term. To recruit these students, the research team will request the email addresses of students who were ever enrolled in a focal course section from each college’s institutional research office. The research team will send each student an email requesting their participation in the 10-minute electronic survey. Instructors will be asked to also remind students about the survey.
B2.2. Statistical Methodology for Stratification and Sample Selection
The professional learning course application period will close at the end of the spring 2024 quarter. Prior to randomization, researchers will block instructors by academic discipline (i.e., humanities, social science, natural science, mathematics and computer science, and applied science) using information provided in the application packages submitted by those interested in participating in the professional learning course.1 Within each block, instructors will have a 50 percent probability of treatment group assignment.2 Due to budget constraints (e.g., costs associated with instructor incentives), if the research team receives more than 120 applications, researchers will randomly assign 120 applicants to either the treatment group or the comparison group and create a third, non-research group that is placed on a waiting list for a future training. While the research team will not include the non-research group in the subsequent data collection or analysis, we will use information collected through the application to describe the universe of instructors interested in the professional learning course. Importantly, the targeted sample size of 120 instructors is sufficiently large to detect moderate effect sizes.
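For illustration only, the sketch below shows one way the blocked lottery and waiting-list cap described above could be carried out. The pandas column names (applicant_id, discipline) are hypothetical placeholders, and this is a sketch of the procedure rather than the study's actual assignment code.

```python
# Illustrative sketch of blocked random assignment with a 120-instructor cap
# (an assumption about tooling, not the study's actual code).
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2024)  # fixed seed keeps the lottery reproducible


def assign_applicants(applicants: pd.DataFrame, max_research_n: int = 120) -> pd.DataFrame:
    df = applicants.copy()
    df["group"] = "waitlist"  # default for applicants beyond the research-sample cap
    # Randomly select up to 120 applicants into the research sample.
    research_ids = rng.permutation(df["applicant_id"].to_numpy())[:max_research_n]
    research = df[df["applicant_id"].isin(research_ids)]
    # Within each academic-discipline block, assign roughly half to treatment.
    for _, block in research.groupby("discipline"):
        ids = rng.permutation(block["applicant_id"].to_numpy())
        # Odd-sized blocks split as evenly as possible, consistent with footnote 2.
        n_treat = len(ids) // 2 + int(rng.integers(0, 2)) * (len(ids) % 2)
        df.loc[df["applicant_id"].isin(ids[:n_treat]), "group"] = "treatment"
        df.loc[df["applicant_id"].isin(ids[n_treat:]), "group"] = "comparison"
    return df
```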
To address implementation research questions related to how instructors select and use technology tools and their experiences in the professional learning course and other professional learning opportunities, the research team will invite a total of 30 randomly selected instructors (15 in each condition) from our sample to participate in a structured course review and a focus group. Random selection will be stratified by college and by baseline knowledge and use of technology in instruction as collected in the application; instructors will be classified into three categories (i.e., low, medium, and high) based on their baseline survey responses. Instructors who participate in this qualitative portion of the study will grant researchers access to their course materials in the learning management system (e.g., syllabus and assignments) and will attend a focus group lasting up to one hour.
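As an illustration of the stratified selection described above, the sketch below draws 15 instructors per condition in a round-robin over college-by-baseline strata so that every stratum is represented before any repeats. The column names (group, college, baseline_tech_level) are hypothetical; this is one plausible selection routine, not the study's actual sampling code.

```python
# Illustrative sketch of stratified random selection of 30 focus-group
# instructors (15 per condition); column names are placeholders.
import pandas as pd


def select_focus_group(sample: pd.DataFrame, per_condition: int = 15, seed: int = 7) -> pd.DataFrame:
    picks = []
    for _, cond_df in sample.groupby("group"):  # "treatment" and "comparison"
        # Shuffle within condition, then take instructors round-robin across
        # college-by-baseline strata.
        shuffled = cond_df.sample(frac=1, random_state=seed)
        shuffled = shuffled.assign(
            order_within_stratum=shuffled.groupby(["college", "baseline_tech_level"]).cumcount()
        )
        picks.append(shuffled.sort_values("order_within_stratum").head(per_condition))
    return pd.concat(picks, ignore_index=True)
```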
Students enrolled in a focal course section during fall 2024 will comprise the student sample for the evaluation. Assuming an average class size of 24 students, the research team anticipates a final sample of approximately 2,880 students clustered within focal courses taught by 120 instructors across the treatment and comparison group conditions.
B2.3. Quantitative analysis of instructor-level data
To determine treatment effects of the professional learning course on instructor-level outcomes, the research team will rely on a null-hypothesis framework in which researchers compare average outcomes measured by the end-of-term Instructor Technology Use Survey for instructors assigned to the treatment group and instructors assigned to the comparison group, with adjustments for baseline covariates, clustering, and differential probabilities of treatment assignment when needed. Primary outcomes of interest include instructors’ awareness of technology tools, knowledge of how to use technology, and comfort using education technologies to support student learning.
The model will generate estimates of the impact of participation in the professional learning course using an intent-to-treat analysis. More formally, the research team will analyze data using the following model specification: Yi = β0 + β1Ti + β2Zi + β3Di + εi, where Y represents the outcome in question for instructor i; T indicates whether the individual was randomly assigned to the treatment group or the comparison group; Z is a vector of baseline covariates including instructor demographic characteristics, relevant indicators for the instructor’s focal course (e.g., course-level, modality), and an indicator for the instructor’s college; D is a vector of block dummy variables for instructor i's affiliated academic discipline (i.e., humanities, social science, natural science, mathematics and computer science, and applied science); and ε is a random error term. The coefficient of interest, β1, represents the effect of assignment to the professional learning course on the included instructor-level outcome. Because of the random assignment process, estimation of β1 will provide an unbiased estimate of the intent-to-treat effect, and controlling for other instructor characteristics is not necessary. However, blocking on potential confounders (i.e., instructor’s academic discipline) as well as including pre–random assignment covariates that are correlated with outcomes can improve the precision of impact estimates (Bloom et al., 2007; McCoy, 2017). Hence, in addition to conducting blocked randomization, the evaluation team will include baseline instructor characteristics in the impact model, including, wherever relevant, pretest measures of instructor-level outcomes (e.g., knowledge and use of and comfort with technology), as captured in the diagnostic survey.
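For illustration, the intent-to-treat model above could be estimated as an ordinary least squares regression. The sketch below uses statsmodels with hypothetical column names (outcome, treatment, pretest, modality, college, discipline) standing in for the covariate vectors Z and D; it is a minimal sketch, not the study's analysis code.

```python
# Minimal sketch of the instructor-level ITT regression (illustrative only).
import statsmodels.formula.api as smf


def estimate_instructor_itt(df):
    """OLS of the end-of-term ITUS outcome on the treatment indicator T,
    baseline covariates Z, and discipline-block dummies D; the coefficient
    on `treatment` corresponds to beta_1, the intent-to-treat effect."""
    model = smf.ols(
        "outcome ~ treatment + pretest + C(modality) + C(college) + C(discipline)",
        data=df,
    )
    result = model.fit(cov_type="HC2")  # heteroskedasticity-robust standard errors
    return result.params["treatment"], result.bse["treatment"], result
```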
B2.4. Quantitative analysis of student-level data
To determine the relationship between instructor participation in the professional learning course and student-level outcomes, data will be analyzed using the following two-level model specification, which accounts for the nesting of students within instructors with random assignment occurring at the instructor level:
Level-1:
Yij = β0j + β1j Xij + εij
Level-2:
β0j = γ00 + γ01Tj + γ02Zj + γ03Dj + u0j
β1j = γ10
where Y is the student outcome of interest for student i enrolled in a course taught by instructor j; X is a vector of student-level baseline covariates measured using colleges’ administrative data (i.e., demographic and academic characteristics); T is the treatment indicator for instructor j [1 if assigned to treatment, 0 if assigned to comparison]; Z is a vector of instructor-level baseline covariates including instructor demographic characteristics and relevant indicators for the instructor’s focal course (e.g., course-level, modality), and an indicator for the instructor’s college; D is a vector of block dummy variables for instructor j's affiliated academic discipline (i.e., humanities, social science, natural science, mathematics and computer science, and applied science); and ε and u are student- and instructor-level residuals. In addition to blocking on the instructor’s academic discipline D, vectors X and Z are included in the model to explain outcome variance and to increase precision in the estimate of the overall average intervention effect, i.e., whether the instructor was exposed to the professional learning course.
In the two-level hierarchical model, the intervention effect, γ10, is modeled as fixed, because the research team is primarily interested in modeling the relationship between instructors participating in this study and students’ outcomes (Schochet et al., 2014). Although the equations presented in this section are specified for use with continuous student outcomes with a normal distribution, the research team will employ logistic regression as a robustness check to account for the distribution of binary outcomes as appropriate.
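For illustration, the two-level specification above could be fit as a linear mixed model with a random instructor intercept, with a cluster-robust logistic regression as the robustness check for binary outcomes. The sketch below assumes hypothetical column names (engagement, persisted, treatment, age, race, modality, discipline, instructor_id) and is not the study's actual analysis code.

```python
# Illustrative sketch of the student-level models (not the study's actual code).
import statsmodels.formula.api as smf


def fit_continuous_outcome(df):
    """Two-level model: students nested within instructors, fixed treatment
    effect (the gamma_10 analogue), random instructor intercept (u_0j)."""
    model = smf.mixedlm(
        "engagement ~ treatment + age + C(race) + C(modality) + C(discipline)",
        data=df,
        groups="instructor_id",
    )
    return model.fit()


def fit_binary_outcome(df):
    """Robustness check for binary outcomes (e.g., persistence): logistic
    regression with standard errors clustered on the instructor."""
    model = smf.logit("persisted ~ treatment + age + C(race) + C(discipline)", data=df)
    return model.fit(cov_type="cluster", cov_kwds={"groups": df["instructor_id"]})
```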
Primary student-level outcomes of interest include course completion and withdrawal and persistence into the winter 2025 quarter (measured using colleges’ administrative data), and student engagement and interaction (measured using the Student Engagement Survey). The research team also considered course grades as an outcome, but grades at the postsecondary level often lack reliability and validity as measures of student learning (Schinske & Tanner, 2014), and they are problematic as an outcome measure when some students receive pass/fail transcript designations rather than letter grades. Further, the WWC encourages authors to consider the effects that multiplicity, resulting from multiple outcomes, may have on study findings (WWC, 2020). The proposed efficacy study seeks to minimize the risks associated with multiple-hypothesis testing (e.g., increased type I error rate) by limiting the primary outcomes to those that are valid and reliable.
B2.5. Power calculations: Instructor-level data
Power calculations are often described in terms of the minimum detectable effect size (MDES). An MDES is the smallest true impact that an experiment has a good chance of detecting (Bloom, 1995). Prior to randomization, researchers will block the sample of 120 instructors by academic discipline (i.e., humanities, social science, natural science, mathematics and computer science, and applied science) using information provided in the application packages submitted by those interested in participating in the professional learning course.3 Within each block, instructors will have a 50 percent probability of treatment assignment.4 Assuming 5 blocks with an average sample size of 24 instructors each, a 5 percent significance level, 80 percent power, and a two-tailed test, the researchers estimate a lower-bound MDES of 0.365 (r2 = 0.50) and an upper-bound MDES of 0.490 (r2 = 0.10).5 Although the literature provides little insight into the anticipated instructor-level effect sizes of professional learning interventions in the postsecondary context, research at the primary and secondary levels suggests professional learning has a moderate positive effect (ES=0.40-0.43) on teaching quality (Gore et al., 2014; Ansyari et al., 2017) and a large positive impact on teacher knowledge and skill (ES=0.71) (Ansyari et al., 2017). The proposed study is sufficiently powered to detect such effect sizes.
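The bounds above can be approximated with the standard Bloom (1995) MDES formula for an individual random assignment design. The sketch below is a back-of-the-envelope check, not the PowerUp! calculation cited in footnote 5, and the degrees-of-freedom adjustment is an assumption.

```python
# Back-of-the-envelope check of the instructor-level MDES bounds:
# MDES = (t_{alpha/2, df} + t_{power, df}) * sqrt((1 - R^2) / (P(1 - P) n)).
from scipy import stats


def mdes_individual(n, r2, p=0.5, alpha=0.05, power=0.80, covariates=5):
    df = n - covariates - 1  # assumed residual degrees of freedom
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    return multiplier * ((1 - r2) / (p * (1 - p) * n)) ** 0.5


print(round(mdes_individual(n=120, r2=0.50), 3))  # ~0.365, the lower bound reported above
print(round(mdes_individual(n=120, r2=0.10), 3))  # ~0.489, close to the reported 0.490
```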
B2.6. Power calculations: Student-level data
Based on an anticipated sample of 120 instructors and 2,880 students (assuming an average class size of 24 students), the MDES for student-level outcomes ranges between 0.109 and 0.281.6 These power calculations assume that students are clustered within instructors blocked by broad academic discipline. They also assume a probability of treatment equal to 50 percent within each block,7 a 5 percent significance level, 80 percent power, and a two-tailed test. In practice, intraclass correlation (ICC) levels typically range from .05 to .30 in educational research (Raudenbush & Bryk, 2002). The lower-bound estimate assumes an ICC of 0.05 and a level-1 and level-2 r2 of 0.50, meaning that 5 percent of the variation in outcomes can be accounted for by cluster-level differences and that there is a strong correlation between covariates and outcomes at each level. The upper-bound estimate assumes an ICC of 0.30 and a level-1 and level-2 r2.8 Research suggests that background characteristics tend to have limited power in explaining outcomes such as persistence rates in community colleges (Adelman, 2006; Gates & Creamer, 1984).
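The reported range can likewise be approximated with the standard MDES formula for a two-level design with treatment assigned at the cluster (instructor) level. The sketch below is a back-of-the-envelope check rather than the PowerUp! calculation cited in the footnotes; it assumes, as at the instructor level, that the upper bound uses a level-1 and level-2 r-squared of 0.10, and the degrees-of-freedom adjustment is an assumption.

```python
# Back-of-the-envelope check of the student-level MDES range:
# MDES = M_df * sqrt( ICC(1-R2_L2)/(P(1-P)J) + (1-ICC)(1-R2_L1)/(P(1-P)Jn) ),
# where J = instructors (clusters) and n = students per instructor.
from scipy import stats


def mdes_cluster(j, n, icc, r2_l1, r2_l2, p=0.5, alpha=0.05, power=0.80, df_adjust=6):
    df = j - df_adjust  # assumed cluster-level residual degrees of freedom
    multiplier = stats.t.ppf(1 - alpha / 2, df) + stats.t.ppf(power, df)
    variance = (icc * (1 - r2_l2)) / (p * (1 - p) * j) + \
               ((1 - icc) * (1 - r2_l1)) / (p * (1 - p) * j * n)
    return multiplier * variance ** 0.5


print(round(mdes_cluster(j=120, n=24, icc=0.05, r2_l1=0.50, r2_l2=0.50), 3))  # ~0.109 (lower bound)
print(round(mdes_cluster(j=120, n=24, icc=0.30, r2_l1=0.10, r2_l2=0.10), 3))  # ~0.281 (upper bound, assumed r2)
```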
Few studies directly link professional learning activities to student outcomes in the postsecondary setting, and even fewer directly report effect sizes. However, a review of nine studies focused on elementary school teachers and their students suggests professional learning programs, on average, have a moderate, positive impact on student achievement (ES = 0.51) (Yoon et al., 2007). Indeed, another synthesis of 97 studies concluded that teacher professional learning opportunities can have a substantial positive impact on student learning and achievement, citing effect sizes ranging from 0.48 to 0.89 (Timperley et al., 2007). While research does not provide insight into the anticipated impact on other student outcomes of interest, the proposed study is sufficiently powered to detect small to moderate impacts.9
B2.7. Unusual Problems Requiring Specialized Sampling Procedures
There are no anticipated problems in this study that would require the use of specialized sampling procedures.
B2.8. Any use of periodic (less frequent than annual) data collection cycles to reduce burden.
The data collection will occur within a one-term cycle of approximately five months.
B3. Methods to Maximize Response and Issues of Non-response
The research team will employ several methods to maximize response. First, the application procedure will ensure a 100 percent response rate for the baseline survey. All applicants will complete the survey prior to randomization. Second, using administrative records for professional learning course participation data and student outcomes (credits attempted and completed and enrollment persistence) will ensure a complete dataset. For other data sources, the research team will use targeted email outreach and incentives to mitigate non-response. The research team will target an 85 percent response rate or higher for the follow-up administration of the Instructor Technology Use Survey, which researchers will use to measure changes in instructors’ practices and attitudes since the baseline administration of the survey. The research team is confident about achieving this high response rate for a few reasons: first, all instructors in the sample will have volunteered to be part of the study and will receive an incentive for their participation; second, instructors in the comparison group will have the opportunity to participate in the professional learning course once the data collection for the evaluation is complete; and third, researchers will contact sample members up to three times to ask them to fill out the survey. In an IES-funded project focused on another professional learning intervention called Lesson Study, CCRC obtained a 79 percent response rate for a survey of Oregon community college instructors who participated in the intervention. Other random-assignment studies of professional learning that were commissioned by IES have achieved survey response rates of more than 90 percent (Garet et al., 2016; Jayanthi et al., 2017). If the research team does not achieve the target 85 percent response rate, the research team will conduct non-response bias analysis as described below.
The research team will employ several strategies to achieve a strong response rate for the student engagement survey. In the personal email the research team sends to each student, the short duration of the survey (10 minutes) and the opportunity to receive an incentive gift card will be advertised. The research team will conduct a drawing for up to fifteen $100 gift cards to incentivize student participation.10 Approximately one out of every 200 student survey respondents will receive a gift card. To support the email recruitment efforts, the research team will provide a set of resources to instructors in the treatment and comparison conditions so that they can remind their students about the survey. These resources will include an announcement they can post on the LMS, an email they can send to students, and language they can use for an in-class announcement reminding students about the survey. If, despite these efforts, the survey does not yield the target 85 percent response rate, the research team will conduct non-response bias analysis.
In accordance with IES guidelines, the research team will conduct non-response bias analysis for the analytic sample if the research team experiences low survey response rates. Specifically, for each research question for which the sample with non-missing data is less than 85 percent of the original sample, the researchers will assess equivalence on baseline characteristics to determine whether sample members with data and the full sample differ on other observed characteristics, and the researchers will assess the most likely reasons for missing data. The research team will use multiple imputation to account for differences exceeding a magnitude of 0.05 standard deviations (NCEE, 2019).
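For illustration, the sketch below shows one way the baseline-equivalence portion of a non-response bias analysis could be run: each baseline characteristic is compared between respondents and the full randomized sample, and standardized differences above 0.05 standard deviations are flagged for further handling (e.g., imputation). The column names are hypothetical, and this is a sketch rather than the study's actual code.

```python
# Illustrative non-response bias check: flag baseline characteristics on which
# respondents differ from the full sample by more than 0.05 standard deviations.
import pandas as pd


def nonresponse_bias_check(full: pd.DataFrame, baseline_cols, responded_col="responded") -> pd.DataFrame:
    respondents = full[full[responded_col]]  # responded_col assumed boolean
    rows = {}
    for col in baseline_cols:  # baseline columns assumed numeric (0/1 for categories)
        sd = full[col].std()
        smd = (respondents[col].mean() - full[col].mean()) / sd if sd else 0.0
        rows[col] = {"std_diff": round(smd, 3), "exceeds_0.05_sd": abs(smd) > 0.05}
    return pd.DataFrame(rows).T
```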
B4. Tests of Procedures or Methods
The instructor and student surveys both use instruments validated in previous research. The Instructor Technology Use Survey (ITUS) was developed by the REL NW development team for use in the toolkit. The ITUS was administered three times to the sample of eight instructors participating in a pilot of the professional learning course. The pre-pretest was administered three weeks before engagement with the professional learning course. The second time point (pre) was administered at the end of the first session of the course. The third time point was administered at the completion of the professional learning course. A total of eight participants responded to the survey at each time point. Moderate to good reliability was demonstrated for most subscales and timepoints. Overall, the measure was able to detect an increase in technology use after the professional learning course. The student survey uses scales that have been validated previously, including the Student Course Engagement Questionnaire and the Higher Education Student Engagement Scale. The Cronbach’s alpha values for all of the student survey items demonstrate a high level of reliability (over .7, which is considered reliable by the What Works Clearinghouse).
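As an illustration of the reliability statistic referenced above, the sketch below computes Cronbach's alpha for a set of survey items stored as columns of a DataFrame. It is a generic formula-based check, not the instrument developers' scoring code.

```python
# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score).
import pandas as pd


def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                        # number of items in the scale
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# A value above .7 would meet the reliability threshold referenced above.
```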
To further refine the presentation, format and wording of the survey items, the research team solicited feedback from expert reviewers on both instruments. In addition, the team conducted cognitive interviews with instructors and students. During these interviews, the respondents first completed the survey (to confirm that the surveys can be completed within the expected timeframe). Then, the respondents provided feedback on their experience navigating the survey in Qualtrics and the wording of items and answer choices. These cognitive interviews informed final edits to the survey wording, format, and layout.
B5. Project Consultants
REL Peer Reviewers:
All Regional Educational Laboratory (REL) applied research and development products are required to undergo rigorous external peer review. This ensures that they meet the Institute of Education Sciences (IES) standards for scientifically valid research before being published on the REL website at http://ies.ed.gov/ncee/rel. In this way, policymakers and practitioners, the primary users of REL applied research and development products, can be assured that these products have met high standards for scientific quality and that the information they contain is valid, reliable, and trustworthy.
Consultant: Dr. Lindsay C. Page, the Annenberg Associate Professor of Education Policy at Brown University and a faculty research fellow of the National Bureau of Economic Research. Phone: 412-648-7166
Community College Research Center team – 212-678-3091
Thomas Brock, Director
Susan Bickerstaff, Senior Research Associate
Selena Cho, Senior Research Assistant
Elizabeth Kopko, Senior Research Associate
Ellen Wasserman, Research Associate
Works Cited
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. U.S. Department of Education. https://www2.ed.gov/rschstat/research/pubs/toolboxrevisit/toolbox.pdf
Ansyari, M. F., Groot, W., & De Witte, K. (2017). Tracking the process of data use professional development interventions for instructional improvement: A systematic review. Educational Research Review, 31. https://doi.org/10.1016/j.edurev.2020.100362.
Bloom, H. S. (1995). Minimum detectable effects: A simple way to report the statistical power of experimental designs. Evaluation Review, 19(5), 547–556. https://doi.org/10.1177%2F0193841X9501900504
Bloom, H. S., Richburg-Hayes, L., & Black, A. R. (2007). Using Covariates to Improve Precision for Studies That Randomize Schools to Evaluate Educational Interventions. Educational Evaluation and Policy Analysis, 29(1), 30–59. https://doi.org/10.3102/0162373707299550
Dabbagh, N., Bass, R., Bishop, M., Costelloe, S., Cummings, K., Freeman, B., et al. (2019). Using technology to support postsecondary student learning: A practice guide for college and university administrators, advisors, and faculty (WWC 20090001). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance. https://ies.ed.gov/ncee/wwc/Docs/PracticeGuide/wwc-using-tech-postsecondary.pdf
Garet, M. S., Heppen, J. B., Walter, K., Parkinson, J., Smith, T. M., Song, M., et al. (2016). Focusing on mathematical knowledge: The impact of content-intensive teacher professional development (NCEE 2014–4017). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Analytic Technical Assistance and Development. https://files.eric.ed.gov/fulltext/ED569154.pdf
Gates, A. G., & Creamer, G. G. (1984). Two-year college attrition: Do student or institutional characteristics contribute most? Community/Junior College Quarterly, 8, 39–51.
Gore, J., Lloyd, A., Smith, M., Bowe, J., Ellis, H., & Lubans, D. (2017). Effects of professional development on the quality of teaching: Results from a randomised controlled trial of quality teaching rounds. Teaching and Teacher Education, 68, 99–113.
Jayanthi, M., Gersten, R., Taylor, M. J., Smolkowski, K., & Dimino, J. (2017). Impact of the Developing Mathematical Ideas professional development program on grade 4 students’ and teachers’ understanding of fractions (REL 2017-26). U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southeast. https://ies.ed.gov/ncee/edlabs/regions/southeast/pdf/rel_2017256.pdf
McCoy, C. E. (2017). Understanding the Intention-to-treat Principle in Randomized Controlled Trials. Western Journal of Emergency Medicine: Integrating Emergency Care with Population Health, 18(6). http://dx.doi.org/10.5811/westjem.2017.8.35985 Retrieved from https://escholarship.org/uc/item/83j2g4hq
National Center for Education Evaluation and Regional Assistance (NCEE) (2019). NCEE guidance for REL study proposals, reports, and other products. U.S. Department of Education.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods. Sage Publications, Inc.
Schinske, J., & Tanner, K. (2014). Teaching more by grading less (or differently). CBE Life Sciences Education, 13(2), 159–166. https://doi.org/10.1187/cbe.CBE-14-03-0054
Schochet, P., Puma, M., & Deke, J. (2014). Understanding variation in treatment effects in education impact evaluations: An overview of quantitative methods. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, What Works Clearinghouse. https://ies.ed.gov/ncee/pubs/20144017/pdf/20144017.pdf
Timperley, H., Wilson, A., Barrar, H., & Fung, I. (2007). Teacher professional learning and development: Best evidence synthesis iteration (BES). Auckland, New Zealand: University of Auckland.
What Works Clearinghouse. (2020). Standards handbook, version 4.1. U.S. Department of Education, Institute of Education Sciences, What Works Clearinghouse. https://ies.ed.gov/ncee/wwc/Docs/referenceresources/WWC-Standards-Handbook-v4-1-508.pdf
Yoon, K. S., Duncan, T., Wen-Yu Lee, S., Scarloss, B., & Shapley, K. L. (2007). Reviewing the evidence on how teacher professional development affects student achievement. U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Regional Educational Laboratory Southwest. https://ies.ed.gov/ncee/edlabs/regions/southwest/pdf/rel_2007033.pdf
1 A preliminary analysis using data from an IES-funded evaluation of the Lesson Study professional learning program in Oregon suggests that background characteristics explain between 15 and 30 percent of the variation in the self-reported use of instructional strategies and practices (Bickerstaff, Raphael, Cruz Zamora, & Leong, 2019).
2 While researchers will target a 50 percent probability of treatment assignment in each block, minor variation will be permitted for blocks with odd numbers of instructors. Importantly, these variations are expected to have very little impact on statistical power due to large anticipated block sizes.
3 A preliminary analysis using data from an IES-funded evaluation of the Lesson Study professional learning program in Oregon suggests that background characteristics explain between 15 and 30 percent of the variation in the self-reported use of instructional strategies and practices (Bickerstaff, Raphael, Cruz Zamora, & Leong, 2019).
4 While researchers will target a 50 percent probability of treatment assignment in each block, minor variation will be permitted for blocks with odd numbers of instructors. Importantly, these variations are expected to have very little impact on statistical power due to large anticipated block sizes.
5 MDES calculated for an individual random assignment design using PowerUp! (Dong & Maynard, 2013).
6 MDES calculated for a 2-level fixed effects blocked cluster random assignment design using PowerUp! (Dong & Maynard, 2013).
7 While researchers will target a 50 percent probability of treatment assignment in each block, minor variation will be permitted for blocks with odd numbers of instructors. Importantly, these variations are expected to have very little impact on statistical power due to large anticipated block sizes.
8 MDES calculated for an individual random assignment design using PowerUp! (Dong & Maynard, 2013).
9 Similar to the analysis plans for instructor-level outcomes, student-level impacts will be presented using a Bayesian framework in a supplementary appendix (Deke, Finucane, & Thal, 2022).
10 Research literature offers conflicting findings on the effectiveness of monetary incentives to improve survey response rates (Sammut et al., 2021). The literature suggests that a guaranteed monetary award is the most effective at boosting survey response rates (Dykema et al., 2011; Royal & Flammer, 2017; Porter & Whitcomb, 2003). However, due to resource constraints, the research team cannot provide a reasonable incentive to each student in the sample. The literature suggests that, generally, incentives of higher monetary value are more effective than those of lower monetary value (DeCamp & Manierre, 2016; Dykema et al., 2011). Additionally, Bosnjak & Tuten (2003) found that higher cash value incentives in a drawing drew higher response rates than lower cash value guaranteed incentives. Based on available literature and available financial resources, the research team has opted to provide a drawing for incentives of higher financial value (i.e., 15 gift cards worth $100 each). CCRC research teams have used gift card drawings as incentives for survey participation on other research projects.