SUPPORTING STATEMENT B:
REQUEST FOR CLEARANCE OF INFORMATION COLLECTION FORMS FOR
“IMPACTS OF A DETAILED CHECKLIST ON FORMATIVE FEEDBACK TO TEACHERS”
October 2014
Submitted to:
U.S. Department of Education
Institute of Education Sciences
555 New Jersey Ave. NW, Rm. 308
Washington, DC 20208

Submitted by:
SEDL
4700 Mueller Blvd.
Austin, TX 78723
Phone: (800) 476-6861
www.relsouthwest.org
This publication was prepared for the Institute of Education Sciences (IES) under contract ED-IES-12-C-00012 by Regional Educational Laboratory Southwest, administered by SEDL. The content of the publication does not necessarily reflect the views or policies of IES or the U.S. Department of Education, nor does mention of trade names, commercial products, or organizations imply endorsement by the U.S. government. The publication is in the public domain. Authorization to reproduce in whole or in part for educational purposes is granted.
Implementation Fidelity Research Questions
Exploratory Research Questions
1. Respondent Universe and Sampling Methods
2. Description of Procedures for the Collection of Information
Data Collection for Principal and Teacher Recruitment
Principal Baseline and Follow-up Survey
Teacher Baseline and Follow-up Survey
3. Description of Procedures for Maximizing Response Rates
4. Description of Tests, Procedures, and Methods
5. Individuals Consulted on Statistical Aspects of the Design and Key Staff
APPENDIX A – Teacher and Principal Survey Instruments
Spring 2015 School Leader Survey
Spring 2016 School Leader Survey
APPENDIX B – Principal and Teacher Recruitment Materials
Wave 1 Principal Recruitment Script – SPRING 2015
Wave 1 Teacher Recruitment Script – SPRING 2015
Wave 2 Principal Recruitment Script – SPRING 2016
Wave 2 Teacher Recruitment Script – SPRING 2016
All Follow-Up Scripts for Principals and Teachers
APPENDIX C – Secondary Data Elements
Teacher and School Administrative Data Variables List
NM TEACH Reflect Data Variables List
Principal and Teacher Contact Information
The U.S. Department of Education (ED) requests clearance for the recruitment materials and data collection protocols under the OMB generic clearance agreement (OMB Number [IES to complete]) for activities related to the Regional Educational Laboratory Program (REL). ED, in consultation with SEDL, is planning a clustered randomized evaluation in New Mexico to test the effectiveness of materials intended to improve the feedback that principals provide in one-on-one conferences to their teachers about their classroom instruction. The study includes an impact analysis and an implementation analysis.
New Mexico Public Education Department (NM PED) staff have identified principal feedback to teachers as an area where New Mexico needs assistance. In particular, NM PED staff have received comments that principals do not feel adequately prepared to provide teachers with feedback about their instructional practices. NM PED has committed to ensuring that every student has equitable access to an effective principal and teacher every day they are in school. Although NM PED recognizes that post-observation conferences are likely an area of the overall evaluation system in need of improvement, it has limited resources and time to devote to this step in the evaluation cycle. Therefore, any potential intervention to address this issue must be easy to implement and require few resources.
This impact study will examine whether an enhanced feedback guide (Conversation Protocol), relative to business-as-usual guidance to principals and teachers, achieves the following:
improves the structure and content of the principal-teacher feedback conversation,
improves perceptions that the feedback delivered to teachers is useful,
better targets guidance to teachers regarding professional development tied to their formal classroom observation scores,
improves the quality of teacher instruction as measured by subsequent formal observation ratings on the state's tool, NM TEACH, and
increases student achievement on state standardized tests.
To examine these outcomes, ED’s contractor will invite all public school principals of charter and non-charter schools in the state to participate in the study. Blocking by school district, a randomly selected half of principals who agree to participate (along with teachers in their school who also agree to participate in the study) will be assigned to the treatment group. The remaining half of principals (along with teachers in their school who also agree to participate in the study) will be assigned to the control group.
In summary, the study will compare principals and teachers in the treatment and control conditions, which are detailed in Table 1.
Table 1. Treatment and Control Conditions

Principals, both groups: NM PED-sponsored professional development for principals offered during the summer, of which 1.5-2 hours were devoted to feedback to teachers. Training is conducted in regional in-person professional development sessions led by the state's contractor, the Southern Regional Education Board (SREB). All principals in the state are required to participate as part of their job requirements.

Principals, treatment group:
List of documents to bring to each feedback conversation
Conversation Protocol, which includes a 24-item checklist to use during each feedback conversation
Link to a 5-minute video in which a principal testifies about her experience using the checklist

Principals, control group:
Business-as-usual guide reminding principals about the "5 stages of feedback" described in NM PED-sponsored principal professional development, to use for feedback conversations

Teachers, treatment group:
List of documents to bring to each feedback conversation
24-item checklist to use during each feedback conversation
5-minute video in which a teacher testifies about her experience using the checklist

Teachers, control group:
Business-as-usual guide: a document that includes a reminder to teachers about their right to feedback within 10 calendar days of a formal observation under the NM TEACH Educator Effectiveness System, and a link to the NM PED-provided FAQ on teacher evaluation (http://ped.state.nm.us/ped/NMTeach_FAQ.html)
ED’s contractor will ask treatment group principals and control group principals to do three things: (1) disseminate the relevant guide (and, where relevant, a link to the video) to all school leaders and to all teachers in their building; (2) use the guide and view the video, where relevant (along with all other school leaders in the building who observe teachers), with 100 percent of the teachers that they formally observe during school year 2015-2016; and (3) complete a baseline survey in fall 2015 and a follow-up survey in spring 2016. ED’s contractor will ask treatment group teachers and control group teachers to do two things: (1) use the guide they received from ED’s contractor (and view the video, where relevant) in the feedback conversations they have with their school leader(s), and (2) complete a baseline survey in fall 2015 and a follow-up survey in spring 2016.
The evaluation will examine the impact of providing a post-classroom observation feedback checklist and a video to principals and teachers on principal, teacher, and student outcomes. To do so, this evaluation will use a cluster RCT design with random assignment at the school level to address the following impact and implementation questions.
Does provision of the feedback checklist and video, relative to the business-as-usual guidance:
improve the content and structure of the post-observation conference, according to principals and teachers?
improve principals’ and teachers’ perceptions of the utility of the post-observation conference?
increase the amount of time it takes to complete the post-observation conference?
cause principals to recommend and teachers to take professional development that is aligned to needs identified in the formal observations?
improve the quality of teachers’ subsequent instructional practices as measured by principals’ formal classroom observation scores?
ED’s contractor will also examine the implementation of the Feedback Checklist relative to the business-as-usual guidance by analyzing responses to survey questions regarding the type of guidance used. The implementation portion of the study will address the following research questions:
How extensively do principals and teachers in the treatment and control groups report using the form of guidance they were assigned?
How extensively do principals and teachers in the treatment and control groups report using the form of guidance they were not assigned (i.e., crossovers)?
ED’s contractor will also perform exploratory analyses examining the association between the enhanced feedback protocol and outcomes for subgroups of teachers and principals. In addition, we will examine whether provision of the treatment guide improves subsequent student performance on state standardized tests. Although the design of the study allows for causal analysis of these questions, because of the limited sample size within New Mexico and the limited time for the treatment or control condition to influence behavior and student achievement, we will likely have insufficient statistical power to detect differences. Nevertheless, these questions have high policy relevance for NM PED, so we include them to provide insights for follow-on studies and for NM PED’s improvement of subsequent iterations of NM TEACH observations and feedback. The exploratory research questions are:
Does the provision of the checklist to principals and teachers increase student achievement relative to business-as-usual guidance regarding feedback?
Is there suggestive evidence that the effects of providing the checklist vary according to the following subgroups of principals and teachers:
extensive users of the checklist (i.e., teachers who have received three versus two formal observations, and principals who have used the checklist in schools with a greater-than-average number of teachers within the given district)
personal beliefs and attitudes about the NM TEACH evaluation system
accountability pressure (as measured by prior year’s school accountability grade)
those principals who have received training on the NM TEACH Observation Rubric
those who work in schools that teachers rate as collaborative and supportive
those with greater qualifications, including more years of experience and certification type
those who teach core subjects versus non-core subjects
those who work in elementary versus middle or high schools
teachers in the top and bottom quartile of the NM TEACH summative effectiveness score distribution
principals in the top and bottom quartile of the NM TEACH School Leader effectiveness score distribution
Beyond subgroup analyses, ED’s contractor will also gather information about the perceptions users have of the treatment checklist and the control guide.
What are principals’ and teachers’ perceptions of the checklist and videos versus business-as-usual feedback guidance? For example, do they report that it is easy to use, burdensome, formulaic, lengthy, useful for creation of professional development plans, and helpful in causing teachers to commit to instructional changes?
What are the main reasons principals report for not using the form of guidance they received (the checklist or the business-as-usual guidance)?
OMB approval is being requested for a multimode data collection and analysis of principal, teacher, and student outcomes in public schools in New Mexico. The project consists of data collection from NM PED and participating principals and teachers who work in New Mexico public schools. Specifically, in this OMB clearance package, ED is requesting clearance for the following data collection approaches:
Recruitment materials for participating principals and teachers
Extant administrative records data collections from NM PED
Two waves of web-based surveys of principals and teachers in treatment and control schools.
ED believes that the data collections for which clearance is being requested represent the bare minimum necessary to assess the impact and implementation of the Conversation Protocol.
The remainder of Part B addresses the following: respondent universe and sampling methods; description of procedures for maximizing response rates; description of tests, procedures and methods; and contact information for statistical consultants and key staff.
This section describes the respondent universe and sampling methods for recruitment. To understand the impact of providing guidance to principals and teachers regarding feedback sessions with teachers following formal classroom observations, ED’s contractor will implement a randomized controlled trial (RCT) evaluation. The recruitment strategy will focus on both New Mexico public school principals and New Mexico public school teachers. ED’s contractor will conduct recruitment in a top-down approach, with principal recruitment preceding teacher recruitment. First, ED’s contractor will send out an introductory email, and then invite via email all principals in public schools governed by the New Mexico Public Education Department (NM PED) to participate in a research study. As of the school year 2012–2013, there were 855 school principals across the elementary, middle, and high school levels.1 There are no additional inclusion criteria for principals recruited into the study. We anticipate that after three rounds of recruitment follow-up to nonresponding principals, about 510 principals will consent to participate in the study.
Next, in schools where principals consent to participate in the study, ED’s contractor will randomly select up to 10 teachers to participate. In schools with fewer than 10 teachers, all teachers will be invited to participate. In schools with more than 10 teachers, ED’s contractor will sample 10 using simple random sampling. Across New Mexico, there are approximately 21,500 FTE public school teachers according to the Common Core of Data. We anticipate that after three rounds of recruitment follow-up to nonresponding teachers, about 5,130 teachers will consent to participate in the study.
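The per-school sampling rule described above (invite all teachers when a school has 10 or fewer, otherwise draw a simple random sample of 10) can be sketched as follows. This is an illustrative sketch, not the study's actual sampling code; the data structure, function name, and seed are assumptions.

```python
import random

def sample_teachers(roster_by_school, cap=10, seed=20150401):
    """Select up to `cap` teachers per consenting school via simple
    random sampling.

    `roster_by_school` maps a school ID to its list of teacher IDs.
    The seed is fixed so the draw is reproducible and auditable.
    """
    rng = random.Random(seed)
    selected = {}
    for school, teachers in roster_by_school.items():
        if len(teachers) <= cap:
            # Fewer than (or exactly) `cap` teachers: invite everyone.
            selected[school] = list(teachers)
        else:
            # More than `cap`: simple random sample without replacement.
            selected[school] = rng.sample(teachers, cap)
    return selected
```

Fixing the seed means the same roster always yields the same invited sample, which simplifies documenting the sampling step for the study record.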
The power analyses to estimate the minimum detectable effect size (MDES) were conducted using the Optimal Design software package, version 3.01 (Spybrook, Bloom, Congdon, Hill, Martínez, and Raudenbush, 2011), with interpolation of results as necessary to account for the non-constant number of schools per district. ED’s contractor drew from prior studies using student achievement and principal and teacher survey data to predict participation rates and the anticipated correlations of the covariates with the outcomes of interest.
The study will examine continuous principal and teacher outcomes, binary teacher and principal outcomes, and continuous student outcomes. We anticipate that our principal sample will be sufficient to detect minimum effect sizes between 0.12 and 0.21 standard deviations for continuous outcomes with 80% power and a 5% Type I error rate, under reasonable assumptions for participation, attrition, and baseline correlation. In addition, the sample of students in these schools with participating principals will be sufficient to detect minimum effect sizes of between 0.09 and 0.14 standard deviations for individual grade level student outcomes, again with 80% power and a 5% Type I error rate, under reasonable assumptions for participation, attrition, and baseline correlation. The sample of teachers and principals will be sufficient to detect minimum effect sizes of between 0.11 and 0.18 standard deviations for continuous outcomes, and 0.15 and 0.24 standard deviations for binary outcomes with 80% power and a 5% Type I error rate, under reasonable assumptions for participation, attrition, and baseline correlation.
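The MDES figures above come from Optimal Design; as a rough cross-check, the standard two-level formula for a cluster RCT with school-level assignment can be computed directly. The sketch below is illustrative only: the multiplier M ≈ 2.8 approximates 80% power at a two-sided 5% significance level when the number of schools is large, and every parameter value shown is an assumption rather than one of the study's actual inputs.

```python
import math

def mdes_cluster(J, n, rho, r2_between=0.0, r2_within=0.0, P=0.5, M=2.8):
    """Approximate MDES for a two-level cluster RCT (schools randomized,
    individuals nested in schools).

    J: number of schools; n: individuals per school; rho: intraclass
    correlation; r2_between / r2_within: variance explained by baseline
    covariates at the school and individual levels; P: fraction of
    schools assigned to treatment; M: power multiplier (about 2.8 for
    80% power at alpha = 0.05, two-sided, with many schools).
    """
    var_between = rho * (1 - r2_between)          # school-level variance
    var_within = (1 - rho) * (1 - r2_within)      # individual-level variance
    return M * math.sqrt(var_between / (P * (1 - P) * J)
                         + var_within / (P * (1 - P) * J * n))
```

As the formula shows, adding schools (larger J) or stronger baseline covariates (larger R-squared terms) shrinks the MDES, which is why the study includes baseline surveys and prior-year administrative measures as covariates.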
ED’s contractor, REL Southwest, will manage data collection and ensure quality and timeliness. ED’s contractor proposes to collect both primary and secondary data. The primary data will consist of Web-based baseline and follow-up surveys for principals and teachers (four total survey instruments). The draft recruitment texts for study participation are included in Appendix B, and the survey instruments (along with the consent language) are included in Appendix A. The secondary data consist of formal observation scores of teacher practice by school leaders, as well as principal, teacher, and school characteristics and student achievement scores collected by NM PED. The request for extant data from NM PED is included in Appendix C. The contents of Appendices A, B, and C are also included in Statement A of this OMB package.
ED’s contractor will contact all principals in public schools governed by the New Mexico Public Education Department (NM PED) to invite them to participate in the study. Recruitment will be facilitated by clear materials and communication about the nature of the study and the requirements for participation, including complying with random assignment.
The following sections describe more fully each step in the recruitment process:
(1) Obtain principal and teacher email addresses from NM PED.
Starting in January 2015, NM PED will provide to ED’s contractor its existing file of principal and teacher email addresses for all those who worked in school year 2014-2015. This list will be cleaned in February and March 2015, in advance of recruitment in April 2015. ED’s contractor will also obtain from NM PED the official start and end dates of each NM school district’s school year.
(2) Recruit principals and administer baseline principal survey.
NM PED will send an introductory email to all principals in the first week of April 2015 (or after OMB approval, if approval occurs after April 2015) describing REL SW, explaining the study, endorsing it, and encouraging principals to participate. REL SW will then send follow-up emails to non-responding principals up to three times (first week of May, third week of May, and first week of June), seeking their participation in the study and providing a link and password to the baseline principal survey. We will also mail the recruitment letter to the principal’s school up to two times. If a principal has not responded to any of the three emails or two letters, that principal will be considered out of the study. The principal recruitment email is included in Appendix B.
(3) Recruit teachers and administer baseline teacher survey (two weeks after step 2).
During April-May 2015, while the teacher and principal rosters provided by NM PED are being cleaned, ED’s contractor will use simple random sampling to select up to 10 teachers teaching in the 2014-15 school year in each school to invite to participate in the study. At schools with fewer than 10 teachers, all teachers will be selected for recruitment. After the principal of a school has consented to participate in the study, the introductory study recruitment email will be sent to teachers in that school in the third week of April, asking them to participate in the study. Up to three emails will be issued before the close of the school year, and up to two letters will be sent to the schools, at which point non-respondents will be excluded from the study. The recruitment email will include a link and password to the baseline teacher survey. The teacher recruitment emails are included in Appendix B.
(4) Conduct random assignment of principals.
Once the window for study recruitment has closed in early summer 2015, ED’s contractor will randomly assign 50% of participating principals to the treatment group using the following blocked random assignment procedures. Using data from NM PED about 2014-2015 school characteristics, ED’s contractor will randomize participating schools within each district, stratifying schools by school level (elementary, middle, and high school). Prior to randomization, ED’s contractor will also examine the distributions of school size and racial/ethnic composition (high and low percent Hispanic and Native American) among participating schools to determine whether any further advantage may be gained by stratifying on these characteristics as well.2 In cases where only a single school from a district is participating, ED’s contractor will combine multiple districts and randomize within matched districts. Randomization will occur in the first week of August 2015. By definition, teachers and students whose principals are in the treatment group will belong to the treatment group.3
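The blocked assignment procedure in step (4) can be sketched as follows, assuming hypothetical field names and a seed. Pooling single-school districts into matched blocks is described in the text but not implemented in this sketch.

```python
import random
from collections import defaultdict

def assign_treatment(schools, seed=20150801):
    """Blocked random assignment: within each district-by-school-level
    stratum, assign half of the participating schools to treatment.

    `schools` is a list of (school_id, district, level) tuples; the
    field names and seed are illustrative. Odd-sized blocks round the
    treatment count down.
    """
    rng = random.Random(seed)

    # Group participating schools into strata (district x school level).
    blocks = defaultdict(list)
    for school_id, district, level in schools:
        blocks[(district, level)].append(school_id)

    # Shuffle each block and split it in half.
    assignment = {}
    for members in blocks.values():
        rng.shuffle(members)
        cut = len(members) // 2
        for sid in members[:cut]:
            assignment[sid] = "treatment"
        for sid in members[cut:]:
            assignment[sid] = "control"
    return assignment
```

Blocking this way guarantees that treatment and control schools are balanced across districts and school levels by construction, rather than only in expectation.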
ED’s contractor will collect both primary and secondary data to understand the impact of providing the Conversation Checklist to principals and teachers relative to the business-as-usual control guide. The primary data will consist of two baseline surveys in spring 2015 (one to teachers and one to principals) and follow-up surveys in spring 2016. The secondary data consist of (a) formal observation scores by school leaders of teachers’ classrooms throughout the 2014-15 and 2015-16 school years and (b) principal, teacher, and school characteristics. Table 2 summarizes the data collection, including the data sources, data elements, and timeline. The following section describes the data types in more detail.
Table 2. Data Sources, Data Elements, and Timeline for Data Collection

| Data Source | Type of Data | Timeline for Data Collection |
| --- | --- | --- |
| Principals | Baseline survey | April 15, 2015 – June 15, 2015 |
| Principals | Follow-up survey | April 15, 2016 – June 15, 2016 |
| Teachers | Baseline survey | April 30, 2015 – June 1, 2015 |
| Teachers | Follow-up survey | April 15, 2016 – June 1, 2016 |
| NM PED | Principal and teacher contact information | April 1, 2015; July 1, 2015; April 1, 2016 |
| NM PED | Student-level achievement data | June 2015, June 2016 |
| NM PED | School report card grades | July 2015, July 2016 |
| NM PED | School administrative data | January 2015, December 2015, December 2016 |
| NM PED | NM TEACH observation rubric scores from REFLECT for 2014-15 | June 2015 |
| NM PED | NM TEACH observation rubric scores from REFLECT for 2015-16 | November 2015, February 2016, June 2016, December 2016 |
ED’s contractor will collect two waves of survey data from both principals and teachers. To increase the precision of the treatment effect estimates, ED’s contractor has designed the baseline (wave 1) and post-treatment (wave 2) surveys to contain identical items within respondent type. This allows us to include baseline responses as a predictor in our regression models for each corresponding outcome of interest. The only exception to this rule is in the wave 2 surveys, to which ED’s contractor added a battery of questions for treatment group respondents and control group respondents to understand their use (or not) and perceptions of the Conversation Checklist.
The baseline and follow-up principal survey instruments contain eight sections that cover the following topics (see Appendix A for the full survey instruments):
Your Position and Job Responsibilities: type of school leader, years of experience as school leader
Your Perceptions of the Quality Effectiveness of Your School’s Teaching Staff This Year: subjective rating of the quality of teachers
Your Practices When Observing Teachers This Year (SY 2014-2015) for Evaluation Purposes: number of teachers observed formally, length of post-observation conference, frequency of informal observations, practices and perceptions of the post-observation conference, (the follow-up survey includes questions about whether the treatment and control guides were received, used, frequency of use, and perceptions about the guide)
Professional Development Offered to Teachers This Year (SY 2014-2015): type of professional development offered, quality of professional development, alignment of professional development options with NM TEACH Observation Rubric
Training You Received About NM TEACH to Date: types of training received, including observation rubric, software, interpreting ratings, providing feedback, utility of training
Your Personal Views of NM TEACH Educator Effectiveness System: subjective rating of evaluation system
Teacher Retention and Improvement: number of teachers who have left the school, reason for dismissal
Optional Comments from You: comments on the observation rubric, professional development available, and training on teacher evaluation system.
The baseline and follow-up teacher survey instruments contain seven sections that cover the following topics (see Appendix A for the survey instruments):
Your Position and Job Responsibilities: type of teacher, teaching arrangement, grades taught, years of experience
Your Experiences This Year (SY 2014-2015) of Formal Observations for Evaluation Purposes: number of formal observations, length of post-observation conference, practices and perceptions of the post-observation conference, ratings on two of the four domains in the observation rubric, self-rating on two of four domains in the observation rubric (the follow-up survey includes questions about whether the treatment and control guides were received, used, frequency of use, and perceptions about the guide)
Professional Development You Received This Year (SY 2014-2015): type of professional development taken, quality of professional development, availability of and support for professional development
Your Perceptions of the Quality Effectiveness of Your School’s Leadership Last Year: subjective rating of the quality of school leadership
Your Personal Views of NM TEACH Educator Effectiveness System: subjective rating of evaluation system
Your NM TEACH Summative Evaluation Score Rating: overall rating, feedback received from principal after overall rating
Optional Comments from You: comments on the observation rubric, professional development available, and evaluation system.
All student-level information will be collected through extant data from NM PED. To reduce burden on survey respondents, all respondent and school characteristics will be collected from NM PED’s secondary data sources. ED’s contractor will provide a list of requested data fields to NM PED prior to the first data collection request. NM PED will also provide school report card grades for school years 2014-2015 (baseline) and 2015-2016 (outcome).4 In addition, NM PED will provide ED’s contractor with classroom observation-level scores from the NM TEACH Observation rubric. Appendix C contains a list of the data elements that will be obtained from NM PED.
The impact and exploratory analysis measures will be constructed from survey questions and administrative data sources, as described in Table 3. The following items are outcome measures derived from the principal and teacher surveys:
Good Practices Index: this index measures whether the practices recommended in the Conversation Checklist are implemented in the feedback conference. There will be one good-practices index for principals and one for teachers. We anticipate that in each index, the items will include whether the conversation covered at least one practice that the teacher does well, identified at least one challenge, prioritized next steps, ended on a positive note, and mutually developed next steps.
Feedback Conversation Perceptions Index: We anticipate that this index will be derived from the extent to which principals and teachers agree with statements regarding the feedback conference: the principal gave specific feedback, the principal gave actionable feedback, the teacher had an equal chance to speak during the conversation, the principal or teacher feels positive about the feedback provided, the principal or teacher experienced a high level of conflict during the feedback conference, and the principal found it difficult to provide criticism.
PD is tailored to needs identified in formal classroom observation: this measure will be based on a survey question that asks the teacher whether they were recommended to take PD as a result of a rating in each of the five Domain 2 and five Domain 3 items of the NM TEACH Observation Rubric.
Fraction of targeted professional development that was recommended by the principal and taken by the teacher: This outcome measure will be based on teacher survey responses about taking professional development that was recommended. Of all of the recommended PD, we anticipate estimating the fraction that was taken by the teacher in the 2015-2016 school year (including PD taken in person and online).
Table 3. Impact and Exploratory Analysis Outcome Variable, Research Question, and Data Source Mapping

| Outcome | Research Question | Data Source |
| --- | --- | --- |
| *Impact outcomes* | | |
| Good Practices Index (measure of post-observation feedback conference content and structure) | RQ1 & RQ9 | Principal survey, waves 1 and 2: Q12b, Q12c, Q12d, Q12e, Q12f, Q12g, Q12h, Q12j, Q12m, Q12n, Q12o, Q12u; teacher survey, waves 1 and 2: Q11b, Q11c, Q11d, Q11e, Q11f, Q11g, Q11h, Q11j, Q11p, Q11q, Q11s, Q11w |
| Feedback Conversation Perceptions Index (measure of the perceived quality of feedback) | RQ2 & RQ9 | Principal survey, waves 1 and 2: Q12a, Q12k, Q12l, Q12p, Q12q, Q12r, Q12s, Q12t; teacher survey, waves 1 and 2: Q11a, Q11i, Q11k, Q11l, Q11m, Q11n, Q11o, Q11r, Q11t, Q11u |
| Average number of minutes it takes to complete the post-observation conference | RQ3 & RQ9 | Principal survey, waves 1 and 2: Q10; teacher survey, waves 1 and 2: Q9 |
| PD is tailored to needs identified in formal classroom observation | RQ4 & RQ9 | Teacher survey, wave 1: Q12, Q14; wave 2: Q15 |
| Fraction of targeted professional development that was recommended by the principal and taken by the teacher | RQ4 & RQ9 | Teacher survey, wave 1: Q13; wave 2: Q15 |
| Domain-level Observation Rubric score | RQ5 & RQ9 | REFLECT |
| Observation proficiency level | RQ5 & RQ9 | REFLECT |
| *Exploratory questions* | | |
| School-level student achievement scores | RQ8 | NM PED administrative data |
| Effects by subgroup | RQ9 | Teacher survey, wave 1: Q1–Q7, Q16–Q22; wave 2: Q1–Q7, Q17–Q24; principal survey, waves 1 and 2: Q1–Q6, Q15–Q22; NM PED administrative data |
Note: REFLECT is the system designed by Teachscape for NM PED that stores observation rubric scores and observation narratives.
For cases where multiple survey items are the source of the outcome measure, we will conduct exploratory factor analyses to generate an index of the given construct. For each construct, we anticipate testing whether the survey items measure the latent variable described in Table 3. The precise number of items retained in each scale will depend on each item’s factor loading. As an initial step, we will use a scree plot to visually display the eigenvalues, and we anticipate retaining factors with a minimum eigenvalue greater than or equal to one. However, we will not apply a strict test, and may, for substantive reasons, retain factors with eigenvalues of less than one or drop factors with eigenvalues of greater than one. We will then use oblique rotation and varimax rotation to determine which items load most highly onto which of the retained factors. We intend to retain only factors for which we have an underlying theory or hypothesis for the apparent latent variable measured by the factor, and we will derive our theories from the theory of action for the Conversation Checklist. Namely, we will test whether the feedback checklist, as intended, makes conversations seem more mutual (rather than directed from principals to teachers), makes feedback more specific, and leads to actionable next steps. Operationally, we will generate the scales by averaging the respondent-level sums of item-level responses derived from the relevant survey items. A coefficient alpha of 0.70 or greater is generally considered an acceptable level of internal consistency within a given factor, with lower values indicating a potential lack of adequate reliability.
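The eigenvalue-based retention rule and the coefficient alpha check described above can be illustrated with a minimal numpy sketch. The function names are ours, and a production analysis would use a dedicated factor-analysis routine with rotation; this only shows the two computations.

```python
import numpy as np

def eigenvalues_for_scree(items):
    """Eigenvalues of the inter-item correlation matrix, sorted
    descending for a scree plot. Factors with eigenvalues >= 1 are
    the default candidates to retain.

    `items` is an (n_respondents, n_items) array of item responses.
    """
    corr = np.corrcoef(items, rowvar=False)
    return np.sort(np.linalg.eigvalsh(corr))[::-1]

def cronbach_alpha(items):
    """Coefficient alpha for a scale formed by summing the item
    columns; values of 0.70 or higher are conventionally acceptable."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

Because the eigenvalues of a correlation matrix sum to the number of items, a first eigenvalue well above one signals a dominant common factor of the kind the indices above are meant to capture.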
Meanwhile, the following outcome measures will be collected from NM PED, either from the REFLECT teacher observation rubric system or from administrative data:
NMTEACH observation rubric scores. These are item-level measures from the REFLECT system on each of the four domains of the NMTEACH observation tool, along with the proficiency levels associated with those measures. We will examine changes in domain-level scores and proficiency ratings from the prior year, as well as changes over the course of the year in which the Checklist is available.
Student achievement on tests. We will estimate whether the treatment raises average school-wide achievement on standardized tests. We will do this by modeling student-level achievement data hierarchically to account for the clustering of students within schools and to estimate the effect of the guide on average student achievement according to whether students are in treatment or control group schools. The school-level student achievement measures will include separately modeled math, ELA, and science PARCC scores. We will examine the difference in achievement between students whose teachers are and are not using the Conversation Checklist in the feedback conference, controlling for prior achievement scores and relevant student and school covariates. We will examine differences overall at the school level, with sub-analyses by grade.
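A minimal sketch of the clustered estimation idea follows, using simulated data. It substitutes ordinary least squares with school-clustered (CR0 sandwich) standard errors for the full hierarchical model described above; the effect size, cluster counts, and variance components are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(1)
n_schools, n_students = 40, 25

# School-level random assignment; students nested within schools
treat = np.repeat(rng.permutation([0, 1] * (n_schools // 2)), n_students)
school = np.repeat(np.arange(n_schools), n_students)
school_effect = np.repeat(rng.normal(0, 0.3, n_schools), n_students)
prior = rng.normal(0, 1, n_schools * n_students)  # prior achievement covariate
score = 0.15 * treat + 0.6 * prior + school_effect + rng.normal(0, 0.7, treat.size)

# OLS of achievement on intercept, treatment indicator, and prior achievement
X = np.column_stack([np.ones_like(score), treat, prior])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)

# Cluster-robust (CR0) standard errors, clustered at the school level
bread = np.linalg.inv(X.T @ X)
resid = score - X @ beta
meat = np.zeros((3, 3))
for g in range(n_schools):
    m = school == g
    u = X[m].T @ resid[m]
    meat += np.outer(u, u)
se = np.sqrt(np.diag(bread @ meat @ bread))
print(round(beta[1], 2), round(se[1], 2))
```

Clustering the standard errors at the school level reflects that randomization occurs by school, so the effective sample size for the treatment contrast is the number of schools rather than the number of students.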
School report card data. Additionally, the academic component of the school report card measure will be used as a complementary impact measure for student achievement observed at the school level. NM PED will provide the continuous-scale academic composite measure currently in use in the state, which comprises 90 percent of the final school report card grade.
ED is committed to obtaining complete data for this evaluation. The impact analysis questions for the evaluation rely heavily on administrative data. ED’s contractor anticipates a 95-percent response rate from NM PED on student, teacher, and principal measures in the administrative data, as well as on teacher observation scores. A key to achieving complete administrative data is tracking the data components from NM PED through e-mail and telephone contact with the appropriate parties to resolve issues of missing or delayed data files. ED’s contractor has a strong working relationship with representatives from NM PED. All administrative data files will be reviewed for consistency and completeness. If a data file has too many missing values, or if an instrument in the implementation study has too few items completed to be counted as a response, ED’s contractor will seek to obtain more complete responses by e-mail or phone.
Based on its prior experience administering surveys to principals and teachers in a variety of schools, districts, and states, ED’s contractor expects the response rate for the baseline survey to be 85 percent among those principals and teachers who consent to participate in the study. We will contact non-responding principals and teachers up to five times to encourage participation. Three follow-up email reminders will be sent to individual respondents if responses are not obtained for Web-based surveys. As the fourth and fifth contact attempts, we will send a reminder letter to their school (twice), after which we will deem the person a non-respondent and exclude them from the study.
In addition, a number of steps will be taken to maximize response rates. For example, sampled respondents will receive advance communications that explain the study, introduce REL Southwest, provide an assurance of confidentiality, and encourage them to participate as a way to help NM PED refine and improve NM TEACH. Respondents also will be given a contact number to reach ED’s contractor with questions. Finally, respondents will receive an incentive for participating in the study:
$25 gift card for principal for completing the baseline survey, and $25 gift card for completing the follow-up survey
$25 gift card for teachers for completing the baseline survey, and $25 gift card for completing the follow-up survey
Finally, ED’s contractor will consider the most recent statistical literature and work with IES to determine the most appropriate method for handling missing data. The contractor will use appropriate analytic methods to account for missing data, considering options such as complete-case analyses with regression adjustment, maximum likelihood methods, non-response weights, or fitting the models specified above in a Bayesian hierarchical framework. Upon obtaining the data, ED’s contractor will examine the extent of missing data overall and by treatment group.
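The first diagnostic step named above, examining the extent of missing data overall and by treatment group, can be sketched as follows. The sample size and the roughly 10 percent missing-at-random rate are hypothetical values for illustration only; the actual rates will come from the collected survey and administrative files.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500
treat = rng.integers(0, 2, n)          # hypothetical treatment indicator
outcome = rng.normal(size=n)           # hypothetical outcome measure

# Introduce roughly 10% missingness at random (assumed rate for illustration)
outcome[rng.random(n) < 0.10] = np.nan

# Missingness overall and separately by treatment group
miss_overall = np.isnan(outcome).mean()
miss_by_group = {g: np.isnan(outcome[treat == g]).mean() for g in (0, 1)}
print(round(miss_overall, 3), miss_by_group)
```

A large gap in missingness between the treatment and control groups would signal potentially non-ignorable non-response and would push the analysis toward the more robust options listed above, such as non-response weights or a Bayesian hierarchical treatment of the missing values.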
To measure the impact of the Conversation Checklist on the structure and content of the feedback conference, the quality of the feedback provided by principals and received by teachers, and the professional development that is recommended by principals and completed by teachers, ED’s contractor will administer principal and teacher surveys. The survey items have been reviewed by RAND colleagues who were formerly employed as teachers and by colleagues with content expertise. ED’s contractor also conducted cognitive interviews with two retired New Mexico teachers and five retired principals, asking the participants to complete the survey and comment on the clarity of the questions, whether the questions assess the intended constructs, whether the tone of the questions is appropriate for the audience, and whether the length of the survey is suitable. The contractor mailed each participant hard-copy materials several days in advance of a scheduled call, asking them to complete the survey and read the materials before the call; during the 1.5- to 2-hour phone calls, the contractor solicited detailed feedback on the clarity, tone, and wording of these documents. After each call, participants were asked to return the annotated hard copies in a pre-paid return envelope. Based on feedback from this pilot test, ED’s contractor has refined and clarified the survey questions and the feedback protocol. For example, the contractor learned that tenure is not a term widely understood and agreed upon in New Mexico, and that it is set by the state rather than by districts; some of the tenure questions have therefore been dropped from the survey. The contractor also changed wording that the retired principals felt sounded negative or punitive in tone toward teachers, and ensured that the list of documents provided to participants in the treatment group is accurate and described in ways that principals and teachers would understand.
ED’s contractor consulted Lou Mariano, Senior Statistician and Director of the Statistics Unit at the RAND Corporation, concerning the methodology, study design, and data collection approach and burden.
The following individuals were consulted on the statistical, data collection, and analytic aspects of the principal feedback evaluation study through REL Southwest’s Technical Working Group (TWG):
Dan Goldhaber, Ph.D.
Director, CALDER (National Center for Analysis of Longitudinal Data in Education Research)
Vice President, American Institutes for Research (AIR)
Director, Center for Education Data & Research (CEDR), University of Washington Bothell
Co-Editor, Education Finance and Policy
3876 Bridge Way N, Suite 201
Seattle, WA 98103
Ph: 206-547-1562
Fax: 206-547-1641
E-mail: dgoldhab@uw.edu
Geoffrey Borman, Ph.D.
Professor of Education, University of Wisconsin—Madison
Deputy Director of the University of Wisconsin's Predoctoral Interdisciplinary Research Training Program
Senior Researcher, Consortium for Policy Research in Education
348 Education Building
1000 Bascom Mall
Madison, WI 53706-1326
Ph: 608-263-3688
Fax: 608-265-3135
E-mail: gborman@education.wisc.edu
Johannes M.(Hans) Bos, Ph.D.
Vice President and Program Director, International Development, Evaluation, and Research (IDER) Program
American Institutes for Research
2800 Campus Drive, Suite 200
San Mateo, CA 94403
Ph: 650-843-8100
Fax: 650-843-8200
E-mail: jbos@air.org
W. Steven Barnett, Ph.D.
Board of Governors Professor and Director of the National Institute for Early Education Research
Rutgers University
73 Easton Avenue
New Brunswick, NJ 08901
Ph: 848-932-4350 x23132
Fax: 732-932-4360
E-mail: sbarnett@nieer.org
See attachment
1 While the precise number of schools may shift minimally by the time the principal recruitment takes place in August 2015, this change is not expected to impact our statistical power calculations.
2 Given that all non-charter principals in the state will be recruited for the study, stratification is not necessary to produce a representative sample. Randomizing within district optimizes power. Among small districts, stratifying by school level guards against treatment and control schools being confined to mutually exclusive school levels within a district. Given the homogeneity of within-district school composition, ED’s Contractor does not expect further stratification by size or race/ethnicity to be advantageous, but will allow for that option if it reduces the potential for imbalances in treatment assignment on these school characteristics across the participating schools.
3 If a principal moves away from the school, we will assign the new principal to the same group as the previous principal, and send them the relevant guide. The regression analyses will control for teacher and principal mobility.
4 School grades are computed on a continuous scale from 1 to 100, and are publicly reported annually.