Evaluation of the Comprehensive
Technical Assistance Centers
OMB Clearance Request for Data Collection Instruments
Part B: Supporting Statement for Paperwork Reduction Act Submission
December 17, 2014
Prepared for:
U.S. Department of Education
Contract No. ED‐IES‐13‐C‐0059
Prepared by:
IMPAQ International
INTRODUCTION
This document has been prepared to support the clearance of data collection instruments for
the National Evaluation of the Comprehensive Technical Assistance Centers. The Institute of
Education Sciences (IES) within the U.S. Department of Education (ED) is conducting this
evaluation. In the introduction to this supporting statement, we describe the
Comprehensive Technical Assistance Centers program, the evaluation questions, and the study
design. The remaining sections of this document respond to specific instructions of the Office of
Management and Budget (OMB) for the preparation of a supporting statement.
This document describes a request for clearance of six data collection instruments for phase
1 of the evaluation: 1) Design-focused Interview Guide for Center Staff, 2) Implementation-
focused Interview Guide for Center Staff, 3) Interview Guide for Technical Assistance (TA)
Recipients, 4) Center Staff Survey, 5) TA Recipient Survey, and 6) TA Event Observation Guide.
A separate phase 2 proposal will be submitted at a later date for clearance of outcomes-
focused data collection instruments, including interview protocols for Comprehensive Center
staff and TA recipients. The outcomes-focused protocols and their related burden hours will be
submitted to OMB as a phase 2 package after the first data collection site visits are complete.
The Comprehensive Technical Assistance Centers
Title II of the Educational Technical Assistance Act of 2002 (ETAA, Section 203)[1] authorized the
Comprehensive Center (CC) Program, a discretionary grant program establishing technical
assistance centers. The CC grants were most recently awarded in 2012, to "provide technical assistance to State
educational agencies (SEAs) that builds their capacity to support local educational agencies
(LEAs or districts) and schools, especially low-performing districts and schools; improve
educational outcomes for all students; close achievement gaps; and improve the quality of
instruction" (77 FR 33564).[2]
In 2012, the Department of Education awarded new five‐year grants to 15 Regional Centers and
7 Content Centers under the Comprehensive Centers program. The Regional Centers each serve
one to seven U.S. states, territories, and possessions. They provide technical assistance (TA)
that builds the capacity of SEAs to implement, support, scale up, and sustain initiatives that
help districts and schools improve student outcomes. The Regional Centers focus their work on
seven Federal priority areas:
1. Implementing college‐ and career‐ready standards and aligned, high‐quality
assessments for all students;
2. Identifying, recruiting, developing, and retaining highly effective teachers and leaders;
3. Turning around the lowest‐performing schools;
4. Ensuring the school readiness and success of preschool‐age children and their successful
transition to kindergarten;
[1] http://www2.ed.gov/programs/newccp/legislation.html
[2] https://www.federalregister.gov/articles/2012/06/06/2012-13735/applications-for-new-awards-comprehensive-centers-program#h-4
5. Building rigorous instructional pathways that support the successful transition of all
students from secondary education to college and careers without the need for
remediation;
6. Identifying and scaling up innovative approaches to teaching and learning that
significantly improve student outcomes; and
7. Using data‐based decision‐making to improve instructional practices, policies, and
student outcomes.
The Content Centers supply the Regional Centers and SEAs with in-depth content knowledge
and expertise through information, publications, tools, and specialized technical assistance.
The 7 Content Centers are:
1. Center on Standards and Assessments Implementation
2. Center on Great Teachers and Leaders
3. Center on School Turnaround
4. Center on Enhancing Early Learning Outcomes
5. Center on College and Career Readiness and Success
6. Center on Building State Capacity and Productivity
7. Center on Innovations in Learning
The National Evaluation of the Comprehensive Technical Assistance Centers
The National Evaluation is charged with examining and documenting how the individual CCs
intend to build SEA capacity (theories of action) and what types of activities they actually
conduct to build capacity. It will also explore and document the outcomes of the capacity
building efforts. It is designed to build on the previous evaluation of the CCs, which
documented the type, extent, and quality of services provided.
Evaluation Questions
The evaluation will address broad questions in three areas:
Program Design: How did the CCs design their work? In addressing this evaluation
question, we seek to identify how the CCs designed their work as TA providers, including
the underlying theories of action driving the work. The evaluation will seek to surface
the theories of action and definitions of capacity building employed by CCs, as well as
describe CCs’ plans for assessing the needs of their constituencies and developing TA
work plans to address those needs.
Program Implementation: How did the CCs operate? In addressing this evaluation
question, we seek to identify the various strategies CCs used to build capacity, describe
the characteristics of those strategies, and document the extent to which CCs
implemented the TA as planned. We will also seek to identify the common challenges
and barriers CCs faced in building capacity, and ways they met those challenges.
Program Outcomes: What was the result of the CCs’ work? In addressing this
evaluation question, we seek to identify the extent to which CCs achieved their goals
and objectives, particularly as they relate to building their constituents’ capacity. We
will also explore the extent to which outcomes aligned with and supported the CCs'
theories of action, and identify factors that may have contributed to CCs' success (or
failure) in achieving expected outcomes.
Focus on Two Federal Priority Areas
To gather data in depth rather than breadth, the evaluation will limit data collection on the
implementation and outcomes questions to two of the seven Federal priority areas:
(1) identifying, recruiting, developing, and retaining highly effective teachers and leaders, and
(2) ensuring the school readiness and success of preschool-age children and their
successful transition to kindergarten. All of the Regional Centers have implemented projects
and strategies that address the effective teachers and leaders priority, so it clearly represents
an issue of nationwide focus. It is an area where most SEAs have significant needs for technical
assistance and capacity building, as many are newly moving into a role in which they are
choosing and implementing educator evaluation systems or supporting districts and schools as
they hire and evaluate their professional staff. This priority is also tied to school reform efforts
and large federal funding streams such as the Race to the Top initiative, the School
Improvement Grants, and the Teacher Incentive Fund.
Early learning is a high‐profile issue, recently gaining renewed attention from SEAs, the
Department of Education, and President Obama. States vary in the attention and funding they
have historically given to early childhood education, from universal full‐day prekindergarten in
some states to no state‐supported pre‐K at all in others. Research evidence is clear, however,
that high-quality early education leads to better student outcomes, prompting states to
increase their focus on this issue. Given the overarching nature of the Great Teachers and
Leaders priority, and the recent policy focus on early learning efforts, we believe that focusing
on these two priorities will allow us to learn about how CCs developed SEA capacity in these
two areas and what difference it made.
Data Sources
Data collection for this study will consist of surveys of TA recipients, interviews of SEA staff,
surveys and interviews of CC staff, and observations of TA events.
TA recipient surveys
Purpose: To gather information about TA received in selected priority areas from the
CCs, understand the actions resulting from participating in TA, and examine outcomes
related to that TA
Sample: All SEA and possibly district staff who received TA from the CCs in the selected
priority areas, along with Regional Center staff receiving TA from Content Centers
Timing: Once yearly, beginning in the first quarter of 2015 (following OMB approval)
CC staff and SEA interviews
Purpose: To gather information about the Centers’ capacity building efforts, theories of
action, program implementation, and outcomes;
Sample: A purposeful sample of Center Directors, Managers, Evaluators, and TA staff;
Recipients of CC TA, including SEA and possibly LEA staff, and other CCs;
Timing: In coordination with site visits (projected to occur in Q2 2015).
CC staff surveys
Purpose: To gather information from multiple CC TA staff about the nature, successes
and challenges of their work building SEA capacity or Regional Center capacity, and to
help identify high‐leverage projects for more detailed study;
Sample: All Center staff providing TA;
Timing: Once yearly, beginning in the first quarter of 2015 (following OMB approval).
TA event observations
Purpose: To obtain detailed data about the strategies that CCs used to support capacity
building and achieve planned outcomes;
Sample: Observable services or events that are planned for the selected priority areas in
profiled projects, or projects that are potential profiled projects;
Timing: In coordination with site visits if possible, except for virtual events such as
webinars, which will be observed as they occur.
Evaluation reports
The evaluation will produce four reports. The first report will be an interim report focusing on
how the CCs designed their work as technical assistance providers. The report will describe the
CCs’ underlying theories of action and definitions of “capacity building,” and explain how the
Centers assessed their constituencies’ needs and developed work plans to address those needs.
This report will be available in the first quarter of 2016.
Two interim summative reports (projected date: first quarter 2017) will be produced to address
the program implementation and outcomes questions, one report for each of the two selected
priority areas. These reports will demonstrate how and to what extent the Comprehensive
Technical Assistance Center program has built state capacity in these priority areas. The reports
will include descriptions of the strategies used to build SEA capacity, common challenges faced
and ways the CCs sought to address them, the extent to which CCs achieved their goals and
objectives, factors that may have contributed to success (or failure) in achieving expected
outcomes, and the extent to which CCs’ outcomes aligned with and supported their theories of
action. The interim summative reports will also include six profiles of multi‐year projects, which
will be selected and analyzed to illustrate how the CCs have successfully worked to build
capacity related to project design and planning, implementation, and outcomes.
A final report will be produced in September 2018. The final report will summarize findings
documented in the interim reports and update findings based on new data. The report shall
provide a full summary of lessons learned on the CCs’ efforts to build capacity.
DESCRIPTION OF STATISTICAL METHODS (PART B)
1. Describe the potential respondent universe (including a numerical estimate) and any
sampling or other respondent selection method to be used. Data on the number of
entities (e.g., establishments, state and local government units, households, or persons)
in the universe covered by the collection and in the corresponding sample are to be
provided in tabular form for the universe as a whole and for each of the strata in the
proposed sample. Indicate expected response rates for the proposed sample. Indicate
expected response rates for the collection as a whole. If the collection had been
conducted previously, include the actual response rate achieved during the last collection.
As described in the introduction above, the study team has purposively selected two of the
seven Federal priority areas of the 22 Comprehensive Centers (CCs), and proposes to include
the universe of CC staff and TA recipients who are heavily involved with CC activities in those
priority areas. See Exhibit 1 for a more detailed description of sampling by instrument.
The two selected Federal priority areas are
1. Identifying, recruiting, developing, and retaining highly effective teachers and leaders,
and
2. Ensuring the school readiness and success of preschool‐age children and their successful
transition to kindergarten.
The selection of two priority areas in no way implies that the Department has a preference for
these areas over others, or that the CCs or states should shift the focus of their efforts to these
areas. Rather, this narrowing of focus allows us to target our resources in such a way that we
are able to learn more about specific capacity‐building activities and outcomes.
The target population, which includes the 22 CCs and their state education agency constituents,
is small. Furthermore, each CC is unique, as it serves a different set of constituents with
different capacity building needs. Drawing a sample from within this small population could
omit the work of some CCs entirely. Focusing on only two of the seven Federal priorities,
however, means that our data collection efforts will target projects related to these priorities,
thus reducing burden on respondents. Therefore, in the interest of data quality with this small
population, statistical methods to identify a probability sample from a larger respondent
universe will not be used.
Project Profiles
There is one additional sample selection component within this study. We will profile a small
number of projects in depth to illustrate how capacity building can be implemented. Profiles of
up to three successful projects will be developed within each of the selected Federal priority
areas. We will employ a purposeful selection process to identify projects for these profiles as
follows:
Through the 2015 surveys and the first data collection site visits (Q2 2015), the evaluation
team will collect information from CC staff about the projects within the selected priority
areas that might be appropriate for project profiles. These may be multi‐year, high‐
leverage projects that serve multiple state departments of education and/or that
address critical capacity needs. One project per key priority area will be identified for
consideration at each Regional Center and at the relevant Content Centers.
Site visitors will conduct observations of events delivered as part of these projects
during the site visits if feasible given cost and availability of staff.
In mid‐2015 the evaluators will identify the six most promising projects based on:
o State department of education feedback, via the surveys and interviews, on the
effectiveness or outcomes of the projects, and
o Evidence from CC staff interviews as to the projects’ progress in meeting their
benchmarks, goals, and objectives.
Evaluators will gather details on these projects through documents and continued interviews
and observations through the first quarter of 2017 (see timeline in Exhibit 1). Stakeholders
outside of the CCs and state education agencies, such as partner organizations and school
districts, may be considered for additional interviews if needed to develop a thorough
understanding of the implementation and outcomes of a given project.
2. Describe the procedures for the collection of information, including:
Statistical methodology for stratification and sample selection.
Estimation procedure.
Degree of accuracy needed for the purpose described in the justification.
Unusual problems requiring specialized sampling procedures, and
Any use of periodic (less frequent than annual) data collection cycles to reduce
burden.
The data collection procedures are discussed below. Exhibit 1 includes each instrument, its
purpose, its administration time, its estimated sample, and the research questions addressed.
All instruments are on yearly data collection cycles except for the observations, which will take
place on an ongoing basis beginning in the second quarter of 2015, or upon OMB approval, as
relevant observable activities are identified.
Exhibit 1. Instruments, Purpose, Timing, Sample, and Research Questions

Instrument 1: Design-focused interviews with CC staff
Purpose: Understand design/theory of action/capacity building model
Administration timing: Q2 2015
Estimated sample: All CCs' leadership staff; estimated total of 88 respondents (average of 4 per CC)
Major research questions addressed: (1) How did CCs define capacity building? Did their definitions change over time? If so, how? (2) What theories of action did CCs use to guide their general capacity-building work? Did the theories change over time? If so, how? (3) How did CCs assess the needs of their constituencies?

Instrument 2: Implementation-focused interviews with CC staff
Purpose: Understand program implementation, including key strategies, partnerships, successes and challenges, in the two priority areas
Administration timing: Q2 2015 and Jan-Feb 2016
Estimated sample: Leadership and staff who work on projects in the selected priority areas; estimated total of 132 respondents (6 per CC)
Major research questions addressed: (4) What strategies did CCs employ to achieve their outcomes? (5) To what extent did CCs implement technical assistance to their constituencies as planned? (6) To what extent and how did CCs collaborate with each other?

Instrument 3: Outcomes-focused interviews with CC staff*
Purpose: Understand technical assistance outcomes, including longer term outcomes, and follow up on projects identified for profiles
Administration timing: Jan-Feb 2017 (note: this protocol will be developed after the first site visit and is not included in this package for review)
Estimated sample: CCs with work in the two selected priority areas; staff who work on projects in those areas; estimated total of 88 respondents (4 per CC)
Major research questions addressed: (7) To what extent did CCs achieve their goals and objectives?

Instrument 4: Interviews with TA recipients
Purpose: Understand CCs' implementation strategies and outcomes, including how CCs and SEAs work together in the two selected priority areas
Administration timing: Q2 2015, Jan-Feb 2016, and Jan-Feb 2017 (note: the protocol for the last interview will be developed after the first site visit and is not included in this package for review)
Estimated sample: State education and Regional Center staff identified by CCs as key contacts or participants in key CC projects in the selected priority areas; estimated total of 88 respondents (average of 4 per CC)
Major research questions addressed: (4) What strategies did CCs employ to achieve their outcomes? (5) To what extent did CCs implement technical assistance to their constituencies as planned? (7) To what extent did CCs achieve their goals and objectives?

Instrument 5: CC staff online survey
Purpose: Understand CC staff individual roles and perceptions of projects in the two priority areas, including how projects build capacity
Administration timing: Q1 2015, Nov 2015, Nov 2016
Estimated sample: All CC staff and partners who provide TA in the selected priority areas; estimated total of 264 respondents (average of 12 per CC)
Major research questions addressed: (4) What strategies did CCs employ to achieve their outcomes? (5) To what extent did CCs implement technical assistance to their constituencies as planned? (7) To what extent did CCs achieve their goals and objectives?

Instrument 6: TA recipient survey
Purpose: Investigate how technical assistance recipients, including state education staff and other participants as relevant, perceive and use the services they receive, and how these services have helped them build capacity
Administration timing: Q1 2015, Nov 2015, Nov 2016
Estimated sample: TA recipients in the selected priority areas; estimated total of 440 respondents (about 20 per CC)
Major research questions addressed: (7) To what extent did CCs achieve their goals and objectives?

Instrument 7: TA event observations
Purpose: Provide observational data about the strategies that CCs used to support capacity building and achieve planned outcomes; will inform project profiles
Administration timing: On an ongoing basis beginning in Q2 2015, as relevant observable activities are identified
Estimated sample: Observable events of TA activities in the two selected priority areas; estimated total of 10 observations per year, with one CC staff member providing background information about each event
Major research questions addressed: (4) What strategies did CCs employ to achieve their outcomes? (5) To what extent did CCs implement technical assistance to their constituencies as planned? (7) To what extent did CCs achieve their goals and objectives?

*Phase 2 OMB submission
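The estimated totals in Exhibit 1 are simple products of the per-CC averages and the 22 Centers (15 Regional Centers plus 7 Content Centers). The short Python sketch below is purely illustrative and is not part of the study's statistical methodology; it only reproduces that arithmetic using the figures stated in the exhibit.

```python
# Illustrative arithmetic only: reproduces the Exhibit 1 respondent totals
# from the per-CC averages. Not part of the evaluation's statistical methods.
NUM_CENTERS = 22  # 15 Regional Centers + 7 Content Centers

per_cc_average = {
    "Design-focused CC staff interviews": 4,          # 88 total
    "Implementation-focused CC staff interviews": 6,  # 132 total
    "Outcomes-focused CC staff interviews": 4,        # 88 total
    "TA recipient interviews": 4,                     # 88 total
    "CC staff online survey": 12,                     # 264 total
    "TA recipient survey": 20,                        # 440 total
}

for instrument, average in per_cc_average.items():
    total = average * NUM_CENTERS
    print(f"{instrument}: {average} per CC x {NUM_CENTERS} CCs = {total} respondents")
```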
CC staff interviews (instruments 1‐3)
CC staff will be interviewed as part of yearly site visits to each CC, taking place in early 2015,
2016, and 2017. There are three separate interview protocols because the evaluation team will
be focusing on different evaluation questions, and therefore asking different interview
questions during site visits as the work of the CCs progresses. Instrument 1 is a protocol for
interviewing CC leaders in a group setting during the site visits about how they design their
work with constituents (see Appendix A). It will be administered once in the second quarter of
2015. Instrument 2 (see Appendix B) focuses on the implementation of the technical assistance
work, and will be administered in a group setting in the 2015 and 2016 site visits. Respondents
for Instrument 2 will include staff members who are working on projects in the two selected
federal priority areas. Instrument 3 will be administered in the 2017 site visits. It will be
addressed in a later OMB submission.
For all of the CC staff interviews, the evaluation team will work with a coordinator from each CC
who will help schedule the site visits. The evaluation team will provide guidelines for the
selection of participants in interviews, and will follow recommendations from CCs regarding
staff to be included in each interview. Each site visit team will consist of an evaluator from
IMPAQ (the site liaison) and a consultant from a subcontractor. The evaluators have expertise
in the study design and evaluation methods, while the consultants have expertise in technical
assistance to state and local education agencies. Having the two visitors work as a team during
the site visits will enhance the accuracy of the data gathered. Experienced interviewers,
supported by careful preparation, will adapt the specific wording of each question and probe
on the protocol to ensure the interviews are customized appropriately to address the unique
situation of each interviewee. The same pair of visitors will attend all site visits of
the CCs to which they are assigned, and both site visitors will attend all interviews if possible.
However, site visitors may conduct interviews separately if this is necessary to collect
information from all respondents. Because of this, and to ensure accurate notes, each interview
will be audio recorded if the interviewees permit. The IMPAQ evaluator (site liaison) for each
site will have ultimate responsibility for completing and submitting the specified set of standard
deliverables on the checklist for each site visit.
Throughout the process of data collection and reporting, the contractors will make all efforts to
protect the privacy of respondents participating in the site visits. The study team will not
identify by name any of the interviewees, nor will the study team attribute quotes by name,
although the study team will identify the names of states and CCs in final reporting.
The study leaders will train the site visit teams so that all team members share a consistent
understanding of the study, the research questions, the interview questions and probes, and
the data collection needs. Prior to each wave of site visits all site visitors will convene in
Washington, DC and Oakland, CA for a half‐day training session. The session will address the
research questions, site visit logistics and activities, site pre‐ and post‐visit communication, data
collection procedures on site (including a review of the interview protocols), and data handling.
The site visit task leaders will develop a site visitor guide with a checklist that will outline all
tasks the site visitors need to perform before, during, and after each visit. During the months of
site visits there will be regular meetings of the site visitors and study leaders to discuss issues
and concerns.
Technical assistance recipient interviews (instrument 4)
Prior to each visit, site visitors will ask Center staff to identify the key SEA representatives they
have worked with on each major project within the two selected priority areas (up to two
projects per priority area). State education agency representatives may include Chief State
School Officers and their deputies, division leaders, and middle managers and technical staff
within divisions, as relevant to the projects. Evaluators will contact the respondents and
conduct these interviews in person, if feasible in conjunction with site visits to the centers, or
by telephone. These interviews (see Appendix C) will include questions about planning, needs
identification, and ongoing communication between the Centers and the state education
agencies, implementation of projects in the selected priority areas, and outcomes of these
projects.
CC staff online survey (instrument 5)
The CC staff survey (see Appendix D) will be administered once per year via online survey
software prior to the site visits, starting in Q1 2015. The survey includes questions about
technical assistance implementation in projects under the two selected federal priorities. Each
CC’s list of projects will be unique to that CC’s work plan, but the questions asked about the
projects will be the same from survey to survey. The survey addresses the kinds of technical
assistance involved in each project, the role of the respondent, challenges and supporting
factors, staff members and collaborators on each project, and perceptions of capacity building.
It also asks for suggestions for projects suitable for project profiles.
IMPAQ staff will contact each of the CCs to obtain the list of respondents for each site. The
evaluation team will provide guidelines for the selection of survey respondents (people who are
involved in technical assistance with projects in the two selected federal priority areas), and will
follow recommendations from CCs regarding staff to be included in each survey. Participants
will receive a survey link via email.
The study team will not identify by name any of the survey respondents, nor will the study
team attribute quotes by name.
Technical assistance (TA) recipient online survey (instrument 6)
The TA recipient survey (see Appendix E) will be administered once per year via online survey
software, starting in Q1 2015. It will be addressed to state education agency staff, as well as to
school district staff and other relevant stakeholders who are direct recipients of TA from CCs.
The survey includes questions about technical assistance received from CCs during the past 12
months under the two selected Federal priorities. Similar to the CC staff survey, the list of TA
activities provided by each CC will be unique, but the questions asked about the TA will be the
same from survey to survey. The survey addresses specific TA received, actions taken as a result
of the received TA, and statements about individual and organizational capacity built under the
two selected federal priorities. It also asks for the most helpful aspects of TA, challenges
encountered, and specific plans to use TA in upcoming months.
IMPAQ staff will ask each of the CCs for the list of specific TA activities delivered in the past 12
months and for the contact information of their TA recipients (including SEA staff, LEA staff,
and others as applicable). The evaluation team will provide guidelines as to what constitutes a
TA activity (such as a workshop, webinar, consultation, product, or toolkit). TA recipients will
receive survey links via email.
The study team will not identify by name any of the survey respondents, nor will the study
team attribute quotes by name.
Technical assistance event observations (instrument 7)
Observations of TA events will provide detailed data about the strategies that CCs use to
support capacity building and achieve planned outcomes. These data will inform research
questions about differences in strategies between CCs, characteristics of strategies, and the
extent of collaboration with other CCs. Follow‐up data on the capacity building outcomes of the
event may be obtained through the TA recipient surveys and interviews, particularly for events
that are part of the projects to be profiled.
In preparation for each observation, observers will gather information from the CC staff leading
the event or extant documents in order to identify:
Goals of the event/needs to be addressed
How these goals relate to the CC’s theory of action, workplan, and relevant strategy or
project plan
Intended audience
Expected outcomes of the event
As relevant, other background on the planning or preparation for the event
A structured protocol for the observations, including questions to be addressed prior to the
event, is included as Appendix F.
During the meetings/site visits with CCs, or in follow‐up calls as needed, the evaluators will ask
CC staff members to identify observable services or events of profiled projects (or potentially
profiled projects) that are planned for the two selected Federal priority areas in the coming
year. These events may include webinars, stakeholder meetings (virtual or in‐person), and in‐
person training events, workshops, or presentations. When possible, observations will be
conducted in conjunction with site visits. Observations of webinars or other virtual events will
be conducted as they are scheduled. Observations will focus on a convenience sample of high‐
leverage activities that may illustrate the capacity building process. For example, rather than
observing training events where individuals impart knowledge to other individuals, we will
attempt to observe more dynamic events, such as roundtables or cross‐state events and
planning sessions, where we might be able to observe how the CCs work with their constituents
and how the constituents respond.
Emails will be sent to respondents to inform them about the study and to request their
participation in surveys and interviews. Email templates can be found in Appendix G.
3. Describe methods to maximize response and to deal with issues of non‐response. The
accuracy and reliability of information collected must be shown to be adequate for
intended uses. For collections based on sampling, a special justification must be provided
for any collection that will not yield “reliable” data that can be generalized to the universe
studied.
The evaluation team has designed the instruments to be easy to understand and to place as
little burden on participants as possible, which will encourage response.
CC staff interviews and surveys
In Winter 2013, IMPAQ evaluators conducted brief phone calls with CC leadership to introduce
themselves and the goals of the overall evaluation. In spring/summer 2014, site liaisons and
consultants visited each CC to introduce themselves and to have informal conversations about
the work of the CCs. We believe these interactions have resulted in cooperative relationships
between CCs and evaluators. This spirit of cooperation and the eagerness of CCs to have their
work understood by a larger audience are likely to result in high response rates for interviews
and surveys.
Site liaisons will have the opportunity to plan the list of CC staff survey participants with CC
directors or their designees. Once the participant list is selected, the evaluation team will notify
intended respondents via email several days prior to the survey release, in order to let them
know to expect an upcoming online survey link. Approximately one week after the survey links
have gone out, IMPAQ staff will begin to follow up with any nonrespondents. This will initially
involve email reminders to respondents, followed by telephone calls.
Interviews will take place in a group setting during site visits, unless scheduling difficulties
necessitate individual interviews. Site liaisons will identify the lists of participants in advance
with CC directors. Site liaisons and consultants will then interview identified participants in
person during site visits. In the event that some identified CC staff members are participating in
site visits via videoconferencing or teleconferencing, site visitors will conduct the interviews
using these technologies. If some CC staff members are not able to participate in site visits at
all, but should be included in interviews, site visitors will (with the help of CC directors) plan
follow‐up calls or videoconferences with them to conduct the interviews.
The evaluation team anticipates a 100 percent response rate for the interviews and a 90
percent response rate for the CC staff surveys.
TA recipient surveys and interviews
Based on our experience, we expect an 80 percent response rate for the TA recipient surveys.
We will take the following steps to maximize the response rate. Evaluators will rely on CC staff
to identify TA recipients with whom they have worked or who have participated in CC-organized
events such as workshops and webinars, and to provide us with their contact information. Once
we send surveys to recipients, several rounds of email reminders will be sent to any
nonrespondents. If these are not successful, we will proceed with phone or mail follow‐up to
boost the overall response rates.
Regarding the TA recipient interviews, evaluators will ask CCs for the contact information of key
TA recipients in projects in the two selected priority areas. Evaluators will contact the TA
recipients using the following procedures: 1) All identified SEA recipients will receive an initial
contact via email, requesting their participation in a one-hour interview and providing an
overview of the scope and purpose of the evaluation. The general purpose of the interview and
the topics to be covered will also be described. 2) If there is no response within one week of
sending the email, evaluators will follow up by telephone. 3) If the first
telephone call does not reach the respondent or does not accomplish scheduling the interview,
evaluators will continue to follow up through a combination of email and telephone. If in‐
person interviews cannot be arranged during a site visit, telephone interviews will be arranged
to accommodate the respondents' schedules within the next two months. We expect that this
flexibility will result in a response rate of 80 percent. If a respondent is unreachable or completely
unavailable for an interview, evaluators will ask the relevant CC for names of alternate TA
recipients who might serve as replacements or representatives.
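For context, the sketch below combines the expected response rates stated above with the estimated samples from Exhibit 1 to show roughly how many completed responses each collection might yield per administration. This is a back-of-the-envelope illustration only, with Python used purely for the arithmetic; it is not part of the submission's burden or precision estimates.

```python
# Rough, illustrative estimate of completed responses per administration,
# combining Exhibit 1 sample sizes with the expected response rates above.
# Not part of the submission's burden or precision calculations.
collections = [
    # (collection, estimated sample, expected response rate)
    ("CC staff online survey", 264, 0.90),
    ("TA recipient survey", 440, 0.80),
    ("TA recipient interviews", 88, 0.80),
]

for name, sample, rate in collections:
    expected_completes = round(sample * rate)
    print(f"{name}: {sample} invited x {rate:.0%} expected response = ~{expected_completes} completes")
```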
4. Describe any tests of procedures or methods to be undertaken. Testing is encouraged as
an effective means of refining collections of information to minimize burden and improve
utility. Tests must be approved if they call for answers to identical questions from 10 or
more respondents. A proposed test or set of tests may be submitted for approval
separately or in combination with the main collection of information.
Study team members have pre‐tested drafts of the interview protocol questions with up to nine
respondents from CCs. Responses to the questions were collected by site liaisons via interview
notes and used to make revisions to the instruments in terms of wording, flow, length, and
order of questions. The interview protocol notes were also used to inform the construction of
the survey items.
5. Provide the name and telephone number of individuals consulted on statistical aspects of
the design and the name of the agency unit, contractor(s), grantee(s), or other persons
who will actually collect and/or analyze the information for the agency.
The following individuals and organizations are involved in data collection, analysis, and
consulting on statistical aspects of the study design.
Responsibility | Organization | Contact Name | Telephone Number
Co-Principal Investigator and Project Director | IMPAQ International | Cheri Fancsali | 443-259-5406
Co-Principal Investigator | IMPAQ International | Phyllis Weinstock | 510-597-2423
Associate Education Research Scientist | Institute of Education Sciences, ED | Amy Johnson | 202-208-7849
Research Scientist | Institute of Education Sciences, ED | Joy Lesnick | 202-219-2013
Technical Working Group Member | Consultant | Margaret Goetz | 609-737-2464
Technical Working Group Member | Consultant | Thomas Adams | 530-848-9728
Technical Working Group Member | Consultant | Constancia Warren | 212-367-4595
Analytic Support | IMPAQ International | Kelley Akiya | 510-597-2411
Task Lead, Site Liaison | IMPAQ International | Andrea Beesley | 443-832-2313
Task Lead, Site Liaison | IMPAQ International | Anne Chamberlain | 443-259-5215
Analytic Support | IMPAQ International | Maria DiFuccia | 202-774-1948
Task Lead, Site Liaison | IMPAQ International | Michaela Gulemetova | 202-774-1956
Analytic Support | IMPAQ International | Ashley Hunt | 202-774-1949
Site Liaison | IMPAQ International | Kay Magill | 510-597-2418
Site Liaison | IMPAQ International | Nada Rayyes | 510-597-2422
Analytic Support | IMPAQ International | Eliana Saltares | 202-774-1972
Site Liaison | IMPAQ International | Raquel Sanchez | 510-282-4794
Site Liaison | IMPAQ International | Linda Toms Barker | 808-934-9297
Technical Assistance Expert, Site Visitor | Consultant | Michelle Feist | 804-252-5714
Technical Assistance Expert, Site Visitor | Consultant | Deborah Jonas | 503-381-4164
Technical Assistance Expert, Site Visitor | Consultant | Leslie Rennie-Hill | 410-206-0394
Technical Assistance Expert, Site Visitor | Consultant | Paul Smith | 541-543-9179
Technical Assistance Expert, Site Visitor | Consultant | Michelle Swanson | 804-252-5714