Alternative Supporting Statement for Information Collections Designed for
Research, Public Health Surveillance, and Program Evaluation Purposes
Assessing the Implementation and Cost of High Quality Early Care and Education: Field Test
OMB Information Collection Request
0970 - 0499
Supporting Statement
Part B
September 2021
Submitted By:
Office of Planning, Research, and Evaluation
Administration for Children and Families
U.S. Department of Health and Human Services
4th Floor, Mary E. Switzer Building
330 C Street, SW
Washington, D.C. 20201
Project Officers:
Meryl Barofsky, Senior Social Science Research Analyst
Ivelisse Martinez-Beck, Senior Social Science Research Analyst and
Child Care Research Team Leader
Part B
B1. Objectives
Study Objectives
The purpose of the information collection under the current request is to add a teaching staff survey, the SEQUAL, to the field test so that we can test and validate the measures developed in previous phases of the study: the implementation interview, cost workbook, and staff time-use survey (Phase 1 was completed under ACF’s generic clearance 0970-0355; Phase 2 was completed under 0970-0499). The goals of the field test are to (1) refine the implementation measures to further test and improve their psychometric properties; (2) test the usability of the revised instruments; and (3) test preliminary associations between implementation, cost, and quality measures. The information collected will contribute evidence to the field by validating practical tools that measure how centers use resources to support high-quality early care and education and by examining preliminary evidence of associations between cost and quality. The data will be archived at the Child and Family Data Archive at the University of Michigan for future research and analyses by qualified researchers.
Generalizability of Results
This is a measurement development study intended to refine and validate instruments, in addition to examining preliminary evidence of associations between cost and quality. Data are not intended to support statistical generalization.
Appropriateness of Study Design and Methods for Planned Uses
Sites will be selected for geographical diversity and variation in investments in ECE, which is appropriate for further refining and validating the measures created in earlier phases of the study. For sites in this field test, adding an externally validated, existing teaching staff survey and accessing quality rating and improvement systems (QRIS) data from administrative records will support the triangulation of data to assess the new measures’ validity.
The diversity of participating sites will support assessment of preliminary associations between site characteristics, implementation factors, quality, and cost structures of center-based ECE. This analysis is intended to assess the practicality of combining these data types, and will not be used to generate nationally-representative estimates of the prevalence of program characteristics, practices, or costs. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.
The data collection mode, target population, and other study design features align with earlier data collection.
B2. Methods and Design
Target Population
The target population for this information collection is center-based early care and education (ECE) providers that serve children from birth to age 5. The sampling plan prioritizes the inclusion of different types of ECE centers. To answer questions about the reliability and validity of the measures across a variety of contexts, we first plan to conduct a feasibility test with up to 12 centers from Phase 2 of the previous data collection effort. This is to ensure our measures appropriately capture the effects of the COVID-19 pandemic on the centers we are visiting. We next plan to recruit about 10 to 12 additional centers in each of the three states from which centers participated in Phase 2 (so we can combine data for analysis in the field test) and about 16 centers in each of two additional states. We will recruit centers that represent different geographical regions and types of investments in early care and education. This will provide us with a sample of 80 centers.
Sampling and Site Selection
The study team will consider the following characteristics in selecting the five focal states and plans to target similar proportions of different types of centers in each state (see Table B.1):
Quality rating and improvement systems (QRIS). Selecting states with a QRIS will help ensure some variation in quality based on QRIS ratings. We will also include some centers that do not participate in QRIS. The study team will aim to select at least some focal states that (1) conduct the Program Administration Scale (PAS; Talan and Bloom, 2004) as part of their QRIS rating process; and (2) may be able to provide QRIS component-level data for analysis as these data may allow for additional validation analysis.
Child care licensing regulations. We will include states that have variation in child care licensing requirements because these requirements set the floor for quality.
Geographic regions. The states included in the field test should be located in different Census-defined regions of the country to capture variation in state and regional contexts and conditions.
Table B.1. Targeted number of centers for the field test
|  | Centers in each state (up to 5 states) | Total |
| --- | --- | --- |
| Centers from Phase 2 Data Collection | 4 | 12 |
| Community-based centers with medium/high QRIS rating a |  |  |
| Mixed funding a | 3 | 20 |
| Limited or no public funding | 1 | 10 |
| Community-based centers with low QRIS rating a |  |  |
| Mixed funding a | 2 | 10 |
| Limited or no public funding | 1 | 5 |
| Community-based centers with any QRIS rating or not participating in QRIS a |  |  |
| Mixed funding a | 2 | 10 |
| Limited or no public funding | 1 | 5 |
| Head Start/Early Head Start centers b |  |  |
| Head Start only | 1 | 5 |
| Head Start and Early Head Start | 1 | 5 |
| TOTAL | 16 | 80 |
Note: Numbers in italics are subtotals and are not included in the overall total.
a Mixed funding centers are those that draw from tuition and one or more public funding sources, or from multiple public funding sources.
b Centers that are funded in full with Head Start funding, or receive the majority of their funding from Head Start mixed with other public funding.
The study team will contact centers for the feasibility test using contact information from Phase 2 data collection.
For the remaining centers, the study team will assemble contact lists for centers in five states through state websites and Head Start PIR or ECLKC data, if necessary. The team will use this information to build a comprehensive list of centers that meet the selection criteria, with enough centers in reserve to replace those that are unable or unwilling to participate. We will build sampling lists based on public information on (1) QRIS rating level and (2) funding sources. Once we successfully recruit a center into the field test, we will conduct the engagement call to collect detailed information about the center’s characteristics. We will use this information to determine how the center fits our recruitment goals based on the characteristics of interest. If a center has the characteristics needed, we will proceed with enrolling it in the field test and begin data collection. Based on the prior phases of this work, the study team expects to initially send hard copy letters to 2,400 centers and follow up with individual emails to 800 centers to secure the participation of the 80 centers required for this study (see Attachment B for the advance letter and email). To identify 80 willing sites, we estimate that 800 centers will be contacted for recruitment and 100 centers will participate in the study engagement call.
Recruiters will use the initial staff roster (Instrument 5) to collect information about the staff in each center after the implementation interview. Recruiters will work with center administrators in the fall of 2021 to update the staff roster (Instrument 7), removing staff who have left the center. Key center administrators who were at the center at the time of the initial staff roster will be selected to receive only the time-use survey. Staff who were teaching staff at the center at the time of the initial staff roster will be selected for the combined time-use and teaching staff survey even if their position at the center has since changed. New staff who arrive at the center in fall 2021 will not be eligible for the surveys. The study team will then distribute the time-use survey (Instrument 6) and the SEQUAL teaching staff survey (Instrument 8) to all eligible staff.
B3. Design of Data Collection Instruments
Development of Data Collection Instrument(s)
Since the fall of 2014, the ECE-ICHQ study team has developed a conceptual framework (Attachment A); conducted a review of the literature (Caronongan et al. 2016); consulted with a technical expert panel; and collected and summarized findings from Phase 1 of the study (completed under ACF’s generic clearance 0970-0355) and Phase 2 of the study (completed under 0970-0499). Phase 1 included thoroughly testing data collection tools and methods, conducting cognitive interviews to obtain feedback from respondents about the tools, and refining and reducing the tools for the next phase. Phase 2 of the study further refined the data collection tools and procedures through additional quantitative study of the implementation of key functions of center-based ECE providers and an analysis of costs. Using the Phase 2 data, the study team developed a draft set of measures of implementation and cost around five key functions of a center (as shown in the conceptual framework), which included an implementation interview, cost workbook, and time-use survey that were approved under a previous information collection and comment period.
The most recent approval adds a teaching staff survey, the SEQUAL, to the field test of the measures developed in previous phases of the study, reduced to include only items deemed necessary to accurately measure cost and implementation. The implementation interview, cost workbook, and staff time-use survey instruments have also been updated to include information about the COVID-19 pandemic. Table B.2 below outlines the final instruments for the field test, including information about their length during Phases 1 and 2 of the study.
Table B.2. Data collection activity for the ECE-ICHQ field test, by respondent, and time to complete
| Data collection activity | Respondents | Time to complete: Phase 1 | Time to complete: Phase 2 | Time to complete: Field test |
| --- | --- | --- | --- | --- |
| Center recruitment call (Instrument 1) | Site administrator or center director; umbrella organization administrator (as applicable) | 20 minutes; n/a | 20 minutes; 20 minutes | 20 minutes; 20 minutes |
| Center engagement call (Instrument 2) | Site administrator or center director | 25 minutes | 25 minutes | 30 minutes |
| Implementation interview (Instrument 3) | Site administrator or center director; education specialist; umbrella organization administrator (as applicable) | 5.5 hours a | 3.5 hours | 3 hours |
| Cost workbook (Instrument 4) | Financial manager at site; financial manager of umbrella organization (as applicable) | 8 hours | 7.5 hours | 8 hours |
| Initial staff rosters (Instrument 5) | Site administrator or center director | n/a | 15 minutes | 15 minutes |
| Time-use survey (Instrument 6) | Site administrator or center director; education specialist; lead and assistant teachers | 30 minutes | 15 minutes | 15 minutes |
| Center re-engagement call and roster update for the fall survey (Instrument 7) | Site administrator or center director | n/a | n/a | 30 minutes |
| SEQUAL teaching staff survey (Instrument 8) | Lead and assistant teachers | n/a | n/a | 30 minutes |
a In Phase 1, part of the Implementation interview was administered as a self-administered questionnaire.
n/a = not applicable
B4. Collection of Data and Quality Control
The contractor team (Mathematica) will collect data for this study. Using information from publicly available websites, we will send advance materials to 2,400 centers in 5 states (Attachment B). We will then identify certain centers on the initial contact lists that fit specific selection criteria and send a targeted email and letter to 800 centers (Attachment C). Project staff will call the director of each selected center to discuss the study and recruit the director to participate. The center recruitment and engagement call script (Instruments 1 and 2) will collect information about the characteristics of the center if the director agrees to participate. If the center is part of a larger organization that requires the organization’s agreement, the recruiter will contact the appropriate person to obtain that agreement before recruiting the center (Instrument 1). Finally, the recruiter will schedule the data collection activities. All data collection activities will be remote.
Implementation interview. The recruiter will send an email (Attachment D) to the center director to confirm the schedule and topics for the implementation interview. Interviewers will use the implementation interview protocol (Instrument 3) to conduct the interview by phone.
Cost workbook. The data collection team will send an email (Attachment E) to the center director or a staff member designated by the director who is familiar with the center’s finances to schedule a phone call to provide an overview of the cost workbook. The financial manager at each center or umbrella organization will be the primary person to complete the cost workbook (Instrument 4), with support from the data collection team as necessary.
Initial staff roster. Following the implementation interview, recruiters will work with the center administrator to identify eligible staff for the surveys. Each potential respondent will be listed on the initial staff roster (Instrument 5) to identify staff and their roles to align data collection across the instruments.
Center re-engagement call and roster update for the fall survey. Recruitment and data collection activities for the field test began in March 2021. By the fall of 2021, the study team plans to have recruited all 80 centers for the study and completed the implementation interviews. Cost data collection may extend into the fall of 2021 but will focus on the prior school year (or 12-month period). During the re-engagement call in fall 2021, recruiters will work with the center administrator to update the staff roster and contact information (Instrument 7).
Fall 2021 survey.
In the fall of 2021, recruiters will email and send a letter to center administrators (Attachment F) introducing the two-part fall survey and letting the center administrator know we will be calling soon to discuss having their staff complete it. We will distribute an advance letter inviting potential respondents to fill out the survey and a document with frequently asked questions about the survey (Attachment F). The advance letter will provide a link to the web-based survey (Instrument 6 and Instrument 8). Potential respondents will also receive an email invitation to complete the survey (Attachment F). A follow-up email/letter and final reminder (Attachment F) will be sent if the survey has not been completed within the requested time frame.
We will build quality assurance (QA) into every stage of data collection to ensure that data will be gathered and processed in a valid, standardized, and professional manner. QA includes data collector certification at the end of training, periodic checks to assess reliability, and ongoing monitoring of data collectors. Together, the data collector and QA reviewer will identify essential questions and items for follow-up. Data collectors will follow up with respondents as necessary, by phone or email. Once all essential follow-up items have been addressed and documented, the QA reviewer will conduct a final review to determine if data collection is complete.
B5. Response Rates and Potential Nonresponse Bias
Response Rates
The team plans to complete all of the cost and implementation data collections with all 80 centers that agree to participate in the study, following the selection protocol described in B2. However, if any centers withdraw from the study after agreeing to participate, a sample of 70 centers would still provide sufficient statistical power to achieve the analytic goals of the field test. As a reminder, the analytic goal of the field test is to assess the validity and reliability of measures and not to determine representative statistical estimates of the items.
Within the 80 selected sites, the team expects to invite 1,280 center staff to complete the time-use and SEQUAL surveys. The team expects to obtain an 87.5 percent response rate, for 1,120 completes of both surveys.
The analysis plan requires obtaining complete data collection for costs and implementation from each participating center. To build center buy-in, initial communication materials will describe the importance of the study, outline the study goals, encourage center participation, and describe the offer of a $500 honorarium to participating centers. Mathematica has extensive experience in collecting implementation information and cost data with high response rates from staff in education, social services, and health programs. The team has further refined the cost and implementation data collection tools based on their use in Phase 2; these revisions are expected to support full completion.
Study protocols are designed to minimize the organizational burden of complete data collection. Following site selection, the study team will provide each participating center with a summary of the information collected which they can use to assess the activities they pursue under each of the five key functions and how they allocate staff time and center resources to support each function. Providing information structured around the key functions can help center staff think about how they may be supporting quality within their center.
For the time-use survey, recruiters will collect contact information for select administrators and teaching staff; the SEQUAL survey will be administered only to teaching staff. The study team will send an invitation letter and instructions for accessing the surveys, which will appear as a combined instrument to respondents. The materials will provide a secure login ID and password to access the web instrument. The team will follow up by email at periodic intervals with staff who have not responded, up to three times over the course of a month. The study team will also connect with the center director to seek their assistance in reminding staff to complete the surveys, as needed.
The team’s strategies to maximize response rates are based on lessons learned from Phases 1 and 2 as well as experience in other studies. In Phase 2, the study team found that when field staff explained and distributed the time-use survey on site, remained to answer questions about the survey, and offered a $10 token of appreciation for completion, response rates were over 90 percent. We cannot use this design with field staff on site due to the COVID-19 pandemic.
The study team will instead increase the token of appreciation amount for completion of the combined time-use and SEQUAL teaching staff survey to $50 and offer both a pre-paid and a post-paid gift card to each respondent. Select center administrators who will complete only the time-use survey will be offered a $10 pre-paid gift card and a $10 post-paid gift card.
The study team expects that the proposed token of appreciation design change will simulate the response benefit we saw in Phase 2. The pre-paid token of appreciation is similar in intent to the in-person visits from Phase 2 and will be a mechanism to replicate the success of initial face-to-face contact with respondents. The pre-paid token of appreciation also provides an opportunity for center directors to have contact with staff about the survey and encourages staff to open the invitation envelope. This population is inclined to help with a topic that is important to them. The invitation with a token of appreciation is designed to get center staff to prioritize the request so it is not forgotten, rather than to convince them the survey is important.
The study team will offer a post-survey token of appreciation through delivery of an immediate electronic gift card to replicate having field staff hand out gift cards on site immediately following survey completion. The total increase in the gift card value will also help offset the increased effort required of staff to access and complete a web-based survey and is expected to result in more center staff completing the surveys, bringing the response rate close to levels seen in Phase 2 of data collection.
For the combined time-use and SEQUAL teaching staff surveys, the study team will vary the amount of the pre- and post-paid gift cards by offering a $10 pre-paid gift card and a $40 post-paid gift card to teaching staff respondents in half of the participating centers, and equal pre- and post-paid gift cards of $25 to teaching staff respondents in the other half of participating centers. All eligible center administrators who are identified for just the time-use survey will receive a $10 pre-paid gift card and a $10 post-paid gift card. This approach is similarly structured to simulate the high response rates we experienced through in-person contact at survey distribution and collection, but does so remotely through the token of appreciation. Varying the amount of the pre-paid gift card relative to the post-response gift card will help build evidence about cost-effective ways to obtain high response rates among staff in center-based settings.
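To illustrate how centers might be split between the two gift card structures, the following is a minimal sketch assuming simple random assignment of whole centers; the group labels, the fixed seed, and the use of unstratified randomization are assumptions for illustration and are not study specifications.

```python
import random

def assign_incentive_groups(center_ids, seed=20210901):
    """Randomly split participating centers into the two token of appreciation
    groups described above: $10 pre-paid / $40 post-paid versus $25 / $25.
    Labels and seed are illustrative assumptions, not study specifications."""
    rng = random.Random(seed)
    ids = list(center_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {
        "pre10_post40": ids[:half],   # half of centers: $10 pre-paid, $40 post-paid
        "pre25_post25": ids[half:],   # remaining centers: $25 pre-paid, $25 post-paid
    }

# Example with 80 hypothetical center IDs
groups = assign_incentive_groups(range(1, 81))
print(len(groups["pre10_post40"]), len(groups["pre25_post25"]))  # 40 40
```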
If a center’s total response rate for the surveys is below 70 percent after all the follow-up emails have been sent, the study team will schedule a site visit to distribute and collect the surveys. During the visit, field staff will encourage teaching staff to complete the survey online or on paper and will distribute the gift card at completion (based on the amount for the experimental group to which the center was assigned). If respondents still have not completed their surveys within a week of the site visit, they will receive one more follow-up email.
Nonresponse
Based on previous experience in earlier phases of the project, we do not expect substantial non-response on center-level data collection (implementation and cost). As part of study reporting, we plan to present information about characteristics of the participating sites and the full universe of eligible sites on the characteristics listed in Table B.1.
The potential for challenges with survey non-response exists mainly for the time-use survey, to be completed by key administrators and teaching staff, and the SEQUAL teaching staff survey. The study team will work closely with each center to maximize completion of both surveys. See details on maximizing response rates in the section above. The team will follow up with non-responders by email and regular mail (Attachment F) to encourage survey completion.
The study will attempt to collect data from all teaching staff at each center in the field test to understand the extent of variation within centers and among staff with similar roles. The team will create time-use measures by job category using all available data from staff in a particular position. If there are no responses in a center from staff corresponding to a specific teaching position (for example, an assistant teacher), the team will explore several options for creating time-use measures for that position. One option is to develop time-use measures based on the average responses among all other respondents in the center who are in teaching positions. A second option is to impute time-use measures based on the responses from teaching staff in similar positions in a group of centers with similar characteristics. A third option is to create time-use measures using assumptions about time allocation based on information gathered about that staff member’s responsibilities in the center. The team will conduct sensitivity tests to assess whether and how different approaches to estimating measures for teaching positions with missing data affect measures at the center level. Individual teaching staff responses to the SEQUAL will be scored and aggregated to create center-level domain scores on Teaching Supports, Learning Community, Job Crafting, and Adult Well-Being. The scores are more stable with responses from all or most teaching staff, but scores can be generated with as few as two respondents.
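As an illustration of the first option described above, the following sketch shows one way a missing time-use measure for a teaching position could be filled with the average among a center's other responding teaching staff. The column names (center_id, position, pct_time_instruction) and the use of pandas are hypothetical, for illustration only, and do not reflect the study team's actual data structures or variable names.

```python
import pandas as pd

def impute_missing_position_means(df):
    """Illustrative version of the first imputation option described above: if no
    staff in a given teaching position at a center responded, fill that position's
    time-use measure with the average across the center's other responding
    teaching staff. Column names are hypothetical."""
    # Mean time-use value for each center/position pair with at least one response
    by_position = df.groupby(["center_id", "position"])["pct_time_instruction"].mean()
    # Fallback: mean across all responding teaching staff within the center
    by_center = df.groupby("center_id")["pct_time_instruction"].mean()
    wide = by_position.unstack("position")
    # Fill positions with no respondents using the center-wide teaching-staff average
    return wide.apply(lambda row: row.fillna(by_center[row.name]), axis=1)

# Hypothetical responses: center 2 has no responding assistant teachers
responses = pd.DataFrame({
    "center_id": [1, 1, 1, 2, 2],
    "position": ["lead", "lead", "assistant", "lead", "lead"],
    "pct_time_instruction": [70, 60, 50, 80, 75],
})
print(impute_missing_position_means(responses))
```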
The team will not collect information on the demographic characteristics of individual staff members that would be necessary to compare respondents with non-respondents; however, we will analyze characteristics of centers with high and low non-response in the study sample. The team will also analyze differences in response rates, at the center-level and by respondent type (such as lead and assistant teachers), between the two experimental token of appreciation groups. They will also examine differences in the timing of survey completion between respondents in centers in each of the two experimental groups and assess the costs associated with each approach.
B6. Production of Estimates and Projections
To support evidence-informed program management and improvement, ACF will use the data from this ICR to assess the feasibility, validity, reliability, and usefulness of a field protocol to measure implementation, costs, and quality of ECE. The data will not be used to generate population estimates, either for internal use or dissemination.
B7. Data Handling and Analysis
Data Handling
Procedures for editing to mitigate or correct detectable errors, including checks built into computerized instruments.
Data from the instruments will be monitored for potential respondent errors as reflected in high levels of item nonresponse (“don’t know” and “refused” responses). ECE-ICHQ will allow the use of some paper instruments, as some respondents may choose to complete their time-use or SEQUAL surveys on paper. All paper instruments will be reviewed by specially trained data quality clerks who will check for completeness, clarity, and adherence to routing and range rules. In addition, senior project staff members will review data collected electronically to determine the need for corrections to instruments.
The web-based surveys will contain built-in range checks, logic checks, and routing instructions to effectively eliminate most of the errors inherent in paper instruments. All data will undergo a series of data editing steps beginning with the recruiters’ review of all roster information entered into a web-based rostering program. Senior staff will then review the roster information and note any errors or inconsistencies for correction.
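As an illustration of the kind of range and logic checks described above, the sketch below applies a few plausibility rules to a single hypothetical time-use record. The field names and the specific rules are invented for illustration and are not the survey's actual edit specifications.

```python
def check_time_use_record(record):
    """Minimal illustration of range and logic checks like those built into the
    web-based surveys. Field names and thresholds are hypothetical."""
    errors = []
    hours = record.get("hours_per_week")
    if hours is None:
        errors.append("hours_per_week is missing")
    elif not 0 <= hours <= 80:                      # range check
        errors.append("hours_per_week outside plausible range (0 to 80)")
    pct_fields = ["pct_instruction", "pct_planning", "pct_admin"]
    pcts = [record.get(f, 0) for f in pct_fields]
    if any(p < 0 or p > 100 for p in pcts):         # range check on percentages
        errors.append("percentage field outside 0 to 100")
    elif sum(pcts) > 100:                           # logic/consistency check
        errors.append("time allocation percentages sum to more than 100")
    return errors

# Example: the planning and instruction percentages conflict
print(check_time_use_record({"hours_per_week": 45,
                             "pct_instruction": 70,
                             "pct_planning": 40}))
```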
Procedures to minimize errors due to data entry, coding, and data processing.
Cost and implementation data will be reviewed by data collectors and a dedicated QA reviewer to ensure that data are complete and error free. Data entry staff will enter the data from any paper time-use or SEQUAL surveys into the web-based instruments. Because the same web-based instrument is used, the data received from hard copy instruments for either survey will undergo the same range, logic, and consistency checks that are built into the web-based instruments. Entering the data from paper instruments into the web-based instruments allows frequency review to be performed across all cases regardless of administration mode. Several questions in the time-use survey are open-ended and will require respondents to enter text directly. In addition, some responses to questions may not fit into any of the provided response categories. Respondents will have the option to choose “other” and then to specify a response. Probes and help screens will be built into the survey and available to respondents.
Data Analysis
The study team will build the measures in a series of incremental steps. The steps progress from analyzing the data at the item level, to creating reliable summary variables for analysis by key function, and finally to analyzing summary variables or scales to examine associations among implementation, cost, and center characteristics (including quality).
Implementation measures. When data are complete and clean, the study team will develop implementation measures that represent a descriptor of each key function. To assess the validity and reliability of draft scales for each key function, the study team will first examine the item-total correlations, which represent the degree to which differences among centers’ responses to each individual item are consistent with their responses to all other items in the scale as a whole. A high item-total correlation indicates that the item is consistent with the scale as a whole, which is a desirable characteristic for reliability. Next, we will identify the items with adequate item-total correlations (at least 0.2) and examine the face validity of the resulting set of items. In other words, we will examine whether the set of items reflects content we would expect from a theoretical perspective. Finally, we will conduct categorical confirmatory factor analysis to identify key implementation factors and how they work together within each of the key functions.
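The item-total correlation step can be illustrated with a short sketch. The example below computes corrected item-total correlations (each item correlated with the total of the remaining items) for a hypothetical draft scale and retains items at or above the 0.2 threshold noted above. The data frame layout and item names are assumptions for illustration, not the study's actual analysis code.

```python
import pandas as pd

def corrected_item_total_correlations(items: pd.DataFrame) -> pd.Series:
    """Correlate each item in a draft scale (columns = items, rows = centers)
    with the total of the remaining items ("corrected" item-total correlation).
    Input data and item names are hypothetical."""
    total = items.sum(axis=1)
    # Exclude the item itself from the scale total before correlating
    return items.apply(lambda col: col.corr(total - col))

# Example usage with a hypothetical draft scale of three items
scale = pd.DataFrame({
    "item_1": [1, 2, 3, 4, 5, 4],
    "item_2": [2, 2, 3, 5, 4, 4],
    "item_3": [5, 1, 4, 2, 3, 1],
})
correlations = corrected_item_total_correlations(scale)
keep = correlations[correlations >= 0.2]  # items meeting the 0.2 rule of thumb
print(correlations, keep.index.tolist())
```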
Analysis. The study team will use constructed cost and implementation measures to focus on:
Validity of the implementation measures: The team will evaluate concurrent validity of the implementation measures by looking at bivariate correlations between the implementation measures and scores or performance on other well-known or established measures of the same or similar construct obtained at approximately the same time. The team will calculate the correlations between the implementation measures and the SEQUAL domain scores to assess evidence of a positive relationship with measures of the same or similar constructs. The team will also evaluate validity by testing for significant bivariate relationships between the implementation measures and measures of quality—such as QRIS ratings or accreditation—through correlations or t-tests for differences in means.
Variation in implementation and cost measures: The team will inspect descriptive statistics for implementation and cost measures, by key function, across all centers and by a range of center characteristics (such as funding mix, inclusion of infant or toddler age, or center size).
Associations between implementation and cost measures: The team will examine correlations between implementation and cost measures. According to our calculations, a sample of 80 centers would be sufficient to detect correlations of 0.31 or higher (a minimal sketch of this type of power calculation appears after this list).
Examine whether the relationship between implementation, cost, and/or quality varies by selected center characteristics: The team will conduct multivariate analysis to examine the relationship between cost and implementation, controlling for selected center characteristics (including quality). The team will also explore whether the relationship between cost, implementation and/or quality varies by other selected center characteristics. Quality measures will primarily be based on publicly available QRIS ratings. The study team will also explore the possibility of conducting additional analysis using center-level state administrative data (for example, additional quality measures collected through the state QRIS).
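The detectable correlation cited above can be checked with a standard approximation. The sketch below uses the Fisher z transformation to estimate the power to detect a correlation of 0.31 with 80 centers under a two-sided test at the 5 percent level; this is a textbook approximation offered for illustration and may differ from the study team's actual power calculation.

```python
import math
from statistics import NormalDist

def power_for_correlation(r, n, alpha=0.05):
    """Approximate power to detect a population correlation r in a sample of
    size n with a two-sided test at level alpha, via the Fisher z transform.
    A standard approximation, shown only to illustrate the statement above."""
    z_r = math.atanh(r)                        # Fisher z of the target correlation
    se = 1 / math.sqrt(n - 3)                  # approximate standard error of z
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    return 1 - NormalDist().cdf(z_crit - z_r / se)

print(round(power_for_correlation(0.31, 80), 2))  # roughly 0.80
```

Under these assumptions, the approximate power is about 80 percent, consistent with the statement that a sample of 80 centers can detect correlations of 0.31 or higher.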
The study team will also analyze the results of the experiments with tokens of appreciation for the time-use and SEQUAL teaching staff surveys. For the experiment, the team will analyze differences in response rates, at the center-level and by respondent type (such as lead and assistant teachers), between the two experimental groups. They will also examine differences in the timing of survey completion between respondents in centers in each of the two experimental groups and assess the costs associated with each approach. The study team will prepare a memorandum that presents the results for the experiment as available. The results on the effectiveness and efficiency in the use of pre-paid tokens of appreciation and variation in amounts (or dosage) will prove useful in building a body of evidence about what works in conducting surveys with teaching staff in early care and education center-based settings.
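As an illustration of the group comparison described above, the sketch below applies a two-proportion z-test to hypothetical completion counts for the two token of appreciation groups. The counts, and the choice of a simple two-proportion test rather than a center-level model, are assumptions for illustration only.

```python
import math
from statistics import NormalDist

def two_proportion_ztest(completed_a, invited_a, completed_b, invited_b):
    """Compare survey response rates between two experimental groups with a
    two-proportion z-test. Counts are hypothetical; the study team's actual
    analysis may use different or additional methods."""
    p_a = completed_a / invited_a
    p_b = completed_b / invited_b
    pooled = (completed_a + completed_b) / (invited_a + invited_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / invited_a + 1 / invited_b))
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided p-value
    return p_a, p_b, z, p_value

# Hypothetical counts: 560 staff invited in each experimental group
print(two_proportion_ztest(500, 560, 470, 560))
```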
Data Use. After the field test and when measures have been finalized, the team will develop a user’s manual about the collection and analysis of data to produce and interpret the measures so that the instruments/measures can be used by other researchers to generate information to guide program, policy, and practice. If ACF opts to archive the data from this field test for secondary use, documentation will include information necessary to contextualize and assist in interpretation of the data, such as descriptive tables comparing the characteristics of participating centers to national averages.
B8. Contact Person(s)
Meryl Barofsky, Office of Planning, Research, and Evaluation, Meryl.Barofsky@acf.hhs.gov
Ivelisse Martinez-Beck, Office of Planning, Research, and Evaluation, ivelisse.martinezbeck@acf.hhs.gov
Tracy Carter Clopet, Office of Planning, Research, and Evaluation, Tracy.Clopet@acf.hhs.gov
Gretchen Kirby, Mathematica Policy Research, GKirby@Mathematica-Mpr.com
Pia Caronongan, Mathematica Policy Research, PCaronongan@mathematica-mpr.com
Annalee Kelly, Mathematica Policy Research, AKelly@mathematica-mpr.com
Attachments
ATTACHMENT A: ECE-ICHQ CONCEPTUAL FRAMEWORK
ATTACHMENT B: ADVANCE MATERIALS
ATTACHMENT C: EMAIL AND LETTER TO SELECTED CENTERS
ATTACHMENT D: IMPLEMENTATION INTERVIEW EMAIL
ATTACHMENT E: COST WORKBOOK EMAIL
ATTACHMENT F: FALL 2021 SURVEY OUTREACH
INSTRUMENT 1: CENTER RECRUITMENT CALL SCRIPTS
INSTRUMENT 2: CENTER ENGAGEMENT CALL SCRIPT
INSTRUMENT 3: IMPLEMENTATION INTERVIEW PROTOCOL
INSTRUMENT 4: COST WORKBOOK
INSTRUMENT 5: INITIAL STAFF ROSTER
INSTRUMENT 6: TIME-USE SURVEY
INSTRUMENT 7: CENTER RE-ENGAGEMENT CALL SCRIPT AND ROSTER UPDATE FOR THE FALL 2021 SURVEY
INSTRUMENT 8: SEQUAL TEACHING STAFF SURVEY (citation only; proprietary instrument)