
Early Care and Education Leadership Study (ExCELS) Descriptive Study



OMB Information Collection Request

New Collection





Supporting Statement

Part B



AUGUST 2021









Submitted By:

Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services


4th Floor, Mary E. Switzer Building

330 C Street, SW

Washington, D.C. 20201


Project Officer: Nina Philipsen, Ph.D.

Part B

B.1. Objectives

1. Study objectives

The goals of the Early Care and Education Leadership Study (ExCELS) data collection are to (1) develop a short-form measure of early care and education (ECE) leadership that has strong psychometric properties, and (2) examine empirical support for the associations among key constructs and outcomes in the theory of change (Appendix A) of ECE leadership for quality improvement. The Office of Planning, Research, and Evaluation (OPRE) within the Administration for Children and Families (ACF) at the U.S. Department of Health and Human Services (HHS) has contracted with Mathematica and its subcontractor, the Institute for Early Education Leadership and Innovation at the University of Massachusetts Boston, to conduct this study.

2. Generalizability of results

This is a measurement development study intended to develop a short-form measure of ECE leadership, examine the psychometric properties of the measure, and test the associations among leadership elements and outcomes. Data are not intended to support statistical generalization to other sites or service populations.

3. Appropriateness of study design and methods for planned uses

The ExCELS purposive sample design features criteria to select ECE centers that vary in context and characteristics so that the resulting measure can reliably distinguish levels of leadership among centers that receive federal funding. The study's theory of change (Appendix A) guided the design of the study and the measures included. The surveys administered to center managers and teaching staff include newly developed questions that capture leadership in a broad sense and reflect the range of leadership that may inform what effective leadership can produce. A second component of the teaching staff survey (on center culture, climate, and communication, such as a culture of respect, shared growth, and learning; and collaboration among staff) further supports the assessment of differences in leadership and the test of associations between leadership and outcomes. The analyses are intended to assess the psychometric properties of the new leadership measure and to test the associations hypothesized in the theory of change; they will not be used to generate nationally representative estimates of the prevalence of leadership characteristics. As noted in Supporting Statement A, this information is not intended to be used as the principal basis for public policy decisions and is not expected to meet the threshold of influential or highly influential scientific information.

B.2. Methods and design

1. Target population

The study team will collect information from center managers and teaching staff in center-based ECE settings (centers) that receive funding from Head Start and/or the Child Care and Development Fund (CCDF) and serve children from birth to age 5 (but not yet in kindergarten). The study team plans to select 30 centers from each of four states that represent different ECE environments with the best potential for promoting and supporting center leadership, but that vary in child care regulatory context, supports for ECE access and quality, and geographic region. The study team will then select an average of two center managers and all teaching staff serving children from birth to age 5 (but not yet in kindergarten) in each center to collect information about key study constructs. This will provide samples of 240 center managers and approximately 1,680 teaching staff in 120 centers. Analyses will be conducted at both the individual level (center manager or teaching staff) and the center level.

2. Sampling and site selection

The study team will use a multistep, purposive sampling approach that begins at the state level and moves systematically to the center and staff levels.

a. Sampling of states and centers

The study team will consider the following criteria in selecting the four states:

  • ECE environments that foster leadership. The study team will select states based on the strength of administrator qualifications, including licensing, credentials, and Quality Rating and Improvement System (QRIS) criteria, using the policy lever scores developed by the Leadership Education for Administrators and Directors (L.E.A.D.) Early Childhood Clearinghouse (Abel et al. 2018).1 This criterion will enable the study team to select states that set or promote minimum qualifications for administrators in some manner and thus have ECE environments that foster leadership.

  • QRIS participation. The study team prefers states that have a sufficient number of centers participating in the QRIS, distributed across the QRIS rating levels, to support center selection in two geographic areas of each state. This criterion will support recruiting centers of varying quality in the two areas. The team will use QRIS ratings as a proxy for high and low quality in the center-level analyses, although the ratings will not be a criterion for center selection (particularly because of the delays and complications in determining ratings during the COVID-19 pandemic).

  • Child care regulatory context. The study team will select two states with more stringent licensing requirements and two states with less stringent licensing requirements based on rankings produced by Child Care Aware of America in 2013.2

  • Reimbursement rates for child care subsidies as supports for ECE access and quality. The study team will select states that vary in their CCDF subsidy payment rates, as represented by the percentage of the market rate at which rates were set in 2018. Subsidy reimbursement rates can indicate the ability of CCDF-funded centers to invest in the quality of leadership.

  • Geographic regions. The study team will select states in different census-defined regions to capture variation in state and regional context and conditions.

Across the four states, the study team plans to achieve a total sample of 120 centers. A center is defined as a specific physical location with at least two classrooms whose primary purpose is providing ECE services for children from birth to age 5 (but not yet in kindergarten). Classrooms in public school settings will be excluded to focus on ECE leadership structures that are not confounded with K-12 leadership structures. The study team will restrict the selection of centers to those that offer full-day services. The selection criteria will include funding mix, center size, and geographic area, as defined in Table B.1. The study team plans to target similar proportions of the different center types in each state.


Table B.1. Center sample sizes by center selection criteria


| Funding mix^a | Geographic area | Large^b | Medium^b | Small^b | Row total | Total by funding mix |
| --- | --- | --- | --- | --- | --- | --- |
| Head Start | Urban | 8 | 8 | 4 | 20 | 40 |
| Head Start | Suburban | 8 | 8 | 4 | 20 | |
| CCDF | Urban | 8 | 4 | 8 | 20 | 40 |
| CCDF | Suburban | 8 | 4 | 8 | 20 | |
| Mixed funding | Urban | 4 | 8 | 8 | 20 | 40 |
| Mixed funding | Suburban | 4 | 8 | 8 | 20 | |
| Total number of centers | All | 40 | 40 | 40 | 120 | 120 |
| Total number of centers | Urban | 20 | 20 | 20 | 60 | |
| Total number of centers | Suburban | 20 | 20 | 20 | 60 | |


a Head Start centers are those that receive at least 50 percent of their funding from Head Start (can also include Early Head Start). CCDF centers are those that receive at least 50 percent of their funding from CCDF subsidies. Centers with mixed funding are those that receive at least 50 percent of Head Start and CCDF funding combined.

b Small centers are those serving 25 or fewer children; medium centers are those serving more than 25 but fewer than 75 children; and large centers are those serving 75 or more children. These classifications were identified based on the patterns seen in the National Survey of Early Care and Education (NSECE).3

Using data from state websites and the Head Start Enterprise System or Early Childhood Learning & Knowledge Center, the study team will assemble lists of centers in the four states that meet the selection criteria outlined above. Once a center is recruited for the descriptive study, the study team will conduct the engagement interview (Instrument 3) with the primary site leader to collect the center's characteristics. The study team will use this information to determine how the center fits the study's recruitment goals, based on the selection criteria outlined in Table B.1. Centers that meet the selection criteria will be eligible to participate in the study.

b. Selection of center managers and teaching staff

The formal leadership structure in ECE centers may include just a primary site leader or multiple managers with different areas of oversight or responsibility. Based on our experience with the Assessing the Implementation and Cost of High Quality Early Care and Education (ECE-ICHQ) project, there is an average of two managers per center. The study team will select one to three center managers per center, depending on center size, with a total sample of 240 center managers across the centers.

  • The primary site leader (the person in the building who is responsible for overseeing all that happens in the center on a daily basis) will be prioritized as the survey respondent in all centers.

  • In medium and large centers, the education program lead will be selected as the second respondent in centers that have this position distinct from the primary site leader.4

  • In large centers with manager-level positions beyond those of a primary site leader and education program lead, the study team will randomly select a third respondent from among the center managers.

Finally, the study team will select all teaching staff included on the teaching staff roster (Instrument 5) who are in classrooms serving children whose ages range from birth to age 5 (but who are not yet in kindergarten).

Table B.2 shows the expected sample sizes for each level of respondent by center type for the descriptive study. For the purposes of assessing statistical precision, the study team assumed a conservative response rate of 80 percent for teaching staff, or 1,344 staff responding to the survey. This assumption is based on the high response rates in the ECE-ICHQ study, which used a similar design. However, the goal is for all teaching staff to complete their surveys.


Table B.2. Expected sample sizes of center managers and teaching staff by center type for the ExCELS descriptive study


| | Head Start | Mixed funding | CCDF center | Total |
| --- | --- | --- | --- | --- |
| Centers | 40 | 40 | 40 | 120 |
| Center managers | 80 | 80 | 80 | 240 |
| Primary site leaders | 40 | 40 | 40 | 120 |
| Education program lead and/or other managers | 40 | 40 | 40 | 120 |
| Teaching staff^a | 448 | 448 | 448 | 1,344 |

a The study team assumed a total of 14 teaching staff per center, on average (based on our experience in ECE-ICHQ and the NSECE [NSECE Project Team 2015])5, with an 80 percent response rate for statistical precision estimates.

c. Statistical precision

Table B.3 shows the minimum detectable correlations for analyses that examine the association between two continuous variables using the expected samples (for example, between the leadership elements at the center level and teaching staff turnover). The proposed sample sizes allow minimum detectable correlations in the low range, from 0.133 to 0.256. A computational sketch of this calculation follows the table.


Table B.3. Minimum detectable correlations for analyses examining associations of two continuous variables, with leadership measures at the center level

| Analysis | Sample size | Minimum detectable correlation |
| --- | --- | --- |
| Centers | 120 | 0.256 |
| Center managers | 240 | 0.198 |
| Teaching staff | 1,344 | 0.133 |

Note: In this table, the study team assumed a type I error rate of 0.05 (two-sided) and a power of 0.80. The study team assumed a sample with an average of 2 center managers and 14 teaching staff per center and assumed a 100 percent response rate for center managers and an 80 percent response rate for teaching staff.
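
To illustrate how values of this magnitude arise, the following sketch computes a minimum detectable correlation with the standard Fisher z approximation, assuming independent observations, a two-sided type I error rate of 0.05, and power of 0.80. This is an illustration rather than the study team's actual precision code: it yields approximately 0.253 for the 120-center sample, close to the 0.256 in Table B.3, while the manager- and staff-level values in the table exceed what this simple formula produces, presumably because they account for the clustering of respondents within centers.

```python
import math
from scipy.stats import norm

def min_detectable_correlation(n, alpha=0.05, power=0.80):
    """Smallest correlation detectable at sample size n, via the Fisher z
    approximation, assuming independent observations (no clustering)."""
    z_crit = norm.ppf(1 - alpha / 2) + norm.ppf(power)  # 1.960 + 0.842
    return math.tanh(z_crit / math.sqrt(n - 3))         # back-transform to r

print(round(min_detectable_correlation(120), 3))  # ~0.253 (Table B.3 reports 0.256)
```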

The study team can also estimate means and percentages overall and by subgroups of interest (such as Head Start centers, centers receiving CCDF subsidies, and centers with mixed funding). Table B.4 shows the minimum detectable differences (MDDs) for subgroup comparisons under several allocation scenarios. The study design has adequate precision for teaching staff-level subgroup analyses to detect small differences, ranging from 0.266 to 0.327 standard deviation units. At the center and manager levels, differences between subgroups would have to be moderate to large to be statistically detectable. For analyses at the manager level, the MDDs range from 0.396 to 0.485. The MDDs are largest for comparing subgroups on leadership scores at the center level, ranging from 0.511 to 0.627. Therefore, with a sample of 120 centers, any subgroup analyses at the center level will be exploratory.


Table B.4. MDDs for subgroup comparisons with a total sample of 120 centers

| Proportion of the sample in Subgroup 1 | Proportion of the sample in Subgroup 2 | Number of centers or respondents in Subgroup 1 | Number of centers or respondents in Subgroup 2 | MDD (standard deviation units) |
| --- | --- | --- | --- | --- |
| Centers | | | | |
| 0.50 | 0.50 | 60 | 60 | 0.511 |
| 0.33 | 0.67 | 40 | 80 | 0.543 |
| 0.25 | 0.75 | 30 | 90 | 0.591 |
| 0.33 | 0.33 | 40 | 40 | 0.627 |
| Center managers | | | | |
| 0.50 | 0.50 | 120 | 120 | 0.396 |
| 0.33 | 0.67 | 80 | 160 | 0.420 |
| 0.25 | 0.75 | 60 | 180 | 0.457 |
| 0.33 | 0.33 | 80 | 80 | 0.485 |
| Teaching staff | | | | |
| 0.50 | 0.50 | 672 | 672 | 0.266 |
| 0.33 | 0.67 | 448 | 896 | 0.283 |
| 0.25 | 0.75 | 336 | 1,008 | 0.308 |
| 0.33 | 0.33 | 448 | 448 | 0.327 |

Note: In this table, the study team assumed a type I error rate of 0.05 (two-sided) and power of 0.80. The study team assumed a sample with an average of 2 center managers and 14 teaching staff per center and assumed a 100 percent response rate for center managers and an 80 percent response rate for teaching staff.

MDD = minimum detectable difference.
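
The center-level rows of Table B.4 can be reproduced to within rounding by the standard two-sample formula for a minimum detectable difference in standard deviation units, again assuming a two-sided type I error rate of 0.05 and power of 0.80. The sketch below is illustrative, not the study team's actual code; the manager- and staff-level rows of the table are larger than this independent-samples formula produces, consistent with adjustments for the clustering of respondents within centers.

```python
from math import sqrt
from scipy.stats import norm

def mdd_sd_units(n1, n2, alpha=0.05, power=0.80):
    """Two-sample minimum detectable difference in standard deviation units,
    assuming independent observations (no clustering design effect)."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return z * sqrt(1 / n1 + 1 / n2)

# Center-level rows of Table B.4
for n1, n2 in [(60, 60), (40, 80), (30, 90), (40, 40)]:
    print(n1, n2, round(mdd_sd_units(n1, n2), 3))  # 0.511, 0.543, 0.591, 0.626
```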

B.3. Design of data collection instruments

1. Development of data collection instruments

As part of the ExCELS descriptive study, the study team conducted a literature review,6 developed a theory of change (Appendix A), and drafted a compendium of existing measures for understanding leadership in ECE to set a foundation for developing the leadership measure. To begin development, the study team consulted with ECE experts and established a stakeholder workgroup to review an early draft of the survey questions. Next, the study team conducted a pretest with nine ECE center staff to refine the staffing structure and leadership positions (SSLP) interview (Instrument 4) and both surveys (Instrument 6 and Instrument 7) in advance of the descriptive study. The SSLP interview pretest helped the study team clarify interview questions and ensure the interview could be completed within the expected burden for the instrument. The pretest of the survey instruments helped the study team refine the survey questions, decrease cognitive burden on respondents, and remove survey questions to ensure the instruments met the expected burden. The new leadership measure captures three elements of ECE leadership: (1) who leaders are (the individuals who participate in decision-making and quality improvement in centers through formal or informal leadership roles); (2) what leaders bring (the values and beliefs of individuals who participate in decision-making and quality improvement); and (3) what leaders do (the actions individuals take and practices they pursue as part of their leadership).

B.4. Collection of data and quality control

Table B.5 below outlines the data collection instruments that will be used for the ExCELS descriptive study, the respondents for each, and the expected time to complete.


Table B.5. Data collection activities for the ExCELS descriptive study, by respondent, time to complete, and mode

| Data collection activity | Respondents | Time to complete | Mode |
| --- | --- | --- | --- |
| Center recruitment call (Instrument 1) | Primary site leader | 20 minutes | Telephone |
| Umbrella organization recruitment approval call (Instrument 2) | Umbrella organization director | 20 minutes | Telephone |
| Engagement interview (Instrument 3) | Primary site leader | 20 minutes | Telephone |
| Staffing structure and leadership positions interview (Instrument 4) | Primary site leader | 30 minutes | Telephone with CADE on the web |
| Teaching staff roster (Instrument 5) | Primary site leader | 15 minutes | CADE on the web |
| Center manager survey (Instrument 6) | One to three center managers per center, based on center size | 25 minutes | Web with paper option |
| Teaching staff survey (Instrument 7) | All teaching staff, including lead, head, or co-teachers and assistant teachers in classrooms serving children from birth to age 5 (not yet in kindergarten) | 60 minutes | Web with paper option |

CADE = computer-assisted data entry.

Recruitment protocol. Mathematica, the contractor, will collect data for this study. Using publicly available information, the study team will send advance materials to 2,000 centers in four states to advertise the study. The advance materials will include a joint-agency informational letter about the descriptive study (signed by both the director of the Office of Head Start and the director or acting director of the Office of Child Care), an informational letter further explaining the descriptive study (signed by the Mathematica survey director), a study brochure, and a study fact sheet. (See Appendix B for the joint-agency letter, advance recruitment letter and email, study brochure, and study fact sheet.) The team will then follow up with more targeted letters and emails from ExCELS liaisons, members of the study team who will serve as points of contact between the ExCELS study team and the centers (also Appendix B). Based on recruitment experience in other ECE studies, the study team expects to follow up with approximately 1,800 of the centers to secure the participation of the 120 centers needed for the study. The follow-up letter and email will notify the primary site leader that the study team would like to schedule a phone conversation (recruitment call; Instrument 1) to discuss the study in greater detail, learn about some of the center's key characteristics, and discuss the center's participation. Once the primary site leader is reached by phone, liaisons will conduct the recruitment call (Instrument 1) to request the center's participation in the study. Some centers that are part of a program or larger organization may need approval to participate from their program office or the umbrella organization with which the center is affiliated. Liaisons will conduct the umbrella organization recruitment approval call (Instrument 2) with the program directors or administrators of the umbrella organization to gain approval to recruit these centers.

The study's concept of leadership is broad and not focused on a single person. The study team expects a range of centers to participate, given this broad conceptualization of what effective leadership can be and the different forms and structures it may take. The recruitment materials explain the study as an effort to learn about who contributes to and participates in decision-making, and they stress that the study is not gathering information for accountability purposes or to evaluate centers or individuals.

Interviews with the primary site leader. If the center agrees to participate in the study, liaisons will conduct an engagement interview (Instrument 3) with the primary site leader by phone to collect the center's characteristics and ensure the center is eligible to participate. Once the study team confirms a center's eligibility, data collection begins with the liaison conducting the SSLP interview (Instrument 4) with the primary site leader by phone. Before the engagement interview and SSLP interview, the liaison will send an email (Appendix C) to the primary site leader to confirm the interview schedule and the topics to be discussed. Quality assurance (QA) of the engagement interview and SSLP interview will be built into the liaison training. Furthermore, each liaison's first engagement interview and SSLP interview will be monitored, with immediate feedback provided, and liaisons will be monitored on an ongoing basis throughout data collection.

The information collected in these interviews will help the study team monitor center characteristics so that the sample achieves variation on characteristics that may reflect differences in leadership. Because this is a measure development project, the team wants to achieve variation both in center characteristics and in the study's conceptualization of leadership. As described in Supporting Statement Part A, Section A.2, to support site selection the study team will access publicly available information about centers, including QRIS levels, which can reflect administration and management characteristics, and will use this information to select centers to recruit. During the engagement interview, the team will ask the primary site leader to confirm the center's QRIS rating and to report whether staff have participated in any leadership development programs. These two questions can serve as proxies for whether a center might have stronger or weaker leadership. The team will monitor the responses to these two questions so as not to over-sample centers with high QRIS ratings or with staff who have participated in leadership development. The team will examine these and other center characteristics to describe the sample and to examine whether the new measure can distinguish leadership elements across centers with varying characteristics.

Center manager survey. The study team will identify the respondents for the center manager survey through the SSLP interview. The team will send potential respondents paper and email invitations that provide a link to the web-based survey (Instrument 6). Center managers can request a paper copy of the survey by calling or emailing the study team at the phone number and email address noted in the invitation. If the survey has not been completed within the requested time frame, the study team will send follow-up emails and a follow-up letter to the respondents (Appendix D). If the study team conducts site visits to centers (to support the completion of teaching staff surveys, as described below), a study representative will bring paper copies of the center manager survey so the respondent can complete it on paper at that time. The QA of the center manager survey is described in Section B.7.

Teaching staff roster and teaching staff survey. The study team will ask the primary site leader to provide a list of teaching staff (teaching staff roster, Instrument 5), which will become the sample of teaching staff invited to participate in the survey. The roster will capture each staff member's language preference for the survey (English or Spanish) so that the survey can be administered in Spanish to teaching staff with that preference. Teaching staff will receive an invitation packet addressed directly to them with the advance letter (Appendix E), which will include the address of the web-based survey (Instrument 7). The mailing will also include a $5 gift card with the survey invitation materials (see Section A.9 for details). Teaching staff can request a paper copy of the survey by calling or emailing the study team at the phone number and email address noted in the invitation. Teaching staff will receive an email invitation a week after they receive the invitation letter. If teaching staff do not complete the survey within the requested time frame, the study team will send follow-up emails and a follow-up letter with a paper survey to encourage response (Appendix E). If a center does not achieve a 70 percent response rate after all the follow-up emails and letters are sent, the study team will ask the primary site leader for permission to visit the center to address questions and encourage survey completion. If the primary site leader agrees, a study representative will visit the center on a predetermined day to support the completion of the teaching staff and center manager surveys. During the visit, the study representative will distribute the advance letter inviting potential respondents to complete the survey. The advance letter will provide a link to the web-based survey, and the study representative will bring paper copies of the survey so that respondents have the option to complete it on paper if they prefer. The study representative will be available at the center for up to two to three hours to speak with staff and answer questions and will return to the center a day or two later to pick up completed paper surveys.

The QA of the teaching staff survey is described in Section B.7. Supervisors will oversee the work of the field staff by requiring each staff member to check in via phone or email at the end of each day of a site visit to report that day's data collection progress. The study team will monitor survey completions in real time through the web instruments and through field staff reports of the number of completed paper surveys collected in the field.

B.5. Response rates and potential nonresponse bias

1. Response rates

The study team will collect data from 120 centers that are eligible and agree to participate in the study. Recruitment will be ongoing, and the study team expects to be able to replace any center that withdraws from the study before staff are invited to complete surveys.

Across the 120 centers, the study team will invite 240 center managers to complete the center manager survey, and 1,680 teaching staff to complete the teaching staff survey. The study team expects all center managers to complete the center manager survey, and at least 80 percent of teaching staff to complete the teaching staff survey.

The study is not designed to produce statistically generalizable findings. However, the team will calculate the response rate for center managers: the number of center managers who completed the survey divided by the number of center managers sampled. The study team will likewise calculate the response rate for teaching staff: the number of teaching staff who completed the survey divided by the number of eligible teaching staff7 on the teaching staff roster (Instrument 5). These calculations will inform whether the responses of center managers and teaching staff who complete the survey will produce a reliable measure of center leadership. It is essential in the early stages of measure development to obtain high response rates (80 percent or higher) in order to establish the properties of the new measure and test how leadership functions within the theory of change (described in Section B.7.2).

a. Maximizing response rates

The study team will offer each participating center an honorarium in the amount of $150 in recognition of the time the primary site leader and on-site coordinator will spend to support the study’s data collection activities. The study team will offer survey respondents gift cards as tokens of appreciation for their participation ($25 for center managers and $40 for teaching staff).

The study team will monitor response rates for each instrument, at the center level and overall, to provide real-time progress reports on response rates. The study team will work with primary site leaders and field staff (if applicable) to obtain additional surveys.

Center managers and teaching staff will have the flexibility to complete their surveys online or on paper, which will help the study team obtain high response rates. The study team will send follow-up emails and a follow-up letter to respondents who have not completed their surveys, beginning a week after the invitation emails were sent. If a center's teaching staff survey response rate is below 70 percent after all the follow-up emails and the letter have been sent, the team will schedule a site visit to distribute and collect paper surveys. The study team will monitor response rates for center manager surveys prior to the site visit to ensure timely follow-up during the visit. Field staff will remind center managers and provide them with paper copies in an effort to collect missing surveys by the end of the visit. During the same visit, field staff will encourage teaching staff to complete the survey online or on paper and will help with any questions or issues to support survey completion. See Appendices D and E for all respondent follow-up emails and letters.
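
As an illustration of this monitoring, the sketch below computes per-center teaching staff response rates and flags centers that fall below the 70 percent threshold that triggers a site visit. The data layout and names are hypothetical examples, not the study's actual tracking system.

```python
from collections import defaultdict

SITE_VISIT_THRESHOLD = 0.70  # per-center rate below which a site visit is scheduled

def centers_needing_visits(roster):
    """roster: (center_id, completed) pairs, one per rostered teaching staff
    member; returns centers whose response rate is below the threshold."""
    invited = defaultdict(int)
    completed = defaultdict(int)
    for center_id, done in roster:
        invited[center_id] += 1
        completed[center_id] += int(done)
    return {c: completed[c] / invited[c] for c in invited
            if completed[c] / invited[c] < SITE_VISIT_THRESHOLD}

# A 14-teacher center with 9 completions (about 64 percent) would be flagged.
print(centers_needing_visits([("A", True)] * 9 + [("A", False)] * 5))
```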

2. Nonresponse

Based on other similar ECE projects, the study team does not expect substantial nonresponse among center managers. The potential challenge with survey nonresponse exists mainly for the teaching staff survey. The study will attempt to collect data from all teaching staff at each center in the descriptive study to understand the reliability of center leadership as reported by teaching staff, as well as the extent of variation in responses within and across centers. The study team will maximize response rates for the teaching staff survey using the methods described in the section above. As part of study reporting, the study team plans to present the teaching staff response rate, along with the positions of teaching staff (listed in Instrument 5) and the center characteristics for those who completed the survey and those who did not.

B.6. Production of estimates and projections

The goals of this study are to examine the psychometric properties of the new leadership measure, develop a short-form measure of ECE leadership, and test the associations among leadership elements and outcomes depicted in the theory of change. The data will not be used to generate population estimates, either for internal use or dissemination.

B.7. Data handling and analysis

1. Data handling

The study team will test the web surveys in multiple rounds before fielding to confirm that they work correctly. This process consists of testing the different paths a respondent can take through the questions and ensuring that skips and checks work correctly. The study team will test all possible paths using randomly generated data for 1,000 cases to fully exercise the programmed logic.

During data collection, the web surveys will include checks to ensure responses are within expected ranges and questions are not left blank. The checks will flag inconsistencies for respondents in real time, prompting them to review their responses before moving forward. The study team will also program the surveys to run internal consistency checks between responses and to have respondents skip questions that do not apply to them based on previous responses; a simplified illustration of such checks follows.
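
The sketch below illustrates the general form of such range and skip checks. The question names, valid range, and skip rule are hypothetical examples, not items from the ExCELS instruments.

```python
def validate_response(resp):
    """Return real-time flags for a partial survey response (a dict of
    question name to answer). Names and ranges here are illustrative."""
    flags = []
    # Range check: years of ECE experience should fall in a plausible range.
    years = resp.get("years_in_ece")
    if years is None:
        flags.append("years_in_ece: please provide a response")
    elif not 0 <= years <= 60:
        flags.append("years_in_ece: value outside the expected range (0-60)")
    # Skip logic: the follow-up applies only to respondents who supervise staff.
    if resp.get("supervises_staff") == "no" and "num_supervised" in resp:
        flags.append("num_supervised: should be skipped when supervises_staff is 'no'")
    return flags

print(validate_response({"years_in_ece": 72, "supervises_staff": "no",
                         "num_supervised": 3}))  # both checks flag this response
```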

As another internal check, Mathematica will conduct data reviews to ensure the web survey is working as intended. The study team will review all center manager and teaching staff surveys completed online for missing responses, inconsistencies, and respondent break-off patterns. At the beginning of data collection, the study team will conduct a preliminary data review, running frequencies and cross-tabulations to confirm that the web survey is working as expected and to check for inconsistencies in the data. The study team will conduct a second review of both instruments halfway through the data collection period.

The study team will conduct QA checks on all completed paper surveys. Field staff will check paper surveys for missing data while still at the center and follow up with the respondent before leaving. The study team will review each survey against the same checks built into the web instruments and determine whether any respondents need to be contacted to address issues. The team will QA the data entry of center manager paper surveys by comparing the entered responses to the paper survey for accuracy. Because there may be a large volume of teaching staff surveys completed on paper, they will be entered into a data entry program that supports double data entry of all surveys to validate the entries in real time.

2. Data analysis

The instruments included in this OMB package will yield data to be analyzed using quantitative methods. The statistical precision of the analysis rests on the ability to obtain high response rates to the surveys (discussed in section B.2.2.c).

The analyses fall into four major areas, linked to the study’s research questions (see Supporting Statement A, Section A.2), as follows:

  1. Investigate the psychometric properties (including reliability and validity) of the leadership measure and develop a short-form measure of ECE leadership. The analyses will draw on classical test theory, item response theory (IRT), and generalizability theory. The analytic approaches include confirmatory factor analysis, Rasch modeling, differential item functioning (DIF) analysis, Cronbach’s alpha, item-to-total correlations, and generalizability theory analysis. Together, these approaches will identify the strongest items for the leadership measure and how items from the leadership elements (that is, who informal and formal leaders are, what they bring, and what they do) may form different types of scores. The team will explore the variation in responses across teaching staff within centers as well as across centers, to learn how much of the variation in leadership reflects differing perceptions and experiences of teaching staff within the same center relative to differences across centers, and thus how much each contributes to patterns in the measure. The study team will also estimate bivariate correlations with center culture, climate, and communication (such as a culture of respect, shared growth, and learning; collaboration among staff) to examine the measure’s concurrent validity. (A computational sketch of the internal consistency statistics appears after this list.)

  2. Describe the leadership scores for the overall sample and by subgroups. For the new leadership measure, one conceptualization of scoring would include a total leadership score, three element scores, and potential subscale scores for what leaders do. Analyses will include descriptive statistics (means, standard deviations, and percentages) for the overall sample on these scores. In addition, the study team will examine variation in leadership scores by staff characteristics and roles (for example, lead, head, and co-teachers versus assistant teachers; infant and toddler teaching staff versus preschool teaching staff) and by center characteristics (for example, funding sources, center size, and whether the center is embedded in a larger organization or part of a chain).

  3. Describe formal leadership roles and the structure of formal leadership within the center. The study team will calculate descriptive statistics on staff formal leadership roles across different management, oversight, and supervisory responsibilities. The study team will also conduct cluster analysis to develop a preliminary typology of formal leadership structures (for example, if responsibilities are dispersed versus clustered across different staff).

  4. Test the hypothesized associations in the theory of change. The study team will use multivariate ordinary least squares (OLS) regressions or hierarchical linear models (HLMs) to examine how leadership scores are associated with staff and center characteristics or context, and how center-level leadership scores are associated with center culture, climate, and communication and with center- and staff-level outcomes (such as teaching staff turnover and job satisfaction). In addition, the study team will test whether center culture, climate, and communication mediates the association between leadership scores and staff and center outcomes. Like most survey studies, the descriptive study may experience missing data; to address potential bias, the study team will use multiple imputation to maximize the number of individuals included in the analysis. (A sketch of a two-level model appears after this list.)
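
As a concrete illustration of the classical internal consistency statistics named in analysis area 1, the sketch below computes Cronbach’s alpha and corrected item-to-total correlations on simulated item responses. It is a minimal sketch on made-up data, not the study team’s scoring code, and it omits the confirmatory factor, IRT, DIF, and generalizability analyses.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(axis=0, ddof=1).sum()
                            / items.sum(axis=1).var(ddof=1))

def corrected_item_total(items):
    """Correlation of each item with the sum of the remaining items."""
    items = np.asarray(items, dtype=float)
    return [np.corrcoef(items[:, j], np.delete(items, j, axis=1).sum(axis=1))[0, 1]
            for j in range(items.shape[1])]

rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 1))          # simulated leadership trait
items = latent + rng.normal(size=(500, 8))  # eight noisy survey items
print(round(cronbach_alpha(items), 2))      # internal consistency of the scale
print([round(r, 2) for r in corrected_item_total(items)])
```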

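For analysis area 4, the sketch below fits a two-level model of the kind described, with teaching staff nested in centers and a center-level leadership score predicting a staff-level outcome, using the MixedLM routine in the statsmodels package. The variable names and simulated data are hypothetical stand-ins for the study’s constructs.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated staff-level data: job satisfaction predicted by a center-level
# leadership score, with a random intercept for each center.
rng = np.random.default_rng(1)
n_centers, staff_per_center = 120, 14
center = np.repeat(np.arange(n_centers), staff_per_center)
leadership = np.repeat(rng.normal(size=n_centers), staff_per_center)
center_effect = np.repeat(rng.normal(scale=0.5, size=n_centers), staff_per_center)
satisfaction = 0.3 * leadership + center_effect + rng.normal(size=center.size)

df = pd.DataFrame({"center": center, "leadership": leadership,
                   "satisfaction": satisfaction})
model = smf.mixedlm("satisfaction ~ leadership", df, groups=df["center"])
print(model.fit().summary())  # fixed effect of leadership plus center-level variance
```
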
3. Data use

Mathematica will prepare the following products based on the analysis of the data from the descriptive study:

  • A comprehensive report to present the analysis and implications from the descriptive study

  • Briefs using data collected during the descriptive study, highlighting the relevant findings from the study, and conveying specific uses of the new measure for different audiences

  • A restricted-use file and documentation that will be available for secondary analysis

  • A short-form measure of ECE leadership

The study team plans to archive a restricted-use data file at the Child and Family Data Archive for secondary analysis. It will be accompanied by a data user’s manual to inform and assist researchers who might be interested in using the data for future analyses. The manual will include (1) background information about the study, its design, and the leadership measure; (2) an overview of the data collection procedures and instruments; (3) data preparation and the structure of the data files; and (4) descriptions of scores and composite variables.

B.8. Contact person(s)

Mathematica will lead the data collection activities described in this ICR. Table B.7 lists the individuals responsible for the data collection activities and statistical aspects of the survey and study design.


Table B.7. Contact persons

| Name | Role | Organization | Email |
| --- | --- | --- | --- |
| Nina Philipsen, Ph.D. | Senior Social Science Research Analyst | Office of Planning, Research, and Evaluation | Nina.Hetzner@acf.hhs.gov |
| Bonnie Mackintosh, Ed.D. | Social Science Research Analyst | Office of Planning, Research, and Evaluation | Bonnie.Mackintosh@acf.hhs.gov |
| Gretchen Kirby | Project Director | Mathematica | GKirby@Mathematica-Mpr.com |
| Lizabeth Malone, Ph.D. | Co-Principal Investigator | Mathematica | LMalone@Mathematica-Mpr.com |
| Anne Douglass, Ph.D. | Co-Principal Investigator | University of Massachusetts Boston | Anne.Douglass@umb.edu |
| Yange Xue, Ph.D. | Senior Researcher | Mathematica | YXue@mathematica-Mpr.com |
| Annalee Kelly | Survey Director | Mathematica | AKelly@mathematica-Mpr.com |




Attachments

Appendices

Appendix A. ExCELS Theory of Change
Appendix B. Center Recruitment Materials
Appendix C. Interview Confirmation Emails
Appendix D. Center Manager Survey Respondent Materials
Appendix E. Teaching Staff Survey Respondent Materials

Instruments

Instrument 1. Center Recruitment Call Script
Instrument 2. Umbrella Organization Recruitment Approval Call Script
Instrument 3. Engagement Interview Guide
Instrument 4. Staffing Structure and Leadership Positions (SSLP) Interview Guide
Instrument 5. Teaching Staff Roster
Instrument 6. Center Manager Survey
Instrument 7. Teaching Staff Survey

1 Abel, M. B., T. N. Talan, and M. Magid. “Closing the Leadership Gap: 2018 Status Report on Early Childhood Program Leadership in the United States.” Wheeling, IL: McCormick Center for Early Childhood Leadership at National Louis University, December 2018.

2 Child Care Aware of America. “We Can Do Better: Child Care Aware of America’s Ranking of State Child Care Center Regulations and Oversight; 2013 Update.” Arlington, VA: Child Care Aware of America, 2013.

3 National Survey of Early Care and Education Project Team. “Characteristics of Center-based Early Care and Education Programs: Initial Findings from the National Survey of Early Care and Education (NSECE).” OPRE Report #2014-73a. Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2014.

4 If a center does not have an education program lead, the study team will select the person who is clearly identified as the second-in-command. This person may serve various roles in the center. In medium centers that do not have an education program lead or a clear second-in-command but do have multiple center managers other than a director, the team will randomly select a second respondent from among the center managers.

5 National Survey of Early Care and Education Project Team. “Measuring Predictors of Quality in Early Care and Education Settings in the National Survey of Early Care and Education.” OPRE Report #2015-93, Washington, DC: Office of Planning, Research and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2015.

6 Kirby, G., A. Douglass, J. Lyskawa, C. Jones, and L. Malone. “Understanding Leadership in Early Care and Education: A Literature Review.” OPRE Report No. 2021-02. Washington, DC: Office of Planning, Research, and Evaluation, Administration for Children and Families, U.S. Department of Health and Human Services, 2021.

7 Teaching staff are eligible for the survey if they are in classrooms serving children whose ages range from birth to age 5 (but who are not yet in kindergarten) at the time of teacher rostering and are still at the center by the time of survey release. Teaching staff include lead, head, or co-teachers and assistant teachers. Any teaching staff who begin employment between rostering and survey release, which will be roughly two weeks, are not eligible for the survey.
