Supporting Statement Part B for OMB Approval
Permanency Innovations Initiative (PII) Evaluation: Phase I
August 2012
PART B. STATISTICAL METHODS (USED FOR COLLECTION OF INFORMATION EMPLOYING STATISTICAL METHODS)
B.1. Respondent Universe and Sampling Methods
The respondent universe for the cross-site implementation study consists of the six PII grantees, specifically their staff and other stakeholders at the sites. Respondents generally will be convenience samples of those who are knowledgeable about the PII grantee and available to participate in interviews and surveys. The samples differ depending on the instrument, as follows:
The Survey of Organization/System Readiness will be administered one time to approximately 30 individuals per site who will include staff such as the program manager, steering committee members, supervisors, and caseworkers/practitioners, as identified by the grantee and the evaluation contractor site lead;
The Implementation Drivers Web Survey will be administered twice per year to approximately 25 individuals per site who are active participants in the grantees’ organizational structures and have knowledge of and direct experience with the grantee’s implementation, as identified by the grantee and the evaluation contractor site lead;
The Grantee Case Study Protocol will be used by the evaluation contractor site lead and implementation study lead in annual in-person and quarterly telephone interviews with an estimated five persons per site who are familiar with the grantee’s context, structure, resources, key activities and milestones, impact of PII national activities, and implementation outcomes, as identified by the grantee and the evaluation contractor site lead; and
Fidelity Data (Implementation Quotient Tracker) will be used to analyze administrative data obtained from each grantee on fidelity to its program model.
The number of respondents in the implementation study is not a sample of a larger population but the approximate number of people involved in the development and implementation of the intervention in each site. We propose to engage all such persons (that is, to adopt a census approach rather than a purposeful sample) because we expect implementation experiences to be highly variable in the population and the population is relatively small. Because all members of this finite group will be included, there is no sampling error and hence no need for a power analysis.
Several features of the study help to reduce burden on the sites. First, the cross-site implementation study includes web-based data collection to help reduce the burden on respondents and make it more convenient for them to respond. Whenever possible, data for the implementation study will be collected from existing documentation or administrative data sources in order to reduce burden on the grantee staff. State administrative data will also be utilized to examine long-term outcomes of each grantee’s intervention. In some cases, some of the same staff, partners, and agencies might respond to the Survey of Organization/System Readiness (which is only administered once) and the Implementation Drivers Web Survey (which will be administered up to six times per grantee), and our decision to include some of the same people in both surveys will be based on the respondents’ particular perspective on or knowledge about the grantee’s intervention or target population. For the Grantee Case Study Protocol, we will use existing documentation as much as possible, and only conduct interviews after we have conducted a thorough search and are unable to find relevant materials. If requested, we can discuss this issue more during a follow-up phone call with OMB.
The respondent universe for the site-specific impact evaluation in Kansas is the grantee target population of children ages 3-16 with serious emotional disturbance (SED) who are in out-of-home placement, and their families. All eligible families will be sampled and (with family consent) randomly assigned to treatment or control groups. The target population of children ages 3-16 who are in foster care and have SED is estimated to represent approximately half of all children in foster care in Kansas, where about 2,600 children between the ages of 3 and 16 enter foster care annually. Random assignment procedures will allocate 50 percent of the eligible cases to the treatment condition and 50 percent to the control condition. A total of 450 cases in each group is anticipated over the 3-year clearance period requested.
The evaluation contractor completed the following power analysis based on an anticipated sample size of 900 families (450 in each group) over 3 years. Data from the Kansas state child welfare tracking system (FACTS) show that currently 18 percent of placements of children with SED reunify within 12 months, compared with 28 percent of cases involving children without SED. At 24 months, the respective rates are 42 and 52 percent. Kansas will strive to eliminate this discrepancy for families whose children have an SED. A 10-percentage-point improvement in the 12-month reunification rate would be regarded by Cohen (1988) as a small effect; yet the grantee Steering Committee contends that such an improvement would be both significant and substantive, because it would represent a complete neutralization of the effect of SED on reunification. Power analysis confirms that the proposed sample size is sufficient: an N of 900 provides 99.6-percent power in the best-case scenario and 96.6-percent power in the worst-case scenario to detect a 10-percentage-point difference between treatment and control groups in the 12-month reunification rates.1
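The figures above come from the contractor's log-rank calculation described in footnote 1. As an illustration only, a simplified two-proportion approximation (not the contractor's actual method) can be computed along the following lines; the 18 and 28 percent reunification rates and the best-case/worst-case effective sample sizes are taken from the text, while the use of the statsmodels library and a normal approximation are assumptions.

```python
# Simplified two-proportion approximation of the Kansas power calculation.
# The contractor's actual analysis (footnote 1) uses a one-sided log-rank test;
# this sketch only illustrates the approximate magnitude of the reported power.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

p_sed = 0.18     # current 12-month reunification rate for children with SED (FACTS)
p_target = 0.28  # rate after a 10-percentage-point improvement (parity with non-SED cases)
effect = proportion_effectsize(p_target, p_sed)  # Cohen's h

analysis = NormalIndPower()
# Worst case: perfect correlation among siblings, effective N = 900 families
power_worst = analysis.solve_power(effect_size=effect, nobs1=450,
                                    alpha=0.05, alternative='larger')
# Best case: no correlation among siblings, effective N = 1,386 children
power_best = analysis.solve_power(effect_size=effect, nobs1=693,
                                   alpha=0.05, alternative='larger')
print(f"Approximate power, worst case (N = 900 families): {power_worst:.3f}")
print(f"Approximate power, best case (N = 1,386 children): {power_best:.3f}")
```

Under these simplifying assumptions the approximation yields values in the neighborhood of the 96.6- and 99.6-percent figures reported above, although it will not reproduce the log-rank results exactly.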
The Washoe County, Nevada universe includes two target populations: (1) new cases involving children age 17 ½ or younger coming into the system, deemed unsafe, with a caregiver, and at risk of foster care placement, and (2) families with children who have been in foster care at least 12 months and who have one or more of the defining case risk characteristics at time of placement (i.e., parental substance abuse, homelessness/inadequate housing, single parent households, or parental incarceration),2 a goal of adoption or guardianship, and an available caregiver. All cases in the first population (new cases) will be randomly assigned to treatment or control groups. Caseworkers will have been randomized into treatment or control groups prior to the case random assignment; cases assigned to the treatment group will also be assigned to a treatment caseworker, and cases assigned to the control group will be assigned to a control caseworker. Cases in the second population (children already in foster care) will stay with their current caseworkers, who will be randomly assigned to treatment or control conditions.
With an anticipated sample size of 525 across the two target populations over 3 years, there is sufficient power to detect differences of 20 percentage points in permanency outcomes between the intervention and control groups.3 The power analysis calculates a conditional probability based on the number of cases, the number of caseworkers, the difference to be detected, and the level of significance (e.g., .01, .05). The difference to be detected was determined by the developers of the intervention; based on the population and the type of intervention services, they anticipate a difference of 20 percentage points in permanency outcomes. To calculate power, certain assumptions must be made. For example, we do not have data on how much variation in permanency might be due to the skill, effort, and charisma of the caseworker. If caseworkers vary substantially in effectiveness, then the intraclass correlation (ICC) will be higher and power will be lower. The calculations presented here assume a mid-range ICC, 525 cases, 38 caseworkers, and a significance level of .05, and are predicated on the assumption that currently 30 percent of children exit care by 12 months. Under these assumptions, a difference between groups of 20 percentage points (i.e., 50 percent exiting care by 12 months in the intervention group) can be detected with a power of 93 percent. Given the sample size, the number of caseworkers, and the expected difference between the intervention and control groups, the study is expected to have sufficient power to detect differences at the .05 level of significance.
Another way to present the power calculation is shown in the table below. The first column gives the difference between the intervention and control groups on the outcome; the second column gives the probability of detecting that difference using a 95-percent two-sided confidence interval. For example, the probability of detecting a difference of 17 percentage points is 84 percent. In previous years, Washoe has found that 30 percent of all cases exit within 12 months, and it estimates that the SAFE-FC intervention will increase this rate by 20 percentage points. An illustrative cluster-adjusted calculation is sketched after the table.
Difference between groups    Probability of detection
20 percentage points         93%
17 percentage points         84%
14 percentage points         69%
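To illustrate how caseworker clustering enters the calculation, the sketch below applies a simple design-effect correction to a two-proportion power calculation. The 525 cases, 38 caseworkers, 30 versus 50 percent exit rates, and .05 significance level come from the text; the specific ICC value, the even split between groups, and the use of the statsmodels library are assumptions, so the result will not exactly match the contractor's 93-percent figure.

```python
# Illustrative cluster-adjusted power approximation for the Washoe design.
# The ICC value is a hypothetical "mid-range" choice; the contractor's actual
# procedure and assumptions may differ.
from statsmodels.stats.proportion import proportion_effectsize
from statsmodels.stats.power import NormalIndPower

n_cases = 525        # anticipated sample across both target populations
n_caseworkers = 38
icc = 0.05           # assumed intraclass correlation for caseworker clustering
p_control = 0.30     # proportion exiting care by 12 months under current practice
p_treat = 0.50       # anticipated proportion with a 20-percentage-point difference

# Design effect based on the average number of cases per caseworker.
avg_cluster_size = n_cases / n_caseworkers
design_effect = 1 + (avg_cluster_size - 1) * icc
effective_n = n_cases / design_effect

effect = proportion_effectsize(p_treat, p_control)  # Cohen's h
power = NormalIndPower().solve_power(
    effect_size=effect,
    nobs1=effective_n / 2,   # per-group effective N, assuming an even split
    alpha=0.05,
    alternative='two-sided',
)
print(f"Design effect: {design_effect:.2f}, effective N: {effective_n:.0f}")
print(f"Approximate power for a 20-percentage-point difference: {power:.2f}")
```

The actual Washoe allocation is 40/60 rather than an even split (see B.2), and the contractor's calculation conditions on the number of caseworkers directly, so this design-effect sketch should be read only as a rough check of the reported power.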
B.2. Procedures for the Collection of Information
Sampling Procedures
For the cross-site implementation study, grantees will identify staff and stakeholders who possess information relevant for each type of data collection. The contractor evaluation team will contact the respondents and ask them to participate in the data collection effort.
In Kansas and Washoe, all cases that fit the target population criteria are being randomly assigned to treatment or control groups and will be included in the information collection. In Kansas, the treatment/control allocation is 50/50. In Washoe, 40 percent of cases are assigned to the treatment group and 60 percent to the control group because the intervention requires caseworkers to carry smaller caseloads than caseworkers in the treatment-as-usual condition.
Data Collection Procedures
All six PII grantees will participate in the cross-site implementation study, which includes the following instruments (see Instruments):
Survey of Organization/System Readiness. This survey will investigate the extent to which PII grantees (staff and stakeholders) demonstrate interest in and willingness to use evidence-based interventions to address barriers to permanence for children and youth most at risk of long-term foster care. In addition, the survey will explore respondents’ perceptions of organizational climate as it relates to readiness to change and individual and organizational interest in supporting rigorous evaluation. This survey will be completed once by approximately 30 respondents in each of the six sites. The evaluation contractor’s site leaders will work with each site to identify candidate respondents. We estimate that it will take each of the 30 respondents per site about 20 minutes per response.
Implementation Drivers Web Survey. This survey will track the processes that sites use to implement PII interventions. It comprises eight sections: practitioner selection, training, supervision/coaching, performance assessment, decision support data systems, facilitative administration, systems intervention, and leadership. It will be administered twice a year to approximately 25 respondents at each grantee. We estimate that it will take each of the respondents about 12 minutes per response.
Grantee Case Study Protocol. This case study (which will be conducted by the evaluation contractor’s site leaders and implementation study leader) will include qualitative examination of key implementation activities; interim products and milestone events that occur during exploration, installation, and initial implementation; and the stages of implementation that set a foundation for achievement of full implementation. It will include sections on the larger political and organizational context in which each PII grantee operates, as well as key activities and achievement of milestones. Data for the case study will be collected through review of existing documentation, telephone interviews, and in-person interviews during site visits. For interviews, we estimate that it will take each of approximately 30 respondents (five per site) 2 hours per interview. Interviews will be conducted four times per year.
Fidelity Data/Implementation Quotient Tracker. Fidelity to grantee interventions will be tracked through an implementation quotient (IQ) tracker, which will capture the proportion of caseworkers/practitioners who are delivering the intervention with fidelity at a given point in time. Grantees will submit fidelity data quarterly over a period of two years, beginning six months after the grantee begins full implementation of the intervention. We estimate that compiling these data from administrative records will take grantee staff about 1.5 hours per response.
Kansas will administer an assessment battery to families, first when a child is placed in foster care and again approximately 6 months later (see Instruments). Prior to administration of the battery, parents will complete a consent form and an initial information form, which will take an estimated 0.1 hours to complete. The assessment battery will be administered by trained data collectors (KIPP Data Liaisons) and includes interviews with the parent/caregiver and family observations. Data liaisons will also collect information from parents to use in completing the North Carolina Family Assessment Scale for General Services and Reunification (NCFAS-G+R) following the interview. We estimate that it will take approximately 1.5 hours per family to complete the assessment battery. Separately, the family’s caseworker will complete the CAFAS/PECFAS4 assessment about the child. The data liaison will review the family’s case file and discuss it with the caseworker to verify information recorded on the NCFAS-G+R instrument. We estimate that it will take 1.0 hour per family for the caseworker to complete the CAFAS/PECFAS and 0.5 hours per family for the caseworker to take part in NCFAS-G+R discussions with the data liaison.
Washoe County will also administer an assessment battery to families (see Instruments), using a Computer-Assisted Self-Interview (CASI) format. For the group of cases in which children have already been in care for at least 12 months, the first administration will occur when the intervention is implemented, expected in July 2012; subsequent administrations will occur every 6 months and at case closure. For new incoming cases, the first administration will occur shortly after a case is opened, and subsequent administrations will occur every 6 months and at case closure. Parents will complete a consent form. The family assessment battery will take approximately 90 minutes to complete.
B.3. Methods to Maximize Response Rates and Deal with Nonresponse
We do not anticipate problems with response rates or nonresponse for the cross-site implementation study. Participation in all cross-site evaluation activities is a requirement of the PII grants (see funding opportunity announcement HHS-2010-ACF-ACYF-CT-0022, Initiative to Reduce Long-Term Foster Care, p. 10), and response rates for the various components of the implementation study will be monitored by the Children’s Bureau.
With respect to the impact evaluations in Kansas and Washoe County, nonresponse is an issue only for the collection of data on proximal outcomes through the family assessment batteries. The final analysis of whether each grantee was successful in improving permanency outcomes (the main outcomes of interest for PII) will be conducted using deidentified administrative data on all cases in the study. Thus, there will be no nonresponse bias in the final, distal outcome analysis. Moreover, we will be able to use administrative data to determine whether nonresponse in the proximal data collection may have biased the proximal outcome findings. If so, nonresponse weighting adjustments could be used to minimize the impact of nonresponse on the proximal outcome results.
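To make the weighting step concrete, a minimal sketch of one common approach, an inverse-probability adjustment based on a logistic response-propensity model, is shown below. The file name, column names, and choice of model are hypothetical illustrations; the evaluation has not specified a particular adjustment method.

```python
# Hypothetical illustration of a nonresponse weighting adjustment for the
# proximal outcome analysis; variable names and the logistic-regression
# approach are assumptions, not the evaluation's specified procedure.
import pandas as pd
import statsmodels.api as sm

# 'cases' would hold one row per randomized case, with administrative
# covariates available for everyone and a flag for whether the family
# completed the assessment battery (the proximal data collection).
cases = pd.read_csv("cases.csv")  # hypothetical file
covariates = sm.add_constant(cases[["child_age", "prior_placements", "treatment"]])

# Model the probability of responding to the proximal data collection.
response_model = sm.Logit(cases["responded"], covariates).fit()
cases["p_respond"] = response_model.predict(covariates)

# Inverse-probability weights for respondents; nonrespondents get weight 0.
cases["nr_weight"] = 0.0
cases.loc[cases["responded"] == 1, "nr_weight"] = (
    1.0 / cases.loc[cases["responded"] == 1, "p_respond"]
)

# Weighted proximal outcome estimates would then use nr_weight.
```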
Although the PII evaluation is not offering incentives for participation in data collection in Kansas and Washoe County, the grantees are making the following efforts to maximize response rates for the family assessment batteries.
In Kansas: a) local agencies involved in the Kansas project have decided to provide a small monetary incentive ($10 gift card) from their own budgets (separate from the evaluation) for parents completing the assessment battery; the same incentive is available to older youth for participation in the Family Interaction Task portion of the battery; b) the trained assessors who administer the assessment battery do so face-to-face with parents. The assessor explains the data collection process to the parent, explains that the information will be kept private, and begins by inviting the parent to ask any questions about the process; the assessor may involve the KIPP supervisor to address the parent’s concerns; and c) the assessor stays in the same room with the parent as he or she completes the questionnaires, which allows the assessor to answer the parent’s questions and minimize nonresponse on individual items. If the parent has difficulty reading the questionnaire, the assessor may read the questions aloud.
In Washoe County: a) the data collectors will be in frequent contact with an evaluation liaison to obtain the most current contact information for the families; and b) similar to Kansas, data collectors will remain in the room when necessary while parents complete the CASI and will be able to assist parents in completing the instruments, and supervisors will be available to address parents’ concerns and requests for additional information about the assessment battery and the evaluation.
B.4. Test of Procedures or Methods to be Undertaken
Kansas is pretesting its assessment battery (OMB generic clearance 0970-0355 received on Oct. 4, 2011). Washoe plans only limited testing of its battery, on nine or fewer respondents, due to the intervention purveyor’s extensive prior experience with the instruments in other evaluations. The cross-site implementation study instruments are also being tested with nine or fewer members of the contractor’s evaluation team. Any difficulties we encounter with respect to our procedures, materials, or instruments will be discussed with the evaluation leadership, and suggested revisions to the evaluation plans will be outlined.
B.5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data
The team is led by Maria Woolverton, project officer; Andrea Sedlak, project director for the PII evaluation; and Mark Testa, principal investigator for the evaluation. Additional staff consulted on statistical issues at Westat include John Rogers and Barnali Das, senior statisticians.
Reference
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Hillsdale, NJ: Lawrence Erlbaum Associates.
1 Using a one-sided log-rank test (α ≤ .05) and assuming 50% reunification by 12 months with no intraclass correlation among children from the same parents (i.e., the best-case scenario), power is 99.6%. The effective sample size is 1,386 (693 treatment + 693 control), which is the expected number of children (as opposed to families). Power is 96.6% when assuming perfect (1.0) intraclass correlation among children from the same parents (i.e., the worst-case scenario), which effectively reduces the sample size to N = 900 (450 treatment + 450 control), the expected number of families.
2 Data mining identified these risk characteristics as being associated with longer stays in foster care compared with cases that did not exhibit them. Note that Population 1, unlike Population 2, is not limited to cases with these risk characteristics. However, the risk characteristics will be tracked for both populations and will be included in the analysis.
3 Kansas’s power analysis uses a 10-percentage-point difference in permanency outcomes, while Washoe’s uses a 20-percentage-point difference. These are the differences the grantees expect to see given their program models; because they are implementing different interventions and have very different target populations, it is reasonable that their expected differences in outcomes are not the same.
4 The PECFAS collects information similar to the CAFAS but is administered to younger children (as young as age 3).