Supporting Justification for OMB Clearance of Teen Pregnancy Prevention Replication Evaluation (OMB Control #0990-NEW)
Part B: Statistical Methods for Follow-up Data Collection
For the TPP Replication Study, HHS has selected three program models, representing different approaches to the prevention of teen pregnancy, and has selected three replications of each model. Of the nine replications selected, six will be entirely school-based and three will operate in community settings (primarily clinics). The total sample of youth for the study is approximately 7,353 after the first follow-up survey and 6,840 at the time of the second follow-up, a sample sufficient to detect policy-relevant impacts of individual program replications. For each replication (which can occur across multiple sites), youth are assigned to a treatment group that receives the intervention or to a control group that does not. Selection of the unit of randomization is driven by (a) the setting in which the replication is implemented, (b) the need to minimize disruption of the program's normal operation, and (c) the desire to minimize contamination across groups, to the greatest extent possible. For the purpose of this clearance, OAH is seeking OMB approval for follow-up survey data collection for the Teen Pregnancy Prevention (TPP) Replication Study. The 60-day notice for the follow-up survey data collection was published March 15, 2012. A baseline survey is currently being conducted with both program and control groups before the youth in the program group are exposed to the pregnancy prevention intervention (the study and baseline data collection were approved on June 8, 2012 under OMB Clearance Number 0990-0394).
The follow-up survey approach will use a combination of hard-copy mailings, text messages, email reminders, and social media postings such as Facebook (provided the sample members agree to our use of their information in that way) throughout the study period (e.g., quarterly) to keep in direct contact with sample members and remind them of upcoming surveys. As at baseline, the follow-up surveys will be self-administered using a web-based survey with ACASI technology.
The universe of potential respondents will vary across study sites, depending on the type of program in place at each site. Hence, we first describe the possible types of program structures and the corresponding study design.
In three of the six school-based replications, classes will be randomly assigned. Random assignment will occur after students have been assigned to classes but before the classes are scheduled to begin. Depending on the number of students in each class, the number of classes needed will vary. For the burden calculations, we have assumed a sample of 48 classes in each of the three replication sites where classes will be randomly assigned, with 19-20 students in each class who have parental consent to participate, for a beginning sample of approximately 2,850 students. In the remaining three school-based replications and the three clinic-based interventions, individual youth will be randomly assigned, with a sample size of approximately 950 in each site. The initial total sample size for the TPP evaluation is approximately 8,550 youth.
The follow-up survey will be conducted at two time points with youth in both treatment and control groups after youth in the treatment group have been exposed to the intervention. Depending on the program model, the first follow-up survey will be administered 6-12 months after the baseline survey; the final follow-up will be administered 18-24 months after baseline. The follow-up survey will be web-based and will use audio computer-assisted self-interview (ACASI) technology. To the extent feasible, the self-administered first follow-up survey will be completed in the school setting; otherwise the survey will be completed in a setting of convenience for the respondent via the web.1
We will first conduct site-level analyses, and then a set of pooled analyses that use data from the three replications of each model. All power calculations are based on the analytic sample at final follow-up (80% of originally-consented youth).
The statistical power of the design depends on several parameters that are not observable but can be estimated with some precision. In particular, in a cluster-randomized design, minimum detectable effects (MDEs) depend on the intraclass correlation (ICC), the proportion of level-2 variance explained by covariates (level-2 R-squared), and the proportion of level-1 variance explained by covariates (level-1 R-squared). To obtain plausible values for the ICC and R-squared values for the current study design, we analyzed relevant data from Add Health. From these data, we estimate the ICC to be 0.025, the level-1 R-squared to be 0.35, and the level-2 R-squared to be 0.65.
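To make the role of these parameters concrete, the sketch below implements the standard two-level MDE approximation for a binary outcome under cluster random assignment. It is illustrative only: the function name and Python implementation are ours, and it omits the finite-sample (degrees-of-freedom) corrections a full calculation would apply, so its output will not exactly reproduce Exhibit B1.1.

    import math
    from scipy.stats import norm

    def mde_cluster(n_clusters, cluster_size, p_control,
                    icc=0.025, r2_level1=0.35, r2_level2=0.65,
                    treat_share=0.5, alpha=0.05, power=0.80):
        """Approximate minimum detectable effect (in proportion units) for a
        binary outcome under cluster random assignment. Illustrative sketch;
        defaults use the Add Health-based parameter estimates in the text."""
        m = norm.ppf(1 - alpha / 2) + norm.ppf(power)   # ~2.80 for these settings
        outcome_var = p_control * (1 - p_control)       # binary-outcome variance
        # Between-cluster and within-cluster variance shares, net of covariates.
        design_var = (icc * (1 - r2_level2) / n_clusters
                      + (1 - icc) * (1 - r2_level1) / (n_clusters * cluster_size))
        return m * math.sqrt(outcome_var * design_var
                             / (treat_share * (1 - treat_share)))

    # Example: 56 classrooms of ~19 consented students, 35% control prevalence.
    print(round(mde_cluster(56, 19, 0.35), 3))  # ~0.073, about 7 percentage points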
The MDEs for the site-level and pooled impact analyses are presented in Exhibit B1.1. These estimates suggest that the study is adequately powered to detect impacts on sexual behavior outcomes at the individual site level. However, it is very unlikely that the evaluation will be able to detect the programs' impacts on teen pregnancy and STIs in the site-level analysis, given the low prevalence of these outcomes. The larger samples in the pooled analyses increase the likelihood that we will be able to detect effects on these outcomes.
The analytic sample at final follow-up is the expected available sample size (i.e., 80% of the originally-consented and randomly assigned sample). For behavioral outcomes such as "sex in the last 90 days," our calculation of the analytic sample size needed (and hence of the initial sample to be randomly assigned) for individual site-specific designs was guided by findings from other evaluations of sexual health interventions for teens, as well as by prevalence estimates derived from Add Health data. Pooling the data across replications will allow us to detect smaller impacts on such behaviors. However, no such guidance existed for calculating the sample size needed to detect impacts on teen pregnancy, births, or STIs, since prior evaluations have not focused on these outcomes. Prevalence data on these behaviors provided some assurance that we might be able to detect program impacts using the pooled data.
We will report the sample size at final follow-up as a percentage of the initially-consented sample size as a measure of the internal validity of the findings. This measure is distinct from the cumulative response rate, which will be reported separately so that readers can judge the external validity of the study's findings. It is of course essential to report and assess both measures of validity when considering the study's findings.
Exhibit B1.1: Minimum Detectable Effects for Site-Level Analysis in Each Site or Pooled for Three Sites at Longer-Term Follow-Up (all entries in percentage points)

| Program and Design | Teen Pregnancy | Sex in Previous 90 Days | STI |
| A: Safer Sex (SSI) | | | |
| Single Site (1:1), n=720 individuals | 5.8 | 7.4 | 3.3 |
| Single Site (2:1), n=720 individuals | 6.1 | 7.8 | 3.5 |
| Three Pooled Sites (1:1), n=2,160 individuals | 3.3 | 4.3 | 1.9 |
| Three Pooled Sites (2:1), n=2,160 individuals | 3.5 | 4.5 | 2.0 |
| B: Reducing the Risk (RtR) | | | |
| Single Site (1:1), n=56 classrooms | 3.3 | 8.2 | 2.4 |
| Single Site (2:1), n=56 classrooms | 3.5 | 8.8 | 2.6 |
| Three Pooled Sites (1:1), n=168 classrooms | 1.9 | 4.8 | 1.4 |
| Three Pooled Sites (2:1), n=168 classrooms | 2.0 | 5.1 | 1.5 |
| C: ¡Cuídate! | | | |
| Single Site (1:1), n=800 individuals | 3.1 | 7.7 | 2.4 |
| Single Site (2:1), n=800 individuals | 3.3 | 8.2 | 2.5 |
| Three Pooled Sites (1:1), n=2,400 individuals | 1.8 | 4.4 | 1.4 |
| Three Pooled Sites (2:1), n=2,400 individuals | 1.9 | 4.7 | 1.5 |

Note: (1:1) and (2:1) refer to the ratio of treatment to control group members; n is the analytic sample at final follow-up.
For all power calculations, we set the alpha level to 5 percent for a two-tailed test and the power of the test to 80 percent. We also assumed that 35% of control group members would have had sex in the prior 90 days at the time of the follow-up survey (www.cdc.gov/mmwr/pdf/ss/ss5905.pdf), except in the Safer Sex Intervention (SSI), in which all participants are sexually active at baseline and we assume that 75% will be sexually active at follow-up; and that 2% would have contracted an STI (4% in SSI, due to the higher rate of sexual activity) (http://www.cdc.gov/std/stats09/tables/10.htm). We further assume that 132 out of 1,000 teens in the control group will become pregnant during the course of the SSI study, and 45 out of 1,000 during the course of the Reducing the Risk (RtR) and ¡Cuídate! studies. These assumptions are based on the pregnancy rates of high-risk groups in those age ranges and the length of the follow-up. In addition, we assumed that variables collected in the baseline survey, including baseline measures of the outcome variable, would explain 35% of the variation in the outcome measure for individual random assignment. For cluster random assignment, we assume that those variables will also explain 65% of the variation at the group level and that the classroom-level ICC is 0.025, as explained in the text.
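For the sites with individual random assignment, the corresponding formula is simpler. The minimal sketch below (again ours, with the same caveats as the earlier sketch) applies the assumptions above and comes within a few tenths of a percentage point of the single-site (1:1) ¡Cuídate! entries in Exhibit B1.1.

    import math
    from scipy.stats import norm

    def mde_individual(n, p_control, r2=0.35,
                       treat_share=0.5, alpha=0.05, power=0.80):
        """Approximate MDE for a binary outcome under individual random
        assignment, with baseline covariates explaining r2 of the outcome
        variance. Illustrative sketch only."""
        m = norm.ppf(1 - alpha / 2) + norm.ppf(power)
        residual_var = p_control * (1 - p_control) * (1 - r2)
        return m * math.sqrt(residual_var / (treat_share * (1 - treat_share) * n))

    # Control-group assumptions for ¡Cuídate! (analytic n = 800 per site):
    # pregnancy 4.5%, sex in the previous 90 days 35%, STI 2%.
    for p in (0.045, 0.35, 0.02):
        print(round(mde_individual(800, p), 3))  # 0.033, 0.076, 0.022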
The evaluation will collect information on youth characteristics, knowledge, attitudes, skills, and behaviors from approximately 7,353 youth across the nine selected replication sites at the first follow-up survey point, and the same data from approximately 6,840 youth at the final follow-up point.
The consent procedures for the study were described and approved as part of the TPP Replication Study baseline survey (OMB Clearance Number 0990-0394) and are summarized here. In clinic sites, trained clinic staff will obtain youth consent and, where indicated (i.e., when parents accompany a minor child to the clinic), parental consent. In school-based replication sites, school staff will assist in obtaining active parental consent and student assent to participate in the evaluation. Parental consent will be obtained at the beginning of the study for possible participation in the program and for the baseline and all subsequent data collections. We will not re-consent parents at any subsequent time. Youth, on the other hand, will be asked to assent at baseline and to re-assent before completing each of the two subsequent surveys.
While the baseline survey will be group-administered in school-based replication settings, we assume that most follow-up survey administration will be individual. Data collection staff will contact each participant in the study, using agreed-upon media strategies (e-mail, texting, etc.) and, when possible, assistance from program staff, to remind participants about the follow-up survey and provide instructions on how to access the web survey, along with a PIN/password to enable access. Repeated reminders will be sent by electronic media until the survey has been accessed and completed. On-site data collection staff will provide assistance in identifying a location where youth can access a computer and complete the survey in privacy, whenever such assistance is needed.
Once the sample member has completed the survey, the final screen will inform him or her that the survey is now complete. The youth will leave the computer, real-time verification of completion will be recorded in the survey database, and the youth will be sent a $25 gift card electronically. In the rare cases where a hard-copy survey is completed, youth will place the entire questionnaire in a return envelope, seal it, and return it to a contractor staff member. Staff will send the completed questionnaires to the contractor's office, where the questionnaires will be receipted and checked for completeness and the data entered into the survey database.
We expect a better than 90 percent response rate to the baseline survey because survey administration will occur shortly after active parental consent is received (or, in the case of clinic patients, at the time they are recruited for the study).
We expect to achieve an 80 percent response rate at the second and final follow-up point (and an 86 percent or higher response rate on the short-term follow-up survey). Eligibility for each data collection point does not require participation in the prior data collection point, as long as parental consent and youth assent are in place for the current data point. As indicated in B.2, parental consent obtained at the beginning of the study covers the baseline and all subsequent data collections, and parents will not be re-consented; youth will re-assent before completing each of the two follow-up surveys.
In the study analysis and reports, we will distinguish between external and internal validity. For internal validity, we are concerned only with the survey completion rates of those youth who have been randomized (or whose classes were randomized) into the study. The rates of 90% at baseline, 86% at first follow-up, and 80% at final follow-up are not, however, cumulative: at each time point, the percentage represents the expected proportion of originally-consented youth who complete the survey. Following the What Works Clearinghouse guidelines, we believe that, with the expected completion rates at follow-up and no serious attrition bias, we can include in the follow-up analyses all youth who responded, including those for whom baseline data are missing.
For external validity, we need to calculate a cumulative response rate. In this case, the program and school response rate is assumed to be 100%, since grantees and their school or agency partners were required, as a condition of the grant, to participate in the evaluation if invited. If we assume a parental consent and youth assent rate of 90% (in our experience, the two rates will be the same), then the cumulative response rate is 90% x 90% (81%) at baseline, 90% x 86% (77.4%) at first follow-up, and 90% x 80% (72%) at final follow-up.
| Stage | Completion rate | Cumulative (based on prior contact) |
| Consent/assent | 0.90 | 0.90 |
| Baseline | 0.90 | 0.81 |
| First follow-up | 0.86 | 0.77 |
| Final follow-up | 0.80 | 0.72 |
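The cumulative figures are simply the product of the consent/assent rate and each wave's completion rate; the short Python lines below reproduce the table's arithmetic.

    # Cumulative response rate = consent/assent rate x wave completion rate.
    consent = 0.90
    completion = {"baseline": 0.90, "first follow-up": 0.86, "final follow-up": 0.80}
    for wave, rate in completion.items():
        print(wave, round(consent * rate, 2))   # 0.81, 0.77, 0.72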
To achieve these response rates, evaluation staff will employ a systematic strategy designed to maintain contact with youth in the sample between data collection points. These contacts will be, for the most part, electronic, using agreed-upon media to check and update contact information, remind youth of upcoming survey dates, and encourage them to text or e-mail questions or requests for assistance. On-site data collection staff will work with schools and community agencies to locate youth who have changed schools or moved and who fail to respond to electronic contact efforts.
Even with such high response rates, however, survey nonresponse can bias impact estimates if the outcomes of survey respondents and nonrespondents differ, or if the types of individuals who respond to the surveys differ between the treatment and control groups. To correct for differences between respondents and nonrespondents on the follow-up surveys, we will construct sample weights so that the baseline characteristics of follow-up survey respondents mirror those of the full randomized sample.
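The supporting statement does not specify a weighting procedure; one common approach consistent with this description is inverse-probability-of-response weighting, sketched below. The function name, column names, and the choice of a logistic response model are our illustrative assumptions, not the study's documented method.

    import pandas as pd
    import statsmodels.api as sm

    def nonresponse_weights(df: pd.DataFrame, covariates: list,
                            responded: str = "responded") -> pd.Series:
        """Illustrative inverse-probability-of-response weights: fit a logistic
        model of follow-up response on baseline covariates, then weight each
        respondent by the inverse of his or her predicted response probability
        so that weighted respondents mirror the full randomized sample."""
        X = sm.add_constant(df[covariates])
        response_model = sm.Logit(df[responded], X).fit(disp=False)
        p_respond = response_model.predict(X)
        weights = (1.0 / p_respond).where(df[responded] == 1, 0.0)
        # Normalize so respondents' weights sum to the full sample size.
        return weights * (len(df) / weights.sum())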
The instrument submitted for clearance here is very similar to the TPP Replication Study baseline survey approved by OMB on June 8, 2012 (OMB Clearance Number 0990-0394). It is also very similar to the follow-up survey approved by OMB on September 27, 2012 for a portion of the study sites in the Evaluation of Pregnancy Prevention Approaches (PPA) study (0970-0360). That instrument was pretested by Mathematica. Mathematica staff recruited pretest participants and study staff talked directly with all interested teens to explain the pretest and the need to obtain parental consent prior to their participation.
We plan to conduct a similar pretest with the TPP follow-up survey on up to nine individuals and will submit a pretest report.
Administration of the follow-up survey for the TPP Replication evaluation will be overseen by the contracting organization, Abt Associates Inc., and its subcontractor, DIR. The same contractor will analyze the data. Individuals whom OAH has consulted on the collection and/or analysis of the follow-up data include those listed below.
Alan Hershey
Mathematica Policy Research, Inc.
P.O. Box 2391
Princeton, NJ 08543
(609) 275-2384
Christopher Trenholm
Mathematica Policy Research, Inc.
P.O. Box 2391
Princeton, NJ 08543
(609) 936-279-6384
Laura Kalb
Mathematica Policy Research, Inc.
955 Massachusetts Avenue, Suite 801
Cambridge, MA 02139
(617) 301-8989
Kristin Moore
Child Trends
4301 Connecticut Ave. NW
Washington, DC 20008-2333
(202) 362-5580
Jennifer Manlove
Child Trends
4301 Connecticut Ave. NW
Washington, DC 20008-2333
(202) 362-5580
Ralph DiClemente
Rollins School of Public Health
1518 Clifton Road NE
Atlanta, GA 30322
rdiclem@sph.emory.edu
(404) 727-0237
Jim Jaccard
Florida International University
Center for Children and Families
11200 SW 8th Street
Office: DM 248E / AHC 1, Rm. 140
Miami, Florida 33199
jaccard@fiu.edu
Meredith Kelsey
Abt Associates
55 Wheeler St.
Cambridge, MA 02138
Christine Markham
The University of Texas School of Public Health
P.O. Box 20186
Houston, TX 77225
(713) 500-9646
Gladys Martinez, PhD
National Survey of Family Growth (NSFG)
National Center for Health Statistics
3311 Toledo Road, Room 7310
Hyattsville, MD 20782
(301) 458-4108
Pat Paluzzi
President
Healthy Teen Network
1501 Saint Paul St., Suite 124
Baltimore, MD 21202
(410) 685-0410
Susan Philliber
Philliber and Associates
16 Main St.
Accord, NY 12404
(845) 626-2126
Michael Resnick
Division of Adolescent Health and Medicine
717 Delaware St. SE, Suite 370
Minneapolis, MN 55414-2959
(612) 624-9111
Matt Stagner
Executive Director
Chapin Hall at the University of Chicago
1313 E. 60th St.
Chicago, IL 60637
Melissa Gilliam, MD MPH
Department of Obstetrics and Gynecology
The University of Chicago
5841 S. Maryland Ave., MC2050
Chicago, IL 60637
mgilliam@babies.bsd.uchicago.edu
Inquiries regarding statistical aspects of the study design should be directed to:
Amy Feldman Farb, Ph.D.
Office of Adolescent Health
U.S. Department of Health and Human Services
1101 Wootton Parkway, Suite 700
Rockville, MD 20852
(240) 453-2836
or
Lisa Trivits, Ph.D.
Office of the Assistant Secretary for Planning and Evaluation (ASPE)
U.S. Department of Health and Human Services
200 Independence Ave, SW
Washington, DC 20201
(202) 205-5750
Dr. Feldman Farb and Dr. Trivits are the TPP Evaluation project officers. Both have overseen the development of the current follow-up instrument.
Inquiries related to the Teen Pregnancy Prevention Program, or evaluations of it, may be directed to:
Amy Farb, Ph.D.
Office of Adolescent Health
Office of the Assistant Secretary for Health
U.S. Department of Health and Human Services
1101 Wootton Parkway, Suite 700
Rockville, MD 20852
(240) 453-2836
1 Paper surveys will only be used if it is not possible to complete the survey via the web.