Supporting Justification for OMB Clearance of Evaluation of Adolescent Pregnancy Prevention Approaches
Part A: Justification for the Collection of First Follow-up Data
CONTENTS
A1. Circumstances Making the Collection of Information Necessary
1. Legal or Administrative Requirements that Necessitate the Collection
2. Study Objectives
A2. Purpose and Use of the Information Collection
A3. Use of Improved Information Technology and Burden Reduction
A4. Efforts to Identify Duplication and Use of Similar Information
A5. Impact on Small Businesses or Other Small Entities
A6. Consequences of Collecting Information Less Frequently
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A9. Explanation of Any Payment or Gift to Respondents
A10. Assurance of Confidentiality Provided to Respondents
A11. Justification for Sensitive Questions
A12. Estimates of Annualized Burden Hours and Costs
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
A14. Annualized Cost to the Federal Government
A15. Explanation for Program Changes or Adjustments
A16. Plans for Tabulation and Publication and Project Time Schedule
1. Analysis Plan
2. Time Schedule and Publications
A17. Reason(s) Display of OMB Expiration Date is Inappropriate
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
References
ATTACHMENTS:
Attachment A: Question by Question Source List and Crosswalk between PPA Baseline and First Follow-up Surveys
Attachment B: Question Justification
Attachment C: Sources Referenced
Attachment D: Entities Consulted
Attachment E: 60 Day Federal Register Notice
Attachment F: Pretest Report
The U.S. Department of Health and Human Services (HHS) is conducting the Evaluation of Adolescent Pregnancy Prevention Approaches (PPA), an eight-year demonstration designed to study the effectiveness of promising policy-relevant strategies to reduce teen pregnancy. The study was designed to include up to eight evaluation sites, and at this point it appears that there will be seven sites:
one site – Chicago Public Schools, implementing the Health Teacher curriculum – has been recruited, and a baseline survey has been administered; and
six federally funded grantees have been recruited.
Approval for outreach discussions with stakeholders, experts in the field, and program developers was received on November 24, 2008 (OMB Control No. 0970-0360). Approval for the baseline survey data collection and the collection of youth participant records was received on July 26, 2010 (OMB Control No. 0970-0360). Emergency clearance for site-specific variants of the baseline survey questionnaire was received on August 22, 2011 (OMB Control No. 0970-0360).
We now seek OMB approval for the first follow-up data collection and for two tailored site-specific follow-up questionnaires. As with the baseline survey effort, a large group of federal staff has collaborated to modify a previously drafted PPA follow-up instrument into a “concordance follow-up instrument” suitable for all HHS pregnancy prevention evaluations, including but not limited to PPA. HHS is trying to maximize consistency across evaluations of federal pregnancy prevention grant programs. In 2010 and 2011, the Administration for Children and Families (ACF) and the Office of Adolescent Health (OAH), in coordination with other HHS offices overseeing pregnancy prevention evaluation, collaborated to consider revisions to the previously drafted PPA instrument.
As in the case of baseline data collection, site-specific variation in follow-up data collection instruments is planned, because of the differences among the seven PPA sites. As PPA sites were recruited, we found that variations in their target populations and program models make it essential to tailor data collection, at both baseline and follow-up, to analytical priorities in each site. Developing those site-specific instruments involves working closely with the six sites that are federal pregnancy prevention grantees, and with the local evaluators they have engaged as a condition of their grants.
The collaboration with the six grantee sites also involves specifying the exact schedule for follow-up data collection. Across these sites, there is variation in the length of the program being tested, the age of the target population, and the key outcomes on which impacts are of greatest interest, and thus in the most suitable schedule for follow-up surveys. The PPA technical work group (TWG) provided important guidance on the timing of two follow-up surveys: a first follow-up no earlier than 3-6 months after program completion, and a second no later than 18-24 months after program completion. This guidance has been closely followed, with well-justified exceptions. In two cases, negotiation with local evaluators led to plans for three follow-ups, with the third follow-up inserted as an early survey. In one case, the final follow-up timing deviates from the TWG guidance because the program lasts 18 months; follow-ups are scheduled at 6, 18, and 30 months after enrollment, which means there will be one follow-up during the intervention, one immediately after it ends, and one 12 months after it ends.
The process of working out these instruments and survey schedules is being completed site by site; the result determines when the first follow-up survey must be administered in each site, and thus for which sites approval of follow-up data collection is most urgent. This submission presents follow-up questionnaires and estimated burden for two sites. In the Chicago Public Schools (CPS) site, baseline data collection was conducted in fall 2010, and the first follow-up is to be conducted in fall 2011, as part of a test of the Health Teacher curriculum for seventh-graders. CPS is not a federal grantee, so the standard PPA follow-up instrument can be used; in this case, therefore, the tailored follow-up questionnaire is also the “concordance” questionnaire that has been defined as a foundation for all PPA sites and for use in other federal pregnancy prevention evaluations. Approval is sought for use of this instrument for the first of the two planned follow-up surveys.
The second site involves a federal grantee, the Oklahoma Institute for Child Advocacy (OICA), which is testing the effect of Power Through Choices 2010 on youth residing in foster care group homes. OICA will enroll the first of its sample cohorts in early fall 2011, deliver a ten-session program, and then conduct an “immediate post-test” follow-up survey, to be followed by surveys at six and twelve months. For OICA, approval is sought for the instrument to be used in both the immediate post-test and the six-month follow-up. For both of these PPA sites, early approval of follow-up questionnaires is essential to maintaining the schedule of data collection. As development of site-specific follow-up questionnaires for the remaining PPA sites is completed, they will be submitted to OMB along with the estimated burden for those sites.
A1. Circumstances Making the Collection of Information Necessary
For decades, policymakers and the general public have remained concerned about the prevalence of sexual intercourse among adolescents. Although adolescents today are waiting somewhat longer before having sex than they did in the 1990s, 60 percent of teenage girls and more than 50 percent of teenage boys report having had sexual intercourse by their 18th birthday.1 Approximately one in five adolescents has had sexual intercourse before turning 15.2 Rates of teenage pregnancy declined by 38 percent from 1990 to 2004, and the rate of teen births followed a similar decline3 until recently, when the rate of births rose by 5 percent from 2005 to 2007 for teens aged 15-19.4
HHS is interested in identifying and evaluating promising approaches to reduce teen pregnancy, associated risk behaviors, and their consequences. Combined with the baseline data collection, the first follow-up data collection described in this ICR will provide important information to guide policy decisions aimed at addressing this serious concern.
Baseline data (collection already approved) serve several important purposes. They will be used to establish baseline equivalence of the treatment and control groups and thus to confirm the integrity of the random assignment process. Baseline variables will be used to define subgroups for which impacts will be estimated and to adjust impact estimates for the baseline characteristics of nonrespondents to the follow-up survey. Many baseline variables are measures of outcomes that will be measured again at follow-up; including their baseline values as covariates in the impact models will improve the precision of the impact estimates.
The follow-up data collection for which approval is now sought will focus on two types of outcomes, both of which can only be measured through surveys of youth. The first type is sexual risk outcomes, including the extent and nature of sexual activity, use of contraception (if sexually active), pregnancy, and testing for and diagnoses of STDs. The second type is a series of intermediate outcomes that may be associated with the sexual risk outcomes and are therefore important to measure as potential pathways of any program effects on sexual risk behavior. Examples of these outcomes include participation in and exposure to pregnancy prevention programs and services; intentions and expectations of sexual activity; relationships with family and friends; knowledge of contraception and sexual risks; dating behavior; and alcohol and drug use. In addition, the survey includes a small number of questions that identify socio-demographic or other characteristics of youth in the study sample, which may be used either for descriptive purposes or as potential covariates in the regression models for measuring program effects. Finally, for sample youth who report not being sexually active, the survey includes questions to support a descriptive analysis of these youth and a future investigation of their potential transition into sexual activity. (To protect the privacy of youth who respond to the surveys, the series of questions for non-sexually active youth has been timed to approximate the length of the series for sexually active youth.)
The need to tailor the content of the follow-up questionnaires to specific PPA sites reflects how the sites’ programs have been funded. The PPA site programs are supported by two major funding streams. The first, the Teen Pregnancy Prevention (TPP) Program administered by the HHS Office of Adolescent Health, has two funding tiers: 75 percent of funds go to discretionary grants to replicate evidence-based programs, and 25 percent go to discretionary grants to conduct innovative demonstration evaluations. The second, the Personal Responsibility Education Program (PREP), administered by the Administration for Children and Families, provides formula grants to states to replicate evidence-based teen pregnancy prevention programs or to substantially incorporate elements of such programs. PREP also provides funding for discretionary grants for Innovative Strategies demonstration evaluations, as well as a Tribal program. Many grantees funded under these two streams are required to conduct their own local evaluations, and this is true of the grantees selected as PPA sites.
In addition to conducting local evaluations, these grantees are required, if selected, to participate in one of several federal evaluation studies currently being planned or implemented that examine the impact of teen pregnancy prevention programs; collaboration between grantees and the PPA evaluation is thus mandated. One part of this collaboration is developing a “blended” baseline questionnaire that addresses PPA research objectives but also incorporates the site-specific research priorities established by local evaluators in their required plans. The result is that tailored versions of all questionnaires, baseline and follow-up, are required for the PPA sites.
1. Legal or Administrative Requirements that Necessitate the Collection
Public Law 110-161, which set fiscal year (FY) 2008 appropriations levels, included the following language: “$4,500,000 shall be available from amounts available under section 241 of the Public Health Service Act to carry out evaluations (including longitudinal evaluations) of adolescent pregnancy prevention approaches.” The same language appropriated $4,450,000 in each of FYs 2009, 2010, and 2011. These funds have been used for the PPA evaluation.
In FYs 2008 and 2009, these funds were overseen by ACF’s Family and Youth Services Bureau (FYSB); in FYs 2010 and 2011, they have been overseen by the HHS Office of Adolescent Health (OAH). Throughout, however, FYSB and OAH have asked ACF’s Office of Planning, Research, and Evaluation (OPRE) to assist in facilitating the research contract, and ACF is now assisting OAH in that role.
To accomplish the objective of the appropriations, ACF and OAH (hereafter referred to as HHS) seek OMB approval of the first follow-up survey instrument of program participants for the first two PPA sites.
2. Study Objectives
The objective of the PPA evaluation is to test selected promising approaches to prevent teen pregnancy among middle school- and high school-aged teens. The evaluation will help HHS determine the effectiveness of various approaches in affecting key outcomes related to pregnancy prevention (for example, sexual debut, pregnancy, and sexually transmitted disease [STD] infection). Ultimately, the purpose of the evaluation is to provide stakeholders, including practitioners and federal and other policymakers, with information on a range of approaches that hold promise for preventing teen pregnancy and, through the follow-up surveys, to assess rigorously the effectiveness of these approaches.
In the PPA evaluation, HHS has identified seven study sites that will implement different pregnancy prevention approaches. In three of these sites, the programs to be tested will be school-based, operated in high schools or middle schools. In the other sites, the programs to be tested will be operated in community-based organizations (CBOs). The study will use a sample of approximately 9,000 teens across all sites. In each site, youth will be assigned to a treatment group that receives the program of interest or to a control group that does not. In five sites, random assignment will generally be done at the cluster level (that is, the school or CBO) to ensure that the behavior of control group youth is not affected, or “contaminated,” by interaction with treatment group youth. In the other two sites, random assignment will be done at the individual level, because risks of contamination are low. In the two sites whose follow-up questionnaires are now submitted for approval, random assignment is by cluster: in Chicago, middle schools have been randomly assigned, and for the Oklahoma grantee, foster care group homes will be randomly assigned. A total of 2,680 youth will be enrolled in the sample in these two sites.
A baseline survey will be conducted with both the program and control groups before the youth in the program group are exposed to the pregnancy prevention programs. The first follow-up surveys (the purpose of this OMB submission) will, in most instances and pursuant to the TWG guidance, be conducted no sooner than 3-6 months after the end of the scheduled program intervention for each sample member. The final follow-up survey (for which approval will be sought in a later submission) will be conducted with participating youth no later than 18-24 months after the scheduled end of the program. The exact timing of the two follow-up surveys has been determined in each site, taking into account the length of the program, the age of the target population, and the priority outcomes of interest. Wherever possible, there will be group administration of the self-administered survey; when necessary to increase response rates, this method will be augmented with a web survey and telephone follow-up.
Follow-up data will be used to address the following research questions on program impact:
Are the (selected) approaches effective at meeting their immediate objectives (for example, improving knowledge of pregnancy risks)?
Are the approaches effective at reducing adolescent pregnancy?
What are their effects on related outcomes, such as postponing sexual activity and reducing or preventing sexual risk behaviors and STDs?
Do these approaches work better for some groups of adolescents than for others?
Major evaluation activities will include the following:
Identifying promising strategies and programs through a review of the literature and interviews with the “field” (for example, researchers, policy experts, and program developers) in order to focus the evaluation on interventions that are of substantial interest to the field and show the most promise for reducing rates of teen sexual activity and pregnancy (completed).
Recruiting sites to participate in an evaluation of selected interventions (from among those identified by the field) and providing assistance on evaluation support activities (completed).
Collecting data on the research sample at baseline and at two follow-up data collections.
Analyzing data collected and preparing reports with the results.
HHS is conducting this evaluation through a lead contractor, Mathematica Policy Research, and its subcontractors: Child Trends, Twin Peaks, LLC, and the National Abstinence Education Association.
A2. Purpose and Use of the Information Collection
Information collected through the first follow-up survey is key to assessing program impacts. If this request is approved, the PPA evaluation will collect first follow-up data through a self-administered paper-and-pencil interview (PAPI).
Follow-up data will measure teens’ demographic characteristics; family structure and relationships; receipt of information and services related to reproductive health; recent stressors; knowledge, attitudes, and expectations about sexual activity and contraception; dating experience and current dating status; and alcohol and drug use. Items asked specifically of sexually active youth will also measure sexual activity and teen births.
Attachment A presents a “crosswalk” between the questions approved for the baseline survey and the questions included in the basic “concordance” version of the follow-up questionnaire, along with information on the source of each question. In a few instances, questions are noted as having been developed by the PPA study team or developed as a performance measure. Questions developed by the PPA study team are generally straightforward, and in some cases were part of the Chicago baseline administration or included on the first follow-up instrument that was pretested. Items developed as performance measures have been piloted by multiple grantees.5 This information is also included on the crosswalk. The basic “concordance” version of the follow-up questionnaire will be used in the Chicago site. Attachment B lists the topics covered in this instrument, the justification for their inclusion, and how the data from the questions will be used (as a covariate, to determine intermediate outcomes, or to determine sexual risk outcomes). A list of national surveys reviewed in developing the first follow-up survey instrument for the PPA evaluation is provided in Attachment C.6 Attachment D provides contact information for the persons or federal entities consulted in the drafting and refinement of the first follow-up survey instrument. For the Oklahoma grantee site, adjustments have been made to address the specific goals of that site’s program. Both instruments are presented in separate files, as is a crosswalk between the Oklahoma first follow-up instrument and the first follow-up concordance instrument. Any question on the Oklahoma instrument that is not part of the PPA concordance instrument was pretested by the local evaluator in Oklahoma as part of its summer 2011 pilot; additional information about that pretest is provided in section B4.
A3. Use of Improved Information Technology and Burden Reduction
The data collection plan reflects sensitivity to issues of efficiency, accuracy, and respondent burden. Where feasible, information will be gathered from existing data sources; the information being requested through surveys is limited to that for which the youth are the best or only information sources. Improved information technology will be used when appropriate and cost-effective. During the first follow-up data collection, self-administered PAPIs will be used for all group-based completions. In those instances in which the survey must be administered to individuals outside of a classroom setting, respondents will be provided a PIN/password for web completion or will be administered a telephone survey. The advantages of PAPI over more technologically innovative approaches, such as laptops or personal digital assistants (PDAs), are that it enables respondents to set their own pace; provides accurate responses to sensitive questions; reduces costs; and simplifies administration logistics, as the majority of interviews will be conducted in a classroom setting. This method is also consistent with other recent youth surveys and evaluations. Studies have shown no difference between PAPI and computer-assisted self-interviewing (CASI) in reports of most measures of male-female sexual activity, including reports such as ever having had sexual intercourse, recent sexual activity, number of partners, condom use, and pregnancy.7,8,9,10,11 Turner et al.7 found that CASI improved reporting on low-prevalence behaviors such as male-male sex, injection drug use, and sexual contact with intravenous drug users.
A4. Efforts to Identify Duplication and Use of Similar Information
The information collection requirements for the PPA evaluation have been carefully reviewed to determine what information is already available from existing studies and what must be collected for the first time. Although information from existing studies contributes to our understanding of how to reduce teenage sexual risk behavior, HHS does not believe it provides policymakers and stakeholders with sufficient information on a sufficiently broad range of programs. The data collection for the PPA evaluation is an essential step toward providing this information.
A5. Impact on Small Businesses or Other Small Entities
Programs in some sites may be operated by community-based organizations. The data collection plan is designed to minimize burden on such sites by providing staff from Mathematica Policy Research to assist in group data collection. For respondents who do not complete the survey in the group setting, Mathematica will provide passwords for web completion or will conduct a telephone data collection, thus minimizing requirements for extensive “sample pursuit” by site staff.
A6. Consequences of Collecting Information Less Frequently
First follow-up data are essential to conducting a rigorous evaluation of pregnancy prevention programs, as directed by the appropriations language cited in A1. In the absence of such data, funding decisions on teen pregnancy prevention programs will continue to be based on insufficient and outdated information about program effectiveness.
A7. Special Circumstances Relating to the Guidelines of 5 CFR 1320.5
There are no special circumstances for the proposed data collection.
A8. Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
The 60-day notice was published in the Federal Register on July 12, 2010, on pp. 39695-39696, with the document identifier of OS–0990–New. The text is found in Attachment E. No comments or questions were received.
A9. Explanation of Any Payment or Gift to Respondents
Participants completing the first follow-up survey in a group setting will receive a $10.00 gift card. Group make-up sessions will be offered to capture any initial non-respondents. Youth who do not complete the survey in a group setting will be given the option to complete the follow-up survey via telephone or web; these respondents will receive a $25.00 gift card. A higher incentive is offered to these respondents because completion outside of the group administration requires greater initiative and cooperation on the part of the respondent, as well as additional time outside of the school day.
A10. Assurance of Confidentiality Provided to Respondents
HHS has embedded protections for privacy in the study design. Data collection will occur only if informed consent is provided by a parent or legal guardian if the respondent is a minor, or by respondents themselves if they are 18 or older. Consent for the duration of the study will be collected prior to baseline data collection. The consent form, which was approved through the baseline survey ICR, explains the data being collected and their use. The form also states that answers will be kept private, that youths’ participation is voluntary, and that they may refuse to participate at any time. Participants and their parents/guardians are told that, to the extent allowable by law, individual identifying information will not be released or published; rather, data will be published only in summary form, with no identifying information at the individual level. The form also notes that the evaluation has obtained a Certificate of Confidentiality from the National Institutes of Health (NIH). In addition, student assent will be obtained prior to each group survey administration.
Our protocol during the self-administration of the paper-and-pencil instrument will provide reassurance that we take the issue of privacy seriously. It will be made clear to respondents that identifying information will be kept separate from questionnaires. The questionnaire and envelope will have a label with a unique ID number; no identifying information will appear on the questionnaire or return envelope. Before turning completed questionnaires in to field staff, respondents will place them in blank envelopes and seal them. Research has shown that this approach yields the same reports of sexual activity as computer-assisted surveys in school settings, with a lower incidence of student concerns about privacy. Identifying and contact information will be stored in secure files, separate from survey and other individual-level data.
A11. Justification for Sensitive Questions
As in the baseline survey, many of the measures in the first follow-up survey ask for information of a sensitive nature (Exhibit A11.1), because the programs being evaluated are designed specifically to reduce sexual activity and associated risk behaviors among teens. Comprehensive measures of behavior are included because they provide more accurate representations of teen sexual behavior, and the responses will significantly supplement the knowledge currently available on program effectiveness.
Sensitive questions are drawn from youth surveys and evaluations that have used them successfully (see Attachment C). The items have been carefully selected, and past experience has guided our judgments about whether the benefits of a measure outweigh concerns about heightened sensitivity among sample members, parents, and program staff to specific issues. Although these questions are sensitive, they are commonly and successfully asked of youth similar to those who will be in the study. Many of the sensitive items related to sexual activity will be asked only of sample members who report being sexually active.
Exhibit A11.1: Summary of Sensitive Questions and Their Justification
Topic13 | Justification
Intentions regarding sexual activity (questions 3.20-3.24 in Part A) | Intentions regarding engaging in sex and other risk-taking behaviors are strong predictors of subsequent behavior (Buhi and Goodson, 2007) and will be an important mediator in predicting behavior change.
Drug and alcohol use (questions 6.1-6.5 in Parts B1 and B2) | A substantial body of literature links various high-risk behaviors of youth, particularly drug and alcohol use, with sexual intercourse and risky sexual behavior. The effectiveness of various program strategies is expected to differ for youth who are and are not experimenting with or using drugs and alcohol (Tapert et al., 2001; Li et al., 2001; Boyer et al., 1999; Fergusson and Lynskey, 1996; Sen, 2002; Dermen et al., 1998; Santelli et al., 2001).
Sexting (questions 4.16-4.19 in Part B2) | The relationship between youths’ use of technology and sexual behavior is an emerging topic of interest that has not yet been heavily researched (National Campaign to Prevent Teen and Unplanned Pregnancy, Sex and Tech Survey, 2008). These questions will be asked of non-sexually active youth to examine this relationship, identify potential pathways leading to the transition from non-sexually active to sexually active, and identify factors affecting the rate of that transition.
Sexual activity, incidence of pregnancy and STDs, and contraceptive use (questions 3.28 and 4.1-5.6 in Part B1) | Sexual activity, incidence of pregnancy and STDs, and contraceptive use are all key outcomes for the evaluation. The majority of these questions are asked only of youth who report being sexually active.
A12. Estimates of Annualized Burden Hours and Costs
The PPA information collection does not impose a financial burden on youth respondents. Respondents will not incur any burden other than the time spent answering the questions contained in the questionnaires.
Exhibit A12.1 summarizes the reporting burden on study participants. Enrollment will occur over three years, so this burden is based on one-third of the expected sample in Chicago and Oklahoma. Questionnaire response times were estimated from pretests with student respondents and from prior experience. The annual burden for questionnaire response is estimated from the total number of completed questionnaires proposed (expected response rate of 85 percent at first follow-up) and the time required to complete the questionnaires. The total annual burden is expected to be 582 hours.
Exhibit A12.1. Reporting Burden on Study Participants for Early Follow-Ups (for Chicago and Oklahoma)
Site/Program | Annualized Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Total Burden Hours (Annual)
Chicago Public Schools/Health Teacher | 430 | 1 | 0.5 | 215
Oklahoma Institute of Child Advocacy/Power Through Choices | | | |
  Immediate post-test | 306 | 1 | 0.6 | 183.6
  6-month follow-up | 306 | 1 | 0.6 | 183.6
Total | 1,042 | | | 582
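The exhibit's totals follow directly from multiplying each row's annualized respondents, responses per respondent, and average hours per response. A minimal arithmetic sketch (the numbers come from the exhibit itself; the script is illustrative, not part of the study's tooling):

```python
# Arithmetic check of Exhibit A12.1 (numbers from the exhibit; names illustrative).
rows = [
    # (site/administration, annualized respondents, responses per respondent, hours per response)
    ("Chicago Public Schools / Health Teacher", 430, 1, 0.5),
    ("OICA / Power Through Choices, immediate post-test", 306, 1, 0.6),
    ("OICA / Power Through Choices, 6-month follow-up", 306, 1, 0.6),
]

total_hours = 0.0
for name, respondents, responses, hours in rows:
    burden = respondents * responses * hours
    total_hours += burden
    print(f"{name}: {burden:.1f} hours")

print(f"Total annual burden: about {total_hours:.0f} hours")  # 215 + 183.6 + 183.6 = 582.2
```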
A13. Estimates of Other Total Annual Cost Burden to Respondents and Record Keepers
These information collection activities do not place any additional cost on respondents.
A14. Annualized Cost to the Federal Government
This clearance request is specifically for collecting first follow-up data in two sites (Chicago and Oklahoma). The total estimated cost to the government for first and second follow-up data collection across all sites is $5,920,551; the total cost for first follow-up data collection alone is $2,746,538. Because first follow-up data collection will be carried out over three years as successive sites start up and enroll samples, the estimated annualized cost to the government for first follow-up data collection is $915,513 per year. The estimated annualized cost of first follow-up data collection for Chicago and Oklahoma is $261,576.
A15. Explanation for Program Changes or Adjustments
OMB gave approval on November 24, 2008, for outreach discussions with stakeholders, experts in the field, and program developers (OMB Control No. 0970-0360). OMB also gave approval for baseline survey data collection and the collection of youth participant records on July 26, 2010 (OMB Control No. 0970-0360). Emergency clearance for site-specific variants of the baseline survey questionnaire was received on August 22, 2011 (OMB Control No. 0970-0360).
In response to comments from members of our technical work group, the timing of the first and second follow-up data collections has changed from what was described in the baseline OMB package. The first follow-up data collection will generally occur no sooner than 3-6 months after the program end date (rather than 12 months after the program start date), and the second follow-up survey will be administered no later than 18-24 months after the end of the program (rather than 36 months after the program start date); the OICA and Ohio Health sites are exceptions. The exact schedule for each site will be determined based on the length of the program and the age of the sample youth.
A16. Plans for Tabulation and Publication and Project Time Schedule
HHS now seeks OMB approval for the first follow-up survey. The collection of these data will take place over three years, as successive sites continue evaluation sample enrollment and implementation of their programs. The data will be used for the impact analysis. The majority of the questions for which approval is now sought were approved through the baseline data collection; Attachment A crosswalks the baseline and first follow-up surveys. The study design calls for a variety of pregnancy prevention programs to be evaluated, including comprehensive sex education programs, abstinence-based programs, and STD/HIV prevention programs.
1. Analysis Plan
This phase of the PPA demonstration and evaluation involves collecting first follow-up data that will be used for the impact evaluation.
Before estimating impacts, HHS will conduct two analyses of the data from the baseline survey. First, HHS will use the data to describe the study sample and help define subgroups of policy interest. This step will enable HHS to compare the characteristics of youth in the study with youth nationwide and provide guidance on how the study sample and findings might generalize to a broader policy setting. Second, HHS will assess whether random assignment resulted in similar baseline characteristics of youth, on average, for the treatment and control groups.
Pregnancy prevention approaches emphasize different outcomes. Some focus on promoting abstinence; others focus on use of contraceptives and avoiding STDs. The baseline data collected from program participants will ultimately be used to evaluate the effectiveness of these promising approaches with particular emphasis on the outcomes they target, as well as common outcomes across all approaches.
Given the underlying experimental design, unbiased impact estimates can be obtained from the simple cross-sectional difference in average outcomes between the treatment and control groups, measured at follow-up. Baseline data on outcomes are therefore not necessary to obtain unbiased impact estimates; however, baseline data can still be useful for the analysis. In particular, baseline data can be used to construct covariates for the regression models that estimate program impacts, improving the precision of the impact estimates by reducing the residual variance in the models (that is, the portion of the variance in outcomes left unexplained after accounting for treatment status). This gain in precision is often largest when a baseline measure of the outcome itself can be included as a covariate, so ideally the outcome variables are measured consistently over time, with survey questions worded as similarly as possible between the baseline and follow-up surveys. Such consistency is not essential for valid impact estimates, however, because the estimates are obtained cross-sectionally under an experimental design.
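To make the precision argument concrete, the following is a minimal simulation sketch using Python's statsmodels on made-up data (all variable names and numbers are illustrative assumptions, not the study's actual data or code); it shows the standard error on the treatment coefficient shrinking when a baseline measure of the outcome is added as a covariate:

```python
# Minimal sketch: covariate adjustment reduces residual variance, which
# shrinks the standard error on the treatment coefficient.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
baseline = rng.normal(size=n)                  # baseline measure of the outcome
treat = rng.integers(0, 2, size=n)             # random assignment indicator
outcome = 0.2 * treat + 0.7 * baseline + rng.normal(size=n)
df = pd.DataFrame({"outcome": outcome, "treat": treat, "baseline": baseline})

unadjusted = smf.ols("outcome ~ treat", data=df).fit()
adjusted = smf.ols("outcome ~ treat + baseline", data=df).fit()

# Both models estimate the same 0.2 impact without bias, but the adjusted
# model's standard error is noticeably smaller.
print(unadjusted.bse["treat"], adjusted.bse["treat"])
```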
The empirical specification for the model will depend on the unit of random assignment, which will depend on the type of program provided at a specific site. As we discuss further in section B1, most sites will use random assignment of entire schools, but some sites will employ random assignment of individuals within the site. With random assignment of students, our model can be expressed as:
(1) y_i = α + β′x_i + δT_i + ε_i,
where y_i is the outcome of interest for student i; x_i is a vector of baseline characteristics for student i, including demographic characteristics such as age, gender, and race/ethnicity, as well as baseline measures of the key outcomes; T_i is an indicator equal to one if the student is in the treatment group and zero if in the control group; and ε_i is a random error term for student i. The parameter estimate for δ is the estimated impact of the program.
In most sites, schools will be randomly assigned and the estimation must account for the correlation of outcomes between students in the same school, as they may be exposed to similar influences not otherwise captured in the regression model. Therefore, each student cannot be considered statistically independent. We can modify the previous regression model as:
(2) y_is = α + β′x_is + δT_s + μ_s + ε_is.
The general structure of the model is the same, but now y_is is the outcome measure for student i in school s (and similarly for the vector of baseline characteristics x_is and the error term ε_is). The treatment status T_s is now defined by school rather than by individual. Most importantly, the error term in Equation (2) accounts for the clustering of students within schools through the inclusion of the school-level error term μ_s, a school “random effect.” If this term were excluded, the precision of the impact estimates could be seriously overstated. As in Equation (1), the estimated impact of the program is δ.
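As an illustration of how a specification like Equation (2) might be estimated, the sketch below fits a linear model with a school-level random effect using statsmodels' MixedLM on simulated data (hypothetical variable names; the study's actual estimation software and specification may differ):

```python
# Sketch of Equation (2): school random effect under cluster random assignment.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, per_school = 40, 25
school = np.repeat(np.arange(n_schools), per_school)
treat = rng.integers(0, 2, size=n_schools)[school]     # schools, not students, are randomized
mu = rng.normal(scale=0.5, size=n_schools)[school]     # school random effect mu_s
baseline = rng.normal(size=n_schools * per_school)
outcome = 0.2 * treat + 0.5 * baseline + mu + rng.normal(size=n_schools * per_school)
df = pd.DataFrame({"outcome": outcome, "treat": treat,
                   "baseline": baseline, "school": school})

# groups= introduces the school-level random effect; omitting it would
# overstate the precision of the estimated impact.
model = smf.mixedlm("outcome ~ treat + baseline", data=df, groups=df["school"]).fit()
print(model.params["treat"])                           # estimated program impact (delta)
```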
The specific maximum-likelihood methods for estimating the parameters of the models will depend on the form of the dependent variable. Logistic regression procedures will be specified for binary outcomes (such as whether the student has an STD) and multinomial regression procedures will be specified for categorical outcomes (such as the number of sexual partners).
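For a binary outcome, one hedged illustration (not necessarily the study's exact maximum-likelihood estimator described above) is a logistic regression with standard errors clustered at the school level; all names and numbers are illustrative:

```python
# Logit with school-clustered standard errors for a binary outcome under
# cluster random assignment (one common approach; the study's may differ).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_schools, per_school = 40, 25
school = np.repeat(np.arange(n_schools), per_school)
treat = rng.integers(0, 2, size=n_schools)[school]
p = 1.0 / (1.0 + np.exp(-(-1.5 + 0.3 * treat)))        # true log-odds impact of 0.3
df = pd.DataFrame({"had_std": rng.binomial(1, p), "treat": treat, "school": school})

logit = smf.logit("had_std ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school"]}
)
print(logit.params["treat"])                           # impact on the log-odds scale
```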
Random assignment provides an unbiased estimate of the impact on all eligible youth, but some youth may never show up for services or classes. Assuming the program has no effect on youth who never show up, we can make a simple adjustment to calculate the impact on participants by dividing the impact on eligible youth by the participation rate. (However, this adjustment cannot be used in the more likely scenario that youth receive some, but not all, of the intervention.)
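A worked example of this no-show adjustment, with hypothetical numbers:

```python
# No-show adjustment: divide the impact on all eligible youth by the
# participation rate to estimate the impact on participants (numbers hypothetical).
impact_on_eligible = 0.02     # e.g., a 2 percentage point change in an outcome
participation_rate = 0.80     # 80 percent of eligible youth attended the program

impact_on_participants = impact_on_eligible / participation_rate
print(impact_on_participants)  # 0.025, i.e., 2.5 percentage points
```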
The effects of pregnancy prevention approaches may differ for different groups of youth. We will estimate impacts for subgroups of youth by adding to Equations (1) and (2) a term that interacts the treatment indicator with a binary indicator for whether the youth is in the subgroup. The estimated coefficient on this term provides an estimate of the difference in the program effect across the subgroups, as sketched below.
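A minimal sketch of the subgroup interaction, again with simulated data and an illustrative binary indicator ("female" stands in for any subgroup of interest):

```python
# Subgroup interaction: the treat:female coefficient estimates the
# difference in program impact between the two subgroups.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
treat = rng.integers(0, 2, size=n)
female = rng.integers(0, 2, size=n)
outcome = (0.1 + 0.2 * female) * treat + rng.normal(size=n)  # impact differs by subgroup

df = pd.DataFrame({"outcome": outcome, "treat": treat, "female": female})
interaction = smf.ols("outcome ~ treat * female", data=df).fit()

print(interaction.params["treat:female"])  # recovers the 0.2 impact difference
```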
Certain exploratory analyses may also be conducted that further exploit the longitudinal (combined baseline and follow-up) data. For example, analyses can examine which baseline variables correlate with sexual risk behavior at follow-up, regardless of treatment status. While such analyses are inherently correlational and not causal, they can nevertheless indicate which potential mediators of sexual risk behavior (for example, attitudes or knowledge) are most predictive, and thereby offer some guidance to both programs and evaluators on which mediators to emphasize in their work. In addition, should the models above reveal statistically significant evidence of program impacts at later follow-up(s), models can be estimated that introduce measures of mediators from the first follow-up as covariates, to observe how much of the impact they can explain. While again non-experimental, findings from these models can offer suggestive evidence of the mediator(s) through which program impacts are emerging, again providing some guidance for the direction of future research and program development.
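The kind of exploratory mediator comparison described above might look like the following sketch (simulated data, hypothetical variable names; as noted, findings from such models are suggestive, not causal):

```python
# Mediator comparison: estimate the impact with and without a first
# follow-up mediator as a covariate, and see how much the treatment
# coefficient shrinks once the mediator is controlled for.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 1000
treat = rng.integers(0, 2, size=n)
knowledge = 0.5 * treat + rng.normal(size=n)            # mediator moved by the program
risk = -0.4 * knowledge + rng.normal(size=n)            # outcome driven by the mediator

df = pd.DataFrame({"risk": risk, "treat": treat, "knowledge": knowledge})
total = smf.ols("risk ~ treat", data=df).fit()
with_mediator = smf.ols("risk ~ treat + knowledge", data=df).fit()

# The treatment coefficient shrinks toward zero once the mediator is added,
# suggesting the impact operates through it.
print(total.params["treat"], with_mediator.params["treat"])
```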
2. Time Schedule and Publications
The entire PPA evaluation will be conducted over an eight-year period. HHS began consultation with stakeholders about the design of the study and the identification of potential programs and sites in September 2008, continuing through March 2011. The baseline data collection, for which HHS received OMB approval on July 26, 2010 (OMB Control No. 0970-0360), will take place over a three-year period beginning in November 2010 and ending by May 2013. The first and second follow-up data collections are projected to occur between fall 2011 and fall 2015. An interim report on program impacts, based on the first follow-up survey covered by this request, will be completed in June 2014, and a final report based on the second follow-up survey will be completed in June 2016.
A17. Reason(s) Display of OMB Expiration Date is Inappropriate
All instruments will display the OMB number and the expiration date.
A18. Exceptions to Certification for Paperwork Reduction Act Submissions
No exceptions are necessary for this information collection.
SUPPORTING REFERENCES FOR INCLUSION OF SENSITIVE QUESTIONS OR GROUPS OF QUESTIONS
Boyer, Cherrie B., Jeanne M. Tschann, and Mary-Ann Shafer. "Predictors of Risk for Sexually Transmitted Diseases in Ninth Grade Urban High School Students." Journal of Adolescent Research, vol. 14, no. 4, 1999, pp. 448-65.
Buhi, Eric R., and Patricia Goodson. "Predictors of Adolescent Sexual Behavior and Intention: A Theory-Guided Systematic Review." Journal of Adolescent Health, vol. 40, no. 1, 2007, p. 4.
Dermen, K. H., M. L. Cooper, and V. B. Agocha. "Sex-Related Alcohol Expectancies as Moderators of the Relationship between Alcohol Use and Risky Sex in Adolescents." Journal of Studies on Alcohol, vol. 59, no. 1, 1998, p. 71.
DiClemente, R. J., M. Durbin, D. Siegel, F. Krasnovsky, N. Lazarus, and T. Comacho. "Determinants of Condom Use among Junior High School Students in a Minority, Inner-City School District." Pediatrics, vol. 89, no. 2, 1992, pp. 197-202.
DiClemente, R. J., M. Lodico, O. A. Grinstead, G. Harper, R. L. Rickman, P. E. Evans, and T. J. Coates. "African-American Adolescents Residing in High-Risk Urban Environments Do Use Condoms: Correlates and Predictors of Condom Use among Adolescents in Public Housing Developments." Pediatrics, vol. 98, no. 2, 1996, pp. 269-78.
DiIorio, Colleen, William N. Dudley, Johanna E. Soet, and Frances McCarty. "Sexual Possibility Situations and Sexual Behaviors among Young Adolescents: The Moderating Role of Protective Factors." Journal of Adolescent Health, vol. 35, no. 6, 2004, p. 528.
Dittus, P. J., and J. Jaccard. "Adolescents' Perceptions of Maternal Disapproval of Sex: Relationship to Sexual Outcomes." Journal of Adolescent Health, vol. 26, no. 4, 2000, pp. 268-78.
Fergusson, David M., and Michael T. Lynskey. "Alcohol Misuse and Adolescent Sexual Behaviors and Risk Taking." Pediatrics, vol. 98, no. 1, 1996, p. 91.
Li, Xiaoming, Bonita Stanton, Lesley Cottrell, James Burns, Robert Pack, and Linda Kaljee. "Patterns of Initiation of Sex and Drug-Related Activities among Urban Low-Income African-American Adolescents." Journal of Adolescent Health, vol. 28, no. 1, 2001, p. 46.
Santelli, John S., Leah Robin, Nancy D. Brener, and Richard Lowry. "Timing of Alcohol and Other Drug Use and Sexual Risk Behaviors among Unmarried Adolescents and Young Adults." Family Planning Perspectives, vol. 33, no. 5, 2001.
Sen, Bisakha. "Does Alcohol Use Increase the Risk of Sexual Intercourse among Adolescents? Evidence from the NLSY97." Journal of Health Economics, vol. 21, no. 6, 2002, p. 1085.
Tapert, Susan F., Gregory A. Aarons, Georganna R. Sedlar, and Sandra A. Brown. "Adolescent Substance Use and Sexual Risk-Taking Behavior." Journal of Adolescent Health, vol. 28, no. 3, 2001, p. 181.
1 Abma, J. C., G. M. Martinez, W. D. Mosher, and B. S. Dawson. “Teenagers in the United States: sexual activity, contraceptive use, and childbearing”, Vital and Health Statistics, vol. 23, no. 24, 2004, pp. 1–48.
2 Albert, B., S. Brown, and C. Flannigan, eds. 14 and Younger: The Sexual Behavior of Young Adolescents. Washington, DC: National Campaign to Prevent Teen Pregnancy, 2003.
3 Teen birth rates declined by 34% from 1991–2005. See: Hamilton, B. E., J. A. Martin, and S. J. Ventura. “Births: Preliminary data for 2006.” National Vital Statistics Reports, vol. 56, no. 7. Hyattsville, MD: National Center for Health Statistics, 2007.
4 Hamilton BE, Martin JA, Ventura SJ. Births: Preliminary data for 2007. National vital statistics reports, Web release; vol 57 no 12. Hyattsville, MD: National Center for Health Statistics. Released March 18, 2009.
5 Performance measures are those items the TPP and PREIS grantees are required to report on annually as a condition of their grant. These items have been piloted by five grantees across Texas, California, and Louisiana. Items were piloted with males and females ranging in age from 11 to 18.
6 In order to best fit the proposed PAPI survey mode for the targeted age range, nearly all proposed survey items were adapted, to some degree, from those found on these national surveys. Adaptations included modifications in the wording to make questions easier to understand in PAPI administration, and/or modifications in response categories to simplify the options available, or to address more directly the main goal of the follow-up survey, which is to support an eventual impact evaluation. Where we are blending the PPA standard instrument with priorities of grantees’ local evaluators, some items are taken from instruments drafted by the local evaluator, and in those cases they are generally taken from established surveys or the local evaluator’s past research.
7 Turner, C.F., L. Ku, S.M. Rogers, L.D. Lindberg, J.H. Pleck, and F.L. Sonenstein. “Adolescent Sexual Behavior, Drug Use, and Violence: Increased Reporting with Computer Survey Technology.” Science, vol. 280, 1998, pp. 867–873.
8 Beebe, Timothy J., Patricia A. Harrison, James A. McCrae Jr., Ronald E. Anderson, and Jayne A. Fulkerson. “An Evaluation of Computer-Assisted Self-Interviews in a School Setting.” Public Opinion Quarterly, vol. 62, 1998, pp. 623–632.
9 Beebe, Timothy J., Patricia A. Harrison, Eunkyung Park, James A. McRae, Jr., and James Evans. “The Effects of Data Collection Mode and Disclosure on Adolescent Reporting and Health Behavior.” Social Science Review, vol. 24, no. 4, 2006, pp. 476–488.
10 Brener, Nancy D., Danice K. Eaton, Laura Kann, JoAnne Grunbaum, Lori A. Gorss, Tonja M. Kyle, and James G. Ross. “The Association of Survey Setting and Mode with Self-Reported Health Risk Behaviors Among High School Students.” Public Opinion Quarterly, vol. 70, 2006, pp. 354–374.
11 Webb, P.M., G.D. Zimet, J.D. Fortenberry, and M.J. Blythe. “Comparability of a Computer-Assisted Versus Written Method for Collecting Health Behavior Information from Adolescent Patients.” Journal of Adolescent Health, vol. 24, no. 6, 1999, pp. 383–388.
12 Schochet, Peter Z. “An Approach for Addressing the Multiple Testing Problem in Social Policy Impact Evaluations.” Evaluation Review, vol.33, no.6, December 2009.
13Question numbers referred to are for the standard concordance instrument (for the Chicago site). Italicized question numbers immediately following, where appropriate, refer to sensitive questions retained in the instrument for the Oklahoma grantee site and the question numbers in that site’s instrument.