
Adolescent Family Life Care Program Core Evaluation

OMB: 0990-0290




B. Collection of Information Employing Statistical Methods



Statistical methods are not used in the collection of information for all AFL demonstration projects using the revised core evaluation instruments; therefore, the responses in this section apply only to the methods used for the cross-site evaluation of the AFL program.

1. Respondent Universe and Sampling Methods

The cross-site evaluation (which will cover a subset of the projects and respondents completing the survey) will include approximately 2,661 adolescents receiving abstinence education. Adolescents served by Title XX Prevention projects and those selected to serve as comparison groups will participate in the cross-site evaluation.

A total of 36 Prevention projects serve adolescents. From these projects, 7 Prevention projects involving 30 schools or after-school sites have been selected to obtain the sample of 2,661 participants for the cross-site evaluation. Prevention projects were selected for participation based on the rigor of their evaluation designs, namely designs with equivalent treatment and comparison groups that avoid contamination of comparison group respondents by the intervention. We also prioritized projects located in different geographic regions, to maximize regional diversity, and projects that employ implementation strategies conducive to rigorous evaluation (including appropriate timing of program delivery). Information about evaluation design rigor, implementation strategies, and project characteristics was obtained by reviewing end-of-year reports submitted to OAPP and through discussions with OAPP project officers. Within each project, adolescents will be assigned by AFL project staff to treatment and comparison groups.

We conducted power analyses to determine the sample size needed to detect statistically significant differences between treatment and comparison groups. The frequency with which adolescents report they have engaged in communication with their parents about abstinence and related topics serves as the primary outcome measure for the purposes of power calculations; responses are averaged across 15 items on a scale ranging from 0 (no talk) to 4 (four times or more in the previous 3 months; Miller et al., 1993). Power calculations were based on the comparison between treatment and comparison groups. [Three other outcome measures (attitudes about abstinence, intentions to have sex, and sexual activity) were not considered in final power analyses because some projects may obtain waivers to omit these questions among very young respondents (aged 9 to 13) or among respondents targeted through organizations, such as schools, that will not allow collection of such sensitive information.]

Several assumptions were made concerning population parameters for power analyses of the parent-child communication outcome. First, we assumed a correlation coefficient of 0.5 between outcomes measured at baseline and at the 18-month follow-up for the same respondent. Although there is little definitive information about the true correlation over 2 years, there is some evidence from 1-year follow-up studies that this correlation is no stronger than we assume here (Sales et al., 2006). Second, we assumed that outcomes for different respondents will be uncorrelated (siblings or adolescents living in the same household as an enrolled study participant will be excluded), with one exception: because adolescents in Prevention projects are clustered within schools, neighborhoods, or communities, we assumed a school- or community-level intraclass correlation coefficient of 0.10, based on pilot data analyses and prior RTI school-based data about adolescent risk behavior. Third, we assumed that adolescents would report a mean score of 1.2 at baseline and that treatment adolescents would report a mean score of 1.7 at the end of the second school year, as reported by Miller et al. (1993).

Each of these assumptions is very conservative, resulting in increased sample sizes for our evaluation. By comparison, Miller et al. (1993) produced similar effects at 3 months using an extremely low-intensity intervention (a videotape viewed by adolescents and their parents). Our assumptions allow us to include enough subjects in the evaluation to detect small effects; making less conservative assumptions would create the risk that the Prevention project interventions are efficacious but that our sample size is too small to detect this.
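To make the arithmetic behind these assumptions concrete, the sketch below illustrates a standard two-group sample-size calculation with a baseline-covariate (ANCOVA) adjustment and a design effect for clustering. It is an illustration only: the outcome standard deviation and the average number of adolescents per site are hypothetical placeholders, not values stated in this plan, so the output approximates the logic of the actual power analyses rather than reproducing their exact result.

    # Illustrative two-group sample-size calculation using the assumptions
    # stated above. SIGMA and M are HYPOTHETICAL values chosen for
    # illustration; they are not specified in this document.
    from scipy.stats import norm

    ALPHA, POWER = 0.05, 0.80     # two-sided test; target power
    DELTA = 1.7 - 1.2             # assumed group difference (Miller et al., 1993)
    R_PRE_POST = 0.5              # assumed baseline/follow-up correlation
    ICC = 0.10                    # assumed school/community intraclass correlation
    SIGMA = 1.0                   # hypothetical outcome standard deviation
    M = 110                       # hypothetical average adolescents per site

    z_a = norm.ppf(1 - ALPHA / 2)  # ~1.96
    z_b = norm.ppf(POWER)          # ~0.84

    # Per-group n for a simple two-group comparison, with residual variance
    # reduced by (1 - r^2) when the baseline score is used as a covariate.
    n_per_group = 2 * (z_a + z_b) ** 2 * SIGMA**2 * (1 - R_PRE_POST**2) / DELTA**2

    # Inflate by the design effect for clustering of adolescents within sites.
    n_per_group *= 1 + (M - 1) * ICC

    print(f"approximately {n_per_group:.0f} per group before attrition allowances")

Under these placeholder inputs the requirement is a few hundred adolescents per group; the larger target described below reflects the additional conservatism and attrition allowances built into the evaluation's assumptions.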

To achieve 0.80 power, analyses indicate that a total of 2,661 adolescents from 24 schools or after-school sites will need to complete the baseline survey. The numbers of adolescents in the respondent universe and in each sample are shown in Exhibit 11. The expected response rate at the second school year follow-up is calculated over all adolescents who participate at baseline, including those who may refuse to participate in the first school year follow-up data collection.

All decisions about assumptions that guided our power analysis were intended to err in favor of a larger sample size, to safeguard against a worst-case scenario in terms of difficulty detecting effects. These assumptions increased our confidence that effects smaller than those found by previous programs would still be reasonably detected with the sample sizes we identified.

As noted, our sample design is based on conservative assumptions about survey response. Thus, the estimates of longitudinal retention rates shown in Exhibit 11 should be viewed as “worst case” scenarios that, if they hold true, would still ensure sufficient sample sizes to reasonably detect small program effects. For Prevention, we estimate that at least 96% of adolescents who are contacted and for whom parent consent is obtained will complete the baseline survey, that at least 85% of adolescents will be retained between the baseline and first follow-up surveys, and that at least 80% of treatment adolescents and 70% of comparison adolescents will be retained between the baseline and second follow-up surveys.

Exhibit 11. Longitudinal Response Rates and Numbers of Adolescents

Numbers and Response Rates                             Treatment     Comparison    Total
                                                       Adolescents   Adolescents
Number of subjects to be contacted at baseline         1,768         1,786         3,554
Expected parent consent rate                           81%           75%
Number of subjects with parent consent at baseline     1,432         1,340         2,772
Expected response rate at baseline                     96%           96%
Number of completed baseline surveys                   1,375         1,286         2,661
Expected response rate at end of school year           85%           85%
Number of completed first follow-up surveys            1,169         1,093         2,262*
Expected response rate at end of second school year    80%           70%
Number of completed second follow-up surveys           1,100         900           2,000*

*A subset of the original 2,661 baseline respondents.
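The flow of numbers in Exhibit 11 can be reproduced directly from the expected rates. The short sketch below does so, using only figures from the exhibit (no new data); note that the second follow-up rates are applied to all baseline completers, not only to first follow-up completers.

    # Reproduces the sample-flow arithmetic in Exhibit 11 from the expected
    # consent and response rates; all inputs are taken from the exhibit.
    def cascade(n_contacted, rates):
        """Apply a sequence of expected rates to a starting count, rounding each stage."""
        counts = [n_contacted]
        for rate in rates:
            counts.append(round(counts[-1] * rate))
        return counts

    # Stages: contacted -> parent consent -> baseline -> first follow-up
    treatment = cascade(1768, [0.81, 0.96, 0.85])    # [1768, 1432, 1375, 1169]
    comparison = cascade(1786, [0.75, 0.96, 0.85])   # [1786, 1340, 1286, 1093]

    # The second follow-up is attempted with all baseline completers
    # (index 2), including those who skipped the first follow-up.
    second_t = round(treatment[2] * 0.80)            # 1,100
    second_c = round(comparison[2] * 0.70)           # 900

    print(treatment[2] + comparison[2])              # 2,661 completed baselines
    print(treatment[3] + comparison[3])              # 2,262 first follow-ups
    print(second_t + second_c)                       # 2,000 second follow-ups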

Exhibit 12 shows longitudinal retention rates for prior studies of various lengths.

Exhibit 12. Longitudinal Completion and Retention Rates for Prior Studies

Project: Evaluation of abstinence-based pregnancy prevention program (Project IMPPACT)
Institution/Client: Inwood House/U.S. Department of Health and Human Services
Sample: 7th and 8th grade students
Survey: Paper and pencil questionnaire
Time from baseline: 2 years
Follow-up survey completion rate: 75%
Baseline to follow-up retention rate: 59%

Project: Child and Family Well-being Study (The Three Cities Study)
Institution/Client: Johns Hopkins University/National Institute for Child Health and Human Development
Sample: Focal children of poor households
Survey: Physical measurements and a CAPI/ACASI questionnaire
Time from baseline: Wave 2, 1.5 years; Wave 3, 6 years
Follow-up survey completion rate: 82%
Baseline to follow-up retention rate: 80%



It should be noted that while attrition will inevitably occur in this study, as it does in any longitudinal study, we do not expect attrition to bias any of the study’s main findings. In sample surveys, there will almost always be missing data due to the attrition (or initial nonresponse) of selected respondents. In longitudinal surveys, this problem is typically exacerbated over time because further attrition may occur at each wave of the survey. Three distinct mechanisms causing missing data can be identified, and the mechanism of missingness determines the extent to which bias may be introduced into the study estimates. These mechanisms include the following:

Data are said to be missing completely at random (MCAR) if the probability of attrition is unrelated to study outcome variables or to the value of any other explanatory variables, including the exposure conditions. Under MCAR, no additional bias is introduced into estimates based on the incomplete data; however, the reduced data set will typically result in larger standard errors.

Data are said to be missing at random (MAR) if the probability of attrition is unrelated to study outcome variables after controlling for other explanatory variables; that is, attrition may vary by demographic characteristics. For example, lower-income adolescents may be more likely to drop out of the survey than higher-income adolescents, so bias would be introduced into an overall outcome estimate for adolescents but not into income-specific estimates. Thus, under MAR, the potential bias in estimates due to missingness can be eliminated (or substantially reduced) if the appropriate explanatory variables, such as income, are controlled for.

Data are said to be missing not at random (MNAR) if the probability of attrition is related to the study outcome variable itself. For example, suppose that adolescents who indicate lower parent-child communication about sex at baseline are more likely to drop out of the survey than adolescents who report more parent-child communication. In this case, the overall estimate of parent-child communication among all adolescents will be biased upward by attrition.

In practice, all three missingness mechanisms may be at work (i.e., different attriters may drop out according to different mechanisms). If MNAR is not dominant, then reasonably unbiased estimates of study outcomes can be constructed through appropriate modeling. In the case of this study, we do not expect MNAR to be present.
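As an illustration of the MAR mechanism and of why covariate adjustment addresses it, the following toy simulation (with invented numbers, not study data) shows attrition that depends on income group biasing an unadjusted mean upward, while an estimate that controls for income group recovers the true value.

    # A toy simulation of the MAR mechanism described above. All quantities
    # are invented for illustration and are not taken from the evaluation.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    low_income = rng.random(n) < 0.5                 # observed covariate
    # Outcome (e.g., a communication score) differs by income group.
    outcome = np.where(low_income,
                       rng.normal(1.0, 0.5, n),
                       rng.normal(1.4, 0.5, n))

    # MAR attrition: low-income adolescents drop out more often (60% vs. 20%),
    # but within an income group, dropout is unrelated to the outcome.
    drop = np.where(low_income, rng.random(n) < 0.6, rng.random(n) < 0.2)
    observed = ~drop

    print(f"true mean:          {outcome.mean():.3f}")            # ~1.200
    print(f"unadjusted mean:    {outcome[observed].mean():.3f}")  # ~1.267, biased up

    # Controlling for the covariate: weight the income-group-specific means
    # by the full-sample group proportions, recovering the true mean under MAR.
    adjusted = sum(outcome[observed & (low_income == g)].mean()
                   * (low_income == g).mean()
                   for g in (True, False))
    print(f"covariate-adjusted: {adjusted:.3f}")                  # ~1.200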

2. Procedures for the Collection of Information

To gather sensitive and complex data for the cross-site impact evaluation, AFL demonstration project evaluation staff will administer paper-and-pencil Teleform surveys to treatment and comparison adolescents.

In order for adolescents aged 17 or younger to be included in the cross-site evaluation sample, their parents must be able to read English or Spanish to provide active consent for their adolescent’s participation (either in writing or by telephone with mailed documentation), and all adolescents must be able to read English or Spanish to provide written consent or assent for their own participation in the study. Consent forms and assent forms are included in Appendix E.

All AFL sites will submit the survey instruments to their site IRB prior to initiating data collection. Copies of local site IRB approvals will be submitted to RTI’s IRB. The questionnaire data will be treated as private and maintained in a manner that satisfies the privacy requirements set forth by the site IRB. All transmission of individual- or case-level data will also be done in accordance with these requirements.

Data collection training, monitoring, and ongoing technical assistance will be provided for projects participating in the cross-site evaluation to ensure high-quality data collection procedures. All AFL project staff administering core evaluation instruments will be trained in survey administration, including consent and assent procedures, privacy guidelines, and identifying respondent distress. In addition, the training will emphasize the importance of following the data collection procedures, including mailing procedures, so that those responsible for data collection fully understand the rationale behind them.

Data collection staff will be encouraged to avoid reading all questions aloud to groups of respondents, where possible, to reduce the chance that adolescents will look at each other’s survey responses. Completed instruments will be sealed in envelopes, and project staff will not unseal envelopes containing completed surveys in the presence of respondents. AFL Prevention project staff with access to identifying information will never view responses about respondents’ sexual activity, so as not to trigger mandatory reporting requirements in their state. Lists of identifiers and identification numbers will be sent to RTI for safekeeping during the cross-site evaluation. Standard procedures will be developed for identification number assignment and linking for the cross-site impact evaluation, with exceptions made if necessary.

Cross-site evaluation baseline data will be collected by Prevention grantees from October 2008 through November 2009.

For the cross-site evaluation, small incentives (such as arm bands, pencils, or mirrors) will be provided for returning individual parent consent forms, even if the parent refuses to allow the adolescent to participate. Adolescents will also receive a $10 gift card incentive for baseline data collection because adolescents are a difficult cohort to recruit for a 20-minute survey without a small incentive. The decision to use incentives for this study is based on previous findings in the literature indicating that incentives can significantly increase response rates among adolescents (Abreu & Winters, 1999; Shettle & Mooney, 1999; Singer et al., 1999). Although these studies differ in other respects that could account for some variability in response rates, overall, incentives of at least $10 were generally associated with higher response rates than no incentive. We expect these modest incentives to enhance survey response rates, and to improve data validity as adolescents become more engaged in the survey process, without biasing responses or coercing respondents to participate. Because appropriate incentives are geographically and culturally specific, this standardized value will be offered, but individual grantees will determine what is actually provided. A protocol for standardized incentives for the cross-site impact evaluation will be suggested. Additional explanation regarding the use of incentives in this study is provided in Section A9.

Treatment and comparison group adolescents who complete baseline surveys will be surveyed again approximately 1 and 2 years after baseline (from March 2009 through November 2011). A potential threat to the external validity of the proposed longitudinal design is loss to follow-up, or attrition (Biglan et al., 1991): the subjects who remain in the study after baseline may differ from those who do not. Potential attrition is an important consideration in the selection of adolescents, particularly because grantees frequently recruit clients located in areas with high levels of transience and hard-to-reach populations (such as low-income families without telephones). RTI’s experience suggests that by using mail surveys and tracing and locating services, and by obtaining extensive locating information from participants at baseline (e.g., cell phone, e-mail, and contact information for family or friends), it is possible to successfully survey at follow-up at least 80% of respondents who completed baseline interviews.

All questionnaire hard copies and electronic data will be stored in a secure area designated by the site IRB. AFL project staff will store completed parent consent and adolescent consent/assent forms in separate locked filing cabinets. Completed Prevention instruments for the cross-site evaluation will be sent via Federal Express to the RTI project director within 1 business day of survey administration, marked as confidential and at no expense to participating demonstration projects. No respondent names will be included in the Federal Express package of completed instruments. Assent/consent forms and completed surveys must be shipped to RTI separately and on different days. RTI will be notified of, and provided a tracking number for, each shipment. If shipments do not arrive as scheduled, tracing will immediately be initiated through Federal Express. This process will be monitored and feedback provided to AFL project staff throughout the data collection period. If needed, AFL project staff may be re-trained on mailing procedures.

3. Methods to Maximize Response Rates and Deal with Nonresponse

The following procedures will be used to maximize cooperation and to achieve the desired high response rates for the cross-site evaluation:

A $10 gift card will be offered to participants who complete the baseline survey. An additional $10 gift card will be offered for each completed follow-up survey, at the end of the first school year and at the end of the second school year.

An attempt will be made to locate participants who leave the study before the end of the cross-site evaluation. Locating efforts will include mailings of refusal conversion materials designed to persuade participants to complete the study. In addition to mailed refusal conversion materials, RTI may also conduct telephone-based refusal conversion, contacting each attriting participant by telephone.

RTI and AFL grantees will provide a toll-free telephone number to all sampled individuals and invite them to call with any questions or concerns about any aspect of the study.

AFL grantee data collection staff will work with RTI project staff to address concerns that may arise.

4. Tests of Procedures or Methods to be Undertaken

RTI conducted pilot tests of the core evaluation instruments previously approved by OMB (OMB No. 0990-0291) with 145 youths in North Carolina. The purpose of the pilot tests was twofold: (1) to assess technical aspects and functionality of the survey instrument and (2) to identify areas of the survey that were unclear or difficult to understand.


Pilot test data collection was conducted from October through December 2007. Eligible participants were drawn from a convenience sample of students aged 9 to 18 in North Carolina who attended schools with low performance in reading and English and who lived in low-income communities. Low performance in reading was measured by the percentage of students at grade level on end-of-grade testing; schools were eligible if 70% or fewer of their students were at grade level for reading. Parents were recruited to give permission for their children to participate in the pilot study through Parent/Teacher Association (PTA) meeting presentations, principal/school involvement, tabling at school events, flyers at libraries, attendance at fall festivals, and word of mouth through parents who had already agreed for their children to participate in the study. To obtain 145 completed surveys, RTI obtained contact information for 188 parents. Parents who expressed interest in having their child(ren) participate in the study received a lead letter from RTI. A screener conducted with parents, or with students aged 18 and older, was used to determine participants’ study eligibility. Students self-administered either the baseline or the follow-up instrument at local libraries, community facilities, or schools under the supervision of RTI survey administration staff. A total of 72 baseline and 73 follow-up survey instruments were completed, including questions regarding parent-child communication, attitudes and beliefs about abstinence and sexual risks, involvement in positive activities, beliefs about the future, and demographic characteristics. Three participants completed survey instruments in Spanish. Nine participants aged 14 or older also self-administered new items, including questions regarding sexual activity and contraception.


Of the 188 parents contacted by RTI, 3 refused participation, 20 students whose parents agreed to their participation did not attend survey administration, and 2 students were found to be ineligible. An additional 18 parents could not be reached by phone to schedule survey administration. A total of 145 student surveys were completed (188 − 3 − 20 − 2 − 18 = 145), for a 77% response rate. Analyses of the pilot test data indicated few significant technical problems with the survey instrument. Many of the respondents in the pilot study put check marks in the response boxes instead of filling them in; RTI has replaced the boxes with circles to increase the likelihood that responses will be scanned accurately. Many respondents were younger than 13, and some said they skipped questions that referred to “teens” because they did not think such questions applied to them; RTI has changed the term “teens” to “young people” to apply to all youths. Some respondents were unsure about what to answer for their race; RTI has added a response option of “other (describe ______________)” for race. Lastly, a few respondents wrote their names on the front of the surveys even though RTI asked them not to; RTI has added a note to the front of the survey that clearly asks respondents not to do this.


There were no outlier values, and all response options were labeled correctly. All skip patterns appeared to function correctly except for questions referring to parents: some students responded that they did not have a mother (or father) and then answered questions about that person. RTI has changed the language in the relevant questions to make it clear that having a mother (or father) does not necessarily mean living with that person, and that not having a mother (or father) means not having one at all. Our findings suggest that there were no logic problems with the survey and that the data were accurately recorded. There were no nonresponse problems with the survey except for a substantial amount of missing data on the question regarding extracurricular activities; RTI has changed this question to an item assessing the overall frequency of participation in extracurricular activities. The average length of the survey was 22 minutes, with a range of 10 to 50 minutes.
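For illustration only, a check of the kind used to find such skip-pattern violations might look like the sketch below. The column names and codes are invented for the example; the actual instrument items are not specified in this document.

    # A hypothetical skip-pattern check. Columns has_mother and mother_talk
    # are invented for illustration, not actual instrument items.
    import pandas as pd

    df = pd.DataFrame({
        "has_mother":  [1, 1, 0, 0],      # 1 = yes, 0 = no
        "mother_talk": [2, 0, None, 3],   # should be blank when has_mother == 0
    })

    # Flag respondents who reported having no mother but still answered the
    # mother-communication item, i.e., a violated skip pattern.
    violations = df[(df["has_mother"] == 0) & df["mother_talk"].notna()]
    print(f"{len(violations)} skip-pattern violation(s)")
    print(violations)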


Based on the findings of the pilot test, the survey appears to function as intended and is not overly burdensome, sensitive, or difficult to understand. Therefore, few substantive revisions were made to the survey instrument as a result of pilot testing.


5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

The agency official responsible for receiving and approving contract deliverables is:

Johanna Nestor
240-453-2808
jnestor@osophs.dhhs.gov
Office of Population Affairs/DHHS
1101 Wootton Parkway, Suite 700
Rockville, MD 20852

The person who designed the data collection is:

Olivia S. Ashley, Dr.P.H.
919-541-6427
osilber@rti.org
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

The person who will collect the data is:

Karen Morgan, Ph.D.
919-485-7779
kcmorgan@rti.org
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

The persons who will analyze the data are:

Georgiy Bobashev, Ph.D.
919-541-6167
bobashev@rti.org
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

Michael Penne, M.S.
919-541-5988
penne@rti.org
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

Marni Kan, Ph.D.
919-485-2756
mkan@rti.org
RTI International
3040 Cornwallis Road
Research Triangle Park, NC 27709

References

Abma, J., Martinez, G., Mosher, W., & Dawson, B. (2004). Teenagers in the United States: Sexual activity, contraceptive use, and childbearing, 2002. Vital and Health Statistics, Series 23, No 24. Hyattsville, MD: National Center for Health Statistics.

Abreu, D. A., & Winters, F. (1999). Using monetary incentives to reduce attrition in the Survey of Income and Program Participation. In Proceedings of the Survey Research Methods Section of the American Statistical Association. http://www.amstat.org/sections/SRMS/proceedings/. Last updated on May 24, 2007.

Albert, B., Lippman, L., Franzetta, K., Ikramullah, E., Keith, J. D., Shwalb, R., et al. (2005). Freeze frame: A snapshot of America’s teens. Washington, DC: National Campaign to Prevent Teen Pregnancy.

Amin, R., & Sato, T. (2004). Impact of a school-based comprehensive program for pregnant teens on their contraceptive use, future contraceptive intention, and desire for more children. Journal of Community Health Nursing, 21, 39-47.

Barnet, B., Liu, J., Devoe, M., Alperovitz-Bichell, K., & Duggan, A. K. (2007). Home visiting for adolescent mothers: Effects on parenting, maternal life course, and primary care linkage. Annals of Family Medicine, 5, 224-232.

Biglan, A., Hood, D., Brozovsky, P., Ochs, L., Ary, D., & Black, C. (1991). Subject attrition in prevention research. In. W. Bukoski, & K. Leukefeld (Eds.), Drug abuse prevention research: Methodological issues. NIDA Research Monograph (Vol. 107, pp. 213-223). Rockville, MD: National Institute on Drug Abuse.

Black, M., Bentley, M. E., Papas, M. A., Oberlander, S. A., Teti, L. O., McNary, S., Le, K., & O’Connell, M. (2006). Delaying second births among adolescent mothers: A randomized, controlled trial of a home-based mentoring program. Pediatrics, 118, e1087-e1099.

Blake, S. M., Simkin, L., Ledsky, R., Perkins, C., & Calabrese, J. M. (2001). Effects of a parent-child communications intervention on young adolescents’ risk for early onset of sexual intercourse. Family Planning Perspectives, 33, 52-61.

Blinn-Pike, L., Berger, T., & Rea-Holloway, M. (2000). Conducting adolescent sexuality research in schools: Lessons learned. Family Planning Perspectives, 32, 246-251, 265.

Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Hillsdale, NJ: Erlbaum.

DeCoster, J. (2004). Meta-analysis. In K. Kempf-Leonard (Ed.), The encyclopedia of social measurement (pp. 1-19). San Diego, CA: Academic Press.

Doniger, A. S., Riley, J. S., Utter, C. A., & Adams, E. (2001). Impact evaluation of the “Not Me, Not Now” abstinence-oriented adolescent pregnancy prevention communications program, Monroe County, NY. Journal of Health Communication, 6, 45-60.

Eaton, D. K., Kann, L., Kinchen, S., Ross, J., Harris, W. A., Lowry, R., McManus, T., Chyen, D., Shanklin, S., Lim, C., Grunbaum, J. A., & Wechsler, H. (2006). Youth risk behavior surveillance—United States, 2005. Morbidity and Mortality Weekly Report, 55(SS-5), 1-108.

Egger, M., & Smith, G. D. (1997). Meta-analysis: Potential and promise. British Medical Journal, 315, 1371-1374.

Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. Orlando, FL: Academic Press.

Hedges, L. V., & Vevea, J. L. (1998). Fixed and random effects models in meta-analysis. Psychological Methods, 3, 486-504.

Henry J. Kaiser Family Foundation. (2003). National Survey of Adolescents and Young Adults: Sexual health knowledge, attitudes and experiences. Menlo Park, CA: Author.

Kirby, D. (2002). Do abstinence-only programs delay the initiation of sex among young people and reduce teen pregnancy? Washington, DC: National Campaign to Prevent Teen Pregnancy.

Kirby, D. (2007). Emerging answers 2007: Research findings on programs to reduce teen pregnancy and sexually transmitted diseases. Washington, DC: National Campaign to Prevent Teen and Unplanned Pregnancy.

Kirby, D., Barth, R. P., Leland, N., & Fetro, J. V. (1991). Reducing the risk: Impact of a new curriculum on sexual risk-taking. Family Planning Perspectives, 23, 253-263.

Knight, G. P., Virdin, L. M., & Roosa, M. (1994). Socialization and family correlates of mental health outcomes among Hispanic and Anglo American children: Consideration of cross-ethnic scalar equivalence. Child Development, 65, 212-224.

Krull, J. L., & MacKinnon, D. P. (1999). Multi-level mediation modeling of group-based intervention studies. Evaluation Review, 23, 418-444.

MacKinnon, D. P., Taborga, M. P., & Morgan-Lopez, A. A. (2002). Mediation designs for tobacco prevention research. Drug and Alcohol Dependence, 68, S69-S83.

Marin, B. V., Coyle, K., Gomez, C., Carvajal, S., & Kirby, D. (2000). Older boyfriends and girlfriends increase risk of sexual initiation in young adolescents. Journal of Adolescent Health, 27, 409-418.

Miller, B. C., Norton, M. C., Jenson, G. O., Lee, T. R., Christopherson, C., & King, P. K. (1993). Impact evaluation of FACTS & feelings: A home-based video sex education curriculum. Family Relations, 42, 392-400.

The National Campaign to Prevent Teen Pregnancy. (2003). With one voice 2003: America's adults and teens sound off about teen pregnancy. Washington, DC: Author.

The National Longitudinal Study of Adolescent Health. (1998). Waves I & II, 1994–1996. Chapel Hill, NC: Carolina Population Center, University of North Carolina at Chapel Hill.

O’Rourke, D., Chapa-Resendez, G., Hamilton, L., Lind, K., Owens, L., & Parker, V. (1998). An inquiry into declining RDD response rates part I: Telephone survey practices. Survey Research, 29, 1-16.

Percy, M. S., & McIntyre, L. (2001). Using Touchpoints to promote parental self-competence in low income, minority, pregnant, and parenting teen mothers. Journal of Pediatric Nursing, 16, 180-186.

Shettle, C., & Mooney, G. (1999). Monetary incentives in U.S. government surveys. Journal of Official Statistics, 15, 231-250.

Silva, M. (2002). The effectiveness of school-based sex education programs in the promotion of abstinent behavior: A meta-analysis. Health Education Research, 17, 471-481.

Singer, E., Van Hoewyk, J., Gebler, N., Raghunathan, T., & McGonagle, K. (1999). The effect of incentives in interviewer-mediated surveys. Journal of Official Statistics, 15, 217-230.

Singleton, R., & Straits, B. C. (1999). Approaches to social research. New York: Oxford University Press.

Thomas, D. V., & Looney, S. W. (2004). Effectiveness of a comprehensive psychoeducational intervention with pregnant and parenting adolescents: A pilot study. Journal of Child and Adolescent Psychiatric Nursing, 17, 66-77.

Trenholm, C., Devaney, B., Fortson, K., Quay, L., Wheeler, J., & Clark, M. (2007). Impacts of four Title V, Section 510 abstinence education programs, final report. Princeton, NJ: Mathematica Policy Research, Inc.


U.S. Government Accountability Office. (2006). Abstinence education: Efforts to assess the accuracy and effectiveness of federally funded programs. Washington, DC: Author.

Weed, S. (2004). Choosing the best research results: Executive summary. Washington, DC: U.S. Department of Health and Human Services.

The White House. (2005). Program Assessment Rating Tool: 2006 budget. http://www.whitehouse.gov/omb/budget/fy2006/sheets/part.xls. Last updated July 23, 2005.

Public Law 98-512, 42 U.S.C. 300z-2, as amended.
