
Response to 11/20/2009 Office of Management and Budget

Questions and Comments Regarding the

Cross-Site Evaluation of the Infant Adoption Awareness Training Program








Submitted by


Department of Health & Human Services

Children’s Bureau

Washington, DC



Contact person:

Patricia Campiglia

Children’s Bureau

Administration on Children, Youth and Families

1250 Maryland Avenue, SW

Portals Building, Eighth Floor

Washington, DC 20024

202/205-8060

patricia.campiglia@acf.hhs.gov

Overall design/Supporting Statement


  1. We note the absence of a control group and would like to encourage ACF to explore and report back to us on its plans for identifying a reasonable control group for future iterations. 


A comparison group was intentionally omitted from our evaluation design because of the limited value it is expected to add in the context of this evaluation. The decision not to include a comparison group was based on the high costs and resources involved in identifying and surveying the target group of designated staff in eligible health centers, and on the lessons learned in the 2001 cross-site evaluation conducted by Battelle Centers for Public Health Research and Evaluation.


The authorizing legislation (Title XII-Subtitle A) for the Infant Adoption Awareness Training Program (IAATP) specifies that priority be given to providing IAATP training to staff in Title X (voluntary family planning), Section 330 (community health), and school-based health centers funded under the Children’s Health Act who provide pregnancy or adoption information and referrals (or will provide such information and referrals after receiving training). As demonstrated by Battelle’s original survey effort, it proved difficult to identify individuals in this eligibility group who had never participated in IAATP training and whose characteristics matched those of IAATP trainees closely enough to serve in a comparison group. In fact, the demographics of non-trainees surveyed by Battelle more closely resembled the intended target population for training than did the actual IAATP trainees. The fact that the Battelle study found few significant differences between IAATP trainees and the comparison group may be due in part to the lack of comparability between these groups, which made it more difficult to interpret observed findings.


Our own experience conducting the cross-site evaluation of the IAATP has been that grantees have continued to have difficulty recruiting trainees from the intended target group. In addition to the high cost of identifying control group participants within this target group, utilizing random assignment or another method for assigning these “primary eligible” participants to a control group would further limit the number of individuals from the federally mandated target group that the grantees are able to serve. We will, of course, continue to consider possible alternatives for future survey iterations.


  2. What is the virtue of a longitudinal study?  Why is a single-year study with a sample size of 1200 not sufficient?


A single-year study would not be as effective as a longitudinal study because the population included in the training changes over time. As indicated in our response to Question 1, the IAATP grantees have found it more difficult each year to recruit training participants from the mandated eligibility group of health care and other staff in Title X, Section 330, and school-based health clinic settings. In response to the limited availability of trainees from this primary eligibility group, the grantees obtained permission in Year 2 of the grant to expand recruitment efforts to secondary eligibles. These secondary eligibles include individuals who provide services to pregnant women in a diversity of settings. As a result, the demographics of the cohort of training participants have changed, particularly the roles and organizational affiliations of the trainees. To illustrate, demographic data collected by grantees in training Years 1 and 2 indicate a shift from training primarily health care and social work staff from public and non-profit private health settings to including staff from child welfare agencies. Most notably, trainees from child welfare agencies increased from 7% of training participants in Year 1 to 13% in Year 2. Grantee recruitment efforts also expanded to college and university-based schools of nursing and social work in Year 2, yielding a small increase in the percentage of students, from 10% in Year 1 to 12% in Year 2. Based on recruitment trends reported by grantees in Year 3, the percentage of students is expected to increase more substantially in the subsequent years of the grant.


Given the changing nature of the population, it is particularly important to understand whether this variation affects the results obtained by trainees. Our findings in this area will inform the recommendations that we make regarding future trainee eligibility and recruitment strategies. If variation between cohorts does affect the results obtained by trainees, then programs will need to target the appropriate groups through their eligibility and recruitment criteria; if no differences are found, that finding is also instructive, indicating that the training is well suited to a broad range of participants.


  3. Please clarify whether you proposed to draw a new PSU sample each year or to use the same ones over time.  Please provide the rationale for whichever you plan to do.


It is our intention to draw a new primary sampling unit (PSU) sample for each year of the study. Because the PSU for the study is the training session, we will draw a sample of 54 training sessions each year. A new sample is required because, as noted in our response to Question 2 above, the sampling frame changes each year. Training sessions are held at different times and in different locations (e.g., cities and states, urban and rural areas) each year, and they serve somewhat different populations. Since the total universe is not represented in the sample each year, redrawing the sample improves our ability to generalize the research findings to the total population of trainees.
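For illustration only, the minimal Python sketch below shows the annual re-draw described above: a fresh frame of scheduled training sessions is assembled each year and a simple random sample of 54 sessions is selected from it. The session identifiers and field names are hypothetical placeholders, not the actual sampling frame.

```python
# Minimal sketch, for illustration only: drawing a fresh sample of 54
# training sessions (the PSUs) from each year's sampling frame. Session
# identifiers and field names below are hypothetical placeholders.
import random

SESSIONS_PER_YEAR = 54  # number of PSUs sampled annually

def draw_annual_psu_sample(frame, seed=None):
    """Simple random sample of training sessions for a single study year.

    frame: list of dicts, one per scheduled session, e.g.
           {"session_id": "S014", "grantee": "...", "state": "..."}
    """
    rng = random.Random(seed)
    if len(frame) <= SESSIONS_PER_YEAR:
        return list(frame)                 # small frame: take every session
    return rng.sample(frame, SESSIONS_PER_YEAR)

# A new frame is built, and a new sample drawn, for every year of the study.
year_frame = [{"session_id": f"S{i:03d}"} for i in range(1, 121)]  # hypothetical
sample = draw_annual_psu_sample(year_frame, seed=2009)
print(len(sample))  # 54
```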


  4. We do not see what skill questions are asked in the survey, so question whether this evaluation can measure skills acquisition.  Please demonstrate or revise supporting statement (SS) accordingly.


As stated in the Best Practices Guidelines for the IAATP (Appendix B of the Supporting Statement), training participants are expected to acquire the skills that are inherent in conducting non-directive, non-coercive counseling and providing appropriate resources and referrals. The guidelines further specify that these skills are presented and practiced by trainees during the training session. While change in the level and quality of trainees’ skills cannot be directly observed in this evaluation (due to the privacy of patient/trainee interactions), change in the trainees’ baseline and post-training understanding and application of the practices associated with counseling and referral skills can be assessed via the survey. The pre-test and post-test questions that assess the trainees’ understanding and application of each of these basic skills are identified below; an illustrative sketch of how pre/post change on these matched items might be computed follows the list.

    1. Trainees will improve their basic counseling skills, including cultural competence, listening, building rapport, recognizing someone in crisis, being empathetic and treating clients with respect. [Pre-test items 8a.i–iii, 8c, 9a and 9d and corresponding post-test items 5a.i-iii, 5c, 6a and 6d]

    2. Training participants who will counsel pregnant women will be skilled in non-directive counseling to ensure that adoption information, and information about other pregnancy options, is presented objectively, without bias or judgment. [Pre-test items 8a.iv, 9b, and 9e and post-test items 5a.iv, 6b and 6e]

    3. Trainees who will counsel pregnant women will have basic case management skills, including the ability to assess service needs and make appropriate referrals. [Pre-test items 8b, 8d, 8e, 8f, 8g and 9c and post-test items 5b, 5d, 5e, 5f, 5g and 6c]
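The following Python sketch is illustrative only. It assumes each listed item is scored on a numeric scale and simply pairs pre-test items with their corresponding post-test items, as noted in the list above; the pairing dictionary and the example responses are hypothetical.

```python
# Illustrative sketch: computing post-minus-pre change on the matched skill
# items listed above, assuming each item is scored on a numeric scale
# (e.g., 1-5). The pairing dictionary and example data are hypothetical.
PRE_TO_POST = {
    "8a.i": "5a.i", "8a.ii": "5a.ii", "8a.iii": "5a.iii", "8a.iv": "5a.iv",
    "8b": "5b", "8c": "5c", "8d": "5d", "8e": "5e", "8f": "5f", "8g": "5g",
    "9a": "6a", "9b": "6b", "9c": "6c", "9d": "6d", "9e": "6e",
}

def item_change(pre_responses, post_responses):
    """Return post-minus-pre change for every matched item a trainee answered.

    pre_responses / post_responses: dicts mapping item labels to numeric
    scores; items marked N/A or left blank are simply omitted.
    """
    changes = {}
    for pre_item, post_item in PRE_TO_POST.items():
        if pre_item in pre_responses and post_item in post_responses:
            changes[pre_item] = post_responses[post_item] - pre_responses[pre_item]
    return changes

# Example for one (hypothetical) trainee:
pre = {"8a.i": 3, "8b": 2, "9a": 4}
post = {"5a.i": 4, "5b": 4, "6a": 4}
print(item_change(pre, post))  # {'8a.i': 1, '8b': 2, '9a': 0}
```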

  5. Further, please either demonstrate the reliability and validity of the self-reported behavior questions or provide a justification for asking them, as well as a discussion of the limitations of this approach that you plan to include in the study report and products.

 

As noted in our response to question 4, we are assessing the change in the understanding and application of some basic skills in case management and counseling. We are aware that, while self-reported responses to questions about sensitive behaviors (e.g., drug use, sexual risk taking, and underage drinking) can vary in both reliability and validity (as demonstrated in a meta-review by Brener, Billy, & Grady, 2003), answers to non-sensitive questions are considered to be more reliable and valid (e.g., Brener, Collins, Kann, Warren, & Williams, 1995). We believe the questions under consideration are of a less sensitive nature. In addition, there is some limited evidence that self-reports of behavior change subsequent to participation in a training curriculum designed to create change are valid reports (Curry & Purkis, 1986).


Although we would prefer to assess behavioral change through direct observation to ensure validity and reliability, the behaviors we are interested in cannot be verified independently in a cost-effective, feasible, and ethical manner. The cost of going on site and observing trainee behavior would be prohibitive, and there are confidentiality and privacy concerns that would prohibit us from directly observing the interactions between trainees and the patients they serve. 


We will take these limitations of self-reported behavioral change into account and note them in our discussion of the results.


  6. OMB policy is not to approve incentives for low burden surveys such as this.  This is particularly true in cases where there is no highly specific justification and evidence base, such as is the case here.  Therefore, please drop the incentive.


The Supporting Statement and Page 1 of the post-test have been modified to reflect that an incentive will not be used.


  7. Please rewrite SS A16 in the Department’s voice rather than the contractor’s and clarify plans for public release of the results consistent with the transparency goals of the Administration.


The language in SS A16 has been rewritten in the Department’s voice.


The following language has been added to Paragraph 1 of SS A16 in order to clarify our plans for public release of the evaluation’s results: “The findings of the national cross-site evaluation will be made available to the public in formats suitable for multiple uses and audiences, such as research briefs, synthesis papers, and final study reports. The goal in producing these publications will not only be to summarize the research findings, but also to highlight key issues and lessons learned.”


  8. Given that the survey of training participants is voluntary, please clarify the use of the term mandatory in SS B1 (second paragraph).


Participation of trainees in the activities associated with the cross-site evaluation is voluntary. The language in SS B1 that states “all registrants in a sampled training session are required to respond to the survey…” has been modified to read, “all registrants in a sampled training session are provided the opportunity to respond to the survey…”


  9. Please clarify that any additional sample cases to whom the survey will be administered by grantees on their own will not be included in the national evaluation.


Any individuals who are surveyed by the grantee as part of the grantee’s local evaluation activities will be excluded from the national cross-site evaluation. The following language from SS A5 has been added after Paragraph 3 of SS B1: “Individuals included in the national cross-site evaluation sample will be identified to local grantees and evaluators to ensure that they will not be included in local data collections that would duplicate or fall within the timeframe of the cross-site data collection. Identifying and eliminating duplication in this prospective manner will foster cooperation among all stakeholders and foster greater response rates among the trainees selected to complete the cross-site instrument.”


  10. SS B3 provides insufficient detail about plans to “contact non-respondents.”  Please provide the specific contact strategy, including number of contacts, mode and timing.


The following text addressing methods to contact non-respondents has been added after Paragraph 2 of SS B3; an illustrative sketch of the resulting contact schedule follows the inserted text:


Follow-up with non-respondents will be conducted by email, phone, and mail.

      1. Follow-up by email: A reminder email message will be sent to all non-respondents 2 weeks after the initial survey invitation was sent. The reminder message will include a link to the survey website and offer the trainee the option to download an attached survey to complete and return to the evaluator via email, mail or fax.

      2. Follow-up by phone: Two weeks following the email reminder, the national evaluator will contact the trainee by phone to determine whether the original and reminder email messages were received. Three attempts will be made to reach the trainee by phone. Once reached, the trainee will be given the option to complete the survey online through the original web link, to receive a third email with the survey attached, or to receive a mailed copy. If the web link or the email needs to be re-sent, the trainee’s email address will be verified and the information re-sent. If the trainee is unable to access the Internet or does not want to complete the survey by downloading the attachment from the email, a hard copy of the post-training survey will be offered. The survey will then be mailed to the non-respondent with a stamped, return-addressed envelope.

      3. Follow-up by mail: Trainees that are not reachable by email or phone will be mailed the request to complete the survey. The mailing will include a hard copy of the survey in addition to the link to the survey website within the cover letter.
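As a minimal sketch only, the Python snippet below lays out the timing implied by the sequence above (two weeks to the email reminder, two more weeks to the phone stage, then mail). The spacing of the three phone attempts and of the mail fallback is not specified in the inserted text and is shown here purely as a hypothetical illustration.

```python
# Illustrative sketch of the non-respondent contact schedule described above.
# The two-week intervals come from the inserted SS B3 text; the spacing of
# the three phone attempts and the mail fallback date are hypothetical.
from datetime import date, timedelta

def followup_schedule(invitation_sent: date) -> dict:
    """Planned contact dates for a trainee who has not yet responded."""
    email_reminder = invitation_sent + timedelta(weeks=2)   # reminder email
    phone_start = email_reminder + timedelta(weeks=2)       # phone stage begins
    phone_attempts = [phone_start + timedelta(days=d) for d in (0, 2, 4)]  # 3 tries
    mail_fallback = phone_attempts[-1] + timedelta(days=3)  # hard copy mailed
    return {
        "email_reminder": email_reminder,
        "phone_attempts": phone_attempts,
        "mail_fallback": mail_fallback,
    }

print(followup_schedule(date(2010, 1, 4)))  # hypothetical invitation date
```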


  11. Please discuss how you will investigate and address any mode effects between the pre-test (paper) and the post-test (web).  We are concerned that any differences (e.g., favorable attitudes ascertained by scales) can be biased by mode and be interpreted as true differences.


Mixed-mode surveys make it possible to reach respondents who move between settings across administration times (as in the current evaluation, where trainees are together in the classroom for the pre-test but are scattered throughout the U.S. at the time of the post-test) and have the advantage of combining the strong methodological features of each survey type. As de Leeuw et al. (2007) note, the use of mixed-mode methods of survey administration is increasing in frequency. The potential drawback, as noted in the reviewer’s question, is that the use of mixed modes may bias results. Bias can arise at two levels: differences in response rates, due to differing access capabilities of potential respondents, and differences in the interpretation of and response to specific questions.


Differences in response rates due to differences in accessibility or comfort level with technology are decreasing as email and internet use become increasingly ubiquitous, but they remain a potential issue (Kaplowitz, Hadlock, & Levine, 2004). We will compare the demographic characteristics gathered during the pre-test for responders and non-responders to the follow-up to determine whether there is any response rate variation, and we will take this into account in interpreting the results as appropriate.
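For illustration only, the following minimal Python sketch shows one way this responder/non-responder comparison could be run, using a chi-square test on a single pre-test characteristic such as work setting. The record layout, field names, and choice of test are hypothetical assumptions, not the specified analysis plan.

```python
# Illustrative sketch: comparing post-test responders and non-responders on a
# demographic characteristic collected at pre-test. The record layout ("id",
# "setting") and the chi-square test are hypothetical choices here.
from collections import Counter
from scipy.stats import chi2_contingency

def nonresponse_check(pretest_records, responded_ids, characteristic="setting"):
    """Chi-square comparison of one pre-test characteristic by response status.

    pretest_records: list of dicts with an "id" key plus demographic fields.
    responded_ids:   set of ids that completed the post-test.
    """
    responders = Counter(r[characteristic] for r in pretest_records
                         if r["id"] in responded_ids)
    nonresponders = Counter(r[characteristic] for r in pretest_records
                            if r["id"] not in responded_ids)
    categories = sorted(set(responders) | set(nonresponders))
    table = [[responders.get(c, 0) for c in categories],
             [nonresponders.get(c, 0) for c in categories]]
    chi2, p_value, dof, _expected = chi2_contingency(table)
    return categories, chi2, p_value
```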


Differential effects on question responses due to mode have been documented primarily for interviewer-administered versus self-report surveys (e.g., telephone vs. mail, internet vs. in-person), and these differences are further amplified when the questions involve response categories that do not translate well across modes (e.g., “Refused” on a telephone survey would not appear on a paper-and-pencil survey) or socially desirable response categories (Elliott et al., 2009; Link & Mokdad, 2005; Kaplowitz et al., 2004). However, direct examinations in controlled studies of differences between paper-and-pencil surveys and those administered via the web tend to show no differences in response content (Denscombe, 2006; Bates & Cox, 2008; Kleinman, Leidy, Crawley, Bonomi, & Schoenfeld, 2001).


Dillman (2007) has cautioned that web-based surveys have the potential to be interpreted differently due to the use of graphics, color, animation, and other features not available in a paper-and-pencil survey; we will therefore ensure that the web-based versions of the surveys used in this evaluation do not involve special fonts, colors, animation, or other graphics.


  12. Please provide the status and results, if applicable, of the pilot study.  Please also describe how survey items were validated via this process.


The pilot study was conducted in Spring 2008. SS B.4 has been modified to reflect the status and results of the pilot study. Highlights of the pilot findings are provided in Appendix E of the Supporting Statement.


The following text regarding survey validation has been added: “Survey items were validated through the pilot process by testing the questions among respondents who were representative of the population that would be completing the final survey (i.e., individuals in health care settings who provide services to pregnant women and are participating in the IAATP training). The survey items were also validated prior to the pilot test using cognitive testing procedures. The purpose of this activity was to field test the survey instruments and make final improvements in the wording and layout of the two instruments. The respondents for the cognitive testing activity were the project directors of the six IAATP grantees, who were both familiar with the training objectives and experienced in testing IAATP trainees. All grantees were provided the instruments to review and participated in a joint conference call to review the surveys question-by-question and state their impressions about the items. Modifications were then made to the survey items based on the feedback received from the grantees.”

  13. For both questionnaires, we suggest changing .15 hours and .10 hours to minutes in the burden statement for ease of understanding.


The values in the burden statements on Page 1 of both questionnaires have been changed to minutes. The text addressing estimated burden in SS A12 includes reference to the time for completion of the surveys in minutes (i.e., 15 minutes for the pre-test and 10 minutes for the follow-up survey). Since the calculation of the annual respondent cost in the accompanying table is based on an hourly wage, the average burden hours per response will continue to be expressed in fractions of an hour. Please note the expression of hours in fractions of an hour has been corrected from .15 and .10 to .25 and .17, respectively. The total annual respondent cost has been corrected accordingly.
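As a worked illustration of the arithmetic described above, the short Python sketch below converts the survey completion times to fractions of an hour and multiplies by an hourly wage. The respondent count and the wage shown are placeholders only; the approved figures appear in the SS A12 burden table.

```python
# Worked illustration of the burden-hour correction and the annual
# respondent-cost calculation described above. The respondent count (1,200)
# and the $25 hourly wage are hypothetical placeholders; the approved
# figures appear in the SS A12 burden table.
PRE_TEST_MINUTES, POST_TEST_MINUTES = 15, 10

pre_hours_per_response = round(PRE_TEST_MINUTES / 60, 2)    # 0.25 hours (was .15)
post_hours_per_response = round(POST_TEST_MINUTES / 60, 2)  # 0.17 hours (was .10)

def annual_respondent_cost(n_responses, hours_per_response, hourly_wage):
    """Annual cost = number of responses x burden hours per response x wage."""
    return n_responses * hours_per_response * hourly_wage

print(annual_respondent_cost(1200, pre_hours_per_response, 25.0))   # 7500.0
print(annual_respondent_cost(1200, post_hours_per_response, 25.0))  # 5100.0
```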


Pre-test questionnaire


  14. On the pre-test, you cannot promise confidentiality unless the statute under which you are collecting the data provides for it.  Since no statute is cited in A10, please change the sentence from “Your responses will be confidential throughout this process” to “We will protect your data by…” then adapt the remainder of the paragraph to describe procedures.


The assurance of confidentiality on Page 1 of the pre-test has been replaced with the following statement: “We will protect your data by ensuring that your name does not appear in any written reports and that your name is not associated with any comments you choose to make about the program. Data will be presented only in aggregate form.”


  15. In question 1, “student” is not a workplace label.  What is this category trying to measure?  If someone is a student rather than a worker, shouldn’t the survey terminate at this point?


The workplace label has been corrected. The appropriate label, “University/College,” replaces the word “Student.” The survey does not terminate for trainees from this setting.


  16. Item 6 provides a specific reference period (i.e., a month) but the following questions do not.  Please clarify whether you want respondents to use a one-month, lifetime or other reference period and consider adding clarifying instructions to that effect.


Questions 6 through 9 in the pre-test and the corresponding questions 3 through 6 in the post-test have been clarified to reflect a three-month time period. Question 6 (pre-test) and Question 3 (post-test) now read, “Approximately how many clients with unintended pregnancies have you personally encountered in the last three months? ___ clients.” The following lead-in to the next set of questions has been added: “For the next few questions (questions 7 through 9 in the pre-test; questions 4 through 6 in the post-test), please refer to your usual activity over the past three months.”


Clarifying language has also been added to Questions 7, 8 and 9 (pre-test) and 4, 5 and 6 (post-test) to indicate that respondents should answer about their usual practices over the past three months.


  1. In many items, you clearly define what N/A means in that context, but it is not clear in items 8 e and f.  Please clarify.


The following clarifying language has been added to Questions 8e and 8f on the pre-test:

8e. “(If your responsibilities do not include working with adoption agencies on behalf of clients, mark N/A).”

8f. “(If your responsibilities do not include referring clients to adoption agencies/resources, mark N/A).”


  18. Question 10x: We are not aware of other federal data collections still using the phrase “color.”  Please delete or justify.


Question 10x assesses trainee knowledge of the Howard M. Metzenbaum/Multiethnic Placement Act (MEPA) of 1994, one of two federal adoption laws that the IAATP curriculum is required to address. The term “color” is used in Question 10x in the same context as it appears in the Act, which states, “…Neither the State nor any other entity in the State that receives funds from the Federal Government and is involved in adoption or foster care placements may delay or deny the placement of a child for adoption or into foster care, on the basis of the race, color, or national origin of the adoptive or foster parent, or the child involved.” According to the Best Practices Guidelines for the IAATP, it is essential for trainees to know the provisions of MEPA. This requirement is noted in Paragraph 2 of Supporting Statement item A.1.


  19. Question 13e: Please use “Asian” in your example rather than “Asian Pacific Islander,” as that is not a recognized federal category.  Same comment on post-test.


The term “Asian Pacific Islander” has been replaced with “Asian” in Question 13e of the pre-test and Question 10e of the post-test.


  20. Question 18: These should be 5 separate numbered or lettered questions.  Also, the race item does not conform to federal standards.  The instruction should read, “Mark (or check) one or more.”  Further, you may not include the last two categories (i.e., “biracial or multiracial” and “other”).


The sub-items in Question 18 have been lettered a through e. The race item (now 18c) has been revised as specified in your comments, allowing multiple responses to the race categories and omitting the “biracial or multiracial” and “other” options.

 

Post-test questionnaire


  21. This survey is NOT anonymous as we understand it; otherwise, how would you know who had responded in order to conduct non-response follow-up?  Please change the confidentiality pledge to match the one on the pre-test.


The statement regarding protection of individual respondents’ data has been modified to match the revised statement on the pre-test (as described in our response to Question 14 above).

  22. Please clarify where the copy of this form that respondents can email, fax or mail comes from.  Do they download it from the website, or is it available some other way?


The text on Page 1 of the post-test is emailed to respondents as an invitation to complete the survey. The instructions in the invitation regarding the option to complete the survey in paper form have been modified to indicate that if the respondent is unable to complete the survey online, he or she can download the survey document attached to the email. The respondent then has the option to return the completed survey document to the evaluator via email, mail or fax.

  23. Following good questionnaire design principles, questions 13 and 14 should each be split into two questions.


Questions 13 and 14 on the post-test have each been split into two questions (i.e., 13a.i-viii, 13b.i-viii, 14a.i-vi and 14b.i-vi).


  24. Item 16, 6th response item is “double-barreled.”  Please separate into two response categories or clarify its meaning.


The sixth response in Question 16 has been modified to read, “My supervisor/agency won’t allow other staff the time to attend the training.”

  

References



Bates, S., & Cox, J. (2008). The impact of computer versus paper-pencil survey, and individual versus group administration, on self-reports of sensitive behaviors. Computers in Human Behavior, 24(3).

Brener, N., Billy, J., & Grady, W. (2003). Assessment of Factors Affecting the Validity of Self-Reported Health-Risk Behavior Among Adolescents: Evidence From the Scientific Literature. Journal of Adolescent Health. Retrieved December 4, 2009, from http://www.cdc.gov/HealthyYouth/YRBS/pdf/validity.pdf.

Brener, N., Collins, J., Kann, L., Warren, C., & Williams, B. (1995). Reliability of the Youth Risk Behavior Survey Questionnaire. American Journal of Epidemiology, 141(6), 575-580.

Curry, L., & Purkis, I. E. (1986). Validity of self-reports of behavior changes by participants after a CME course. Journal of Medical Education, 61(7), 579-584.

de Leeuw, E., Hox, J., & Dillman, D. (2007). International Handbook of Survey Methodology. Routledge. Retrieved December 4, 2009, from http://www.xs4all.nl/~edithl/surveyhandbook/contents.htm.

Denscombe, M. (2006). Web-Based Questionnaires and the Mode Effect. Social Science Computer Review, 24(2), 246-254.

Dillman, D. A. (2007). Mail and Internet Surveys. John Wiley and Sons.

Elliott, M., Zaslavsky, A., Goldstein, E., Lehrman, W., Hambarsoomians, K., Beckett, M., et al. (2009). Effects of survey mode, patient mix, and nonresponse on CAHPS hospital survey scores. Health Services Research. Retrieved December 3, 2009, from http://findarticles.com/p/articles/mi_m4149/is_2_44/ai_n31508854/pg_5/?tag=content;col1.

Kaplowitz, M. D., Hadlock, T. D., & Levine, R. (2004). A Comparison of Web and Mail Survey Response Rates. Public Opin Q, 68(1), 94-101. doi: 10.1093/poq/nfh006.

Kleinman, L., Leidy, N., Crawley, J., Bonomi, A., & Schoenfeld. (2001). A Comparative Trial of Paper-and-Pencil Versus Computer Administration of the Quality of Life in Reflux and Dyspepsia (QOLRAD) Questionnaire. Medical Care, 39(2), 181-189.

Link, M., & Mokdad, A. (2005). Effects of Survey Mode on Self-Reports of Adult Alcohol Consumption: A Comparison of Mail, Web and Telephone Approaches. Journal of Studies on Alcohol and Drugs, 66(2). Retrieved December 3, 2009, from http://www.jsad.com/jsad/article/Effects_of_Survey_Mode_on_SelfReports_of_Adult_Alcohol_Consumption_A_Comp/1001.html.






