Assets for Independence (AFI) Program Evaluation

OMB: 0970-0414







SUPPORTING STATEMENT A FOR INFORMATION COLLECTION IN THE

ASSETS FOR INDEPENDENCE (AFI)

PROGRAM EVALUATION







Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

370 L'Enfant Promenade, SW

Washington, DC 20447


August 2012







Table of Contents


A. JUSTIFICATION

1. Necessity for the Data Collection
2. Purpose of Survey and Data Collection Procedures
3. Improved Information Technology to Reduce Burden
4. Efforts to Identify Duplication
5. Involvement of Small Business Organizations
6. Consequences of Less Frequent Data Collection
7. Special Circumstances
8. Federal Register Notice and Consultation
9. Tokens of Appreciation for Respondents
10. Confidentiality of Respondents
11. Sensitive Questions
12. Estimation of Information Collection Burden
13. Additional Cost Burden to Respondents and Record Keepers
14. Estimate of Cost to the Federal Government
15. Change in Burden
16. Plan and Time Schedule for Information Collection, Tabulation and Publication
17. Reasons Not to Display OMB Expiration Date
18. Exceptions to Certification for Paperwork Reduction Act Submissions


B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

1. Respondent Universe and Sampling Methods
2. Procedures for Collection of Information
3. Methods to Maximize Response Rates and Deal with Nonresponse
4. Tests of Procedures to be Undertaken
5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


ATTACHMENT A Federal-Wide Assurance

ATTACHMENT B Confidentiality Agreement

ATTACHMENT C Minimum Detectable Effect

ATTACHMENT D Site Management Screens

ATTACHMENT E Informed Consent

ATTACHMENT F Panel Maintenance Mailing Materials

ATTACHMENT G Field Locating Materials

ATTACHMENT H Lead Letter and Q&A Brochure

ATTACHMENT I Follow-Up Survey Introduction

ATTACHMENT J 12-Month Follow-Up Survey Incentive Letter

ATTACHMENT K 60-Day Federal Register Notice

ATTACHMENT L References



INSTRUMENT 1 AFI Baseline Questionnaire

INSTRUMENT 2 AFI Follow-Up Questionnaire

INSTRUMENT 3 AFI Implementation Interview Instrument


ASSETS FOR INDEPENDENCE (AFI) PROGRAM EVALUATION


SUPPORTING STATEMENT


A. JUSTIFICATION


In this document, we request OMB clearance for a series of data collection activities for the Assets for Independence (AFI) Program Evaluation (hereafter, AFI Evaluation). This request is for a new collection. This submission seeks OMB approval for three data collection instruments relating to surveys of the enrolled study sample at baseline (i.e., intake to the programs studied) and 12 months following baseline, and relating to interviews to be conducted with program administrators, staff, and other stakeholders involved in the implementation of the evaluation:


  • Baseline Survey

  • 12-Month Follow-Up Survey

  • Implementation Interviews


This section provides supporting statements for each of the eighteen points outlined in Part A of the OMB guidelines, as they apply to the proposed data collection in the AFI Evaluation study sites.

  1. Necessity for the Data Collection


Study Overview

The U.S. Department of Health and Human Services, Administration for Children and Families (ACF) is conducting an experimental evaluation of the Assets for Independence (AFI) Program. This evaluation—the first experimental evaluation of Individual Development Account (IDA) projects operating under the Assets for Independence Act—will contribute substantially to understanding the effects of IDA projects and their design features on participants. IDA programs provide matching funds to participants when savings are withdrawn to spend on qualified asset purchases, most commonly homeownership, business-related expenses, or education.


This study will build on the prior quasi-experimental AFI evaluation, as well as studies of other, non-AFI-funded IDA projects. While some evaluations suggest that IDAs help low-income families save, rigorous experimental research is limited. No experimental evaluation of AFI-funded programs has been conducted to date. Of the quasi-experimental and non-experimental studies, few have focused on AFI-funded IDAs, and few have tested alternative design features.


In 2008, ACF contracted with the Urban Institute and its partners (the Center for Social Development at Washington University in St. Louis and the Center for Community Capital at the University of North Carolina at Chapel Hill) to develop and recommend a design for the next-phase AFI evaluation. As part of that design project, the research team conducted an extensive literature review, including the prior AFI evaluation and other IDA research, and convened key stakeholders to discuss key research questions, gaps in existing knowledge, and potential research methods for the next evaluation of the AFI program. The consensus of this work was that rigorous, experimental research on the AFI program is needed for several reasons. First, while the First Phase AFI evaluation laid an important foundation for AFI research, its design features (i.e., use of a comparison group derived from the Survey of Income and Program Participation, use of a national and not site-specific sample, sample size, etc.) left the field and ACF with unanswered questions about the program’s impacts on participants. Second, very few experimental studies of IDA programs have been conducted to date, most notably the American Dream Demonstration Experiment in Tulsa and a more recent evaluation of Canada’s Learn$ave program. Lastly, the AFI legislation requires that the program be evaluated using a control group. At the conclusion of the design contract, the research team developed an evaluation design that included an experimental, three-group test that compared a control group with two variations on the treatment.


Notably, resource constraints imposed by the AFI legislation – the legislation caps evaluation spending at $500,000 per year – prevented ACF from fully implementing the design proposed under the design contract. In addition, an in-depth review of AFI grantees revealed that none had the capacity to recruit 900 AFI-eligible cases and serve 600 clients. As a result, ACF has implemented an evaluation design that retains the experimental character of the proposed design but has been adapted to fiscal and programmatic realities.

The evaluation will be conducted in two sites, with the random assignment of up to 600 AFI-eligible cases per site. Each site will randomly assign sample members to one of two groups: a control group and a treatment group receiving conventional AFI services. The two primary research questions the study seeks to answer are:


  • What is the impact of AFI project participation on short-term outcomes such as savings, asset purchases, and avoidance of material hardship?

  • How do specific AFI project design features affect short-term participant outcomes?


In our initial submission of this justification package, we discussed the use of a three-armed experimental design to test two variations of the treatment in each site. After further follow-up conversations with potential evaluation sites, it has become clear that no AFI grantees have the capacity to recruit and serve the number of clients considered necessary to successfully implement a three-armed test. As a result, we have revised this package to reflect a more traditional (i.e., two-armed, treatment vs. control) experimental design. This design will allow the first question above to be addressed using a rigorous randomized approach. The second question can still be addressed empirically, but without the benefit of random assignment.


In addition to the impact study, the evaluation will include an implementation study to describe and document how the AFI program is designed, implemented, and operated in the participating grantee sites during the time of the study. It will also explore any challenges that the grantees face and their strategies for overcoming them. This information will be used to provide context for interpreting the findings from the impact analysis and for assessing the generalizability of findings to other AFI grantees. It is important that this work be conducted while study participants are enrolled in the program so that the information gathered accurately reflects their experiences.



Legislative Mandate for Evaluation

The Assets for Independence Act (1998), which authorized the AFI program, requires that the program be rigorously evaluated by an independent research organization, specifying the use of a control group. The legislation also outlines a number of factors to be included in the evaluation, among them:

  • effects of incentives and organizational or institutional support on savings behavior;

  • savings rates of individuals based on demographic characteristics (e.g., gender, age, family size, race or ethnic background, and income);

  • economic, civic, psychological, and social effects of asset accumulation; and

  • effects of individual development accounts on savings rates, homeownership, level of postsecondary education attained, and self-employment.


Insufficient Data Available

The data collection instruments in this request are necessary to meet the AFI legislation’s evaluation requirements regarding rigor and the outcomes to be measured. Annual grantee reports are insufficient to meet those requirements. The baseline and follow-up survey instruments in this request are essential to gathering individual-level information on the outcomes required by the legislation. Annual grantee reports do not capture important individual-level data, such as sources of income, material hardship, psychological measures, or civic engagement. Further, AFI grantees collect data only on program participants, meaning that they do not have any data for members of the control group or for treatment group members who do not follow through with services. This information is needed for a rigorous experiment. Finally, the implementation study protocols included in this request will be used to capture more in-depth detail on program implementation and structure than is included in annual grantee reports. This information is critical both to understanding what the intervention is and to interpreting the impact findings.


2. Purpose of Survey and Data Collection Procedures


As discussed above, the purpose of this study is to assess the impact of participation in AFI-funded IDA projects on the savings, asset purchases, and economic well-being of low-income individuals and families. The study will have two main components: an impact study to assess the impacts of program participation on outcomes of interest and an implementation study to describe and document how the AFI project is designed, implemented, and operated in the participating grantee sites. The data collection instruments contained in this request are central to the successful execution of both components of the study. Below, we describe the information we intend to gather, how it will be used, how the collection will be executed, and the intended benefits of the collection. We begin this discussion by describing the process for selecting the two AFI projects to be evaluated in this study.


Site Selection and Recruitment Process

As part of the AFI Evaluation, the evaluation team has undertaken several activities to identify potential study sites. Through an assessment of existing information on past and current AFI grantees and discussions with federal staff, the contractor has developed a short list of AFI grantees that are most suitable for participation in the evaluation. In assessing a grantee’s suitability for the evaluation, the contractor (with ACF’s input) laid out several criteria related to the grantee’s experience and service capacity. They identified grantees that met the following criteria:


  • Have received their first grant in 2006 or earlier (meaning that they have completed a full five-year grant period for at least one grant);

  • Have a grant that was active during FY 2011;

  • Have opened at least 600 IDAs across all of their grants; and

  • Show some indication of potential capacity to participate in the evaluation through meeting one of the following three criteria:

    • (1) having a new grant awarded in FY 2011 of at least $300,000;

    • (2) having a grant expiring in FY 2011-FY 2013 of at least $300,000 (possibly indicating the capacity to apply for a new grant of that size); or

    • (3) having at least one grant under which 400 or more IDAs were opened.

Using these criteria as a starting point, the research team narrowed down the list of 28 possible grantees based on six dimensions related to capacity/sample size and program structure:


  1. Median or above number of AFI grants held;

  2. Median or above average grant size;

  3. Median or above number of IDAs opened per $1,000 of grant;

  4. Whether the grantee offers all three asset types (i.e., homeownership, postsecondary education, and business startup or expansion);

  5. Match rate between 2:1 and 3:1; and

  6. Whether the grantee requires 5 or more hours of financial education.

Comparing the short list of grantees on these six dimensions enabled the research team to identify 12 grantees as the strongest candidates for evaluation. The research team held follow-up conversations with this group of 12 grantees to clarify information provided in their AFI grant applications, as well as data collected through the AFI Program Progress Reports (standard ACF PPR form with OMB Approval Number 0970-0334, expiration 10/31/2012). The evaluation sites will be selected based on the six criteria noted above, as well as their capacity to recruit a sufficient sample and their willingness to participate in the random assignment experiment. At this time, we have two sites still under active consideration for the study: Prosperity Works in Albuquerque, New Mexico, and the Community Financial Resource Center in Los Angeles. We anticipate that site agreements with these sites will be completed in September 2012.


In presenting the study’s findings, we will clearly indicate how the sites were selected and what this implies for the interpretation of the study’s findings.


It is also important to clarify how the interview respondents for the implementation study will be selected and recruited. The interviewees will be AFI grantee or subgrantee staff (including AFI program directors and IDA project managers) and other key stakeholders as appropriate. Notably, AFI grantee staffing varies from project to project and depends on such factors as administrative structure, implementation status, and the availability of non-Federal resources to support the staff. For many AFI grantees, the number of program staff is quite low. Through FY 2009, AFI grantees that had 150 or more accounts opened, as would be the case with the evaluation sites, averaged 2.19 full-time equivalent staff members.1 As a result, the research team will likely interview the full universe of AFI program staff in a particular grantee or subgrantee. Further, given the potential variation in AFI staff and key stakeholders in each site, the actual type and number of interviews will depend on the particular nature of the grantee organizations and project setup.


As appropriate, the research team will also conduct interviews with partner organizations and/or key stakeholders. For instance, they might interview a partnering financial institution representative or staff from a participant referral partner or financial education provider. Since we anticipate that these organizations and the number of staff who are connected to the AFI program will be small, we anticipate that we will again most likely interview the universe of relevant stakeholders and partners.


Information to Be Gathered

  1. Informed Consent Form: The informed consent form provides the study participants with important information about the study, their rights as study participants, the requirements of participation, the benefits and risks of participation, and efforts to maintain their privacy. The contractor will develop a web-based sample enrollment tool for project intake staff at each site to use in administering informed consent. It will be administered to participants at the time of application to the program, once they have been determined to be eligible to receive AFI services. Consent will be recorded electronically rather than on a hardcopy paper form requiring a signature. The consent form also provides notification that, should the individual decline to participate in the study, they will be unable to reapply for the AFI program at that specific grantee’s site for the period of sample enrollment, which will take up to two years. Collection of the participant’s consent through this form provides the Federal Government with assurance that the study participants have received all of the information required to make the decision to participate in the study and, having received that information, have consented to participate. The information collected here will be used only for this purpose and will not be contained in any publications. The informed consent form is included as Attachment E.


The baseline consent includes a statement informing participants that they may be asked to complete annual follow-up surveys for the next three years, allowing for the possibility that the evaluation will be extended beyond the current 12-month follow-up interval. Including this statement prevents having to obtain additional consent for an extended evaluation. The same follow-up instrument used at the 12th month would be used for later survey waves, so the multi-year consent remains fully informed. Further, gathering consent for an extended period of time is preferable to re-consenting annually. Annual re-consent would likely result in unacceptably high sample attrition among control cases, as these cases would have little reason to re-consent. (Prior to random assignment, the incentive to provide consent comes through one’s understanding that the only way to enter the AFI project is via random assignment, accepting a 50 percent chance of becoming a control case.) If follow-up surveys beyond the 12-month wave are to be undertaken with additional government funding, an amendment to the justification package will be submitted, revising the burden estimates to include the additional survey waves at the 24th and 36th months.


  2. Locating Materials: The locating materials in this package provide the evaluators with the participants’ updated contact information. This information is very important for tracking and locating study participants for the 12-month follow-up survey. This information will be gathered voluntarily, via US Postal Service, with respondents being given the opportunity to submit their updated contact information via a postage-paid envelope. The contact information gathered will only be used for tracking purposes and will not be contained in any publication. AFI program participants are lower-income individuals who often experience high mobility. As such, collecting these data on contact information is critical to successfully locating and then administering the follow-up survey to the study participants. The locating materials are included as Attachment G.


  3. AFI Baseline and Follow-Up Questionnaires: The evaluation intends to collect survey data from all study participants (i.e., members of both the treatment and control groups) at two different points – baseline at study intake and 12 months following random assignment. (As noted above, two additional annual follow-up surveys may be conducted if funding is available to extend the study to a three-year follow-up period.) The questionnaires are divided into sections covering demographics, financial experiences, and other main topics. These questions allow estimation of the incidence, prevalence, and patterns of savings, debt, and asset accumulation. The proposed questionnaire content is shown in the AFI Baseline Questionnaire and AFI Follow-Up Questionnaire (attached to this package). While the actual administration will be electronic, the document shown is a paper representation of the content that is to be programmed. Programmed screen shots of the site management system can be found in Attachment D.


    • The baseline survey will be administered using computer-assisted self-interviewing (CASI) and will take 30 minutes. Using a self-administered survey to supplement project data maximizes baseline response rates and information while maintaining data quality and cost efficiency. Adopting a self-administered approach for baseline data collection capitalizes on enrollees’ presence at the site, thereby ensuring a high response rate (estimated at 95 percent) for the baseline survey. Respondents will be encouraged to ask a site administrator for assistance if they have any difficulty navigating the survey or answering questions. If literacy or lack of familiarity with the computer becomes an issue during the intake procedures, site administrators will administer the survey to the enrollee.

    • The follow-up survey will be administered using computer-assisted telephone interviewing (CATI). It will be a 30-minute telephone survey undertaken approximately 12 months after random assignment for all sample members.


The information gathered in these two surveys is critical to estimating the impacts of AFI program participation and will be the principal source of data for the impact analysis. Even with random assignment, there will be some treatment-control differences in baseline characteristics as a result of sampling variation. Improved estimates of the treatment effect on measured outcomes will be obtained through multivariate models that include baseline survey items as explanatory variables, in addition to a dummy variable indicating the treatment-control status of each case. The administrative datasets typically maintained by AFI grantees would provide too few baseline characteristics on a consistently measured basis (for both treatment and control cases) to adequately control for such sampling variation.
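To make this estimation approach concrete, the sketch below shows one way a regression-adjusted impact estimate of this kind could be computed. It is a minimal illustration only, not the evaluation team’s actual specification; the file name and variable names (treatment, savings_12mo, and the baseline covariates) are hypothetical placeholders.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical analysis file: one row per study participant, containing the
    # 12-month outcome, the 0/1 treatment indicator from random assignment, and
    # selected baseline survey items used as covariates.
    df = pd.read_csv("afi_analysis_file.csv")  # placeholder file name

    # Regression-adjusted impact model: regress the outcome on treatment status
    # plus baseline characteristics, to absorb chance treatment-control
    # differences arising from sampling variation.
    model = smf.ols(
        "savings_12mo ~ treatment + baseline_savings + monthly_income + age + hh_size",
        data=df,
    ).fit()

    # The coefficient on 'treatment' is the adjusted estimate of the average
    # impact of being offered AFI project services.
    print(model.params["treatment"], model.bse["treatment"])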


  4. AFI Implementation Interview Instrument: The implementation study will describe and document how the AFI project is designed, implemented, and operated in the participating grantee sites. This information will provide the context for interpreting findings from the impact analysis. The evaluation intends to collect qualitative data on AFI program implementation through an implementation study questionnaire that will be administered to AFI program directors, IDA project managers, other project staff, and key stakeholders during site visits to the two participating AFI projects.


The implementation interview instrument is a semi-structured questionnaire, consisting of four parts: basic information on the grantee/subgrantee organization and the AFI project; project operations during sample recruitment and project enrollment; program services to cases assigned to the control group; and challenges faced regarding the AFI program evaluation.


The evaluation staff will conduct two-day site visits to each participating AFI project prior to the start of 12-month survey data collection. The proposed implementation interview instrument is included with this package (Instrument 3). This timing is important because it allows us to capture the program’s implementation and challenges at the time the study participants experience them. It also allows us to understand other factors that may have been at play while participants were in the program but that may not show up in the quantitative data.



3. Improved Information Technology to Reduce Burden


Site administrators will complete hardcopy screeners to determine applicant eligibility under AFI rules. Information from each screener will be recorded and reviewed on a monthly basis to facilitate nonresponse analysis during baseline data collection. Major demographic and economic characteristics of nonrespondents (versus respondents) will be analyzed periodically in each site (approximately every six months) to test for the presence of nonresponse bias. The baseline survey will be administered using computer-assisted self-interviewing (CASI). The follow-up survey will be administered using computer-assisted telephone interviewing (CATI).


The CASI/CATI technology affords a number of advantages in the collection of survey data. The CASI/CATI questionnaire will be programmed to implement complex skip patterns and to fill specific wordings based on answers previously provided by the respondent, assuring that only relevant and applicable questions are asked of each respondent. A second feature relates to the consistency of data. The computer can be programmed to identify inconsistent responses and can automatically resolve them through carefully worded respondent prompts. Respondent-resolved inconsistencies will result in data that are more accurate than when inconsistencies are resolved using editing rules. CASI/CATI technology also provides the respondent with a methodology that improves privacy and enhances response validity and response rates. For example, for key questions that determine net worth, we will ask respondents for the specific amount of an asset. If a specific amount cannot be given, we will display a range of values. Only if the respondent still cannot provide an estimate or range will they be able to select “don’t know” or “refused” for that item. The web-based system will also have help buttons on each screen. When applicable, additional information (e.g., the definition of a word) will be accessible by selecting the <Help> button.
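As an illustration of the amount-then-range prompting described above, the following sketch shows the general logic in simplified form. It is not the actual CASI/CATI program; the function, prompt wording, and response ranges are hypothetical.

    def ask_asset_value(ask):
        """Simplified sketch of the amount -> range -> don't know/refused cascade.

        `ask` is a hypothetical prompt function that displays a question and
        returns the respondent's entry, or None if no answer is given.
        """
        # First attempt: request a specific dollar amount.
        amount = ask("What is the current value of this asset? Enter a dollar amount.")
        if amount is not None:
            return {"type": "exact", "value": amount}

        # Second attempt: only if no exact amount is given, display a range of values.
        value_range = ask("Would you say it is under $500, $500-$1,999, $2,000-$9,999, or $10,000 or more?")
        if value_range is not None:
            return {"type": "range", "value": value_range}

        # Only after both attempts do "don't know" / "refused" become selectable.
        return {"type": ask("DON'T KNOW or REFUSED?"), "value": None}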



4. Efforts to Identify Duplication


While existing evaluations offer clues about AFI impacts and varying program features, there has been no randomized evaluation of AFI or its distinctive program components. Although AFI grantees are required to maintain information on the characteristics and program outcomes of their IDA participants, such administrative data systems are not sufficient to support this evaluation, as baseline measures and outcome measures must be consistently measured between treatment and control cases. To the extent that specific baseline characteristics are available on a consistently measured basis from the administrative data system operated by a selected evaluation site, we will make use of such data and remove these items from the baseline survey as administered at that site. With respect to outcome measures, the only such opportunity that may present itself will be with respect to the IDA balances held by AFI participants. For all other outcome measures, AFI administrative data systems will not provide comparable data for control cases, who (by design) do not participate in the AFI program. Even for IDA balances, however, the account-level data may not be available administratively, as the financial institutions that administer the IDA accounts may not have obtained the necessary authorization from accountholders to disclose such information.


5. Involvement of Small Business Organizations


To the extent that the AFI grantees or subgrantees participating in the evaluation are small businesses, the estimated burden of the implementation study questionnaire will be incurred by their administrators and staff. This questionnaire is a one-time data collection and (as indicated below) involves a small number of respondents with limited burden.


6. Consequences of Less Frequent Data Collection


The evaluation design calls for survey data to be collected at baseline and at the 12th month after baseline. If no baseline survey were conducted, impact estimates would be less precise due to the inability to take account of differences in measured characteristics between the treatment and control groups at the time of sample enrollment. If no follow-up survey were conducted, impact estimates could not be obtained for the relevant outcome measures, as the necessary data would not be available for control cases. The available data would be limited to administrative items such as covered earnings or benefit receipt (e.g., TANF and SNAP). The impact analysis would thus not include the basic outcomes relating to savings and asset purchases.


The evaluation design also calls for an implementation study to understand program implementation, operations, and challenges. This qualitative component is a critical piece of this evaluation. Not conducting this piece of the work would disadvantage the study and its findings in several ways. First, as noted, the implementation study will provide critical information about how the program was implemented in the study sites. This is helpful not only for understanding whether the program was implemented as intended, but also for describing the program that study participants actually experienced. Second, the implementation study will explore challenges to program implementation and operation in the study sites. These challenges may affect the study’s impact findings but would not be detectable in the quantitative data. Lastly, the implementation study will give evaluators a fuller picture of the context within which the program operated. Changes in the local community or in the organization operating the program, for example, may affect program implementation and effectiveness. Without this information on program implementation, operation, and challenges, the study would lack context needed to understand the impact findings.


Not collecting the qualitative and quantitative information described throughout this statement would substantially diminish the evaluation’s ability to meet the legislative evaluation requirements. Specifically, it would prohibit measurement of the impact of AFI participation on key outcomes, such as savings and asset purchases. Accordingly, the information the study produces would be less relevant to the policy and program decisions that must be made to support asset building and the economic self-sufficiency of low-income families.


7. Special Circumstances


There are no special circumstances for the proposed data collection efforts.


8. Federal Register Notice and Consultation


In accordance with the Paperwork Reduction Act of 1995, the Administration for Children and Families (ACF) at the Department of Health and Human Services published a notice in the Federal Register on February 23, 2012 (page 1072, FR Doc. 2012–3946). The notice provided a 60-day period for public comments, with comments due by April 24, 2012. A copy of the notice is shown in Attachment K. In response to the notice, ACF received one request for copies of the instruments but did not receive any substantive comments on the instruments or the proposed data collection.


The baseline and follow-up survey instruments were developed under contract to the ACF/HHS by the Urban Institute and its subcontractor, RTI International. Both organizations have been instrumental in the major previous IDA evaluation efforts. Under prior contract with ACF/HHS in 2009-2010, the Urban Institute was responsible for the development of design options for this ongoing phase of AFI evaluation research. During 2008-2009 under the foundation-supported American Dream Demonstration (ADD), RTI International implemented the 10th-year follow-up survey at the experimental IDA site in Tulsa, Oklahoma. While at Abt Associates, the Urban Institute’s current project director (Gregory Mills) led the four-year impact study of the ADD Tulsa site, as well as the first-phase non-experimental AFI evaluation.


The following list of consultants advised the Urban Institute on its development of design options for the current AFI evaluation: Kameri Christy-McMullin, William Gale, David Greenberg, David Kaufmann, Edmund Khashadourian, Kilolo Kijakazi, Alana Landey, Gretchen Lehman, Benita Melton, Gregory Mills, Zach Oberfield, Robert Plotnick, Ida Rademacher, Thomas Shapiro, Trina Williams Shanks, John Tambornino, and Beadsie Woo.


There are no unresolved issues resulting from these consultations.


9. Tokens of Appreciation for Respondents


Our study plan includes tokens of appreciation for respondents in the amount of $20 upon completing the baseline survey and another $20 upon completing the follow-up survey.


At baseline, IDA applicants are asked to complete a 30-minute self-administered baseline questionnaire as part of the program’s intake procedures. The offer of a $20 token of appreciation provided at this time will encourage individuals to enroll in the study and will engender good will among the study enrollees, important to their continued study cooperation as members of either the treatment or control group.


At follow-up, however, study participants are again asked to complete a 30-minute survey, administered to them by telephone. The survey again collects a significant amount of detailed financial information. Individuals may view the need to provide such information as burdensome. This may be especially true of control subjects who are not vested in the program. Thus, to prevent differential nonresponse between treatment and control groups, ACF recommends offering respondents another $20 as a token of appreciation in order to improve cooperation at follow-up. Estimates of program impacts may be biased if the respondents in each group are not comparable due to differential group nonresponse.


Many surveys are designed to offer incentives of varying types with the goal of increasing survey response. Monetary incentives at one or more phases of data collection have become fairly common, including in some federally sponsored surveys. Examples include the National Survey on Drug Use and Health (NSDUH, Substance Abuse and Mental Health Services Administration), the National Survey of Family Growth (NSFG, National Center for Health Statistics), the National Health and Nutrition Examination Survey (NHANES, National Center for Health Statistics), the National Survey of Child and Adolescent Well-Being (NSCAW, Administration for Children and Families), and the Early Childhood Longitudinal Study-Birth Cohort (ECLS-B, U.S. Department of Education).


There is an extensive literature on the relative efficacy of different monetary incentive amounts, and several federal agencies have determined $20 to be effective. The U.S. Census Bureau has experimented with and begun offering monetary incentives for several of its longitudinal panel surveys, including the Survey of Income and Program Participation (SIPP) and the Survey of Program Dynamics (SPD). SIPP has conducted several multi-wave incentive studies, most recently with its 2008 panel, comparing results of $10, $20, and $40 incentive amounts to those of a $0 control group. These studies examined response rate outcomes in various subgroups of interest (e.g., the poverty stratum), the use of targeted incentives for non-interview cases, and the impact of base wave incentives on later participation. Overall, $20 incentives increased response rates and improved the conversion rate for non-interview cases. Other research on the SIPP and SPD found that, even more than improving response at baseline, offering former nonrespondents an incentive in a subsequent wave improved their response rate (Creighton et al., 2007).


The National Survey on Drug Use and Health (NSDUH, Substance Abuse and Mental Health Services Administration) conducted an experiment in which the cost per interview in the $20 incentive group was 5 percent lower than in the control group, and the cost in the $40 incentive group was 4 percent lower than in the control group, due to the reduced effort needed to gain cooperation (Kennet et al., 2005). The NSDUH adopted an intermediate incentive of $30 because the greatest increase in response rate was found in the $20 incentive condition, while the $40 condition showed higher variation in per-interview costs. A similar incentive experiment conducted for the National Survey of Family Growth (NSFG, National Center for Health Statistics) Cycle 5 Pretest examined $0, $20, and $40 incentive amounts. The additional incentive costs were more than offset by savings in interviewer labor and travel costs (Duffer et al., 1994).


10. Confidentiality of Respondents


Concern for the privacy and the protection of respondents’ rights is a central part of the implementation of the AFI Evaluation and will be given the utmost emphasis. The information requested under this collection will be kept private in a manner consistent with 42 U.S.C. 1306, 20 CFR 401 and 402, 5 U.S.C. 552 (Freedom of Information Act), 5 U.S.C. 552a (Privacy Act of 1974), and OMB Circular No. A-130. (Attachment A: Notice of approval of Federal-Wide Assurance, submitted by RTI to the Office for Human Research Protections (OHRP), DHHS, in compliance with the requirements for the protection of human subjects (45 CFR 46).)


The baseline questionnaire uses techniques to afford privacy for the respondent during the interview process. The computer-assisted self-interviewing (CASI) portion of the instrument will maximize privacy by giving control of the sensitive questionnaire sections directly to the respondent. The CASI methodology allows the respondent to key his or her own responses into the computer via the keyboard.


With the CASI methodology, all data are entered privately by the respondent, and completed interview data are automatically stored electronically on the Contractor’s secured servers. Neither respondents nor interviewers are able to review or edit questionnaire data, as the completed interview files are locked. On the data file, respondents are identified only by a link number assigned to data files and questionnaires/interviews. Although the link number is associated with respondent information, this locating information is deleted by the Contractor before the delivery of data to ACF. The contact information, which is maintained in a separate file for Contractor use in sampling and analysis, is purged at the completion of data processing.


Although the respondent’s first name, address, and phone number will be collected within the baseline and follow-up interviews, this information will be used only for re-contact purposes. Once the CASI data are stored on the contractor’s server, the respondent’s contact information will be split off into a separate database with only the random number ID for linkage. The rest of the CASI data will be converted into a SAS data file format and merged onto the master data file.
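A minimal sketch of this separation step appears below, written in Python with hypothetical file and column names purely for illustration; the contractor’s actual processing produces SAS files, as noted above.

    import pandas as pd

    # Hypothetical combined CASI extract: survey responses plus contact fields,
    # keyed by a randomly assigned link ID.
    casi = pd.read_csv("casi_baseline_extract.csv")     # placeholder file name

    contact_cols = ["first_name", "address", "phone"]   # used for re-contact only

    # Contact information is split off into a separate, restricted-access file,
    # retaining only the random link ID for linkage.
    casi[["link_id"] + contact_cols].to_csv("afi_contact_file.csv", index=False)

    # The remaining survey data, stripped of contact fields, are merged onto the
    # master data file by the same link ID.
    survey = casi.drop(columns=contact_cols)
    master = pd.read_csv("afi_master_file.csv")         # placeholder file name
    master = master.merge(survey, on="link_id", how="left")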


The follow-up survey will be conducted over the telephone by telephone interviewers. All interviewers will receive training on the importance of keeping all information learned from respondents private. A confidentiality pledge will be read and signed by all interviewers during the project training process (Attachment B).


The implementation study will be conducted by experienced field interviewers from the Urban Institute and its subcontractor, MEF Associates. These interviewers will administer a semi-structured questionnaire through on-site personal interviews with administrators and staff of the AFI grantee/subgrantee organizations participating in the evaluation.


There will be no Privacy Act System of Records established for this effort.


11. Sensitive Questions


The Social Engagement and Outlook Section of the AFI Baseline and Follow up Questionnaires includes questions about physical and emotional health that some respondents may consider sensitive. In the CASI administration of the baseline questionnaire, the respondent enters his/her answers directly into the computer. Data from the electronic interviews are stored directly on the Contractor’s secured servers. The questionnaire data are processed immediately upon receipt at the Contractor’s facilities, and all links between a questionnaire and the respondent’s address are destroyed after all data processing activities are completed. The instructions and introduction to the baseline questionnaire will provide the required privacy assurances, explain that participation in the study is voluntary, and provide contact information for the study’s Principal Investigator.


The follow-up interview will be conducted by telephone interviewers trained in administering the AFI survey. The interviewer will administer the questionnaire over the telephone from a secure call center facility. When calling to conduct the AFI survey, the interviewer will ask the respondent to go to a private location for the duration of the interview. The interviewer will repeat the privacy assurances and note the respondent’s answers into the computer.


There are no questions of a sensitive nature on the implementation questionnaire.


12. Estimation of Information Collection Burden


As part of the impact study, 1,100 AFI-eligible study participants will complete a 30-minute self-administered baseline survey prior to random assignment (which will begin approximately December 2012 and continue through February 2014). An estimated 85 percent of the study participants, or 935, will complete a 30-minute telephone-administered follow-up survey approximately 12 months after random assignment (approximately February 2015 for the last-enrolled participants). As part of the implementation study to be conducted at the two evaluation sites during 2013-2014, a total of 30 administrators and staff members (i.e., executive directors of AFI grantee agencies, IDA project managers, and other project staff members) will participate in a 60-minute implementation interview. A more in-depth discussion of the process for, and rationale behind, selecting these 30 individuals is included in Section A.2 above.


The time per response is estimated at 30 minutes (0.5 hour) for the baseline survey, 30 minutes (0.5 hour) for the follow-up survey, and 60 minutes (1.0 hour) for the implementation interview.


The estimated annual burden (based on a three-year study duration) is 350 hours. See the table below for estimated annual burden for each type of instrument.


The annualized cost burden to respondents is based on the estimated burden hours and the assumed hourly wage rate for respondents. The assumed wage rate is the average hourly earnings for those in private, non-farm positions: $23.24 (http://www.dol.gov/dol/topic/statistics/index.htm). The estimated annualized cost (based on a three-year study duration) is $8,134. See the table below for the estimated annual cost burden for each type of instrument.


Annualized Estimated Burden for AFI Evaluation


Instrument | Annual Number of Respondents | Number of Responses per Respondent | Average Burden Hours per Response | Estimated Burden Hours | Hourly Wage Rate | Annualized Cost
AFI Baseline Questionnaire: AFI-eligible participants | 367 | 1 | 0.50 | 184 | $23.24 | $4,276
AFI Follow-up Questionnaire: AFI-eligible participants | 312 | 1 | 0.50 | 156 | $23.24 | $3,625
AFI Implementation Interview Instrument: Administrators and staff | 10 | 1 | 1.00 | 10 | $23.24 | $232
Total | | | | 350 | | $8,134
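As a worked check of the figures in the table above (not part of the data collection itself), the arithmetic can be reproduced as follows; the rounding conventions shown are assumptions made for illustration.

    # Worked check of the annualized burden and cost estimates (three-year clearance).
    WAGE = 23.24   # assumed average hourly earnings, private non-farm positions

    instruments = [
        # (instrument, total respondents over three years, hours per response)
        ("AFI Baseline Questionnaire",              1100, 0.5),
        ("AFI Follow-up Questionnaire",              935, 0.5),   # 85% of 1,100
        ("AFI Implementation Interview Instrument",   30, 1.0),
    ]

    total_hours = 0
    for name, respondents, hours_per_response in instruments:
        annual_respondents = round(respondents / 3)                     # 367, 312, 10
        annual_hours = round(annual_respondents * hours_per_response)   # 184, 156, 10
        annual_cost = round(annual_hours * WAGE)                        # $4,276; $3,625; $232
        total_hours += annual_hours
        print(f"{name}: {annual_respondents} respondents, {annual_hours} hours, ${annual_cost:,}")

    # The total annualized cost is computed from total hours, so it differs by $1
    # from the sum of the rounded per-instrument costs.
    print(f"Total: {total_hours} hours, ${round(total_hours * WAGE):,}")  # 350 hours, $8,134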


13. Additional Cost Burden to Respondents and Record Keepers


There are no additional costs beyond those outlined in A.12 above.


14. Estimate of Cost to the Federal Government


ACF is funding the costs of the study. The annualized cost to the Federal Government for the baseline, 12-month follow-up, and implementation data collection is $138,504, covering survey development and administration by RTI International.


15. Change in Burden


This is a new collection.



16. Plan and Time Schedule for Information Collection, Tabulation and Publication


Plans for the AFI Evaluation involve a final report deliverable and several dissemination activities. The final report will include a literature review, details on the design of the experiment, the evaluation methodology, and findings from the data analyses. Dissemination activities will include producing one or more policy briefs or one-page fact sheets that highlight findings from the final report for policymakers, practitioners, advocates, and researchers.

AFI Program Evaluation Schedule


Informed consent and baseline data collection: 4-18 months after OMB clearance

Implementation study data collection: 14-16 months after OMB clearance

Locating materials mailed: 10-24 months after OMB clearance

12-month follow-up data collection: 16-30 months after OMB clearance

Data analysis: 31-33 months after OMB clearance

Final report issued: 36 months after OMB clearance


17. Reasons Not to Display OMB Expiration Date


The OMB expiration date will be displayed on all AFI evaluation data collection materials.


18. Exceptions to Certification for Paperwork Reduction Act Submissions


No exceptions to the Certification for Paperwork Reduction Act Submissions (5 CFR 1320.9) are requested for this data collection.


1 U.S. Department of Health and Human Services, Administration for Children and Families, Office of Community Services. 2010. Assets for Independence Program: Status at the Conclusion of the Tenth Year. Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Community Services.

