AFI OMB Justification Package - Part B - 10.30.14


Assets for Independence (AFI) Program Evaluation

OMB: 0970-0414







SUPPORTING STATEMENT B FOR INFORMATION COLLECTION FOR THE

ASSETS FOR INDEPENDENCE (AFI)

PROGRAM EVALUATION










Office of Planning, Research, and Evaluation

Administration for Children and Families

U.S. Department of Health and Human Services

370 L'Enfant Promenade, SW

Washington, DC 20447




August 2012












B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS


This document presents Part B of the Supporting Statement for a series of data collection activities for the Assets for Independence (AFI) Program Evaluation (hereafter, AFI Evaluation). This request is for a new collection. This submission seeks OMB approval for three data collection instruments: surveys of the enrolled study sample at baseline (i.e., at intake to the programs studied) and 12 months following baseline, and interviews to be conducted with program administrators, staff, and other stakeholders involved in implementing the evaluation:


  • Baseline Survey

  • 12-Month Follow-Up Survey

  • Implementation Interviews


1. Respondent Universe and Sampling Methods


Site Selection and Eligibility. As discussed in Supporting Statement A, the evaluation team implemented a rigorous set of activities to identify and screen potential AFI grantee sites. The criteria for site selection are listed below:


  • Received their first grant in 2006 or earlier (meaning that they have completed a full five-year grant period for at least one grant);

  • Had a grant that was active during FY 2011;

  • Had at least 600 IDAs opened across all of their grants; and

  • Showed some indication of potential capacity to participate in the evaluation through meeting one of the following three criteria:

(1) had a new grant awarded in FY 2011 of at least $300,000;

(2) had a grant expiring in FY 2011-FY 2013 of at least $300,000 (possibly indicating the capacity to apply for a new grant of that size); or

(3) had at least one grant under which 400 or more IDAs were opened.


Using these criteria as a starting point, the research team narrowed the list of possible grantees based on six dimensions related to capacity/sample size and program structure, discussed in greater detail on page 4 of Supporting Statement A. The research team then held non-standardized follow-up conversations with this group of 12 grantees to follow up on information provided in their AFI grant applications, as well as on data collected through the AFI Program Progress Reports (ACF PPR form with OMB Approval Number 0970-0334, expiration 10/31/2012). The final evaluation sites will be selected based on the six dimensions noted above, as well as on their capacity to recruit a sufficient sample and their willingness to participate in the random assignment experiment. At this time, two sites remain under active consideration for the study. We anticipate that site agreements with these sites will be completed in September 2012.


Implementation Study Sample Selection. The individuals to be interviewed for the implementation study will be AFI grantee and subgrantee staff, including AFI program directors, IDA project managers, and other key stakeholders as appropriate. Notably, AFI grantee staffing varies from AFI project to project and depends on such factors as administrative structure, implementation status, and the availability of nonFederal resources to support the staff. For many AFI grantees, the number of program staff is quite low. Through FY 2009, AFI grantees that had 150 or more accounts opened, as would be the case with the evaluation sites, averaged 2.19 full-time equivalent staff members.1 As a result, the research team will likely interview the full universe of AFI program staff at a particular grantee or subgrantee. As appropriate, the research team will also conduct interviews with partner organizations and/or key stakeholders. For instance, they might interview a partnering financial institution representative or staff from a participant referral organization. Since the number of staff from these organizations who are connected to the AFI program is expected to be small, we anticipate that we will most likely interview the universe of relevant stakeholders and partners.


Respondent Universe. The respondent universe for the Assets for Independence (AFI) Program Evaluation consists of persons aged 18 years and older who reside within the selected site areas and meet the site-specific eligibility criteria to take part in the evaluation. Study participants will be low-income workers applying to receive Individual Development Accounts (IDAs) from AFI grantee institutions. IDAs are savings accounts that can be used to fund small-business development, higher education, or the purchase of a first home. Respondents also include grantee staff who will participate in the implementation study.


Study Eligibility. Study eligibility criteria can vary slightly across sites, but the main criteria are listed below; an illustrative sketch of how a site might apply them follows the list:


  • Annual household income must be below 200 percent of the federal poverty level ($21,780 for an individual, $29,420 for a couple) or below the eligibility level for the federal Earned Income Tax Credit; and

  • Household assets must be less than $10,000, excluding a residence and one vehicle.
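The following minimal Python sketch illustrates how a site might apply these two criteria at intake. It is illustrative only: the income thresholds are the 200-percent-of-poverty figures quoted above, the Earned Income Tax Credit test is reduced to a single placeholder ceiling, and the function and field names are hypothetical rather than part of any site's actual intake system.

def meets_afi_study_eligibility(annual_income, household_size, household_assets,
                                eitc_income_limit=None):
    """Illustrative check of the two main study eligibility criteria.

    annual_income     -- annual household income in dollars
    household_size    -- 1 for an individual, 2 for a couple (other sizes not shown)
    household_assets  -- countable household assets in dollars, already excluding
                         a residence and one vehicle
    eitc_income_limit -- optional EITC eligibility ceiling for this household
                         (placeholder; actual EITC rules depend on filing status
                         and number of qualifying children)
    """
    fpl_200 = {1: 21780, 2: 29420}  # 200 percent of the federal poverty level, as quoted above
    below_fpl_200 = (household_size in fpl_200
                     and annual_income < fpl_200[household_size])
    below_eitc = (eitc_income_limit is not None
                  and annual_income < eitc_income_limit)

    income_ok = below_fpl_200 or below_eitc
    assets_ok = household_assets < 10000  # assets under $10,000, residence and one vehicle excluded

    return income_ok and assets_ok

# Example: an individual earning $18,000 with $4,500 in countable assets
print(meets_afi_study_eligibility(18000, 1, 4500))  # True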


2. Procedures for Collection of Information


Sample Design

The sample design calls for the evaluation to be conducted in two sites, each with random assignment of between 500 and 600 AFI-eligible cases. The sites will enroll their study samples within a 15-month period from approximately December 2012 through February 2014. Each site will randomly assign sample members to one of two groups: a control group and a treatment group receiving conventional AFI services.


Estimation Procedures

In voluntary programs such as AFI, random assignment takes on special importance for evaluation because under normal, nonexperimental settings people who enter an AFI project may differ in unobservable ways from AFI nonparticipants. Multivariate statistical techniques can account for observable differences (and correlated unobservables), but have difficulty dealing with such unobservables as a client’s work ethic, propensity to save, or self-motivation. If participants are similar to nonparticipants in observed characteristics but are more motivated and thus apply for AFI, the better outcomes for AFI participants may result from the more favorable unobserved characteristics of participants, not from the project itself. With random assignment, the sample of individuals allowed to participate is likely very similar on both observable and unobservable characteristics to the control group sample not allowed to participate.


Random assignment is an effective tool for estimating the effects of alternative AFI project features as well. Although AFI grantees vary naturally in their match rates, required hours of financial education, and other program components, it is risky to use such variation as the basis for natural experiments in program design. One reason is that program variations may reflect other fundamental differences in program settings, such as differences in the financial capacities of the administering program agencies. In addition, the client subpopulations attracted to these differing program models are likely to vary in important ways, confounding any attempt to isolate the effects of the program features from the underlying heterogeneity of the clientele.


Degree of Accuracy Required

A key issue is whether the design is strong enough to detect the expected effects. Per-group sample sizes must be sufficient to detect reasonable differences between the treatment condition (A) and the control condition (B), while accounting for loss of sample due to nonparticipation, dropping out, and survey nonresponse and attrition. The statistical power of a study design is the probability of detecting a real difference between two groups on outcomes of interest. Sample size is the primary determinant of the power of the experiment.


Equal allocation of the sample across groups maximizes the efficiency of the sample; unequal allocation requires larger total sample sizes to achieve the same level of power. Some experiments allocate less sample to control groups to gain the participation of sites that might object to a large percentage of applicants being denied the service. For this evaluation, we plan to utilize equal allocation of the sample at each site between the treatment and control groups.


Consider a two-group design in two sites, with total samples of 600 and 500. For survey-measured participant outcomes, assuming a follow-up survey response rate of 85 percent, the expected per-group sample sizes will be 255 (0.85 x 300) for the first site and 213 (0.85 x 250) for the second site. For pooled analysis of survey outcomes across both sites, the per-group sample will be 468 (255 + 213).


One common way to measure power is to calculate the minimum detectable effect (MDE) as a fraction of the standard deviation of a given variable in the sample. The MDE is “the smallest effect that, if true, has an X percent chance of producing an impact estimate that is statistically significant at the Y percent level,” where X is the desired statistical power and Y is the desired significance level (Bloom 1995).


Our MDE calculations are based on standard statistical assumptions: a desired power of 80 percent, a significance level (two-sided) of 5 percent, and a control-group mean outcome value of 0.50 for a survey-measured short-term outcome such as the incidence of material hardship.


As shown in Attachment C, the minimum detectable effects under balanced two-group designs in each site (equal numbers randomly assigned to groups A and B) are 0.123 in the first site and 0.134 in the second site for the treatment-to-control comparisons. These numbers represent proportional effects of 25 to 27 percent. As also shown in Attachment C, pooled-site estimates provide greater precision, resulting in an MDE of 0.091 (proportionally, 18 percent) for the treatment-to-control comparison under the balanced two-group designs. The MDEs are conservative to the extent that they do not account for the intended use of multivariate models in estimating treatment effects; the inclusion in these models of explanatory variables measured in the baseline survey is expected to improve the precision of impact estimates.
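As an illustrative cross-check (not part of Attachment C), these MDEs can be approximated with the standard formula for a balanced two-group comparison of proportions, MDE ≈ (z_(1-alpha/2) + z_power) × sqrt(2p(1-p)/n), where p is the control-group mean (0.50) and n is the per-group sample size. The short Python sketch below applies this formula under the stated assumptions (80 percent power, 5 percent two-sided significance level); it reproduces the Attachment C figures to within about 0.002, with the small differences attributable to rounding and to the exact distributional assumptions used there.

from statistics import NormalDist

def mde_proportion(n_per_group, p=0.50, power=0.80, alpha=0.05):
    """Approximate minimum detectable effect for a balanced two-group
    comparison of a proportion with control-group mean p."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = z.inv_cdf(power)          # value corresponding to the desired power
    se = (2 * p * (1 - p) / n_per_group) ** 0.5  # standard error of the group difference
    return (z_alpha + z_power) * se

# Expected per-group follow-up samples: 255 (site 1), 213 (site 2), 468 (pooled)
for n in (255, 213, 468):
    print(n, round(mde_proportion(n), 3))
# Prints roughly 0.124, 0.136, and 0.092, versus 0.123, 0.134, and 0.091 in Attachment C.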


The minimum detectable effects for pooled estimates on first-year outcomes are within the range of SIPP-estimated effects of liquid asset holdings on material hardship among households in the lowest income quintile.2 Specifically, modest liquid asset holdings (up to $2,000) were associated with significantly lower rates of hardship in the following year for six of eight tested measures relating to health, housing, and food security. The estimated proportional effects were 25 to 33 percent for four of these measures and 10 to 15 percent for the other two.


In the context of this study, pooled estimates are not subject to the degree of cross-site variance that might normally be present, as the two sites (both in southwestern U.S. metropolitan areas) have similar client demographic characteristics.



Data Collection Procedures

Baseline Survey and Randomization: Our technical approach to information collection procedures for the baseline survey includes a web-based tool that the evaluation sites will use at project intake to consistently and efficiently conduct three front-end activities for AFI project applicants as soon as they are determined project-eligible: informed consent to be subject to random assignment, baseline data collection, and random assignment itself. With the study implemented at multiple intake locations (at a minimum, one location for each of the two participating AFI grantees), with intake periods of up to 15 months, and with analysis possibly undertaken with pooled data, it is essential that these three activities be implemented uniformly over time and across sites.


As noted earlier, AFI project data will inform the design of our baseline questionnaire; however, baseline data will be collected via an in-person computerized self-administered questionnaire. Using a self-administered survey to supplement project data allows us to maximize baseline response rates and information while maintaining data quality and maximizing cost efficiency. Adopting a self-administered approach for baseline data collection capitalizes on the enrollee's presence at the site, thereby ensuring a very high response rate (estimated at 95 percent) on the baseline survey. If literacy is a major problem or lack of familiarity with the computer is a concern, site intake staff will assist with survey administration. Staff will be trained to administer the survey, including the ability to explain financial terms. Respondents will be encouraged to ask a site administrator for assistance if they have any difficulty navigating the survey or answering questions.


For the initial data collection, RTI will develop an integrated web-based tool that maximizes the accuracy of collected data and ensures a smooth and quick transition of data between the stages of processing across evaluation sites. The system will include three key components: a web-based site management system, a baseline survey with programmed randomization algorithms for assignment to treatment and control groups, and a secure system database that contains individual case status information that will be used for site based reports.


Web-based site management system: The site management system will provide on-demand access to information and facilitate reporting and communication among the project management team. The system will be a central hub that designated site administrators and project staff members can access through an Internet browser. This system will also allow site administrators and project enrollees from the grantee organizations to provide initial intake information to determine eligibility for the experiment, provide informed consent to participate in the research study, complete the baseline study questionnaire, and be randomly assigned to either a treatment or control group. Screen shots of the system and selected instrument questions can be found in Attachment D.


Informed consent: Before administering the baseline survey, site administrators will review the informed consent online with participants. The informed consent form will provide participants with enough information to make an informed decision about participation, including information about the experiment's purpose, the procedures used, and the benefits and risks of participating. Site administrators will acknowledge receiving informed consent in the programmed survey. Attachment E contains a copy of the informed consent language. Because the instrument for any later follow-up waves would be the same as the instrument used at the 12th month, the multi-year consent remains an informed consent. The consent form indicates a three-year duration for the evaluation. This allows for the possibility of the evaluation being extended beyond the current 12-month follow-up period without requiring any re-consent of the enrolled sample members. Annual re-consent would likely result in unacceptably high sample attrition among control cases, as these cases would have little reason to re-consent. (Prior to random assignment, the incentive to provide consent comes through one's understanding that the only way to enter the AFI project is via random assignment, accepting a 50 percent chance of becoming a control case. Once assigned to the control group, individuals have no further incentive to accept this restriction on their access to IDA services.) If the evaluation is extended, the proposed additional information collection will be submitted to OMB.


Baseline interview: Upon recording acknowledgement of informed consent directly into the computer, site administrators will provide participants with an on-site computer, assist participants with the login procedures to the baseline survey, and answer any questions as the survey progresses. If the participant cannot read or has difficulty using the computer, the site administrator will be trained to administer the survey to him or her.


The baseline survey will be designed with an easy-to-use interface so respondents can easily follow the flow of the survey. Each survey page will contain a limited number of questions, making pages easy to read and quick to load (in less than three seconds). When needed, drop-down boxes can be used to provide respondents with their answer options. Any skip logic will be programmed into the survey so it happens automatically and users can continue with the next question or section. The survey will also be programmed so users can stop at any point and return to exactly where they left off. This way, respondents who are interrupted while taking the survey can pick up later without having to start over again. Respondents will have the option of selecting “don’t know” and “refused” for survey questions, such as those requesting a specific dollar amount of an asset. If a specific amount cannot be given, we will display a range of values. If the respondent still cannot provide an estimate or range, they will be able to select “don’t know” or “refused” for that item. The web-based system will also have help buttons on each screen. When applicable, additional information (e.g., the definition of a word) will be accessible by selecting the <Help> button. Programmers and instrument development staff will thoroughly test and debug the system. Sample screen shots are included as Attachment D.


Randomization: At the end of the interview, the online system will use a predetermined algorithm to assign participants to the treatment or control group. The algorithm will give each individual the same probability of random assignment to each group. Participants assigned to the treatment group will receive a message stating that they are eligible to enter the AFI project. Participants assigned to control status will be notified that they can access non-AFI services but cannot enter the AFI project.
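A minimal sketch of how such an assignment step might be implemented in Python is shown below. It assumes a simple permuted-block scheme (blocks of four) to keep the 50/50 allocation approximately balanced over time; the actual predetermined algorithm, its block size, and any stratification are design details of the production system and are not specified here.

import random

def block_assignments(block_size=4, seed=20121201):
    """Generate 'treatment'/'control' labels in randomly shuffled blocks so each
    participant has an equal chance of either status while the running allocation
    stays close to 50/50 (the block scheme is an illustrative assumption, not the
    evaluation's actual algorithm)."""
    rng = random.Random(seed)  # fixed seed shown only to make the example reproducible
    while True:
        block = ["treatment", "control"] * (block_size // 2)
        rng.shuffle(block)
        for status in block:
            yield status

# Example: assign the first four enrollees at a site
assigner = block_assignments()
for participant_id in ("P001", "P002", "P003", "P004"):
    print(participant_id, next(assigner))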


Web site security: Since the information contained in this web site will be sensitive, security is important. The web site’s membership comprises the RTI, Urban Institute, MEF Associates, and ACF project teams, as well as site administrators designated by ACF. The web site uses Secure Sockets Layer (SSL) encryption to create a secure, gated environment in which access is restricted; only authorized users will be allowed entry into specific areas and granted certain functional privileges.


12-Month Follow-up Survey:

Tracing. Sample mobility and panel attrition are familiar challenges to longitudinal studies. To rigorously determine the effects of AFI participation, study participants will be tracked so they can be interviewed for the follow-up survey. We will implement panel maintenance activities prior to and during the follow-up survey that focus on keeping an up-to-date database of sample member contact information, to minimize attrition and nonresponse due to incomplete or incorrect contact information. One round of sample maintenance will be conducted before the follow-up survey, and locating activities will be conducted during its administration. The combination of the two will produce accurate contact information on sample members at reasonable cost. Attachment F contains panel maintenance materials.


A round of sample maintenance will be conducted in early fall 2013, prior to the launch of the follow-up survey. Sample members’ names will be submitted to batch tracing; they will also be sent a letter asking them to update their information by mail. We expect approximately 15 percent of the sample to return update cards based on results of previous efforts.


Any sample members not located for the 12-month telephone interview will be traced by interactive tracing experts who have access to a variety of databases to locate and verify current addresses and telephone numbers. Interactive tracing specialists contact friends and relatives, use crisscross directories to identify neighbors, contact directory assistance for possible updates, and use a management system to keep a history of calls to subjects and contacts.


If interactive tracing does not yield good contact information, trained field representatives will attempt to visit the last known address. Trained, experienced staff will investigate physical locations to verify or disprove the subject’s reported location. Field tracers are trained to establish trust and elicit information from a subject's relatives, neighbors, schools, business associates, and government agencies. If the sample member is no longer at the address, the field tracer will attempt to locate the individual or someone who knows the sample member, following procedures proven to be effective in other studies. For example, at apartment buildings, the field representative will try to get information from the manager; at abandoned residences, the field representative will visit neighbors. When found, sample members will be asked to call and complete the survey, provide a telephone number, or schedule an interview. Attachment G contains field locating materials.


Lead Letters. Using a letter to inform households about a forthcoming telephone call and giving them a general description of the survey being conducted has been shown to increase survey response rates (DeLeeuw, 2007). The letter will: 1) inform sample members of the purpose of the AFI Evaluation; 2) provide useful information regarding the survey; 3) include a toll-free telephone number that respondents can call if they have questions; and 4) include information regarding the incentive that will be offered to respondents who agree to participate. Attachment H contains a copy of the lead letter.


Interviewer Training. A comprehensive training manual will guide interviews. All telephone interview staff will be trained on the study background, methods for administering the questionnaire, confidentiality and informed consent requirements, question-by-question item review, refusal avoidance techniques, ways to maximize response rates, and quality control and performance expectations. At the end of training, all telephone interviewers will be certified for data collection by successfully completing a certification interview. Skills to be assessed include ability to accurately explain the purpose and goals of the project, ability to gain cooperation, refusal avoidance and conversion skills, effective communication skills, ability to adhere strictly to informed consent and questionnaire scripts, and ability to deal with collecting financial information (appropriate probing techniques).


Data Collection. Households will be contacted by telephone approximately one week after the lead letter has been sent. Lead letters (Attachment H) will include an invitation to take the survey via the web using a unique username and password. Interviewers will introduce themselves, ask to speak to the selected respondent, and (when applicable) state, “You may have received a letter from us,” then inform the potential participant about the study and proceed with the introductory script and informed consent (Attachment I). The 12-month follow-up self-administered survey will contain the same security settings as the self-administered baseline interview.


Implementation Study. The main data collection approach will be site visits to the AFI grantees that involve interviews with key administrators, staff, and stakeholders; observations of grantee services; and reviews of relevant documents and data.


Site Visit Interviews. The site visits to the selected AFI grantees will be two days each and will occur toward the end of the period of baseline data collection and random assignment. At that point, we anticipate that the earliest enrolled study participants will be nearing (or will have reached) their 12th month after random assignment, although the projects will be continuing to assign and enroll new participants. This timing is important because it allows us to capture the program’s implementation and challenges as study participants experience them. It also allows us to understand contextual factors, not necessarily captured in quantitative data, that may have been at play while participants were enrolled. Before the visit, we will review information from the site selection process to identify any clarification questions and to help structure and streamline the site visit activities. While on site, we will conduct semi-structured in-person interviews with individuals in differing roles to obtain a range of perspectives on the AFI project. The interviews will depend on the particular nature of the grantee organizations and project setup, but will likely include:


  • IDA project and grantee administrators, both on site and at a central project office;

  • IDA site staff, including the financial education provider; and

  • partner organizations, such as financial institution representatives or staff from a participant recruiting organization.


Procedures with Special Populations

Two versions of the baseline and follow-up instruments will be prepared: an English version and an Other Language version. The other language will likely be Spanish, but will depend on the sites chosen and the populations served by those grantees. Both versions will have the same essential content.


3. Methods to Maximize Response Rates and Deal with Nonresponse


Baseline Survey. All individuals who agree to participate in the evaluation must complete the baseline instrument in order to have the opportunity to be randomly assigned to the AFI project. It is possible, however, that a small number of individuals will refuse or “break-off” the interview, leading to a less than 100 percent response rate at baseline. Nonetheless, a high response rate of 95 percent is expected for this instrument.


Site administrators will complete hardcopy screeners, to determine application eligibility under AFI rules and to record basic client characteristics. Information from each screener will be recorded and reviewed on a monthly basis to facilitate non-response analysis during baseline data collection. Major demographic and economic characteristics of nonrespondents (versus respondents) will be periodically analyzed in each site (approximately every six months) to test for the presence of nonresponse bias.
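As an illustration of what such a periodic nonresponse check might look like, the sketch below compares respondents and nonrespondents on a single binary screener characteristic using a two-sample test of proportions. The variable, the counts, and the choice of test are assumptions made for the example only; the actual nonresponse analysis may examine different characteristics and use different methods.

from statistics import NormalDist

def two_proportion_z_test(successes_a, n_a, successes_b, n_b):
    """Two-sided z-test for a difference in proportions between two groups
    (e.g., baseline respondents versus nonrespondents on a binary screener item)."""
    p_a, p_b = successes_a / n_a, successes_b / n_b
    pooled = (successes_a + successes_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical screener counts: share of single-person households among
# baseline respondents versus nonrespondents at one site
z, p = two_proportion_z_test(successes_a=120, n_a=280, successes_b=14, n_b=25)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")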


Follow-up Survey. To maximize interview response rates, the proactive tracing strategies described above (panel maintenance letters and batch tracing) will be implemented before 12-month follow-up data collection begins. Response rate outcomes will be routinely reviewed during the data collection period to identify the root causes of nonresponse and to develop strategies to increase response rates. Our goal, as for many longitudinal studies, is to achieve at least an 85 percent response rate. We will review cases to determine whether nonresponse results from difficulties contacting respondents, difficulties gaining their cooperation, or interviewers working fewer hours than expected. When the major causes of nonresponse are identified, tailored strategies will be put into place; such strategies may include increasing calling effort during specific call windows or developing different scripts to address specific respondent concerns.

Respondent Tokens of Appreciation. Sample members who complete the baseline survey will receive $20 for their participation, and will receive $40 for their participation in the 12-month follow-up survey. This token of appreciation will be mentioned first in the consent form and again in the lead letter (Attachment J) sent to sample members prior to the follow-up survey launch. In each instance, the token of appreciation is intended to encourage, but not obligate, participation. For the baseline survey, the token will be provided in a manner decided by the program agency, either in person or by mail. For the follow-up survey, the token will be mailed to respondents within 2 to 4 weeks after survey completion.


As discussed in Supporting Statement A, a wide variety of research has shown that tokens of appreciation or incentives improve response rates in telephone surveys (Singer, 2002; Cantor, O’Hare, and O’Connor, 2007). Incentives can help gain cooperation through fewer calls, which can help make their use cost effective. Additionally, studies have shown that modest incentives are not coercive (Singer and Bossarte, 2006). Thus, implementing an incentive plan can be a cost-effective way for surveys to improve response rates and lower refusal rates, and could, over the course of data collection, actually reduce costs and burden to respondents by reducing the need for additional calls to potential respondents.


The project team reviewed many designs for this study to maximize participation in the follow-up survey, where panel attrition is expected. One consideration was whether to provide tokens of appreciation before the interview (prepaid) or after the interview (promised). Many studies in the survey literature find prepaid incentives to be more effective than promised incentives (e.g., Linsky, 1975 and Armstrong, 1975 for an overview; Church, 1993). However, this has not been demonstrated in the context of a program evaluation with random assignment. As noted in Supporting Statement A, sample members assigned to the control group will be less motivated to complete the follow-up survey. Furthermore, prepaid tokens of appreciation may elicit different responses from respondents in the treatment group, who maintain an ongoing relationship with the program, than from respondents in the control group, who do not. Lacking evidence that a prepaid token would result in less differential nonresponse, we opt to provide more traditional promised tokens of appreciation.


Various studies have demonstrated significant effects of promised incentives compared to a no-incentive condition. For example, Cantor et al. (2003) found an almost 10 percent increase in response rate when promising $20 (versus no incentive) in an RDD survey. In a meta-analysis of 39 controlled experiments, Singer et al. (1998) found that the effect of prepaid incentives on response rates did not differ significantly from the effect of promised incentives. Other studies (e.g., Yu and Cooper, 1983) also found that promised tokens of appreciation significantly improved response rates.


Interviewer Training. Response rates vary greatly across interviewers (e.g., O’Muircheartaigh and Campanelli 1999). Improving interviewer training has been found effective in increasing response rates, particularly among interviewers with lower response rates (Groves and McGonagle 2001). The following interviewing procedures will be used to maximize response rates:

  1. Interviewers will be briefed on the potential challenges of administering a survey on financial experiences with low-income families. Well-defined conversion procedures will be established.

  2. If a respondent initially declines to participate, a member of the conversion staff will re-contact the respondent to explain the importance of participation. Conversion staff are highly experienced telephone interviewers who have demonstrated success in eliciting cooperation. Conversion staff will be able to provide a reluctant respondent with the name and telephone number of the contractor’s project manager who can provide respondents with additional information regarding the importance of their participation.

  3. A toll-free number, dedicated to the project, will be established so potential respondents may call to confirm the study’s legitimacy.


Refusal avoidance training will take place approximately two to four weeks after data collection begins. During the early period of fielding the survey, supervisors, monitors, and project staff will observe interviewers to evaluate their effectiveness in dealing with respondent objections and overcoming barriers to participation. They will select a team of refusal avoidance specialists from among the interviewers who demonstrate special talents for obtaining cooperation and avoiding initial refusals. These interviewers will be given additional training in specific techniques tailored to the interview, with an emphasis on gaining cooperation, overcoming objections, addressing the concerns of gatekeepers, and encouraging participation. If a respondent refuses to be interviewed or terminates an interview in progress, interviewers will attempt to determine the respondent’s reason(s) for refusing to participate by asking the following question: “Could you please tell me why you do not wish to participate in the study?” The interviewer will then code the response and any other relevant information. Particular categories of interest include “Don’t have the time,” “Inconvenient now,” “Not interested,” “Don’t participate in any surveys,” and “Opposed to government intrusiveness into my privacy.”


Quality Circle Meetings. The contractor will hold weekly QC meetings with interviewers and supervisors to discuss data collection progress and issues. Our experience has shown that these sessions build rapport and enthusiasm among interviewers and project staff, allow project staff to identify important refusal conversion strategies, assist in the refinement of the instrument, and provide ongoing training for staff. Such meetings have identified previously unrecognized problems with a CATI instrument, such as questions that the respondent does not understand, questions that are difficult to administer, and software problems. These sessions also provide feedback on the data collection procedures and systems.


Data Review. We will periodically review data frequencies from the CATI survey to ensure that the program is working as intended and also to identify areas for interviewer feedback. We will review for high item-level nonresponse rates, recording of complete verbatim responses and contact information, and questions that may be unclear or confusing to interviewers and sample members.


4. Tests of Procedures or Methods to be Undertaken


A preliminary cognitive assessment of the instrument content and format has informed refinements to the baseline and follow-up survey instruments. In February 2012, nine cognitive interviews were conducted by survey methodologists experienced in cognitive interviewing methods.


Eligibility and Consent. Project staff and their family members were not eligible to participate in the cognitive test. All participants signed a consent form prior to beginning the interview, which was read to them by the interviewer. A copy of the form was provided for the participant’s records. The consent form included a separate request to audio record the interview to facilitate note-taking, with recordings to be destroyed shortly after the summary reports were prepared and analyzed. All reports were written in a common summary shell.


Testing Procedures. During the cognitive interview, a portion of participants were asked to complete the hardcopy baseline survey instrument on their own. To maximize confidentiality during the interview, participants were instructed to record only first and last initials when answering the household demographic items, and to enter “Xs” for their phone number. After completing the demographic portion of the survey, they participated in a guided think-aloud process with the interviewer in which the respondent was asked to discuss individual questions and response sets in the instrument to gauge their ease or difficulty in completing the survey, their ability to successfully navigate through the instrument (for example, following instructions and marking answer choices for the online baseline survey), and their understanding of definitions and terminology in the survey.


The interviews averaged approximately 30 minutes and included a review of a number of questionnaire items, including some that had been cognitively tested previously for the Survey of Income and Program Participation (SIPP) and the American Dream Demonstration (ADD). This was to look for any context effects that may have been introduced with the removal of some items and to gauge how well the items worked in a self-administered format.



Results. Survey methodologists found no systematic problems of sequence, sensitivity, or overlapping response options during the AFI cognitive interviews. The questions were understood and readily answered. There were no observable differences between modes in terms of comprehension.


5. Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data


The basic sample design for the AFI program evaluation was reviewed by senior professional staff at the Urban Institute and RTI International. These staff included Dr. Douglas Wissoker, one of the internal consultants who comprise the Urban Institute’s Statistical Methods Group.


The AFI Evaluation contract was awarded to Urban Institute on September 27, 2011. Contractor personnel will implement the field assessment, recruit and select AFI sites, develop the survey instruments, conduct initial data collection and random assignment, implement participant tracking and conduct the 12-month follow-up survey, conduct the implementation study, conduct data analysis and develop statistical reports. ACF will provide direction and review functions to the Contractor. Data collection will be conducted during the 2012-2015 calendar years by RTI International, an independent, nonprofit research institute located in North Carolina.


1 U.S. Department of Health and Human Services, Administration for Children and Families, Office of Community Services. 2010. Assets for Independence Program: Status at the Conclusion of the Tenth Year. Washington, DC: U.S. Department of Health and Human Services, Administration for Children and Families, Office of Community Services.

2 Gregory Mills and Joseph Amick, “Can Savings Help Overcome Income Instability?” Urban Institute, Brief 18, Perspectives on Low-Income Working Families, December 2010.


