Memorandum
United States Department of Education
Institute of Education Sciences
National Center for Education Statistics
DATE: May 14, 2020
TO: Robert Sivinski, OMB
THROUGH: Carrie Clarady, Avar Consulting, under contract to NCES
FROM: Tracy Hunt-White, Team Lead, Postsecondary Longitudinal and Sample Surveys, NCES
SUBJECT: 2019–20 National Postsecondary Student Aid Study (NPSAS:20) Calibration Results Update Change Request (OMB# 1850-0666 v.30)
The 2019–20 National Postsecondary Student Aid Study (NPSAS:20) is a nationally representative cross-sectional study of how students and their families finance education beyond high school in a given academic year. NPSAS, conducted by the National Center for Education Statistics (NCES), was first implemented during the 1986–87 academic year and has been fielded every 3 to 4 years since. This request pertains to the 11th cycle in the NPSAS series, conducted during the 2019–20 academic year. NPSAS:20 is both nationally and state-representative and will serve as the base-year data collection for the 2020 cohort of the Beginning Postsecondary Students Longitudinal Study (BPS:20), a study of first-time beginning postsecondary students that will survey them three years (BPS:20/22) and six years (BPS:20/25) after they begin their postsecondary education. NPSAS:20 will consist of a nationally representative sample of undergraduate and graduate students, and a nationally representative sample of first-time beginning students (FTBs). Subsets of questions in the NPSAS:20 student interview will focus on describing aspects of the experience of beginning students in their first year of postsecondary education, including student debt and education experiences.
The request to conduct all activities related to NPSAS:20 – including materials and procedures related to the NPSAS:20 student data collection, consisting of abstraction of student data from institutions and a student survey; panel maintenance activities for a NPSAS:20 follow-up field test (for BPS:20/22); and carried-over respondent burden, procedures, and materials related to the NPSAS:20 institution sampling, enrollment list collection, and matching to administrative data files – was approved by OMB in December 2019 (OMB# 1850-0666 v.25). The NPSAS:20 enrollment list collection from institutions takes place from October 2019 through July 2020, the student records collection from March through November 2020, and the student survey data collection from February through early December 2020.
This request includes an update to the data collection design of the main NPSAS:20 study based on the results of the calibration experiment, which investigated the use of prepaid incentives in combination with different baseline promised incentive amounts. This request does not introduce significant changes to the estimated respondent burden or the costs to the federal government. The following revisions were made to Part A (p. 8) and to Part B (NPSAS:20 Main Data Collection), beginning on p. 23 of that document.
Modifications to Part A, Section 9a (Provisions of Payments or Gifts to Respondents – Student Sample Members).
All eligible cases in the NPSAS:20 full-scale study will be offered a monetary incentive for completing the student survey. Below we describe plans for an experiment using a subset of cases – a calibration sample – to determine the final incentive plan that will be submitted to OMB for consideration as a change request in May 2020 before it is implemented with the remainder of the sample. More information regarding the timing and distribution of the incentives for the calibration and main samples, as well as the results of the calibration experiment and the final incentive plan, is provided in Supporting Statement Part B of this submission.
Modifications to Part B, Section 4d (NPSAS:20 Main Data Collection).
Revisions were made to Part B that address the NPSAS:20 data collection phases. Below is a summary of the changes.
Revised – Table 11, to reflect the updated NPSAS:20 data collection design.
Revised – the introductory paragraph laying out how the results of the calibration sample will be used, to reflect the actual results. Also added a reference.
For your reference, below is the original calibration experiment design (page 21):
Calibration sample design by condition and phase of data collection

| Phase of data collection | Group 1 | Group 2 | Group 3 (Control) |
|---|---|---|---|
| Phase 1 | $2 prepaid + $30 promised | $2 prepaid + $15 promised | $0 prepaid + $30 promised |
| Phase 2 (nonresponse follow-up) | $10 prepaid (via PayPal or check) + $20 promised | $30 promised | $30 promised |
Part B, Section 4d - NPSAS:20 Main Data Collection Insert
1) Revision of Table 11 to reflect the updated NPSAS:20 data collection design (page 23).
Original table:
Table 11. NPSAS:20 Data Collection Design

| Phase number | Description |
|---|---|
| Phase 1 | Successful incentive from calibration sample offered to everyone. |
| Phase 2 | Successful incentive from calibration sample offered to remaining nonresponding cases. |
| Phase 3 | Abbreviated survey (15 minutes) + $20 or $30 promised, depending on the data collection group. |
| Phase 4 | Mini survey for nonresponse adjustments (5 minutes) + $5 promised. |
Revised table:
Table 11. NPSAS:20 Data Collection Design

| Phase number | Description |
|---|---|
| Phase 1 | $30 promised incentive. |
| Phase 2 | $30 promised incentive. Contingency (e.g., for cases fielded late in data collection or FTBs): $10 prepaid PayPal or check incentive + $20 promised incentive. |
| Phase 3 | Abbreviated survey (15 minutes) with $30 promised incentive. Contingency (e.g., for cases fielded late in data collection or FTBs, if not already implemented in Phase 2): $10 prepaid PayPal or check incentive + $20 promised incentive. |
| Phase 4 | Mini survey for nonresponse adjustments (5 minutes) + $5 promised. |
2) Revised the introductory paragraph on the calibration experiment to reflect the actual results (pages 24-26).
Because Phase 1 and Phase 2 outcomes from the calibration sample cannot be considered in isolation (e.g., the propensity to respond in Phase 2 will be affected not only by what is offered in Phase 2, but also by what was previously offered in Phase 1), the successful incentive strategy for each phase of the NPSAS:20 data collection will be driven by the overall Phase 1 and 2 outcome of the calibration sample. For example, if the control condition (Group 3) outperforms the two experimental conditions (Groups 1 and 2) in week 3 of calibration Phase 2, the incentive strategy implemented in Phases 1 and 2 of the main NPSAS:20 data collection will be Group 3’s incentive design ($0 prepaid and $30 promised in Phase 1, and no change in Phase 2), regardless of whether there was a significant increase in response rates or representativeness based on the $2 prepaid incentive in Phase 1 calibration.
Table 12 provides an overview of the NPSAS:20 calibration sample response rates for each data collection protocol by data collection phase.
Phase 1 Response Rates. Comparing Group 1 (AAPOR RR1¹ = 54.5 percent) and Group 3 (53.7 percent) response rates allows us to assess the effect of offering a $2 prepaid incentive on response rates. A two-tailed z-test yields no statistically significant difference in response rates between the two groups at the end of Phase 1 (z = -0.52, p = 0.60). This finding is not unexpected given that the start of the NPSAS:20 calibration data collection coincided with the COVID-19 pandemic, when many schools closed shortly after the mailing of the initial invitation to complete the NPSAS:20 survey that contained the $2 prepaid incentive. Sampled students would have received the incentive mailing with a delay (assuming mail forwarding) and in a period of immense stress as they were moving and adjusting to the new situation. These unusual circumstances could explain why the $2 prepaid incentive did not have the initially anticipated impact.
Comparing the response rates between Group 1 and Group 2 is a direct test of the initial promised incentive amount ($30 vs. $15). Group 1 has a significantly higher response rate (54.5 percent) compared to Group 2 (44.7 percent), based on a two-tailed z-test (z = 6.25, p < 0.001), suggesting that offering a higher incentive from the start, i.e., front-loading, might be the preferred approach for shorter data collections (Phase 1 only).
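For reference, the group comparisons above correspond to a standard two-proportion z-test. The sketch below is illustrative only: it assumes the pooled-variance, two-tailed variant (the memo does not specify which variant was used), takes the rates and group sizes from Table 12, and approximately reproduces the reported z = 6.25 for Group 1 vs. Group 2 in Phase 1 (the published rates are rounded, so the result differs slightly).

```python
from math import sqrt
from statistics import NormalDist

def two_prop_ztest(p1: float, n1: int, p2: float, n2: int) -> tuple:
    """Two-tailed z-test for the difference between two proportions,
    using the pooled proportion under H0: p1 == p2."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Phase 1, Group 1 vs. Group 2 (rates and n from Table 12)
z, p = two_prop_ztest(0.545, 2030, 0.447, 2030)
print(f"z = {z:.2f}, p = {p:.3f}")  # z ≈ 6.24 with rounded rates; memo reports 6.25
```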
Overall Response Rates (Phases 1 and 2). The response rate results two weeks after the start of Phase 2 still show significantly higher response rates for Groups 1 and 3 (60.3 percent [z = 3.82, p < 0.001] and 57.9 percent [z = 2.25, p < 0.05], respectively), relative to Group 2 (54.4 percent), despite increasing the Group 2 promised incentive from $15 to $30. This suggests that front-loading the incentive might still be the more successful approach and that doubling the promised incentive in the nonresponse follow-up has, so far, not closed the response rate gap.
The response rate difference between Group 1 and Group 3 remains statistically nonsignificant despite the introduction of the $10 prepaid incentive in Group 1 (z = 1.57, p = 0.12). Due to the COVID-19 pandemic and increased student mobility at the time, we decided to deliver the $10 prepaid incentive in Group 1 during Phase 2 via PayPal only, rather than also sending checks, which might have been perceived as more tangible and legitimate. This design change could explain the statistically nonsignificant result: so far, approximately 66 percent of the nonresponding sample members have not claimed their PayPal prepaid incentive.
Table 12: Cumulative response rates per phase by experimental condition (in percent)

| Phase of NPSAS:20 Calibration | Group 1: $2 prepaid + $30 promised (Phase 1); $10 prepaid + $20 promised (Phase 2); n=2,030 | Group 2: $2 prepaid + $15 promised (Phase 1); $30 promised (Phase 2); n=2,030 | Group 3 (Control): $30 promised (Phase 1); $30 promised (Phase 2); n=2,030 |
|---|---|---|---|
| Phase 1 | 54.5 | 44.7 | 53.7 |
| Phase 1 + Phase 2 | 60.3 | 54.4 | 57.9 |

Note: Results exclude ineligible cases. Partial interviews are considered nonrespondents for analytic purposes.
Source: U.S. Department of Education, National Center for Education Statistics, 2019–20 National Postsecondary Student Aid Study (NPSAS:20)
Phase 1 and 2 Representativeness. In addition to monitoring response rates, we conducted nonresponse bias analyses to assess the representativeness of the responding sample for each data collection group across key demographic characteristics such as age, gender, race, and ethnicity. Table 13 displays summary measures of the demographic distributions by group for the responding sample in Phase 1 and in Phases 1 and 2 combined, as well as for the overall sample including nonresponding cases. Comparing the responding sample composition across the different phases with the overall sample composition shows the magnitude of nonresponse bias (a brief illustrative sketch of this computation follows Table 13). For example, the overall eligible sample in Group 1 consists of 56.3 percent females; the responding sample overrepresents females by 5.2 percentage points after Phase 1 (61.5 percent) and by 4.8 percentage points after Phases 1 and 2 combined (61.1 percent).
The table shows that the three data collection protocols do not yield samples with meaningfully different demographic compositions, suggesting no differential nonresponse bias across the three experimental groups. Formal two-sided z-tests fail to reject the null hypothesis of no difference in all instances across all phases so far, with the exception of the share of females in Group 2 in Phases 1 and 2 combined (z = -2.12, p < 0.05).
Table 13: Cumulative sample composition per phase by experimental condition

| Phase of NPSAS:20 Calibration | Group 1: $2 prepaid + $30 promised (Phase 1); $10 prepaid + $20 promised (Phase 2) | Group 2: $2 prepaid + $15 promised (Phase 1); $30 promised (Phase 2) | Group 3 (Control): $30 promised (Phase 1); $30 promised (Phase 2) |
|---|---|---|---|
| Age (mean) | | | |
| Phase 1 | 25.6 | 25.8 | 25.7 |
| Phase 1 + Phase 2 | 25.7 | 25.6 | 25.8 |
| Overall Sample (n=6,080)¹ | 25.7 | 25.2 | 25.7 |
| Female (in percent) | | | |
| Phase 1 | 61.5 | 60.8 | 58.9 |
| Phase 1 + Phase 2 | 61.1 | 59.8 | 58.0 |
| Overall Sample (n=6,060)¹ | 56.3 | 57.3 | 55.0 |
| White (in percent) | | | |
| Phase 1 | 66.3 | 67.9 | 69.0 |
| Phase 1 + Phase 2 | 66.3 | 67.9 | 69.3 |
| Overall Sample (n=5,520)¹ | 66.1 | 66.7 | 67.6 |
| Hispanic (in percent) | | | |
| Phase 1 | 13.7 | 15.1 | 14.0 |
| Phase 1 + Phase 2 | 13.7 | 14.4 | 13.7 |
| Overall Sample (n=5,510)¹ | 14.4 | 14.9 | 15.1 |
| Potential FTB (in percent) | | | |
| Phase 1 | 19.7 | 21.3 | 19.1 |
| Phase 1 + Phase 2 | 20.1 | 21.6 | 19.3 |
| Overall Sample (n=6,080) | 22.4 | 23.7 | 22.4 |
| Graduate Student (in percent) | | | |
| Phase 1 | 15.1 | 15.0 | 15.9 |
| Phase 1 + Phase 2 | 15.2 | 14.1 | 15.8 |
| Overall Sample (n=6,080) | 12.9 | 11.7 | 13.3 |
¹ Sample sizes for the overall sample differ due to missing data.
Note: Results exclude ineligible cases. Partial interviews are considered nonrespondents for analytic purposes.
Source: U.S. Department of Education, National Center for Education Statistics, 2019–20 National Postsecondary Student Aid Study (NPSAS:20)
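As noted above, the nonresponse bias magnitudes discussed for Table 13 are simple differences between the responding sample composition and the overall eligible sample composition. A minimal sketch, with a hypothetical helper name and values taken from Table 13:

```python
def bias_pp(respondent_pct: float, overall_pct: float) -> float:
    """Nonresponse bias in percentage points: respondent composition minus
    overall (eligible) sample composition for a given characteristic."""
    return round(respondent_pct - overall_pct, 1)

# Female, Group 1 (Table 13): overall sample is 56.3 percent female.
print(bias_pp(61.5, 56.3))  # 5.2 points overrepresented after Phase 1
print(bias_pp(61.1, 56.3))  # 4.8 points overrepresented after Phases 1 and 2
```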
Overall, given no significant advantage of the $2 prepaid incentive and no statistically significant differences between Groups 1 and 3, we recommend proceeding with the incentive design for Group 3 ($30 promised incentive) for the NPSAS:20 main data collection.
However, we recommend retaining the $10 prepaid PayPal or check incentive with a $20 promised incentive (as originally planned for the calibration before COVID-19) as a last-effort targeted intervention in Phase 2 or 3 of the main data collection, to be used as needed for hard-to-reach nonresponding sample members (e.g., sample members from for-profit institutions, who are fielded later in data collection and tend to have lower response rates as a result of the shorter field period, or FTBs). The Phase 2 data so far show statistically significantly higher response rates among students who claimed their prepaid $10 PayPal incentive (28.1 percent) than among those who did not (4.9 percent; z = 8.71, p < 0.001). The overall difference between Group 1 and Group 3 remains statistically nonsignificant, but the gap does seem to be widening, increasing from 0.8 percentage points at the end of Phase 1 to 2.4 percentage points currently. Unfortunately, the calibration sample contains too few students from for-profit institutions to support reliable subgroup analyses. We will continue to monitor phase capacity and response rates for these subgroups and exercise this option as needed.
3) Added a new reference for the footnote used in the discussion of the calibration experiment results (page 27).
American Association for Public Opinion Research (2016). Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys, 9th edition. Retrieved May 7, 2020, from https://www.aapor.org/AAPOR_Main/media/publications/Standard-Definitions20169theditionfinal.pdf
¹ Unless noted otherwise, all response rates reported refer to Response Rate 1 (RR1) as defined by the standards of the American Association for Public Opinion Research (AAPOR 2016). RR1 is the number of complete interviews (excluding partial interviews) divided by the number of complete and partial interviews plus all non-interviews (excluding confirmed ineligible cases).
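For illustration only, a minimal sketch of the RR1 computation as defined above; the function name and case counts are hypothetical, not calibration data:

```python
def aapor_rr1(complete: int, partial: int, non_interview: int) -> float:
    """AAPOR Response Rate 1: complete interviews over all eligible cases.

    Partial interviews count in the denominator but not the numerator;
    confirmed ineligible cases are excluded before calling this function.
    """
    return complete / (complete + partial + non_interview)

# Hypothetical counts for illustration only:
print(round(aapor_rr1(complete=1100, partial=30, non_interview=900), 3))  # 0.542
```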