Generic Clearance for Satisfaction Surveys of Customers (CSR)

OMB No. 0925-0474

NIH External Constituency Surveys - CSR


Mini Supporting Statement

UNDER GENERIC CLEARANCE 0925-0474

NATIONAL INSTITUTES OF HEALTH


















Name: Dr. Andrea Kopstein

Address: Center for Scientific Review

RKL2 - Two Rockledge Center, 3030
6701 Rockledge Dr
Bethesda, MD

Telephone: 301-435-1111

Fax: 301-443-2636

Email: kopsteina@mail.nih.gov



August 2009

Table of Contents

LIST OF ATTACHMENTS

A. JUSTIFICATION

A.1 Circumstances Requiring the Collection of Data
  A.1.1 Purpose
  A.1.2 Background
A.2 Purpose and Use of the Information Collection
A.3 Use of Information Technology and Burden Reduction
A.4 Efforts to Identify Duplication and Use of Similar Information
A.5 Impact on Small Business or Other Small Entities
A.6 Consequences of Collecting the Information Less Frequently
A.7 Special Circumstances Relating to the Guidelines of 5 C.F.R. 1320.5
A.8 Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency
A.9 Explanation of Any Payment or Gift to Respondents
A.10 Assurance of Confidentiality Provided to Respondents
A.11 Justification for Sensitive Questions
A.12 Estimates of Hour Burden Including Annualized Hourly Costs
A.13 Estimate of Other Total Annual Cost Burden to Respondents or Record Keepers
A.14 Annualized Cost to the Federal Government
A.15 Explanation for Program Changes or Adjustments
A.16 Plans for Tabulation and Publication and Project Time Schedule
  A.16.1 Plans for Tabulation
  A.16.2 Plans for Publication
  A.16.3 Project Time Schedule
A.17 Reason(s) Display of OMB Expiration Date is Inappropriate
A.18 Exceptions to Certification for Paperwork Reduction Act Submissions

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B.1 Respondent Universe and Sampling Methods
  B.1.1 Respondent Universe
  B.1.2 Sample Selection
    Initial Sample Sizes Based on Precision Requirements
  B.1.3 Response Rates
  B.1.4 Sample Weights
  B.1.5 Estimation Procedure
B.2 Procedures for the Collection of Information
  B.2.1 Data Collection Procedures
B.3 Methods to Maximize Response Rates and Deal with Non-response
B.4 Test of Procedures or Methods to be Undertaken
B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data



LIST OF ATTACHMENTS

Attachment 1 - Applicant Survey Instrument

Attachment 2 - Reviewer Survey Instrument

Attachment 3 - Privacy Act Determination Letter

Attachment 4 - IRB Exemption Letter

Attachment 5 - Hard Copy Lead Letter

Attachment 6 - Email Invitation

Attachment 7 - Email to Provide Access Code

Attachment 8 - Email Reminders

Attachment 9 - Hard Copy Final Reminder

Mini Supporting Statement

NIH External Constituency Surveys

under Generic Clearance No. 0925-0474

National Institutes of Health

A. JUSTIFICATION

A.1 Circumstances Requiring the Collection of Data

A.1.1 Purpose

This is a request to conduct voluntary customer satisfaction surveys of the National Institutes of Health’s (NIH’s) Enhancing Peer Review Initiative. These surveys will help fulfill the requirements of:

  • Executive Order 12862, “Setting Customer Service Standards,” which directs Agencies to continually reform their management practices and operations to provide service to the public that matches or exceeds the best service available in the private sector; and

  • The March 3, 1998 White House Memorandum, “Conducting Conversations with America to Further Improve Customer Service,” which directs Agencies to determine the kind and quality of service their customers want as well as their level of satisfaction with existing services.

A.1.2 Background

The peer review system is a cornerstone of NIH and has been adopted internationally as the best guarantor of scientific independence. However, the increasing breadth, complexity, and interdisciplinary nature of modern research have created many challenges for this system.[1] The NIH recognizes that as the scientific and public health landscape continues to evolve, it is critical that the processes used to support science are fair, efficient, and effective. In June 2007, therefore, the NIH Director established working groups to examine peer review at NIH as part of a broad Enhancing Peer Review initiative. The initiative has consisted of two discrete phases: a diagnostic phase and an implementation phase. The goal of the first, diagnostic phase was to identify the most significant challenges to the system used by the NIH to support science and to propose recommendations that would enhance this system in the most transformative manner. Specific implementation issues were articulated during the second, implementation phase. As implementation proceeds, a process of assessment and continuous improvement has also been established.

The peer review implementation plan was developed to accomplish three priority goals: 1) engage the best reviewers, 2) improve the quality and transparency of review, and 3) ensure balanced and fair reviews across different scientific fields and career stages. Additional information on the Enhancing Peer Review Initiative can be found at http://enhancing-peer-review.nih.gov/

Peer review process changes were first implemented in January 2009, and additional changes will follow. The NIH is committed to a quality control and improvement process for peer review, and ongoing satisfaction information from constituents is crucial to inform that improvement.

A.2 Purpose and Use of the Information Collection

Two surveys are planned: a Reviewer Survey and an Applicant Survey. The primary objective of these surveys is to assess peer reviewers' and grant applicants' experience with the peer review enhancements. The findings will provide an important source of information for developing recommendations to further refine the enhanced peer review process. NIH needs the information collected in these surveys to obtain customer feedback about satisfaction with the changes being implemented. The surveys will form one component of the variety of information sources NIH relies on for timely assessment of peer review. They will assess the procedural changes to the peer review system, particularly with respect to the R-series funding mechanisms (i.e., R01, R03, and R21), and are intended to garner specific information about reviewers' and applicants' most recent experience with the peer review enhancements.

The Reviewer Survey will focus on respondents’ experience with the new (enhanced) peer review procedures that began being implemented in January 2009 (e.g., enhanced review criteria, templates for structured critiques, scoring of individual review criteria, use of a 9-point scoring scale, and clustering of New Investigator/Early Stage Investigator applications for review).

The Applicant Survey will ask respondents to report on their most recent application experience. Those who have experienced the peer review enhancements will be asked to rate the usefulness of the 9-point rating scale, scoring of individual review criteria and overall impact/priority, and other key elements.

A cross-sectional design will be used in implementing the surveys. Every year for three years, a sample of reviewers and applicants will be selected to complete the surveys. This approach will provide annual “snapshots” of reviewers’ and applicants’ perceptions of and experience with the peer review process.

A.3 Use of Information Technology and Burden Reduction

The mode of data collection for these surveys was carefully considered with respondent burden in mind. It was determined that automated information technology will be used to collect and process the information. The surveys will be conducted online. Invitations to participate will be sent to the selected sample members via mail and email.

A.4 Efforts to Identify Duplication and Use of Similar Information

Collected information will be limited to that which is needed to assess customer satisfaction. Some of the data we are seeking are available through NIH data systems, where administrative information relating to research grants and contracts is stored. For applicants, for example, this includes administrative data on individual grant applications (e.g., date of submission, type of application, and application status). For reviewers, NIH maintains data on the number, dates, and type of review activities, among other topics.



As part of the preparations for these surveys, NIH consulted with staff members involved in the development of the peer review enhancements, Scientific Review Officers (SROs), and external survey experts for input on the factors that should be included in the survey analysis. Based on these consultations, NIH determined that some of the data elements included in the NIH databases are essential for achieving the aims of this survey. However, OMB Generic Clearance 0925-0474 does not allow these data to be linked to the customer satisfaction survey responses. The proposed survey instruments minimize duplication to the maximum extent possible; only essential demographic data are requested.

A.5 Impact on Small Business or Other Small Entities

No small businesses or other small entities will be impacted by this information collection.

A.6 Consequences of Collecting the Information Less Frequently

Individual applicants and reviewers will be asked to complete the survey only once in FY2010. For subsequent years, new samples of applicants and reviewers will be drawn.

If this information were collected less frequently, NIH could not adapt changes to the peer review system to meet customer needs, because satisfaction with the system would not be ascertained.

A.7 Special Circumstances Relating to the Guidelines of 5 C.F.R. 1320.5

This data collection fully complies with 5 C.F.R. 1320.5.

A.8 Comments in Response to the Federal Register Notice and Efforts to Consult Outside the Agency

Not applicable.

A.9 Explanation of Any Payment or Gift to Respondents

No payment or gift will be offered to survey participants.

A.10 Assurance of Confidentiality Provided to Respondents

The NIH Privacy Act Officer has reviewed this OMB request and determined that the Privacy Act is applicable (Attachment 3).

Concern for privacy and protection of respondents’ rights will play a central part in the implementation of the surveys. All survey procedures will be consistent with OMB Generic Clearance 0925-0474, which assures “the responses to the questionnaire surveys are entirely anonymous and have no identifiers to link them to individual respondents.” Strict procedures will be followed for protecting the anonymity of information gathered from the participants. Participation will be fully voluntary, and nonparticipation will have no impact on eligibility for or receipt of future funding.

Safeguarding procedures that we will implement include:

  • The safeguarding protections offered to survey participants are described in the informed consent language in the introduction to the survey instruments. Respondents will be informed that their participation is voluntary and that no consequences are associated with responding or declining to respond. Individuals contacted in the course of these surveys will be assured of the confidentiality of their replies under 42 USC 1306, 20 CFR 401 and 422, 5 USC 552 (Freedom of Information Act), 5 USC 552a (Privacy Act of 1974), Privacy Act System of Records Notice 09-25-036, and OMB Circular No. A-130.

  • All data will be analyzed and reported in an aggregate form that does not personally identify any applicants or reviewers.

  • An independent contractor, RTI International (RTI), will collect and collate the survey responses electronically. RTI will also be responsible for initial analysis and reporting of the data. The data sets transferred back to NIH staff will be fully de-identified. RTI has the required security clearances to ensure the confidentiality and protection of the data.

  • RTI’s Institutional Review Board (IRB) has determined that these surveys are exempt from IRB review (IRB ID Number 12444) based upon information provided by the RTI project manager (Attachment 4). In addition, all study staff members will receive Human Subjects Protection Awareness training. This training will promote awareness of the human subjects’ protection offered by the survey design, ethical issues and concerns, and regulations and assurances by which the survey is governed.

  • Access to data will be restricted to project staff members on an as-needed basis.

RTI will observe high standards of information technology (IT) security to protect the confidentiality, integrity, and availability of all computer-based systems and the data they contain. RTI IT security policies and procedures are designed to protect information systems and data from a wide range of risks, and RTI educates its staff to be aware of their responsibilities for ensuring information security and to comply with these policies. RTI also works with agencies to ensure that its policies conform to agency information security requirements and applicable laws and regulations as required by contract. RTI maintains System Security Plans for its infrastructure, documenting how it secures its systems using administrative, technical, and physical controls.

All computer-based systems employed by RTI will comply with the Privacy Act of 1974. The system security features will include:

  • User ID and password authentication will be required to access all computer systems.

  • The Website will operate on a certified and accredited Internet-accessible Standard Security Infrastructure which has received an Authority to Operate in accordance with NIST special publication 800-37 (Guide for the Security Certification and Accreditation of Federal Information Systems).

  • Web content delivery will be on FIPS 140-2 compliant hardware.

  • Fully switched and routed Ethernet-based local area networks (LANs) support both corporate and project initiatives. RTI wide area networks (WANs) employ technologies which include site-to-site VPN, Metro Ethernet, MPLS, VSAT, Voice over IP (VoIP), and WAN Acceleration appliances.

  • Access from the Internet is available to authorized staff only and is controlled by RTI’s Internet firewalls. Remote access to RTI’s data networks is provided through the use of client-computer-installed VPN software, a clientless SSL/VPN portal, and direct dial-in connections. The use of RSA SecurID two-factor authentication for remote access is supported.

A.11 Justification for Sensitive Questions

The NIH is committed to providing high-quality service to its customers. Given the diversity of its constituents, it is important for NIH to collect survey data from a wide range of customers. Hence, the Applicant Survey and Reviewer Survey contain questions regarding respondents’ race, ethnicity, gender, age, and work-related information (type of employer organization and job title). This information will allow NIH to analyze the survey data by subgroup and supports NIH’s long-standing efforts to strengthen the diversity of its applicant and reviewer pools.


Respondents may skip any or all of the questions concerning race, ethnicity, gender, age, and work-related information. Those who choose to provide these demographic data will do so on a strictly voluntary basis. The surveys will not collect any personally identifiable information; thus, demographic information gathered by the surveys cannot be linked to individual respondents.

A.12 Estimates of Hour Burden Including Annualized Hourly Costs

The total number of participants that will be sampled is 4,710. These participants are university and other members of the NIH research community. The total sample is expected to comprise 1,521 applicants, 1,367 reviewers, and 1,822 individuals who are both an applicant and a reviewer. All sample members are expected to be eligible by definition, but we expect some will be out of the country, retired, deceased, or otherwise unlocatable for the duration of this survey. We assume approximately 80% of each group will complete the survey (1,217 applicants, 1,094 reviewers, and 1,458 who are both a reviewer and an applicant). The total number of nonrespondents is thus estimated at 941.

It is estimated that each of these two surveys will take an average of 15 minutes to complete. The annual hour burden is therefore estimated to be 942.3 hours for approximately 3,769 respondents (1,217 applicants, 1,094 reviewers, and 1,458 who are both a reviewer and an applicant) (Table A.12-1).

Estimated costs to the respondents consist entirely of their time. Costs for time were estimated using a rate of $40.00 per hour for adult science professionals. The estimated annual cost burden for respondents for each year for which the generic clearance is requested is $37,690 (Table A.12-2).
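The burden and cost figures in Tables A.12-1 and A.12-2 follow directly from this arithmetic; a minimal sketch (Python, for illustration only):

```python
# Burden arithmetic behind Tables A.12-1 and A.12-2 (illustrative).
sampled = {"applicant_only": 1521, "reviewer_only": 1367, "both": 1822}
RESPONSE_RATE = 0.80          # expected completion rate
HOURS_PER_SURVEY = 0.25       # 15-minute average completion time
HOURLY_WAGE = 40.00           # assumed rate for adult science professionals

respondents = {g: round(n * RESPONSE_RATE) for g, n in sampled.items()}
hours = {g: n * HOURS_PER_SURVEY for g, n in respondents.items()}
costs = {g: h * HOURLY_WAGE for g, h in hours.items()}

print(respondents)          # {'applicant_only': 1217, 'reviewer_only': 1094, 'both': 1458}
print(sum(hours.values()))  # 942.25 hours (942.3 in Table A.12-1, which rounds per row)
print(sum(costs.values()))  # 37690.0 dollars, matching Table A.12-2
```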

Table A.12-1. Estimates of Annual Hour Burden (Based on Expected 80% Response)

FORM: Reviewer/Applicant Questionnaire

Types of Respondents                                           | Number of Respondents | Frequency of Response | Average Response Time (hours) | Annual Hour Burden
Adult Science Professionals – Applicant Only                   | 1,217                 | 1                     | 0.25                          | 304.3
Adult Science Professionals – Reviewer Only                    | 1,094                 | 1                     | 0.25                          | 273.5
Adult Science Professionals – Both an Applicant and a Reviewer | 1,458                 | 1                     | 0.25                          | 364.5
Total                                                          | 3,769                 | --                    | --                            | 942.3

Table A.12-2. Annualized Cost to Respondents (Based on Expected 80% Response)

FORM: Reviewer/Applicant Questionnaire

Types of Respondents                                           | Number of Respondents | Frequency of Response | Average Time per Respondent (hours) | Hourly Wage Rate | Respondent Cost
Adult Science Professionals – Applicant Only                   | 1,217                 | 1                     | 0.25                                | $40              | $12,170
Adult Science Professionals – Reviewer Only                    | 1,094                 | 1                     | 0.25                                | $40              | $10,940
Adult Science Professionals – Both an Applicant and a Reviewer | 1,458                 | 1                     | 0.25                                | $40              | $14,580
Total                                                          | 3,769                 | --                    | --                                  | --               | $37,690


A.13 Estimate of Other Total Annual Cost Burden to Respondents or Record Keepers

We do not require any additional record keeping.

A.14 Annualized Cost to the Federal Government

For the first year, the annualized cost to the government for this data collection effort is approximately $757,056 (Table A.14-1). Total government personnel costs will be $205,740, taking into account benefits and estimated cost-of-living adjustments expected to occur midway through the first year. This figure assumes a median GS-15 annual salary of $136,941 to $140,912 for an NIH professional to manage the projects, a median GS-14 annual salary of $116,419 to $119,795 for three staff members to provide expert reviews and analysis, and a median GS-11 annual salary of $69,118 to $71,112 for administrative and technical support. Salaries are based on the January 2009 General Schedule for the Washington, DC metropolitan area (http://www.opm.gov/oca/09tables/html/dcb.asp) and are estimated to increase by 2.9 percent in January 2010. Details are provided in the table below. Estimated annual federal personnel costs for the second year are $112,877, reflecting a reduced level of effort for federal staff and an estimated 2.9 percent cost-of-living increase. Estimated annual federal personnel costs for the third year are $116,150, which assumes the same level of effort as the second year and an estimated 2.9 percent cost-of-living increase.


Other costs to the government include travel for federal staff to work on-site in North Carolina and costs related to IT system security testing and evaluation. Travel to North Carolina for three federal staff is estimated to cost approximately $3,000 annually. Testing the IT system that holds and maintains the data will cost approximately $50,000 in the first year and $25,000 in each subsequent year.


Contractor support will be required to carry out the data collection efforts. It is estimated that the first year of this effort will cost approximately $496,336. Survey efforts in subsequent years will cost an estimated $298,180 in the second year and $233,692 in the third year. Costs are lower in the second and third years because the IT systems and surveys will be largely finalized in the first year and will likely require only minor changes. The NIH anticipates undertaking no more than one project in any 12-month period.

Mailing costs for paper surveys total $1,980 annually, assuming 4,500 surveys at the U.S. Postal Service rate of $0.44 per ounce.


Table A.14-1. Annualized Costs

Activity                                                                                       | Cost Year 1 | Cost Year 2 | Cost Year 3
Administration of the Clearance:                                                               |             |             |
  NIH staff (1 GS-15) – 30% FTE beginning @ $136,941/yr in Aug 2009 – Jul 2010; 15% thereafter | 54,102      | 27,836      | 28,643
  NIH staff (3 GS-14) – 30% FTE beginning @ $116,419/yr in Aug 2009 – Jul 2010; 15% thereafter | 137,985     | 70,992      | 73,050
  Administrative support (1 GS-11) – 15% FTE beginning @ $69,118/yr in 2009                    | 13,653      | 14,049      | 14,457
Federal Staff Travel                                                                           | 3,000       | 3,000       | 3,000
System Security Testing and Evaluation                                                         | 50,000      | 25,000      | 25,000
Contract Support for Data Collection (1 project per year)                                      | 496,336     | 298,180     | 233,692
Mailing Cost for Paper Surveys (4,500 surveys x $0.44)                                         | 1,980       | 1,980       | 1,980
Total                                                                                          | $757,056    | $441,037    | $379,822


A.15 Explanation for Program Changes or Adjustments

This is a new sub-study request.

A.16 Plans for Tabulation and Publication and Project Time Schedule

A.16.1 Plans for Tabulation

The analysis plan is designed to examine the degree to which survey responses differ across key analysis groups or combinations of those groups. Key analysis groups are defined by respondent characteristics, such as race and ethnicity, alone or in combination.

Comparisons across key groups will focus on topics such as experience with the peer review process, satisfaction ratings of the peer review process, and the format of grant applications.


Analyses will focus mainly on descriptive information including two-way tables to compare groups of interest.


Data collected for this study will be aggregated. No results will be reported that identify respondents by name or by any other identifier that would allow a respondent’s identity to be disclosed. Specific procedures for analyzing the data are described in the following paragraphs.

Descriptive Information

Analysis will begin with a description of the applicants and peer reviewers who responded to the peer review surveys. The Reviewer Survey and the Applicant Survey are provided in Attachments 1 and 2. One analysis table will be created from the demographic variables collected in Section C of the applicant questionnaire and Section E of the peer reviewer questionnaire. These data will be presented in a single table with two columns: one for applicant data and one for peer reviewer data.

Data will be presented in tabular format, with frequencies and percentages for categorical variables and means, minimums, and maximums for continuous variables.

Table A.16-1 illustrates the table that will be compiled during analysis for the descriptive demographic questions. This table shell shows only a subset of the variables; the actual table produced from survey responses will contain additional rows. For categorical variables, the cells will contain the frequency counts of the responses and their respective percentages (based on non-missing data); for continuous variables, the cells will contain the mean, responding sample size, and minimum and maximum values, as sketched below. The overall numbers of respondents will be given in the column headers.
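As an illustration of how the “n (%)” and “mean (n, min-max)” cells could be produced from a response file, here is a minimal sketch in Python/pandas; the column names are hypothetical, not the actual survey variable names:

```python
import pandas as pd

def categorical_cells(series: pd.Series) -> dict:
    """Format 'n (%)' per category; percentages use non-missing data only."""
    counts = series.value_counts()   # drops missing values by default
    pcts = 100 * counts / counts.sum()
    return {cat: f"{counts[cat]} ({pcts[cat]:.1f}%)" for cat in counts.index}

def continuous_cell(series: pd.Series) -> str:
    """Format 'mean (n, min-max)' over non-missing values."""
    s = series.dropna()
    return f"{s.mean():.1f} (n={len(s)}, {s.min():g}-{s.max():g})"

# Hypothetical response data:
df = pd.DataFrame({
    "ethnicity": ["Hispanic", "Non-Hispanic", None, "Non-Hispanic"],
    "years_funded_pi": [3, 10, 7, None],
})
print(categorical_cells(df["ethnicity"]))      # {'Non-Hispanic': '2 (66.7%)', 'Hispanic': '1 (33.3%)'}
print(continuous_cell(df["years_funded_pi"]))  # 6.7 (n=3, 3-10)
```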

Table A.16-1: Demographic Information – Sample Table Shell

Demographic Question               | Applicant Questionnaire (N = ) | Reviewer Questionnaire (N = )
Ethnicity                          |                                |
  Hispanic                         | n (%)                          | n (%)
  Non-Hispanic                     | n (%)                          | n (%)
Total Number of Years Funded as PI | mean (n, min-max)              | mean (n, min-max)

Assessing Unit and Item Non-response

After an overall descriptive summary of the sample respondents, a unit and item nonresponse analysis will be carried out. Although sampling weights will be adjusted for unit nonresponse within sampling strata, if the response rate within a stratum is low (say, less than 75%), the sample respondents may not be representative of the relevant target population. To assess whether unit response rates are low, response rates will be tabulated for each race and ethnicity group within the three selected samples (Applicant only, Reviewer only, and individuals who are both an Applicant and a Reviewer).

Even when unit response rates are high, item nonresponse among respondents may reduce the degree to which inferences about an item can be trusted. Since a variety of analyses may be carried out using the peer review survey responses, item nonresponse could be calculated for a variety of analytical subgroups. We will tabulate item response rates separately for the Applicant and Reviewer questionnaires, overall and within key analytical subgroups (e.g., race and ethnicity).

Analysis of Applicant and Reviewer Survey Responses

Survey responses will be analyzed by comparing responses between the key groups described above. Categorical responses will be analyzed by cross-tabulating weighted responses across given groups (such as race or ethnicity); statistical differences will be assessed with chi-square tests of proportions appropriate for sample surveys to test for independence of survey responses across the groups. Continuous responses will be analyzed by reporting weighted means across given domains; statistical differences will be assessed with t-tests appropriate for sample surveys to test for differences in mean response across the domains. Two-way tables will be created for all satisfaction/opinion questions to compare the groups of interest. For categorical variables, cells will contain frequency counts and their respective percentages of non-missing data; for continuous variables, cells will display means along with the number of non-missing responses and minimum and maximum values.

Where appropriate, a comparison will be made between those applicants who have experience with peer review enhancements and those without experience. The same comparison will be made with the reviewers’ responses.

Tables A.16-2 and A.16-3 are examples of tables to display the results of the analysis.

Table A.16-2. Experience of Applicants – Sample Table

Question                                             | Applicant Questionnaire (N = )
Application assigned numerical impact/priority score | n (%)
Application received NOA -- funded                   | n (%)
Number of years of research funding received         | mean (n, min-max)


Table A.16-3. Experience of Peer Reviewers – Sample Table

Question                                   | Reviewer Questionnaire (N = )
Capacity as an NIH reviewer                |
  Regular (appointed)                      | n (%)
  Ad hoc (temporary)                       | n (%)
  Both regular and ad hoc                  | n (%)
Reviewer for components of NIH             |
  Center for Scientific Review             | n (%)
  One or more NIH Institutes/Centers (ICs) | n (%)
  Both CSR and ICs                         | n (%)

A.16.2 Plans for Publication

A written report with accompanying charts will be provided to NIH management for internal use. There are no plans to publish the results of these surveys.

A.16.3 Project Time Schedule

The project time schedule is provided in Table A.16-4. OMB clearance is being requested for one year.

Table A.16-4. Project Time Schedule

Activity                              | Time Schedule
Mail lead letters                     | 1 day after OMB approval
Launch survey Website                 | 3 days after OMB approval
Conduct data collection               | 1-6 weeks after OMB approval
Create analysis file and analyze data | 2-3 months after OMB approval
Document findings                     | 3-4 months after OMB approval

A.17 Reason(s) Display of OMB Expiration Date is Inappropriate

We are not requesting an exemption to the display of the OMB Expiration date.

A.18 Exceptions to Certification for Paperwork Reduction Act Submissions

These surveys will comply with the requirements in 5 CFR 1320.9.

B. COLLECTIONS OF INFORMATION EMPLOYING STATISTICAL METHODS

B.1 Respondent Universe and Sampling Methods

B.1.1 Respondent Universe

There are two populations of interest under the Peer Review Enhancement Surveys: an applicant population and a reviewer population. These populations are defined as follows:

Applicant Population

The applicant population comprises individuals who submitted R01, R03, and/or R21 NIH applications reviewed in any of the Advisory Councils/Boards of NIH’s constituent Institutes and Centers (ICs) in October 2008, January 2009, and/or May 2009. It excludes individuals whose experience is with only the enhanced (new) peer review system, as well as individuals whose only experience with NIH peer review is limited to recent applications related to the American Recovery and Reinvestment Act (ARRA) of 2009.

Reviewer Population

The reviewer population comprises individuals who reviewed applications of any IC’s mechanism/activity (R-series and others) that were subsequently reviewed by the Advisory Councils/Boards in October 2008, January 2009, and/or May 2009. It excludes individuals whose experience is with only the enhanced (new) peer review system. The target population of reviewers includes both regular (appointed/permanent) and ad hoc (temporary) reviewers.

Applicant and Reviewer Population

There are some individuals in both the applicant population and the reviewer population. The sampling design for the peer review surveys was developed so that no individual who resides in both populations would be contacted for both the Applicant Survey and the Reviewer Survey. Table B.1-1 shows the total number of individuals in the universe of all applicants and reviewers (column 2), the number of individuals who are applicants but not reviewers (column 3), and the number of individuals who are reviewers but not applicants (column 4). Table B.1-1 also shows the numbers of individuals by race and ethnicity[2] in the total applicant population (column 7), the total reviewer population (column 6), and the total population of individuals who are both an applicant and a reviewer (column 5).


The total number of applicants and reviewers (45,173) is equal to the sum of the number of individuals who are applicants only (22,444), the number of individuals who are reviewers only (13,804), and the number of individuals who are both applicant and reviewer (8,925). The number of individuals who are applicants (31,369) equals the number of individuals who are applicants only (22,444) plus the number of individuals who are both applicant and reviewer (8,925). The number of individuals who are reviewers (22,729) equals the number of individuals who are reviewers only (13,804) plus the number of individuals who are both an applicant and a reviewer (8,925).

Table B.1-1. Applicant and Reviewer Population Counts

(1) Strata                     | (2) All Applicants and Reviewers | (3) Applicants Only | (4) Reviewers Only | (5) Both Applicant and Reviewer Population | (6) Total Reviewer Population | (7) Total Applicant Population
Asian, Hispanic                | 41     | 22     | 10     | 9     | 19     | 31
Black, Hispanic                | 36     | 22     | 10     | 4     | 14     | 26
Native American, Hispanic      | 60     | 40     | 12     | 8     | 20     | 48
Other, Hispanic                | 1,425  | 679    | 448    | 298   | 746    | 977
Asian, non-Hispanic            | 6,872  | 4,070  | 1,317  | 1,485 | 2,802  | 5,555
Black, non-Hispanic            | 891    | 377    | 390    | 124   | 514    | 501
Native American, non-Hispanic  | 177    | 77     | 76     | 24    | 100    | 101
Other, non-Hispanic            | 35,610 | 17,129 | 11,521 | 6,960 | 18,481 | 24,089
Pacific Islander, non-Hispanic | 61     | 28     | 20     | 13    | 33     | 41
Total                          | 45,173 | 22,444 | 13,804 | 8,925 | 22,729 | 31,369

B.1.2 Sample Selection

Determining Overall Sample Sizes

The total number of individuals that may be sampled and subsequently surveyed is defined by burden limits outlined in generic clearance OMB No. 0925-0474. For the Peer Review Enhancement Surveys, the total number of individuals that may be sampled, while remaining under the burden limits required by the NIH guidance, is 4,710. Given the total number of allowable sample members, the next step is to determine how many of the allowable 4,710 sampled individuals should be selected from the applicant population and how many should be selected from the reviewer population.

Broad Allocation Scheme

The total number of individuals to be sampled will be allocated to the following sets of individuals:

  1. Applicants only (22,444)

  2. Reviewers only (13,804)

  3. Those who are both Applicant and Reviewer (8,925)

Within each set, sample sizes must be sufficient to allow estimates within race and ethnicity groups to meet the precision requirements (discussed below).

Initial Sample Sizes Based on Precision Requirements

The following four steps were taken for each of the three groups of individuals: Applicants only, Reviewers only, and those who are both an Applicant and a Reviewer.

  1. Create a cross-tabulation of the number of individuals by race (Asian, Black, Native American, Pacific Islander, and Other) and Hispanicity (Hispanic, non-Hispanic).

  2. Using the nQuery Advisor[3] software, estimate the number of individuals required to be sampled in each race-by-Hispanicity group such that, within each group, a two-sided 95% confidence interval for a population proportion of 50% has a half-width of 5% (see the sketch following this list).

  3. For groups with population counts under 30, or for which nQuery reports that the sample size is not large enough for the normal approximation to hold, include all individuals in the relevant sample. Such groups are said to be selected with certainty.

  4. For groups not selected with certainty, nQuery reports the required sample size.
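The non-certainty sample sizes in Table B.1-2 are consistent with the standard normal-approximation sample-size formula for a proportion with a finite population correction. The following sketch is an assumption about the underlying calculation (nQuery’s internal procedure may differ), but it reproduces the tabled values:

```python
import math

def stratum_n(N: int, half_width: float = 0.05, p: float = 0.5, z: float = 1.959964) -> int:
    """n for a two-sided 95% CI on a proportion with the given half-width,
    applying the finite population correction for a stratum of N individuals."""
    n0 = z**2 * p * (1 - p) / half_width**2     # infinite-population size, ~384.2
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

# Non-certainty "Applicants Only" strata from Table B.1-2:
for N in (679, 4070, 377, 17129):
    print(N, stratum_n(N))   # 246, 352, 191, 376 -- matching column 3
```

For the overlapping applicant-and-reviewer strata, applying the same formula to each half of the split sample yields the tabled values; for example, the Asian, non-Hispanic stratum (N = 1,485) requires 306 per half, or 612 in total.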

The following table (Table B.1-2) shows the number of individuals selected with certainty or estimated by nQuery as being required to meet the precision requirement outlined in Step 2 above.

Table B.1-2. Initial Sample Sizes Based on Precision Requirements

(1) Strata                     | (2) Population Count: Applicants Only | (3) Sample Size: Applicants Only | (4) Population Count: Applicant and Reviewer | (5) Sample Size: Applicant and Reviewer | (6) Population Count: Reviewers Only | (7) Sample Size: Reviewers Only
Asian, Hispanic                | 22     | 22    | 9     | 9     | 10     | 10
Black, Hispanic                | 22     | 22    | 4     | 4     | 10     | 10
Native American, Hispanic      | 40     | 40    | 8     | 8     | 12     | 12
Other, Hispanic                | 679    | 246   | 298   | 298   | 448    | 207
Asian, non-Hispanic            | 4,070  | 352   | 1,485 | 612   | 1,317  | 298
Black, non-Hispanic            | 377    | 191   | 124   | 124   | 390    | 194
Native American, non-Hispanic  | 77     | 77    | 24    | 24    | 76     | 76
Other, non-Hispanic            | 17,129 | 376   | 6,960 | 730   | 11,521 | 372
Pacific Islander, non-Hispanic | 28     | 28    | 13    | 13    | 20     | 20
Total                          | 22,444 | 1,354 | 8,925 | 1,822 | 13,804 | 1,199


Individuals who are in both the applicant target population and the reviewer target population will be sampled and assigned to the two surveys as follows:

  1. Select a probability sample of 1,822 individuals from the overlapping applicant and reviewer populations.

  2. Select the sample within strata defined by race (Asian, Black, Other) and Hispanicity (Hispanic, non-Hispanic).

  3. Randomly assign half of the selected 1,822 individuals to the applicant sample (the other half will be assigned to the reviewer sample).

The selected sample will be sub-sampled such that half of the individuals will be assigned the applicant questionnaire and half the reviewer questionnaire. The sample sizes listed in column 5 for the race and ethnicity groups not selected with certainty are designed such that the precision requirements are still met for the 911 (half of 1,822) individuals selected to receive the applicant questionnaire and for the 911 individuals selected to receive the reviewer questionnaire. A sketch of this selection and assignment follows.
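A minimal sketch of the stratified selection and random split, assuming a simple random sample within each stratum; the frame structure and field names are illustrative only, not the actual NIH data layout:

```python
import random

def select_and_split(frame, stratum_sizes, seed=20090821):
    """Select a stratified simple random sample from the overlap population,
    then randomly assign half of each stratum's sample to the applicant
    questionnaire and half to the reviewer questionnaire.
    `frame` is a list of dicts with a 'stratum' key; `stratum_sizes` maps
    stratum label -> sample size (certainty strata use the full count)."""
    rng = random.Random(seed)
    applicant, reviewer = [], []
    for stratum, n in stratum_sizes.items():
        members = [p for p in frame if p["stratum"] == stratum]
        sample = rng.sample(members, n)   # SRS within the stratum
        rng.shuffle(sample)
        half = len(sample) // 2
        applicant.extend(sample[:half])   # receives the applicant questionnaire
        reviewer.extend(sample[half:])    # receives the reviewer questionnaire
    return applicant, reviewer
```

Splitting within each stratum, rather than across the pooled sample of 1,822, keeps both halves on their per-stratum targets; the actual procedure may instead randomize across the pooled sample.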


After determining the sample sizes required to meet the stated precision requirement, the total number of individuals required to be sampled is 1,354 + 1,822 + 1,199 = 4,375. Since the burden limit allows a total of 4,710 individuals to be sampled, 4,710 - 4,375 = 335 more individuals may be sampled and allocated among the sample groups; in this case, 167 were allocated to the applicant-only group and 168 to the reviewer-only group.

Allocating Remaining Sample of 335 by Consideration of Weighting

The remaining 335 individuals were allocated to the possible samples (Applicants only, Reviewers only, and those who are both an Applicant and a Reviewer) by considering the impact of sampling weights on the precision of estimates generated from each of the samples. A widely used measure for assessing the degree to which sampling weights affect the precision of statistical estimates is the design effect. The design effect for a given statistical estimate is the ratio of the variance of the estimate under the appropriate complex sampling process to the variance of the estimate assuming the underlying data arose from a simple random sample. Since there are as many design effects as there are potential estimates, a particular approximation of the design effect is used in practice. It is defined as follows: given a sample of n individuals with associated sampling weights w_i and average sampling weight W, then:

Deff = 1 + (1/n) * [ Sum (w_i - W)^2 ] / W^2

In the case of simple random sampling, the design effect is 1. When the design effect is larger than 1, the variances of estimates are larger than they would be under simple random sampling. The general rule is to keep the Deff from going above 2. With that in mind, the additional 335 individuals were allocated in order to reduce this design effect.
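For illustration, this approximation (often attributed to Kish) can be computed directly from a set of weights; a minimal sketch:

```python
def design_effect(weights):
    """Approximate design effect due to unequal weighting:
    Deff = 1 + (1/n) * sum((w_i - W)^2) / W^2, with W the mean weight."""
    n = len(weights)
    W = sum(weights) / n
    return 1 + sum((w - W) ** 2 for w in weights) / (n * W ** 2)

print(design_effect([1.0] * 100))              # 1.0  -- simple random sampling
print(design_effect([1.0] * 50 + [3.0] * 50))  # 1.25 -- unequal weights inflate variance
```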


After allocating the additional 335 individuals, the design effects were 1.78 for the applicant only sample, 1.82 for the reviewer only sample, and 1.75 for the sample of individuals who were both an applicant and a reviewer.

Sample Power Analysis

Power estimates were calculated for comparing various race and ethnicity groups for each of the following samples: 1) Applicant only; 2) Reviewer only; 3) Applicant only plus those who are both an Applicant and a Reviewer and were selected to receive the applicant questionnaire; and 4) Reviewer only plus those who are both an Applicant and a Reviewer and were selected to receive the reviewer questionnaire. The power estimates ranged from a minimum of 14% to a maximum of 44% for detecting a difference of 5%, and from a minimum of 51% to a maximum of 99% for detecting a difference of 10%.

B.1.3 Response Rates

The response rates for the survey will be calculated based on the recommendations of the American Association for Public Opinion Research (AAPOR) published in its Standard Definitions: Final Dispositions of Case Codes and Outcome Rates for Surveys. The formula for the response rate will be defined as follows:


RR4 = (I + P) / [(I + P) + (R + NC + O)]


Where

I = Complete Interview

P = Partial Interview

R = Refusal

NC = Non-Contact

O = Other Non-Response


Note that this formula differs from the AAPOR RR4 formula in that, since all individuals in the NIH-provided sampling frame are assumed to be eligible for the study, no estimate of the number of eligible individuals among those with unknown eligibility is included in the denominator. Adjustments to the response rate formula can be made if it is later determined that some individuals are not eligible.
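A minimal sketch of the computation, with illustrative (not projected) disposition counts:

```python
def response_rate(complete, partial, refusal, non_contact, other):
    """RR = (I + P) / [(I + P) + (R + NC + O)]; every frame member is
    assumed eligible, so no unknown-eligibility term is needed."""
    responded = complete + partial
    return responded / (responded + refusal + non_contact + other)

# Illustrative dispositions for a sample of 1,521 applicants:
print(response_rate(complete=1100, partial=117, refusal=60, non_contact=200, other=44))
# 0.8001... -- roughly the 80% completion assumed in Section A.12
```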

B.1.4 Sample Weights

One nonresponse-adjusted sample weight will be created for the applicant sample and one weight for the reviewer sample. These weights will consist of a product of two factors: the base weight and the nonresponse adjustment. These are defined as follows:

  1. The base weight (for a given sample) is the inverse of the unconditional probability of selecting a sample member into the sample. This weight accounts for the stratification used in the sample design. Note that if all sampled individuals respond, no nonresponse adjustment is necessary.

  2. The nonresponse adjustment (for a given sample) is an adjustment applied to the sampling weights of the respondents to account for sample members who do not respond to the survey. In general, this adjustment will be greater than 1, so that each respondent represents both themselves and a portion of the nonrespondents.

There are numerous ways of constructing a nonresponse adjustment. For each of the applicant and reviewer samples, we plan to adjust the base weights within strata using a simple ratio adjustment, as sketched below.
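A minimal sketch of the base weight and within-stratum ratio adjustment, with illustrative record fields:

```python
from collections import defaultdict

def adjust_weights(sample):
    """Apply a simple within-stratum ratio nonresponse adjustment.
    Each record: {'stratum': str, 'base_weight': N_h / n_h, 'responded': bool}.
    Respondents' weights are inflated so their sum recovers the stratum's
    total base weight (i.e., the stratum population size)."""
    total = defaultdict(float)        # sum of base weights over all sampled
    resp_total = defaultdict(float)   # sum of base weights over respondents
    for rec in sample:
        total[rec["stratum"]] += rec["base_weight"]
        if rec["responded"]:
            resp_total[rec["stratum"]] += rec["base_weight"]
    for rec in sample:
        if rec["responded"]:
            factor = total[rec["stratum"]] / resp_total[rec["stratum"]]
            rec["adj_weight"] = rec["base_weight"] * factor   # factor >= 1
    return sample

# Stratum of N_h = 100 with n_h = 10 sampled (base weight 10); 8 respond:
sample = [{"stratum": "A", "base_weight": 10.0, "responded": i < 8} for i in range(10)]
adjusted = [r["adj_weight"] for r in adjust_weights(sample) if r["responded"]]
print(adjusted[0], sum(adjusted))   # 12.5 100.0 -- respondents represent all 100
```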

B.1.5 Estimation Procedure

Data analysis will be performed using the SUDAAN® software. SUDAAN® can handle correlated observations in a general sense, with nonparametric and parametric approaches available. Base SAS® software will be used for data manipulation and tabulation of results.

B.2 Procedures for the Collection of Information

B.2.1 Data Collection Procedures

Sample members will be asked to complete the surveys online. The basic steps involved in the data collection process for both the Reviewer Survey and the Applicant Survey include:

  • A lead letter will be sent to each sample member via regular U.S. Postal Service mail (Attachment 5). The letter will be printed on NIH letterhead and signed by a senior NIH official. It will explain the purpose of the survey and why the recipient was selected to participate.

  • Three to five days after the lead letter is sent, an e-mail invitation will be sent to all sample members (Attachment 6).  It will again invite the sample member to participate in the survey and will provide a hyperlink to the survey Website. Immediately afterwards, a separate email (Attachment 7) containing a username and password will be sent to the sample members for them to access the survey online.

  • One week after the e-mail invitation, a reminder e-mail will be sent to all sample members (Attachment 8). The e-mail will encourage those who have not yet logged in to the Website to participate in the survey. Immediately afterwards, a separate email (Attachment 7) containing a username and password will be sent to the sample members for them to access the survey online.

  • One week after the first e-mail reminder, a second e-mail reminder will be sent to all non-respondents (Attachment 8).   The e-mail will reinforce the purpose and relevance of the survey.  Immediately afterwards, a separate email (Attachment 7) containing a username and password will be sent to the non-respondents for them to access the survey online.

  • One week after the second e-mail reminder, a third e-mail reminder will be sent to all remaining non-respondents (Attachment 8).   Immediately afterwards, a separate email (Attachment 7) containing a username and password will be sent to the remaining non-respondents for them to access the survey online.

  • In addition, a final reminder letter (Attachment 9) will be mailed by express mail (FedEx) along with a hardcopy version of the survey. Enclosed with the letter will be a postage-paid business reply envelope for returning the completed questionnaire.

B.3 Methods to Maximize Response Rates and Deal with Non-response

The ability to gain the cooperation of potential respondents is key to the success of these two surveys. Consistent with sound survey methodology, the design of the survey will include approaches to maximize response rates, while retaining the voluntary nature of the effort. We will use the following approaches to maximize response rates for the surveys:

  • Participation will be made as easy and non-burdensome as possible by designing each questionnaire to take no more than an average of 15 minutes to complete.

  • The online instruments will be designed to be clear and easy to understand. Thorough usability testing of the survey instruments will be conducted to eliminate technical errors and to ensure ease of navigation and use.

  • Advance outreach will raise awareness of the surveys and encourage participation (e.g., announcements on NIH Websites and in newsletters).

  • The lead letter and introductory e-mail invitations will inform sample members of the study. They will contain enough information to generate interest in the surveys. The letter and email will provide a point of contact at RTI for additional information.

  • Follow-up e-mails will remind sample members about the survey, and encourage participation. These reminders will always include a link to the survey.

  • A final reminder letter will include a hardcopy version of the survey to provide an alternative mode for answering the questions.

B.4 Test of Procedures or Methods to be Undertaken

The questions in each survey have been pre-tested with members of the respondent groups. A total of 16 interviews were completed (9 with reviewers, 7 with applicants). All interviews were conducted by telephone in a secure, private setting, by trained cognitive interviewers. The interviews included a debriefing of the pretest respondents to clarify responses.

The survey instruments have also been tested through a modified Question Appraisal System (QAS). With the QAS, the questions in the instrument were analyzed in relation to the tasks required of respondents (to understand and respond to the questions), and the structure and effectiveness of the questionnaire form itself were evaluated. RTI International’s Question Appraisal System (QAS-04) was used to guide this instrument review. This coding system constitutes an item taxonomy that describes the cognitive demands of the questionnaire and documents question features that are likely to lead to response error, including problems of comprehension, task definition, information retrieval, judgment, and response generation. This appraisal was used to identify possible revisions to item wording, response wording, questionnaire formats, and question ordering/instrument flow.

B.5 Individuals Consulted on Statistical Aspects and Individuals Collecting and/or Analyzing Data

Dr. David Wilson

RTI International

3040 Cornwallis Road

Research Triangle Park, NC 27709

Phone: 919-541-6990

E-mail: dwilson@rti.org

[1] NIH (2008). 2007-2008 Peer Review Self-Study, Final Draft. URL: http://enhancing-peer-review.nih.gov/meetings/NIHPeerReviewReportFINALDRAFT.pdf



[2] 1,746 individuals with unknown Hispanicity were assumed to be non-Hispanic for purposes of sample selection. 7,523 individuals with unknown race were included in the “Other” race category, along with 29,504 Whites and four multi-racial individuals who were not classifiable as Asian, Black, Native American, or Pacific Islander. Four Hispanic Pacific Islanders were classified as Hispanic, Other for purposes of sample selection because of the extremely limited number of individuals in this group.

[3] Janet D. Elashoff (2005). nQuery Advisor, version 6.0. Statistical Solutions, Saugus, MA, USA.

