OF COMMUNICATION TESTING FOR DRUG PRODUCTS (0910-0695)
TITLE OF INFORMATION COLLECTION: Promotional Implications of Proprietary Prescription Drug Names: Pretest
DESCRIPTION OF THIS SPECIFIC COLLECTION
Statement of need:
As part of the prescription drug approval process, the Food and Drug Administration’s (FDA) Center for Drug Evaluation and Research’s (CDER’s) Office of Prescription Drug Promotion (OPDP) conducts a premarket review of proposed proprietary drug names to evaluate if the proposed names “overstate the efficacy, minimize the risk, broaden the indication, or make unsubstantiated superiority claims for the product, or is overly ‘fanciful’ by misleadingly implying unique effectiveness or composition, or is otherwise false or misleading” (Ref. 1).
Previous research suggests that the properties of names can convey promotional implications about a product’s characteristics. Researchers have found that brand names can use words (semantic symbolism) or sounds (sound symbolism) to imply information about a product’s benefits (e.g., using “zzz” in a drug name to suggest sleep), or to convey properties such as size, shape, and speed (Refs. 2-4). In an experimental study comparing brand names with and without sound symbolism, participants perceived greater benefits for the product whose brand name used sound symbolism. People may also have higher recall of brand names with promotional implications than of neutral brand names (Ref. 2). Research has also suggested an inverse relationship between perceptions of risks and benefits (Ref. 5), so inflated benefit perceptions may influence risk perceptions. Understanding whether the characteristics of proprietary drug names (e.g., number of letters in the name, location of prefixes/suffixes/intervening letters, or the connection between two parts of a name) can have promotional implications will help FDA make informed policy decisions.
We plan to conduct an experimental study that examines how features of drug names affect consumers’ perceptions, to determine whether and when proprietary names can influence overall product perceptions as well as benefit and risk perceptions. The first step in this process is a pretest to develop appropriate target and control names for use in the main study. That pretest is the purpose of this generic clearance request.
Intended use of information:
The results of this research will provide us with a selection of target names as well as control names to use in the main study collection. The research described in this generic clearance will not be used beyond this purpose.
Description of respondents:
The pretest sample will include 120 healthcare professionals (HCPs) and 120 consumers. Both samples will be recruited from Lightspeed Health’s Internet panels. Participants for our study will be randomly selected using the study’s profile criteria, taking into account predicted response rates by target demographic to avoid over-contacting panelists and to ensure that we do not introduce a bias in the responses. No weighting of the data will be required because the objective of the studies is to estimate the causal effects of experimental manipulations rather than to estimate descriptive statistics for these populations (Ref. 6). To avoid self-selection bias, Lightspeed Health will not disclose project details, such as the true purpose of the study, in its email invitations to participants.
The HCPs for this study will include primary care providers who are physicians, nurse practitioners, and physician assistants. To be eligible, they must spend at least 50% of their time in patient care.
The consumer sample will be drawn from the general population of individuals who are 18 years of age or older. We will exclude individuals who work in the healthcare, marketing, advertising, and pharmaceutical industries or for HHS. The study participants will not be probability-based samples of consumers, but we will aim to recruit a mix of participants in terms of race/ethnicity, gender, and other characteristics.
Date(s) to be conducted and location(s):
We plan to collect data between July and August of 2019, depending on date of OMB and FDA IRB approval.
How the Information is being collected:
Lightspeed Health will recruit study participants and send invitations to the online survey. Invited panelists will review an online informed consent form, and panelists who agree to participate will begin the survey. We will begin data collection with a soft launch (10% of completes) to verify that randomization is working as intended and to check for any other programming errors. Lightspeed maintains the quality of its panel by rigorously validating HCPs against known HCP databases, which include, for example, license numbers and work email addresses. During the survey, Lightspeed will conduct Internet Protocol checks to weed out duplicates in the consumer sample and will require a PIN code for redeeming honoraria. Lightspeed will also conduct data quality and consistency checks and remove poor performers from the panel. In addition, mobile compatibility is standard on all of its surveys: all questions are compatible with any device’s screen size and orientation.
The primary purpose of the pretest is to establish neutral names and extreme names to be used for each of two medical indications (e.g., high blood pressure and allergies) in the main study. We will test four extreme candidate names and four neutral candidate names for each indication. We will use a within-subjects design for the pretest to increase efficiency and limit the required sample size. Participants will see the candidate names in random order.
The survey will not exceed 20 minutes. Participants will first be randomized to see either indication 1 or indication 2 first; within each indication, they will see the candidate names in random order and answer questions after each. Survey items will include the risk and benefit perception items that will be used in the main study, as well as more direct measures of the “extremeness” and “neutralness” of the names.
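To illustrate this presentation scheme, the following sketch (hypothetical Python; the candidate names are placeholders, not the actual pretest names) randomizes the indication order and the within-indication name order for one participant. It is illustrative only and is not part of the survey instrument.

```python
import random

# Placeholder candidate names; the real pretest names will be developed separately.
CANDIDATES = {
    "high blood pressure": ["NameA1", "NameA2", "NameA3", "NameA4",
                            "NameA5", "NameA6", "NameA7", "NameA8"],
    "allergies": ["NameB1", "NameB2", "NameB3", "NameB4",
                  "NameB5", "NameB6", "NameB7", "NameB8"],
}

def presentation_order(rng):
    """Return (indication, candidate name) pairs in the order one participant sees them."""
    indications = list(CANDIDATES)
    rng.shuffle(indications)      # randomize which indication is shown first
    order = []
    for indication in indications:
        names = list(CANDIDATES[indication])
        rng.shuffle(names)        # randomize candidate order within the indication
        order.extend((indication, name) for name in names)
    return order

# Example: the order for one participant, with a fixed seed for reproducibility.
print(presentation_order(random.Random(42)))
```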
Confidentiality of Respondents:
Assurance of Privacy Provided to Participants
RTI has designated IT Security and Privacy Offices to review and ensure compliance with current federal regulations, guidelines, and client requirements. RTI’s network meets all National Institute of Standards and Technology confidentiality, integrity, and availability security standards, allowing RTI to provide appropriate security for the information. RTI complies with all ethical principles and regulatory requirements involving human subjects research as specified in the Federal Regulations for the Protection of Human Subjects, 45 CFR Part 46.
Recordkeeping and Confidentiality
All data will be collected with an assurance that participants’ identities and personal demographic information will be held confidential and will not be used for purposes outside the scope of the research described here without their consent. The consent form will contain a statement emphasizing that a participant’s identity and personal information will not be linked to his or her responses and that participants can withdraw from the study at any time. All analyses will be conducted in the aggregate, and respondent information will not be appended to the data file used.
Lightspeed Health and RTI will not share personal information about participants with any third party without the participant’s permission unless required by law to protect their rights or to comply with judicial proceedings, court orders, or other legal processes. Further, if a participant makes a direct threat of harm to himself or herself or to others, RTI reserves the right to take action out of concern for the participant and for others.
No personally identifiable information will be sent to FDA. All information that can identify individual respondents will be maintained by the independent contractor in a form that is separate from the data provided to FDA. The information will be kept in a secured fashion that will not permit unauthorized access. The privacy of the information submitted is protected from disclosure under the Freedom of Information Act (FOIA) under sections 552(a) and (b) (5 U.S.C. 552(a) and (b)), and by Part 20 of the agency’s regulations (21 CFR part 20).
All electronic data will be maintained in a manner consistent with the Department of Health and Human Services’ ADP Systems Security Policy as described in the DHHS ADP Systems Manual, Part 6, chapters 6-30 and 6-35. All data will also be maintained in consistency with the FDA Privacy Act System of Records #09-10-0009 (Special Studies and Surveys on FDA Regulated Products).
Amount and justification for any proposed incentive:
Upon completion of the study, HCPs will receive an honorarium of $50. General population participants will receive points equivalent to $1.50. These points can be redeemed for the cash equivalent in PayPal, Amazon e-certificates, Macy’s gift cards, or Bloomin’ Brands (Outback Steakhouse, Carrabba’s, etc.) gift cards, among other options.
The incentives proposed for HCPs in this study are lower than average, reflecting the fact that physicians are more willing to participate in surveys from Government agencies than in surveys from commercial organizations. The proposed incentives are the only remuneration offered to participants for completing the survey; participants do not receive additional points or awards. If no incentive were offered, it is unlikely that a sufficient number of physicians would agree to participate in the study.
Incentives are intended to recognize the time burden placed on participants, encourage their cooperation, and convey appreciation for their contributions to the research. Numerous empirical studies have established that incentives can significantly increase participation rates (Refs. 7-8). Based on the research team’s extensive experience conducting online survey research of a similar nature with the identified populations, we have learned that incentives are necessary to sufficiently attract participants and ensure participants are incentivized to carefully answer the survey items.
In reviewing OMB’s guidance on the factors that may justify provision of incentives to research participants, we have determined that the following principles apply:
a. Improved coverage of specialized respondents.
Physicians are a difficult population to recruit for research, and their response rates have been decreasing in recent years. OMB offers a justification that supports the use of honoraria in this case: “to improve coverage of specialized respondents, rare groups, or minority populations” (Ref. 9).
Physicians are specialized respondents and require unique incentives to ensure participation. Numerous studies have documented the difficulty of recruiting physicians for research, primarily because of the time burden involved (Refs. 7, 10). Physicians’ time is limited and, thus, quite valuable. Cash incentives, rather than nonmonetary gifts or lottery entries, can help improve response rates and survey completion rates (Refs. 11-14). A meta-analysis of methodologies for improving response rates in physician surveys examined 21 studies published between 1981 and 2006 that investigated the effect of monetary incentives on response rates in surveys of physicians. The authors found that the odds of responding to a survey with an incentive were 2.13 times greater than the odds of responding to a survey without one (Ref. 7). Martins et al. (2012) conducted a review of published oncology-focused studies to investigate methods for improving response rates; their meta-analysis also showed that monetary incentives were effective at increasing response rates (Ref. 15).
Additionally, a high honorarium has proven to be more successful than lower honoraria. For the Comparative Price Information in Direct-to-Consumer and Professional Prescription Drug Advertisements pretest (OMB Control # 0910-0791), we found that among PCPs and endocrinologists receiving higher incentives (around $45–$60) response rates were 4 to 11 percentage points higher than when lower incentives ($10 or $15) were used (Ref. 16). Because providing a market-rate incentive tends to increase response rates, it also improves data quality. Previous research suggests that providing incentives may help reduce sampling bias by increasing rates among individuals who are typically less likely to participate in research (such as primary care physicians or physician specialists, e.g., Refs. 17-18) and ensuring participation from a cross section of physicians, which will improve data quality by improving validity and reliability.
b. An honorarium of up to $100 was previously approved under recent OMB packages.
Similar honoraria have been used on other recent surveys. Below are higher incentives that have also been approved for online surveys of similar length.
$100 for PCPs and specialists for a 20-minute survey web mixed mode (OMB Control #0990-0415)
$75 for specialists and $55 for primary care providers (OMB Control #0910-0730)
$45 for PCPs and $60 for specialists (OMB Control # 0910-0791)
According to item 76 in the Memorandum for the President’s Management Council, past experience can be used to justify a higher honorarium: “Agencies may be able to justify the use of incentives by relating past survey experience, results from pretests or pilot tests, or findings from similar studies. This is especially true where there is evidence of attrition and/or poor prior response rates” (Ref. 9).
c. An incentive will improve data quality by improving validity and reliability.
OMB’s guidance states that a “justification for requesting use of an incentive is improvement in data quality. For example, agencies may be able to provide evidence that, because of an increase in response rates, an incentive will significantly improve validity and reliability to an extent beyond that possible through other means” (Ref. 9).
There are only a limited number of physicians in the online panel. Therefore, it is critical to maximize the number who respond to ensure sufficient power to detect meaningful differences between experimental conditions. An underpowered study increases the chance of a Type II error, that is, of failing to detect true differences between conditions (Ref. 19).
The honoraria are intended to recognize the time burden placed on participants, encourage their cooperation, and to convey appreciation for contributing to this important study. The use of modest incentives is expected to enhance survey response rates and reduce nonresponse bias. Numerous studies have shown that incentives can reduce nonresponse bias for key subgroups. Relevant to the proposed study, Juster and Suzman (1995) found that high incentives ($100 per individual) reduced nonresponse bias for people with high incomes (Ref. 20).
In terms of studies using online panels, use of monetary incentives is particularly important as the use of such incentives has been found to increase initial response rates, convert refusals, and reduce subsequent attrition (Ref. 21).
d. This incentive is consistent with those used in online studies conducted by the contractor (RTI) and the vendor.
Agencies may justify the use of incentives by “relating past survey experience” (Ref. 9). The contractor (RTI) and its online panel vendor are experts in their field. In their experience recruiting physicians, an honorarium of $40-$50 is the minimum amount needed to ensure successful recruitment and high data quality. In their experience, offering a lower honorarium could result in longer fielding times and project delays, less attentive respondents (and therefore more item nonresponse), lower response rates, and increased panel attrition.
Questions of a Sensitive Nature:
None.
Description of Statistical Methods (i.e. Sample Size & Method of Selection):
Using an 8 × 8 replicated Latin squares design for each sample and within each indication, we will be able to detect moderately small effects between drug names (f ≥ 0.15), assuming α = 0.05 and power = 0.90. Because the purpose of the pretest is to identify a single pair of neutral and extreme drug names for each study population in the main study, we will not use a family-wise error adjustment when examining pairwise comparisons between names. Instead, we will focus on the pair of names with the largest mean difference meeting the significance threshold. Post hoc pairwise comparisons will be sensitive to detect a medium-small standardized mean difference between candidate names (dz ≥ 0.30).
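As an illustrative check of the pairwise sensitivity figure (a sketch, not the formal power analysis): each post hoc comparison between two names is within-subjects and can be treated as a one-sample t-test on the paired differences. The Python snippet below, using statsmodels, solves for the sample size needed to detect dz = 0.30 at α = 0.05 with 90% power.

```python
from statsmodels.stats.power import TTestPower

# A within-subjects comparison between two candidate names reduces to a
# one-sample t-test on the paired differences, with standardized effect dz.
n_needed = TTestPower().solve_power(
    effect_size=0.30,          # dz, the standardized paired mean difference
    alpha=0.05,
    power=0.90,
    alternative="two-sided",
)
print(f"Participants needed to detect dz = 0.30: {n_needed:.0f}")
# Roughly 119, consistent with the planned 120 HCPs and 120 consumers per sample.
```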
At the conclusion of the pretest survey, participants will complete a ranking task in which they will view all of the candidate names on the screen and be asked to drag and drop them in order from most neutral to least neutral and from most extreme to least extreme.
BURDEN HOUR COMPUTATION (number of responses × estimated response or participation time in minutes ÷ 60 = annual burden hours):

| | No. of Respondents | No. of Responses per Respondent | Total Annual Responses | Average Burden per Response | Total Hours |
| Number to complete the screener | 2,460 | 1 | 2,460 | 0.08 (5 min.) | 197 |
| Number to complete the study (included in number to complete screener) | 240 | 1 | 240 | 0.33 (20 min.) | 79 |
| Total | | | | | 276 |
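As a quick arithmetic check of the table (a sketch using the rounded per-response burden values shown above):

```python
# Annual burden hours = number of responses x rounded hours per response.
screener_hours = round(2460 * 0.08)   # 5 min ≈ 0.08 hr -> 197 hours
study_hours = round(240 * 0.33)       # 20 min ≈ 0.33 hr -> 79 hours
print(screener_hours, study_hours, screener_hours + study_hours)  # 197 79 276
```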
REQUESTED APPROVAL DATE: July 2019
NAME OF PRA ANALYST & PROGRAM CONTACT:
Ila S. Mizrachi
Paperwork Reduction Act Staff
(301)796-7726
Amie C. O’Donoghue, Ph.D.
Social Science Analyst
Amie.odonoghue@fda.hhs.gov
301-796-0574
FDA CENTER: Center for Drug Evaluation and Research, Office of Prescription Drug Promotion
References
1. U.S. Food and Drug Administration (FDA). (2016). Contents of a complete submission for the evaluation of proprietary names: Guidance for industry. Retrieved from https://www.fda.gov/downloads/Drugs/Guidances/ucm075068.pdf
2. Keller, K. L., Heckler, S. E., & Houston, M. J. (1998). The effects of brand name suggestiveness on advertising recall. Journal of Marketing, 62(1), 48–57. doi:10.2307/1251802
3. Klink, R. R. (2001). Creating meaningful new brand names: A study of semantics and sound symbolism. Journal of Marketing Theory and Practice, 9(2), 27–34.
4. Preziosi, M. A., & Coane, J. H. (2017). Remembering that big things sound big: Sound symbolism and associative memory. Cognitive Research: Principles and Implications, 2(1), 10. doi:10.1186/s41235-016-0047-y
5. Finucane, M. L., Alhakami, A., Slovic, P., & Johnson, S. M. (2000). The affect heuristic in judgments of risks and benefits. Journal of Behavioral Decision Making, 13(1), 1–17. doi:10.1002/(SICI)1099-0771(200001/03)13:1<1::AID-BDM333>3.0.CO;2-S
6. Solon, G., Haider, S. J., & Wooldridge, J. (2015). What are we weighting for? The Journal of Human Resources, 50(2), 301–316. doi:10.3368/jhr.50.2.301
7. VanGeest, J., Johnson, T., & Welch, V. (2007). Methodologies for improving response rates in surveys of physicians: A systematic review. Evaluation and the Health Professions, 30, 303–321.
8. Shettle, C., & Mooney, G. (1999). Monetary incentives in U.S. government surveys. Journal of Official Statistics, 15, 231–250.
9. Office of Management and Budget. (2006). Questions and answers when designing surveys for information collections. https://obamawhitehouse.archives.gov/sites/default/files/omb/inforeg/pmc_survey_guidance_2006.pdf. Accessed December 18, 2018.
10. Asch, S., Connor, S. E., Hamilton, E. G., & Fox, S. A. (2000). Problems in recruiting community-based physicians for health services research. Journal of General Internal Medicine, 15(8), 591–599.
11. Epley, N., & Gilovich, T. (2006). The anchoring-and-adjustment heuristic: Why the adjustments are insufficient. Psychological Science, 17(4), 311–318.
12. Höhne, J. K., & Krebs, D. (2017). Scale direction effects in agree/disagree and item-specific questions: A comparison of question formats. International Journal of Social Research Methodology, 21(1), 91–103.
13. Saris, W. E., Revilla, M., Krosnick, J. A., & Shaeffer, E. M. (2010). Comparing questions with agree/disagree response options to questions with item-specific response options. Survey Research Methods, 4, 61–79.
14. Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In Handbook of Survey Research (pp. 263–314). Bingley, United Kingdom: Emerald Group Publishing Limited.
15. Martins, Y., Lederman, R., Lowenstein, C., et al. (2012). Increasing response rates from physicians in oncology research: A structured literature review and data from a recent physician survey. British Journal of Cancer, 106(6), 1021–1026.
16. Aikin, K., Betts, K., Boudewyns, V., Stine, A., & Southwell, B. (2016). Physician responsiveness to survey incentives and sponsorship in prescription drug advertising research. Annals of Behavioral Medicine, 50(Suppl), s251.
17. Converse, J. M., & Presser, S. (1986). Survey questions: Handcrafting the standardized questionnaire (No. 63). Thousand Oaks, CA: SAGE Publications.
18. DeVellis, R. F. (2016). Scale development: Theory and applications (Vol. 26). Thousand Oaks, CA: SAGE Publications.
19. Cohen, J., Cohen, P., West, S. G., & Aiken, L. S. (2003). Applied multiple regression/correlation analysis for the behavioral sciences (3rd ed.). Mahwah, NJ: Lawrence Erlbaum Associates.
20. Juster, F. T., & Suzman, R. (1995). An overview of the Health and Retirement Study. Journal of Human Resources, 30, S7–S56.
21. Singer, E., & Kulka, R. A. (2002). Paying respondents for survey participation. In M. Ver Ploeg, R. A. Moffitt, & C. F. Citro (Eds.), Studies of Welfare Populations: Data Collection and Research Issues. Washington, DC: National Academy Press.