SUPPORTING STATEMENT
Part A
Evaluating the Implementation of Products to Help Learning Health Systems Use Findings from AHRQ Evidence Reports
Version: January 7, 2020
Agency for Healthcare Research and Quality (AHRQ)
Table of Contents
A. Justification
1. Circumstances That Make the Collection of Information Necessary
2. Purpose and Use of Information
3. Use of Improved Information Technology
4. Efforts to Identify Duplication
5. Involvement of Small Entities
6. Consequences if Information Collected Less Frequently
7. Special Circumstances
8. Consultation outside the Agency
9. Payments/Gifts to Respondents
10. Assurance of Confidentiality
11. Questions of a Sensitive Nature
12. Estimates of Annualized Burden Hours and Costs
13. Estimates of Annualized Respondent Capital and Maintenance Costs
14. Estimates of Annualized Cost to the Government
15. Changes in Hour Burden
16. Time Schedule, Publication and Analysis Plans
17. Exemption for Display of Expiration Date
List of Attachments
A. Justification

1. Circumstances That Make the Collection of Information Necessary

The mission of the Agency for Healthcare Research and Quality (AHRQ), set out in its authorizing legislation, the Healthcare Research and Quality Act of 1999 (see https://www.ahrq.gov/sites/default/files/wysiwyg/policymakers/hrqa99.pdf), is to enhance the quality, appropriateness, and effectiveness of health services, and access to such services. This is done through the establishment of a broad base of scientific research and through the promotion of improvements in clinical and health systems practices, including the prevention of diseases and other health conditions. AHRQ promotes health care quality improvement by conducting and supporting:
1. research that develops and presents scientific evidence regarding all aspects of health care;
2. the synthesis and dissemination of available scientific evidence for use by patients, consumers, practitioners, providers, purchasers, policy makers, and educators; and
3. initiatives to advance private and public efforts to improve health care quality.
Also, AHRQ conducts and supports research, evaluations, and demonstration projects focused on (A) the delivery of health care in inner-city areas and in rural areas (including frontier areas); and (B) health care for priority populations, which shall include (1) low-income groups, (2) minority groups, (3) women, (4) children, (5) the elderly, and (6) individuals with special health care needs, including individuals with disabilities and individuals who need chronic care or end-of-life health care.
AHRQ’s Evidence-based Practice Center (EPC) Program has 20 years of experience in synthesizing research to inform evidence-based health care practice, delivery, policies, and research. The AHRQ EPC program is committed to partnering with organizations to make sure its evidence reports can be used in practice. Historically, most of its evidence reports have been used by clinical professional organizations to support the development of clinical practice guidelines or Federal agencies to inform their program planning and research priorities. To improve uptake and relevance of the AHRQ EPC’s evidence reports, specifically for health systems, AHRQ has funded the American Institutes for Research (AIR) to convene a panel of learning health systems (LHSs) to provide feedback to the AHRQ EPC program in developing and disseminating evidence reports that can be used to improve the quality and effectiveness of patient care.
Even if an EPC evidence report topic addresses LHS-specific evidence needs, the density of the information in an evidence report may preclude its easy review by busy LHS leaders and decisionmakers. AHRQ understands that to facilitate use by LHSs, complex evidence reports must be translated into a format that promotes LHS evidence-based decision making and can be contextualized within each LHSs’ own system-generated evidence. Such translational products, for the purposes of this supporting statement, are referred to simply as “products”.
The purpose of this information collection is to support a process evaluation of decisionmaking around, and use and implementation of, two such products into LHS decision processes, workflows, and clinical care. The evaluation has the following goals:
Document how LHSs prioritize filling evidence gaps, make decisions about using evidence, and implement tools to support and promote evidence use in clinical care.
Assess the contextual factors that may influence implementation success; associated implementation resources, barriers and facilitators; and satisfaction of LHS leaders and clinical staff.
Provide the AHRQ EPC program with necessary insights about the perspectives, needs, and preferences of LHS leaders and clinical staff as related to decisions and implementation of products into practice.
To achieve the goals of this project, the following data collection activities will be implemented:
key informant interviews with health system leaders, clinicians and staff; and
compilation and coding of notes from “implementation support” meetings (“check-ins”) between an implementation facilitator and site champions who are implementing the products.
These sources of data are further described below. A listing of key topics to be addressed in the interviews and check-ins is provided below in Exhibits 2, 3, and 4.
Brief Background on the Products to be Implemented by LHSs in this Study
Through the LHS panel and AHRQ EPC contracts, AHRQ is funding the development of two products that are specifically intended to make the findings from EPC evidence reports more accessible and usable by health systems. These are the products that will be offered to LHSs for potential implementation during this project. The LHS panel provides guidance to AHRQ in: (1) the development of new products suited to LHS stakeholder needs; and (2) the experience of health systems as they review, test, and implement products into practice within the panelists’ respective health systems. Exhibit 1 provides a brief description of the products in development, a “triage tool” and a “data visualization tool”, that have been designed to support LHS use of AHRQ evidence reports.
Exhibit 1. Description of products in development to support LHS use of AHRQ evidence reports
Product | Description
Triage tool | The LHS triage tool presents high-level results of evidence reports that enable leaders within LHSs to quickly understand the relevance of the reports to their organization, share high-level information with key stakeholders (e.g., healthcare executives), and link to more granular data from the report. The goal of the tool is to help LHS leaders and key stakeholders triage the information to get more detailed information from the report and, ultimately, to help them make decisions on implementing the evidence.
Data visualization tool | An EPC program product that provides data visualization of evidence report findings, developed for AHRQ’s NextGen architecture (see Section 4, Efforts to Identify Duplication).
Note: Names of products are subject to change.
AIR is in the process of working with the LHS panel to determine the specific AHRQ evidence report(s) that will be used in the products. Depending on the topic(s) of the report(s) selected by the LHS panel members, an LHS may opt to explore using information portrayed by additional products already developed by AHRQ, or in use within their own health systems, to complement the selected triage tool or data visualization tool. In some cases, an LHS might implement a full suite of products around the selected clinical topic. The evaluation will capture the variation and unique experience of LHSs.
Key Informant Interviews
There will be two rounds of key informant interviews: (1) in-person preliminary interviews will be conducted early in the implementation period (months 1-3) with LHS leaders and clinicians and will focus on health systems’ rationale for selecting each product and early experiences with its roll-out into practice; (2) remote follow-up interviews will be conducted via telephone later in the implementation period (months 10-11) with two sets of stakeholders: (a) LHS leaders and (b) clinicians/staff (hereafter, “clinical staff”) actively implementing the product. These follow-up interviews will focus on health systems’ experiences implementing their selected product(s). All interviews (preliminary and follow-up) will be 60 minutes in duration, recorded with the permission of the key informants, and transcribed for analysis. Up to 88 total interviews will be conducted across the two rounds of key informant interviews. Assuming the same LHS leaders participate in the preliminary and follow-up interviews, the key informant interviews will involve 4-5 LHS leaders and clinical staff from each of the eleven LHS panel member organizations participating in the study. Additional detail about the information collection components is provided below.
In-person preliminary interviews. The preliminary interviews will include 2-3 LHS leaders/decisionmakers at each of eleven implementation sites for a maximum of 33 interviews in the first round of data collection. The interviews will be conducted during implementation site visits that are occurring early in the project to support the health systems’ testing and/or roll out of the products into clinical workflows. Specific topics explored in the preliminary interviews are noted in Exhibit 2 and include LHSs’ decision to participate in implementation, decision considerations for the selected product, experiences leading the implementation, and early experiences and perceptions of the selected product(s). Attachment A contains the draft interview guide for the preliminary interviews. To limit respondent burden, we will use the implementation site visits as an opportunity for conducting the preliminary interviews, thereby limiting the need to schedule additional time with respondents for a phone interview. If a respondent has limited availability during the site visit, however, we may need to do the preliminary interview remotely or substitute the respondent with another qualified staff member who is available during the implementation site visit.
Exhibit 2. Questions for preliminary key informant interviews
The full text of the leadership interview questions appears in the draft interview guide (Attachment A). The questions open with an implementation update (confirming what was learned as part of early planning and support calls and asking for updates) and then address the following implementation domains:
- Relative advantage, acceptability, and compatibility
- Appropriateness
- Feasibility
- Fidelity
- Adoption
- Actionability
- Implementation cost
- Reach
- Effectiveness
- Cost of care
- Sustainability
Note: Some of the responses to questions will be supplemented by what is learned during the implementation support calls / check-ins.
Remote follow-up interviews. The follow-up interviews will include the 2-3 LHS leaders/decisionmakers from the preliminary interviews (maximum n=33), along with 2 additional clinical staff per site (n=22), at each of the eleven implementation sites for a maximum of 55 follow-up interviews. Specific topics explored in the follow-up interviews are noted in Exhibit 3 and include LHS leaders’ and clinical staff’s experiences with each product as well as their perceptions of the relative advantage, acceptability/compatibility, appropriateness, and feasibility of using the product; implementation fidelity (i.e., whether the implementation went as planned), reach, barriers and facilitators, and associated costs; any outcomes of implementing the product (e.g., whether it achieved any intended systemic changes); and the likely sustainability of continuing to use the product in practice. Attachments B and C contain the draft guides for the follow-up interview data collection with LHS leaders and clinical staff, respectively.
Exhibit 3. Questions for follow-up key informant interviews
The full text of the follow-up questions for LHS leaders and for clinical staff appears in the draft interview guides (Attachments B and C, respectively). For both respondent groups, the interviews open with an implementation update (confirming what was learned as part of implementation support calls and asking for updates) and then address the following implementation domains:
- Relative advantage, acceptability, and compatibility
- Appropriateness
- Feasibility
- Fidelity
- Adoption
- Actionability
- Implementation cost
- Reach
- Effectiveness
- Cost of care
- Sustainability
The two sets of in-depth qualitative interviews will allow for a nuanced exploration of both what LHSs value about the products and what it takes to successfully implement such tools into practice. Because research on the implementation and uptake of products that promote evidence use in LHS settings is sparse, it is important that the evaluation use a data collection strategy that will yield rich information about the experiences of health systems, LHS decisionmakers, and the staff implementing the tools in practice. A quantitative survey would not yield the depth of individual feedback needed to capture the experience of implementing these tools and the unique contexts of the health systems. Thus, interviews are the preferred method of systematically collecting these data.
Implementation Support Meetings/ “Check-ins”
In addition to key informant interviews, which will be conducted only at the beginning and end of implementation, AHRQ will gather information throughout the implementation period by using monthly implementation support meetings between implementation facilitators and site champions as an ongoing opportunity to ask key questions about implementation progress. Although the primary goal of these check-in meetings is to provide technical assistance with implementation and recommendations for handling emergent challenges in the implementation process, they will also be a source of rich information for the evaluation. Because these meetings occur in real time as the implementation unfolds, they will reduce the potential biases (e.g., selective memory, recency effects, forgetting details about key events and their sequence) associated with only collecting data at the beginning or end of the implementation period.
These check-in meetings will occur by telephone and are intended to monitor implementation progress, provide support to health systems, and discuss next steps. AIR implementation facilitators for each site will schedule telephone conference calls with site champions (N=11), during which structured notes will be taken. These notes will be supplemented with relevant information from other touchpoints between the facilitators and champions (e.g., ad hoc calls, email exchanges, and voluntary participation in monthly shared learning events) as they naturally occur. Notetakers will capture and document information related to the key implementation domains noted in Exhibit 4 as these topics arise in check-in meetings and other facilitator/champion encounters throughout implementation. Exhibit 5 notes example topics that could be probed by facilitators during touchpoints with champions over the course of the implementation period.
Exhibit 4. Implementation domains framing the evaluation and structured notetaking for check-in sessions and other facilitator/champion encounters
The eleven implementation domains framing the evaluation are: relative advantage, acceptability, and compatibility; appropriateness; feasibility; fidelity; adoption; actionability; implementation cost; reach; effectiveness; cost of care; and sustainability.
Adapted from: Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, Griffey R, Hensley M. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research. 2011 Mar;38(2):65-76.
Exhibit 5. Guiding probes to assess implementation progress through informal facilitator/champion encounters
Possible facilitator/champion touchpoints | Facilitator’s probes for implementation status updates
Monthly check-in calls; ad hoc calls; email exchanges; monthly shared learning events | Probes about implementation progress, successes, and challenges as these topics naturally arise
Statutory Authority
This implementation evaluation is being conducted by AHRQ through its LHS contractor, AIR, pursuant to AHRQ’s statutory authority to conduct and support research on health care and on systems for the delivery of such care, including activities with respect to the quality, effectiveness, efficiency, appropriateness and value of health care services and with respect to quality measurement and improvement (42 U.S.C. 299a(a)(1) and (2)).
2. Purpose and Use of Information

The purpose of the information collection is to understand the decision considerations of LHSs as they choose to implement products that facilitate the use of information from evidence reports, and evaluate the implementation of such products into LHSs.
The purpose of developing, implementing, and evaluating new products that translate systematic reviews is to provide AHRQ EPCs with guidance on how best to deliver the results of their work so that health systems can more readily use the findings from evidence reviews to improve care and practice. Bridging the gap between evidence and practice requires products to deliver evidence that is timely, trustworthy, actionable, flexible, contextualized, and integrated. Even if a systematic review topic addresses LHS-specific evidence needs, the information still must be translated into a format that promotes LHS evidence-based decisionmaking and can be contextualized with each LHS’s own system-generated evidence.
Through consultation with the LHS panel, AHRQ has sought to help LHSs use evidence from systematic reviews to improve care. AHRQ will ultimately use the information gained from this information collection to inform strategies to better foster and promote uptake and use of the information in AHRQ’s evidence reports among LHSs. Studying the implementation of LHS-informed products will provide the AHRQ EPC program with needed insights about the perspectives, needs, and preferences of LHS leaders and clinical staff, with an overarching goal of applying those insights as design and methodological considerations for future evidence reports and/or products that are suited to the needs of LHSs. Further, information gained from the key informant interviews will help AHRQ understand (1) characteristics of health systems and implementation teams, (2) resources and needed support, and (3) other contextual factors associated with successful implementation of LHS-optimized products.
The evaluation of each product’s implementation is largely formative in nature, as AHRQ seeks information on the implementation process and context, as well as the facilitators and barriers health systems encounter in adopting and using the newly derived, LHS-informed products.
3. Use of Improved Information Technology

Data collection efforts will not involve the use of automated, electronic, mechanical, or other technological collection techniques. Data collection for the current study will involve only telephone or in-person qualitative key informant interviews and informal check-ins. Interviews will be audio recorded.
4. Efforts to Identify Duplication

AHRQ has made numerous efforts to identify duplicative information collection efforts. The Agency concludes there is some complementary work, including other projects within its own portfolio that inform the use of evidence reports and products among LHSs, but it is not aware of duplicative work. These complementary efforts are outlined below:
As part of developing the request for task order (RFTO) proposal for the LHS panel contract (RFTO# 18-233-SOL-00521), AHRQ reviewed published literature on programs and interventions to increase uptake of evidence reports in LHSs and determined that no duplicative efforts exist. While AHRQ found some relevant literature, it did not identify established approaches for helping LHSs use findings from AHRQ evidence reports.
AHRQ has reviewed its own pilot work and other agency-wide LHS work and found two projects that inform these efforts, but do not duplicate the planned information collection. The AHRQ EPC program has been involved with developing and pilot testing new potential products for health systems. These pilots helped inform the AIR team’s development of new products. In addition, one existing EPC program product, a data visualization for AHRQ’s NextGen architecture, will be implemented and evaluated through this information collection, along with the triage tool developed through the AIR LHS panel contract and informed by the LHS panel.
AHRQ is continuing close collaboration with its support contractors and projects, like the Systematic Review Data Repository (SRDR) and the AHRQ Scientific Resource Center’s (SRC’s) NextGen, to ensure that their work, as it relates to LHS use of evidence reports and LHS-focused products, is complementary and not duplicative.
5. Involvement of Small Entities

None of the LHSs represented in this project qualify as small entities or small businesses.
6. Consequences if Information Collected Less Frequently

The preliminary and follow-up key informant interviews described in this supporting statement will each be conducted one time in order to evaluate a one-time study. The check-in meetings will occur monthly, primarily to support implementation, with some of the information gathered from the check-ins also being used for the evaluation. The interviews are planned both early (i.e., preliminary interviews) and later (i.e., follow-up interviews) in the implementation period to capture the change in experiences over time, while the check-in meetings will provide brief feedback about the status of the implementation at multiple timepoints as it unfolds in real time. Information about LHS leaders’ decisionmaking, expectations, and early challenges is best captured when these ideas are fresh and untainted by the passage of time, hence the importance of the early preliminary interviews. Early interviews will also help implementation coaches identify supports that may be helpful to struggling teams. Interviews conducted toward the end of the implementation period will yield insights on lessons learned, whether implementation went as planned and achieved any systemic and process changes that may have been part of an LHS’s unique implementation goals, and any plans for sustained use of the product after respondents have had sufficient exposure to it. The check-in meetings and other facilitator/champion touchpoints, with their ongoing opportunities to probe about the implementation experience (e.g., progress, successes, challenges), help avoid inaccuracies of recollection, such as omitted details about the sequencing of events that may be forgotten or misremembered in a retrospective interview after implementation concludes. Together, the interviews and check-in meetings will help contextualize barriers and facilitators to implementing products in practice and perhaps identify key points in time at which emergent obstacles necessitate intervention.
7. Special Circumstances

This request is consistent with the general information collection guidelines of 5 CFR 1320.5(d)(2). No special circumstances apply.
8. Consultation outside the Agency

8.a. Federal Register Notice
As required by 5 CFR 1320.8(d), notice of this collection was published in the Federal Register on February 4, 2020 (85 FR 6194) for 60 days (see Attachment D). AHRQ did not receive any comments from the public during this period.
8.b. Outside Consultations

AHRQ has consulted with an LHS panel to provide expertise and guidance on the development of the products and to inform the implementation of the products in each LHS setting. The panel consists of representatives from eleven LHSs. Each panel member has knowledge of and experience with incorporating evidence into practice through products that translate evidence, and each is an advocate for evidence use in their health system. The kick-off LHS panel meeting was held in person on December 7, 2018, with subsequent meetings to date in January 2019, June 2019, and October 2019.
The LHS panel is tasked with providing critical feedback on all aspects of this project, including informing the development of the LHS-focused products created by the AIR team. For example, LHS panel members provided feedback on key barriers to incorporating reports in clinical settings, which has been incorporated into the products being implemented and evaluated in the study. The LHS member organizations comprise the testing sites for the triage tool and data visualization products in the implementation study. A list of the LHS panel members is included as Attachment E.
AHRQ is also working with former AHRQ Director, Andy Bindman, M.D., of the University of California, San Francisco, an expert in helping LHS clinical leaders identify evidence gaps; and Lucy Savitz, Ph.D., of Kaiser Permanente, an expert in working directly with delivery systems to translate scientific findings into clinical practice. Drs. Bindman and Savitz co-facilitate the LHS panel. Dr. Savitz will also lead the implementation of the triage tool and data visualization tool in the eleven LHS panel member organizations.
While there are no other known related federal projects, AHRQ regularly confers with other federal agencies, including the Centers for Medicare & Medicaid Services (CMS) and the Centers for Disease Control and Prevention (CDC), to ensure that efforts to promote the adoption of evidence use products are synergistic.
Through these various consultations, opinions about what is of value to LHSs have been generally consistent. With respect to the planned evaluation, there are no unresolved issues.
9. Payments/Gifts to Respondents

There are no individual payments or gifts offered to individual interview respondents or persons participating in the coaching sessions. However, AHRQ will provide each of the eleven learning health systems with a one-time payment of $1,000 for each implemented product in each organizational unit within the health system that is implementing the product(s). Thus, the one-time payments will range from a minimum of $1,000 for LHSs that implement a single product in a single organizational unit to a maximum of $15,000 for LHSs that either (1) implement a single product in 15 or more organizational units or (2) implement both products in more than seven organizational units. AHRQ will cap the one-time payments at a ceiling of $15,000. This payment will help offset the costs to the LHSs of participating in the evaluation activities and allow LHS leadership/champion(s) and designated liaison(s) across participating organizational units to engage in several planned touchpoints with the AHRQ contractor’s “implementation facilitators” to receive implementation technical support throughout the 11-month implementation period.
Evaluation/Interview Support. The site payment is intended, in part, to offset staff labor costs for the evaluation activities, particularly the key informant interviews. These costs include reviewing the study materials, obtaining any necessary Institutional Review Board (IRB) approvals, signing applicable agreements and paperwork, identifying key informants and assisting with their recruitment, scheduling LHS leaders and clinical staff for the preliminary in-person interviews and follow-up phone interviews, and the cost to the organization of LHS leaders’ and clinical staff’s participation in the interviews.
Implementation Technical Support. Implementation facilitators will work closely with LHS leaders and staff to ensure that the products are implemented in accordance with tailored plans documented in implementation playbooks co-developed by implementation facilitators and champions at each participating LHS. Implementation playbooks will be unique to each system’s implementation context and needs. LHS touchpoints with implementation facilitators over the course of the project include collaboration on the playbooks, ongoing implementation support through the monthly check-ins, and opportunities to engage in monthly shared learning events to discuss implementation challenges and successes with peers and coaches in an open, unstructured forum. The evaluation team will use structured notes from the check-ins and shared learning events, as well as ad hoc touchpoints like emails and phone calls between facilitators and champions, in the evaluation.
10. Assurance of Confidentiality

Individuals and organizations will be assured of the confidentiality of their replies under Section 944(c) of the Public Health Service Act, 42 U.S.C. 299c-3(c). That law requires that information collected for research conducted or supported by AHRQ that identifies individuals or establishments be used only for the purpose for which it was supplied.
Information that can directly identify the respondent will be obtained solely for purposes of scheduling the interviews and providing implementation support. These identifiers will include name, job title, and contact information (i.e., phone, organization address, email address). They will be stored in a secure location accessible only to project personnel requiring access to perform their assigned tasks. Audio recordings of the key informant interviews will be secured by password on digital recording devices requiring access via a personal PIN unique to that device. Recordings will be deleted from portable devices after they are transferred to the secured location on the AIR network and encrypted with a password shared on a “need to know” basis with evaluation team staff. Interviews will only be recorded if the participant consents to the audio recording.
11. Questions of a Sensitive Nature

We do not believe there are questions of a sensitive nature included in the interview guides.
12. Estimates of Annualized Burden Hours and Costs

Exhibit 6 shows the total estimated annualized burden of 214.5 hours for the two rounds of key informant interviews and implementation “check-ins” combined. For the key informant interviews (totaling 154 hours), burden is included for: (1) LHS leaders/decisionmakers participating in the preliminary interviews (a maximum of 33 hours), (2) LHS leaders/decisionmakers participating in the follow-up interviews (a maximum of 33 hours), (3) clinical staff participating in the follow-up interviews (a maximum of 22 hours), (4) interviewee review of materials, consent forms, and logistics in advance of their respective interviews (16.5+5.5=22 hours), and (5) time for designated LHS staff (e.g., the LHS panel member, a designated site liaison, selected interviewees) to recommend key informants, coordinate implementation support, and help with scheduling of in-person preliminary interviews and remote follow-up interviews (44 hours). Also included in Exhibit 6 are the estimated annualized burden hours for monthly check-ins between implementation facilitators and LHS champions for informal technical assistance support and the quick status probes on implementation progress described above in Exhibit 5 (a maximum of 60.5 hours). These annualized burden estimates for the key informant interviews and the coaching sessions are further explained below.
Key Informant Interviews: Expanded detail on burden estimates
We estimate 1 hour for each key informant interview: (1) LHS leaders/decisionmakers participating in the preliminary interviews (a maximum of 33 hours), (2) LHS leaders/decisionmakers participating in the follow-up interviews (a maximum of 33 hours), and (3) clinical staff participating in the follow-up interviews (a maximum of 22 hours) (total interview burden = 1.00 hour x a maximum of 88 interviews = 88 hours). We estimate an additional 15 minutes (0.25 hours) will be needed for key informants to prepare for their respective interview(s) (total interview preparation burden = 0.25 hours x a maximum of 88 interviews = 22 hours, of which 16.5 hours is for leaders/decisionmakers to prepare for both preliminary and follow-up interviews and 5.5 hours is for clinical staff to prepare for their participation in the follow-up interviews only). Finally, we estimate time for LHS leaders and staff to identify interview candidates, facilitate recruitment, coordinate implementation support, and assist with interview scheduling (4.00 hours for each of the 11 LHSs; total staff assistance burden = 4.00 hours x 11 sites = 44 hours). The “staff assistance” burden involves the following:
In each of the eleven LHS organizations implementing the product(s), the LHS panel member (and/or site liaison/champion) will identify prospective key informants (i.e., other LHS leaders/decisionmakers and appropriate clinical staff), with additional key informants subsequently identified through snowball sampling.
Designated LHS staff (i.e., LHS panel member, designee and/or site liaison/champion) will provide needed contact information to the AIR evaluation team for outreach and recruitment of the prospective key informant interview candidates, assist with interview scheduling, and coordinate implementation support with the AIR team.
We will develop standardized email messages to reach out to interview candidates and a written overview of the project, the evaluation, and the purpose of the interview (see Attachment F for recruitment materials). We will coordinate scheduling of both the implementation support check-ins and the 60-minute interviews at the most convenient time, considering the needs of the LHS leadership and staff. For the preliminary interviews, if prospective interviewees are not available during our site visit, we will ask for suggestions of other LHS staff who meet our recruitment criteria or arrange a telephone interview, if needed.
Implementation Support Meetings/ Check-ins: Expanded detail on burden estimates
We estimate 60.5 hours for the monthly check-ins between implementation facilitators and LHS champions. This estimate assumes an average of 30 minutes (0.5 hours) of implementation support/check-in meetings per LHS for each month of the 11-month implementation period (11 months x 0.5 hours = 5.5 hours per LHS). Across the 11 LHSs, the estimated burden associated with check-ins is therefore 60.5 hours over the implementation period (5.5 hours x 11 LHSs = 60.5 hours).
Exhibit 6. Estimated annualized burden hours
Form Name | Number of respondents* | Number of responses per respondent | Hours per response | Total burden hours
In-person preliminary interviews with LHS leaders/decisionmakers | 33** | 1 | 1.00 | 33
Remote follow-up interviews with LHS leaders/decisionmakers | 33** | 1 | 1.00 | 33
Remote follow-up interviews with clinical staff | 22 | 1 | 1.00 | 22
Review of materials prior to BOTH preliminary and follow-up interviews – LHS leaders/decisionmakers | 33 | 2 | 0.25 | 16.5
Review of materials prior to interviews – clinical staff | 22 | 1 | 0.25 | 5.5
Interview scheduling and other staff assistance | 11 | 1 | 4.00 | 44
Implementation check-ins: Brief monthly implementation progress checks, documented for the evaluation as structured notes on implementation topics naturally occurring in coach/champion encounters | 11 | 11 | 0.5 | 60.5
Total | | | | 214.5***
* The numbers in this column give the maximum number of respondents for each listed activity based on a range in the number of recruits per site (e.g., “2-3 LHS leaders/decisionmakers”). The balance may shift somewhat between LHS leaders/decisionmakers and clinical staff depending on implementation team and leadership composition at each site. In any case, 88 interviews (33+33+22=88) is the maximum possible in the event each of the 11 sites contributes 3 LHS leaders/decisionmakers (likely the same people for preliminary and follow-up interviews) and 2 additional clinical staff (for follow-up interviews only) as key informants. It is more likely that the total number of interviews will be around 80.
** These are likely to be the same 33 respondents in both preliminary and follow-up interviews.
*** Total maximum burden hours estimate based on a maximum of 88 interviews.
Costs associated with the estimated annualized burden hours are provided in Exhibit 7.
Exhibit 7. Estimated annualized cost burden
Form Name | Number of respondents* | Total burden hours | Average hourly wage rate** | Total cost burden
In-person preliminary interviews with leaders/decisionmakers | 33 | 33 | $94.47a | $3,117.51
Remote follow-up interviews with leaders/decisionmakers | 33 | 33 | $94.47a | $3,117.51
Remote follow-up interviews with clinical staff | 22 | 22 | $52.13b | $1,146.86
Review of materials prior to BOTH preliminary and follow-up interviews – LHS leaders/decisionmakers | 33 | 16.5 | $94.47a | $1,558.76
Review of materials prior to interviews – clinical staff | 22 | 5.5 | $52.13b | $286.72
Interview scheduling and other staff assistance | 11 | 44 | $20.34c | $894.96
Implementation check-ins (documented for the evaluation as structured notes on implementation progress) | 11 | 60.5 | $94.47a | $5,715.44
Total | | | | $15,837.76
* The numbers in this column give the maximum number of respondents for each listed activity based on a range in the number of recruits per site (e.g., “2-3 LHS leaders/decisionmakers”). As noted in the footnotes to Exhibit 6, the balance may shift somewhat between LHS leaders/decisionmakers and clinical staff depending on implementation team and leadership composition at each site. In any case, 88 interviews (33+33+22=88) is the maximum possible.
** National Compensation Survey: Occupational Wages in the United States, May 2018, U.S. Department of Labor, Bureau of Labor Statistics.
a Based on the mean wage for Internists, General (29-1063); annual salary of $196,490.
b Based on the mean wage for Physician Assistants (29-1071); annual salary of $108,430.
c Based on the mean wage for Secretaries and Administrative Assistants (43-6010); annual salary of $42,320.
13. Estimates of Annualized Respondent Capital and Maintenance Costs

There are no direct costs to respondents other than their time to participate in the study.
14. Estimates of Annualized Cost to the Government

Assuming each LHS implements either (1) both products in more than 7 organizational units or (2) one product in 15 or more organizational units, the total contractor cost to the government for the proposed information collection is estimated to be $461,229 (annualized at $230,615). As shown in Exhibit 8a, this amount includes costs related to study design and development of interview guides ($95,706); data collection and implementation site support ($251,676); data processing and analysis ($68,364); publication of results ($34,131); and project management ($11,352).
Exhibit 8a. Estimated Total and Annualized Cost
Cost Component | Total Cost | Total Annualized Cost
Study Design and Development of Interview Guides | $95,706 | $47,853
Data Collection Activities | $251,676 | $125,838
Data Processing and Analysis | $68,364 | $34,182
Publication of Results | $34,131 | $17,066
Project Management | $11,352 | $5,676
Total | $461,229 | $230,615
On average, two Senior Management personnel (GS-15, step 5) will provide oversight of the project and one Program Management personnel (GS-13, step 9) will assist with program oversight and review of the results. This includes oversight of data collection activities and review of the report of summarized results. The estimated cost to the Federal Government for these activities is provided in Exhibit 8b. The average hourly salary for the position of Senior Management personnel at the GS-15 grade level, step 5, is $74.86 per hour. The average hourly salary for the position of Program Management personnel at the GS-13 grade level, step 9, is $60.19 per hour. Federal hourly salary information is available on the OPM website at https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2019/DCB_h.pdf.
Exhibit 8b. Federal Government Personnel Cost
Activity | Federal Personnel | Annual Salary | % of time | Cost
Senior Management Oversight: GS-15, Step 5 average | 2 | $156,228 | 2% | $6,249.12
Program Management Oversight and Review of Results: GS-13, Step 9 average | 1 | $125,615 | 12% | $15,073.80
Total | | | | $21,322.92
Annual salaries based on 2019 OPM Pay Schedule for Washington/DC area: https://www.opm.gov/policy-data-oversight/pay-leave/salaries-wages/salary-tables/pdf/2019/DCB_h.pdf
15. Changes in Hour Burden

This is a new collection of information.
16. Time Schedule, Publication and Analysis Plans

Schedule. Pending OMB approval, AHRQ aims to conduct the in-person preliminary interviews one to three months after product implementation begins (May to July 2020) and virtual follow-up interviews at the end of the project (February and March 2021). Implementation support check-ins to provide technical assistance on implementation issues and probe for implementation progress will occur, on average, monthly throughout the implementation period (May 2020 to March 2021). As soon as OMB approval is received, product implementation and evaluation activities will begin. The estimated time schedule for data collection activities is shown below:
Finalize recruitment (April to May, 2020)
Preliminary key informant interviews during early implementation (May to July, 2020)
Follow-up key informant interviews during later implementation (February and March, 2021)
Check-ins and other coach/champion touchpoints (May 2020 to March 2021)
Data analysis, development of draft technical report (March to July, 2021)
Final technical report (September 2021)
Publication. The final report of the Health System Panel to Inform and Encourage Use of Evidence Reports project, and accompanying documentation, will be made available in the public domain on the AHRQ Web site. Additional manuscripts and/or conference presentations may also be produced and disseminated.
Analysis. For information gathered during key informant interviews, the evaluation team will use a priori and inductive methods and NVivo software to analyze the interview transcripts. The evaluation team will begin by developing a preliminary list of codes, using as a guiding framework the 11 evaluation domains that shaped the interview guides and referring to notes from the debriefing sessions with interviewers and notetakers. The team will refine the code list after reviewing a sample of transcripts representing a mix of product types, user types, and LHSs. The coders will then each code four transcripts, noting where additions, revisions, or deletions to the codes are needed to better fit the data. The coding team will ensure consistency in how the codes are applied by agreeing on operational definitions for the revised code list. They will then each code five transcripts and compare the results to identify discrepancies in interpretation. The evaluation team will resolve any discrepancies in application of the codes by consensus.
The evaluation team will analyze the coded data by product type and user type, using the constant comparative method and considering how LHSs respond differently to the products based on their implementation approach and system characteristics.
For purposes of this project, we have defined “implementation” to refer to product use, usability, and adoption, as defined below. The evaluation will assess all three elements of implementation.
Usability. LHS staff provide feedback on product features, such as how easy the product was to navigate, efficiency of locating information, ease of use after returning to product later, and how well they like the product(s).
Use. Extent to which LHS staff are actively engaged in reviewing the product(s) and considering how the content applies to the LHS patient population and current practices.
Adoption of the product(s). How and for what the product was intended to be used when selected, compared with how and for what it is actually used. Adoption includes the extent to which the project leaders choose to share the product(s) within their system for others to review and with whom they share it. This applies not only to sharing the product itself but also to taking evidence information from the product and creating a summary to facilitate review of the product content by others.
Adoption of the evidence. LHSs may choose to change practice as a result of reviewing the content or evidence in the product(s). For example, LHSs might decide to develop policies and systems to eliminate or reduce use of a particular drug for a certain condition. (Please note: Adoption of the content or evidence in this manner is not a condition for participation in the project; however, it will be encouraged. The timeframe for the project is not sufficient to fully evaluate whether evidence has been adopted across an LHS; instead, we will assess whether systemic changes have been implemented to support adoption of evidence, e.g., policy changes or changes in the electronic health record system.)
The evaluation team will summarize each theme identified from analyzing the coded data into a memorandum, which will serve as a basis for the final evaluation report.
For information gathered during implementation check-ins, summary themes will be created. Structured notes from the coaching sessions (documenting emergent points of discussion related to any of the eleven implementation domains in the framework shown earlier in Exhibit 4) and any other facilitator/champion touchpoints will be entered as text into a tracking spreadsheet and included in NVivo for analysis. Considering the implementation domains, we will review the data to develop a preliminary list of codes. We will then follow an inductive and deductive approach to coding and apply constant comparative methods to data analysis, as described under the interview data analysis.
In the final evaluation report, we will describe findings from the data collections (i.e., preliminary and follow-up interviews and coaching sessions/ check-ins) and provide detailed recommendations and rationale for revising the implemented products, i.e., the triage tool and data visualization tool. The report will include a summary of the 11 LHSs’ implementation experiences that other organizations may want to consider as they select or plan to implement products. We will develop a preliminary evaluation report following the first data collection (i.e., the preliminary interviews). The preliminary and final evaluation reports will begin by providing a brief overview of the research methods, product descriptions, and the LHS and interviewee characteristics. We will organize themes by product and evaluation domain and identify detailed and actionable recommendations and rationale for revising the products to be most useful to LHSs.
17. Exemption for Display of Expiration Date

AHRQ does not seek this exemption.
List of Attachments:
Attachment A -- In-person Preliminary Interview Guide for LHS Leaders
Attachment B -- Remote Follow-up Interview Guide for LHS Leaders
Attachment C -- Remote Follow-up Interview Guide for Clinical Staff
Attachment D -- Federal Register Notice
Attachment E -- List of LHS Panel Members
Attachment F -- Recruitment Materials