REL Peer Review: Pilot Data Collection Methods for Examining the Use of Research Evidence
Part A: Supporting Statement for Paperwork Reduction Act Submission
November 6, 2023
Submitted to:
U.S. Department of Education
Institute of Education Sciences
National Center for Education Evaluation and Regional Assistance
550 12th Street, S.W.
Washington, DC 20202
Project Officer: Christopher Boccanfuso
Contract Number: 91990023C0008

Submitted by:
Mathematica
P.O. Box 2393
Princeton, NJ 08543-2393
Telephone: (609) 799-3535
Fax: (609) 799-0005
Project Director: Ruth Neild
Reference Number: 51746
CONTENTS
A. Justification
Introduction
A1. Circumstances making the collection of information necessary
A2. Purposes and use of the information collection
A3. Use of information technology to reduce burden
A4. Efforts to identify duplication
A5. Efforts to minimize burden on small businesses
A6. Consequences of not collecting the information
A7. Special circumstances justifying inconsistencies with guidelines in 5 CFR 1320.6
A8. Federal Register announcement and consultation
A9. Payments or gifts
A10. Assurances of confidentiality
A11. Questions of a sensitive nature
A12. Estimates of response burden
A13. Estimate of total capital and startup costs/operation and maintenance costs to respondents or record-keepers
A14. Annualized cost to the federal government
A15. Reasons for program changes or adjustments
A16. Plans for tabulation and publication of results
A17. Approval not to display the expiration date for OMB approval
A18. Exception to the certification statement
References
Appendix A: REL Use of Research Evidence (URE) Survey
Appendix B: Follow-up Interview Protocol
Appendix C: Outreach Materials

Exhibits
Exhibit A.1. Use of research evidence constructs to test in study survey
Exhibit A.2. Experts who contributed to the study design and measure development
Exhibit A.3. Estimated respondent time burden and cost
A. Justification

Introduction

The Institute of Education Sciences (IES) within the U.S. Department of Education (ED) requests clearance for data collection activities to support a pilot study of the reliability and validity of survey items used to assess the use of research evidence (URE) among education agencies and other partners served by the Regional Educational Laboratories (RELs). The REL program is an essential IES investment focused on partnering with state and local education agencies to help them use evidence to improve education outcomes by creating tangible research products and providing engaging learning experiences and consultation. IES seeks to better understand how REL partners use research evidence to improve education outcomes and the role of RELs in promoting URE among partners.
This study will test the reliability and validity of new and extant URE items in the REL context. Specifically, the study will (1) assess how existing items from the URE literature perform in a REL context and (2) assess the reliability and validity of a small set of items from the Stakeholder Feedback Surveys (SFS) that are currently administered to REL partners and used by IES to improve the work of REL contractors, inform the REL program as a whole, and address internal requests such as the Congressional Budget Justification. The reliability and validity of the new and existing survey items will be assessed through two data collection activities: an online survey administered to a set of partnerships across RELs and follow-up interviews with a subset of REL partners.
IES contracted with Mathematica to conduct this study. At the end of the study, the study team will provide a finalized list of valid and reliable survey items that RELs can use in future iterations of the SFS.
A1. Circumstances making the collection of information necessary

The ten RELs bring together partners with disparate areas of expertise and interest to provide technical support, conduct research, and offer research-based learning opportunities that inform changes to policy and practice, in an effort to improve educational outcomes for students. Each REL comprises several partnerships, each designed, developed, and executed to improve long-term student success on a focused, high-leverage topic within a specific state. RELs conduct original research and develop tools to support evidence use; produce evidence that is methodologically rigorous and presented in plainly written and engaging ways; meet local evidence needs; and contribute to a broader literature when possible. REL work is change-oriented, supporting meaningful local, regional, or state decisions about education policies, programs, and practices designed to improve learner outcomes.
IES and RELs seek to better understand how REL partners use research evidence to improve education outcomes as part of their work, including research evidence developed or identified with support from a REL. The data collection plan described in this submission is necessary for IES to understand URE among REL partners and ensure that future iterations of the SFS include valid and reliable survey items. The REL partnerships will ultimately use these items to measure their short- and medium-term outcomes, particularly around actions taken as a result of research evidence. Therefore, the items examined through this study will help indicate whether REL activities help partners build and apply new evidence capacities that enable them to improve education outcomes in the long term.
The past decade has seen significant advances in theoretical frameworks, measures, and empirical understanding of URE in education (Farley-Ripple et al., 2020; Gitomer & Crouse, 2019; Penuel et al., 2017; Rickinson et al., 2022). Currently, RELs collect some information about their partners’ URE through the existing SFS, which is already approved by OMB (approval number: 202001-1880-00). RELs use a set of required SFS items and a bank of optional items to design feedback surveys for participants after the completion of training, coaching, and technical support (TCTS) activities and annually for partnership members. RELs can also administer a follow-up SFS six months after TCTS activities to assess evidence use in the slightly longer term. However, the items in the current SFS have not been assessed for reliability and validity. This study will assess the reliability and validity of the current required SFS items that RELs use and IES provides to Congress as performance outcomes (U.S. Department of Education, 2022). This study will also select additional survey questions covering a broader set of URE-related topics from the URE literature, tailor them for REL partners, and then test and validate them among REL partners. Adding these new validated survey items to future iterations of the SFS may help RELs better understand URE in their partnerships and the contributions of REL activities to evidence use, and may inform the improvement of future REL projects.
The study will include a diverse sample of people who have received REL services to capture a variety of perspectives from both core and non-core REL partners serving different roles across schools, districts, states, higher education, and other organizations. Core partners are people involved in planning the overall direction of the partnership with REL staff to address high-leverage needs. These people may or may not attend REL activities but regularly attend planning meetings for projects and activities under the partnership. Non-core partners are not highly involved in the overall planning, direction, or strategy of the work of the partnership. However, they receive REL services, such as training and coaching, and contribute their expertise to applied research and dissemination projects. They may also attend planning meetings for projects and activities under the partnership.
The study will be directly applicable to REL partnerships’ logic models and measurement plans for mapping anticipated steps toward desired education outcomes. REL partnerships commonly use SFS items to measure whether they have achieved their short- and medium-term outcomes, such as partners’ confidence and capacity to use REL products and research evidence to support school and district staff. These short- and medium-term outcomes indicate whether REL activities help their partners build and apply new evidence capacities that enable improved education outcomes in the long term. Although the bank of required and optional SFS items currently covers several URE-related topics, it only includes single questions or small sets of survey items that have not been tested for reliability and validity. By testing new items on a broader set of URE-related topics, IES can better equip RELs to measure the effects of their activities on their URE-related logic model outcomes more comprehensively, in a valid and reliable way.
The study team reviewed existing URE research to identify key URE constructs, which are defined as the underlying themes or categories of URE that can be measured using survey questions. Based on this work, the study team will test survey items under eight key URE constructs that are most relevant to the REL context (Exhibit A.1).
Exhibit A.1. Use of research evidence constructs to test in study survey
URE construct | Source for existing measures
URE, instrumental use |
URE, conceptual use |
Quality of research use |
Value of REL resources |
Usefulness of research |
Confidence in ability to use research |
Organizational support for URE |
Organizational infrastructure for URE |
To determine these eight URE constructs, the study team reviewed the following sources:
The William T. Grant Foundation’s URE Methods Repository. This is a resource for URE researchers to share and learn about methods used to understand and improve the use of evidence in policy and practice. For example, the study team reviewed the Survey of Practitioners’ Use of Research (SPUR) instrument (Penuel et al., 2017), which examined how school and district leaders access, value, and use research.
Other large-scale surveys focused on people’s capacity to engage with and use research. The study team initially worked with four leading experts in the URE field (see Exhibit A.2 under Section A.8) to identify existing questionnaires and sets of survey items used to measure URE. These included the Questionnaire About the Use of Research-Based Information (Lysenko et al., 2014); Seeking, Engaging with and Evaluating Research (SEER) (Brennan et al., 2017); the Q Project’s survey of educators (Rickinson et al., 2021, 2022); and the Center for Research Use in Education’s Survey of Evidence in Education for Schools (SEE-S) (May et al., 2022). In addition to providing potential items for use in the survey, these resources contain important information on techniques for assessing the reliability and validity of survey items.
The study team finalized the list of eight constructs for the survey based on feedback from the content experts. The content experts also recommended question stems, wording, and response scales; suggested which survey questions might be most (and least) relevant to REL partners; and identified additional constructs and existing survey items that the study team had not been aware of. The wording in the study survey items is substantially similar to existing measures in the literature, with minor changes to align with the REL context. For example, some existing questions were originally developed for use with district leaders, so the study team revised the wording to make the questions relevant to the broader group of study respondents.
Before administering the survey, the study team will pretest the survey items with up to nine REL partners in various educational roles to make sure the question stems, wording, and response scales are relevant and make sense to them. Although most of the existing measures have reported reliability and validity information, the study team will report this information for each construct using the data obtained from the survey.
In addition to identifying and confirming the eight URE constructs that are important to capture in the study, the study team considered which existing SFS items it should test and potentially revise for future use. Because most of the existing SFS items overlap with the survey items within the eight URE constructs, most of the existing items will be replaced with the new items based on validated questions from the URE literature. However, the study team will test a small number of existing SFS items, including a few questions IES currently uses for the Congressional Budget Justification and a small set of questions currently used to gather feedback about REL participation and the degree to which respondents apply what they learn from their specific REL engagement. The study team will also report reliability and validity for these items using the data obtained from the survey.
Data collection will take place during 2024, with the web survey starting in winter/spring 2024 and follow-up interviews with a small subset of participants occurring in summer/fall 2024. The follow-up interviews will assess whether the earlier survey responses are consistent with what respondents actually do in practice, which will help inform tests of validity.
A2. Purposes and use of the information collection

The purpose of the current study is to (1) explore, incorporate, and test additional survey items to help IES and RELs learn about URE and assess the reliability and validity of these new items, and (2) assess the reliability and validity of a small set of items currently used in the SFS that are administered to REL participants after TCTS activities and annually for partnership members. More specifically, this study aims to answer the following research questions:
What survey questions should IES and RELs add to future iterations of the SFS to measure URE more comprehensively and reliably among REL partners?
How do survey items from the URE literature perform in a REL context? Are they reliable and valid?
Are the SFS questions RELs currently use to study URE among their partners reliable and valid?
The study team will answer these questions by conducting an online survey that includes the new survey items and a small set of existing items from the SFS (Appendix A). Additionally, the study team will conduct follow-up interviews to test whether survey responses are consistent with what respondents actually do in practice, which will help inform tests of validity (Appendix B). The study team will conduct the survey during participating RELs’ ongoing activities to learn about evidence use in those contexts. At the end of the study, the study team will provide a set of valid and reliable survey items that RELs can use in future iterations of the SFS. The data collected in a future iteration of the SFS could help answer research questions that are of particular interest to IES, for example:
How do REL partners use research evidence and evidence-based practices?
Do REL partners value REL research evidence and services? What types of support and resources are most valuable?
Does REL support align with and enhance other support for increasing REL partners’ skills in understanding and using research?
How do RELs use the information they obtain to improve the quality and effectiveness of their work?
A3. Use of information technology to reduce burden

The survey will be web-based, accessible to respondents via a live secure web link. To further reduce burden, the survey will employ (1) automated skip patterns so respondents see only the questions that apply to them (including those based on answers provided earlier in the survey), (2) logical rules for response options so respondents’ answers are restricted to those intended by the question, and (3) easy-to-use navigation tools so respondents can move between survey sections and access previous responses. Additionally, the survey will automatically save entered responses, and respondents will be able to revisit the web link as many times as needed to complete the survey.
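To illustrate how automated skip patterns work in general, the sketch below shows one way such rules can be represented; the question identifiers and conditions are hypothetical, and this is not QuestionPro’s actual implementation.

```python
# Minimal sketch of automated skip-pattern logic. Question IDs and
# conditions are hypothetical examples, not actual survey content.

skip_rules = {
    # Show Q5 only if the respondent reported using REL research in Q4
    "Q5": lambda answers: answers.get("Q4") == "Yes",
    # Show Q8 only to respondents who identified as core partners in Q2
    "Q8": lambda answers: answers.get("Q2") == "Core partner",
}

def visible_questions(all_questions, answers):
    """Return the questions a respondent should see, given answers so far."""
    return [q for q in all_questions
            if q not in skip_rules or skip_rules[q](answers)]

# A respondent who uses REL research but is a non-core partner skips Q8
print(visible_questions(["Q4", "Q5", "Q8"], {"Q4": "Yes", "Q2": "Non-core"}))
# ['Q4', 'Q5']
```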
The study team will use QuestionPro to conduct the survey. Documentation of QuestionPro’s accessibility is available at this link: https://www.questionpro.com/security/section-508.html. The linked document identifies the components that are not accessible; the study team will not use any QuestionPro question types that are not Section 508 compliant.
The follow-up interviews will be hosted on Cisco WebEx, a secure online platform. Documentation of Cisco WebEx security features is available at this link: https://www.cisco.com/c/en/us/solutions/collaboration/webex-security.html. Documentation of Cisco WebEx accessibility is available at this link: https://help.webex.com/en-us/article/WBX18864/Is-Webex-ADA-Section-508-Compliant?
A4. Efforts to identify duplication

No similar studies are being conducted, and there is no equivalent source that systematically collects this information. The survey and follow-up interviews will collect only information from REL partners that is not otherwise available.
A5. Efforts to minimize burden on small businesses

No small businesses will be involved in this study, but some of the REL partners in the study may be from small educational entities such as smaller school districts. To minimize burden, the study team will provide these participants with a secure, web-based system they can use to complete the survey on their computer, laptop, tablet, or phone. The study team will also communicate with REL partners in advance of the data collection period (Appendix C) to make sure they understand why the study is taking place and what information they will be asked to provide. The team will schedule data collection activities around partnership activities that participating RELs have already planned, and will administer the pilot survey in place of the existing SFS when possible.
A6. Consequences of not collecting the information

The data collection plan described in this submission is necessary for ED to understand URE among REL partners and ensure that future iterations of the SFS include valid and reliable survey items. By testing new items on a broader set of URE-related topics, IES can better equip RELs to measure the effects of their activities more comprehensively against short- and medium-term outcomes that REL partners have identified, such as partners’ confidence and capacity to use REL products and research evidence to support school and district staff and students.a These short- and medium-term outcomes indicate whether REL activities help their partners build and apply new evidence-based capacities and practices. RELs and their state and local partners expect that building these capacities and practices will enable improved education outcomes in the long term, such as student success on focused, high-leverage topics within specific states.
A7. Special circumstances justifying inconsistencies with guidelines in 5 CFR 1320.6

This data collection has no special circumstances associated with it.
A8. Federal Register announcement and consultation

A 60-day notice soliciting public comments was published in the Federal Register, Volume 88, No. 168, page 60194, on August 31, 2023. Two public comments were received. One comment was non-substantive and did not require a response. The National Center for Education Evaluation and Regional Assistance received and appreciates the second comment regarding the importance of the information collection. The study team did not take action based on these comments because they did not raise any issues with the activities for which this data collection request seeks approval. A 30-day notice will be published to solicit additional public comments.
To inform the study design and the development of survey questions, the study team sought input from four people with expertise in URE, the design and validation of measures, educators’ use of research, and large-scale studies. The study team proposed the experts, and IES approved them after reviewing their resumes and existing publications in this area. Input from the content experts will help ensure the study is of the highest quality and that findings are relevant to state education staff, district leaders, teachers, and other educators and REL partners. Exhibit A.2 lists the names, titles, and affiliations of the four experts who contributed to the study design and measure development.
Exhibit A.2. Experts who contributed to the study design and measure development

Name | Title and affiliation
Dr. Elizabeth Farley-Ripple | Professor and director of the Partnership for Public Education, School of Education at the University of Delaware
Dr. Drew Gitomer | Professor, Graduate School of Education at Rutgers University
Dr. Caitlin Farrell | Director, National Center for Research in Policy and Practice, and associate research professor, School of Education at University of Colorado Boulder
Dr. Mark Rickinson | Associate professor, School of Education at Monash University, Melbourne, Australia
In addition to the input from the four experts who contributed to the study design, the HML Institutional Review Board will review the study before data collection begins.
There are no unresolved issues.
A9. Payments or gifts

The study team will compensate participants in accordance with IES guidance. REL partners will receive $30 for time spent completing the survey. Additionally, REL partners selected to participate in the follow-up interviews will receive $30 for time spent completing the interview. Payments will be made in the form of a gift card.
This amount was based on the estimated length of the survey and follow-up interviews and is close to the average hourly wage for REL partners, which is approximately $41. Budget limitations prevent a payment above $30; additionally, we expect each separate task to take less than an hour. Incentives will be distributed electronically (i.e., a link to a gift card) after respondents complete each data collection activity. Because the study team expects some participants will not be able to accept incentives due to organizational policies, the incentive will include an option to donate the amount to a charity. Participants will be able to select a charity of their choice from a list provided by the study team.
A10. Assurances of confidentiality

The study team has established procedures to protect the confidentiality and security of its data. The data collection efforts that are the focus of this clearance package will be conducted in accordance with all relevant federal regulations and requirements, in particular the Education Sciences Reform Act of 2002, Title I, Subsection (c) of Section 183, which requires all collection, maintenance, use, and wide dissemination of data by the Institute to “conform with the requirements of section 552 of Title 5, United States Code, the confidentiality standards of subsection (c) of this section, and sections 444 and 445 of the General Education Provisions Act (20 U.S.C. 1232g, 1232h).” These citations refer to the Privacy Act, the Family Educational Rights and Privacy Act, and the Protection of Pupil Rights Amendment. The names and email addresses of potential survey respondents will be collected for the limited purpose of drawing a sample, contacting those selected to complete surveys, and following up with non-respondents. This information is typically already available in the public domain as directory information (i.e., on district and school websites).
Specific steps to guarantee confidentiality of the information collected include the following:
Subsection (c) of section 183 referenced above requires the Director of IES to “develop and enforce standards designed to protect the confidentiality of persons in the collection, reporting, and publication of data”.
Subsection (d) of section 183 prohibits disclosure of individually identifiable information as well as making the publishing or communicating of individually identifiable information by employees or staff a felony.
No information that identifies any study participant will be released. Information from participating institutions and respondents will be presented at aggregate levels in reports. All paper protocols will be stored in a locked facility and data stored in digital files will be maintained on a secure server that is backed up daily. Only persons conducting this study and maintaining its records will have access to the records collected that contain individually identifying information.
After the study team collects all data, all personally identifiable information (PII) will be replaced with study-created identifiers and all PII will be destroyed. As noted above, information collected for this project comes under the confidentiality and data protection requirements of the Institute of Education Sciences (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183). The study team will create a dataset for ED that will include de-identified data from the survey. These data could be distributed to another investigator for future research studies without additional informed consent. The study team will destroy all data five years after the end of the study, as required by ED.
All voluntary requests for data will include the following or a similar statement:
The researchers conducting this study follow the confidentiality and data protection requirements of the U.S. Department of Education’s Institute of Education Sciences (The Education Sciences Reform Act of 2002, Title I, Part E, Section 183). The reports prepared for the study will summarize findings across the sample and will not associate responses with a specific district, school, institution, or individual. All information you provide will be used only for statistical purposes and may not be disclosed, or used, in identifiable form for any other purpose except as required by law (20 U.S.C. §9573 and 6 U.S.C. §151).
Mathematica routinely uses the following safeguards to maintain data confidentiality, which it will consistently apply to this study:
All Mathematica employees are required to sign a confidentiality pledge that emphasizes the importance of confidentiality and describes employees’ obligations to maintain it.
PII is maintained on separate forms and files, which are linked only by random, study-specific identification numbers.
Access to computer data files is protected by secure usernames and passwords, which are available only to specific users who need to access the data and who have the appropriate security clearances.
Mathematica’s standard for maintaining confidentiality includes training staff on the meaning of confidentiality, particularly as it relates to handling requests for information, and assuring respondents about the protection of their responses. It also includes built-in safeguards on status monitoring and receipt control systems. In addition, all study staff who have access to confidential data must obtain security clearance through the U.S. Department of Education e-QIP system, which requires completing personnel security forms, providing fingerprints, and undergoing a background check.
A11. Questions of a sensitive nature

This study will not include any questions of a sensitive nature. In addition, participants will be informed that their responses are voluntary and that they may decline to answer any question.
A12. Estimates of response burden

The total response burden for these data collection activities is 43.33 hours, based on inviting 85 partners to the web survey and 30 survey respondents to follow-up interviews. The study team conducted power calculations (Arifin, 2017; Bonett, 2002) and Monte Carlo simulations (Lee, 2015) to determine the sample size needed to replicate the types of analyses previously conducted on existing URE items. Results from the power calculations suggested that at least 30 participants are needed to calculate Cronbach’s alpha for each construct and at least 50 participants are needed to conduct confirmatory factor analyses. To provide a large enough sample for these analyses and ensure the sample represents key variation across categories (for example, core/non-core partner, different education roles, multiple RELs), the study team aims to obtain a final sample of 70 participants. To achieve this sample, the study team anticipates needing to invite at least 85 partners to the web survey, assuming an 85 percent response rate. The study team will also invite 30 of the survey respondents to participate in the follow-up interview.
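The recruitment arithmetic above can be summarized as follows; the figures come from this section, and the 85 percent response rate is the study team’s assumption.

```python
import math

TARGET_SAMPLE = 70    # completed surveys needed for the planned analyses
RESPONSE_RATE = 0.85  # assumed survey response rate, as stated above

# Minimum invitations needed to reach the target at the assumed response rate
print(math.ceil(TARGET_SAMPLE / RESPONSE_RATE))  # 83, rounded up to 85 invitations

# Expected completed surveys if 85 partners are invited
print(85 * RESPONSE_RATE)  # 72.25, above the 70-participant target

# Total response burden: 85 surveys x 20 minutes plus 30 interviews x 30 minutes
print(round(85 * 20 / 60 + 30 * 30 / 60, 2))  # 43.33 hours
```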
Exhibit A.3 shows estimates of time burden for the data collection activities, broken down by data collection activity. In addition, the exhibit presents estimates of indirect costs to all respondents for each data collection activity.
Exhibit A.3. Estimated respondent time burden and cost
Respondent type and data collection activity | Time per response (hours) | Maximum number of responses per respondent | Number of respondents | Total time burden (hours) | Average hourly wage | Cost per response | Total cost burden
REL partners | | | | | | |
Web survey | 20 minutes (0.33) | 1 | 85 | 28.33 | $41.07a | $13.69 | $1,163.50
Follow-up interview | 30 minutes (0.5) | 1 | 30 | 15 | $41.07 | $20.54 | $616.05
Total hours and costs across the 2024 data collection period | | | | 43.33 | | | $1,779.55
a The study team will conduct the web survey and follow-up interviews with a range of REL partners, primarily teachers and education administrators, including staff at the district, post-secondary, and state level. Thus, the average hourly wage is the average of the wages for “Kindergarten and Elementary School Teachers,” “Middle School Teachers,” “Secondary Teachers,” “Education Administrators, Kindergarten through Secondary,” and “Education Administrators, Postsecondary” wages from the U.S. Bureau of Labor Statistics (2022).
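For transparency, the exhibit’s burden and cost figures can be reproduced from its inputs, as in the sketch below; the computed totals match the exhibit to within a cent of rounding.

```python
# Reproduces the arithmetic behind Exhibit A.3 from its inputs. The wage is
# the BLS-based average cited in footnote a.
WAGE = 41.07  # average hourly wage across the BLS occupation categories

activities = {
    # activity: (number of respondents, hours per response)
    "Web survey": (85, 20 / 60),
    "Follow-up interview": (30, 30 / 60),
}

total_hours = total_cost = 0.0
for name, (n, hours_each) in activities.items():
    hours = round(n * hours_each, 2)  # total time burden for the activity
    cost = round(hours * WAGE, 2)     # total cost burden for the activity
    total_hours += hours
    total_cost += cost
    print(f"{name}: {hours:.2f} hours, ${cost:,.2f}")

print(f"Total: {total_hours:.2f} hours, ${total_cost:,.2f}")
# Web survey: 28.33 hours, $1,163.51
# Follow-up interview: 15.00 hours, $616.05
# Total: 43.33 hours, $1,779.56
```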
A13. Estimate of total capital and startup costs/operation and maintenance costs to respondents or record-keepers

This data collection has no direct, start-up, or maintenance costs to respondents or record-keepers. Additionally, this is a one-time series of data collection activities with no plans for follow-up studies or other recurring data collections outside of what is being proposed in this package.
A14. Annualized cost to the federal government

The estimated cost to the federal government for this study, including preparing initial OMB clearance forms, conducting data collection, preparing the final memo, and creating data files, is $640,655, or approximately $213,551.67 per year.
A15. Reasons for program changes or adjustments

This is a request for a new collection of information.
A16. Plans for tabulation and publication of results

When selecting and revising items for the study, the study team worked with URE experts to establish the face validity and content validity of the survey questions under each of the eight URE constructs. In addition, the study team will use the following analyses to test the reliability and validity of the survey items:
Cronbach’s alpha. Other URE studies typically use Cronbach’s alpha to measure the reliability, specifically the internal consistency, of items within a construct (May et al., 2022; Penuel et al., 2017). The study team plans to calculate Cronbach’s alpha separately for each construct across all study participants. The study team will also provide descriptive statistics for each construct separately by key subgroups (e.g., school-level, district-level, and state-level staff; core and non-core REL partners).
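To make this computation concrete, the following minimal sketch implements the standard Cronbach’s alpha formula for a single construct; the response matrix is hypothetical and for illustration only.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of one construct."""
    k = items.shape[1]                              # number of items
    item_vars = items.var(axis=0, ddof=1).sum()     # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of scale scores
    return k / (k - 1) * (1 - item_vars / total_var)

# Hypothetical responses (5 respondents x 3 items on a 1-5 scale)
responses = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(responses), 2))  # ~0.92 for this illustrative matrix
```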
Confirmatory factor analysis (CFA). The study team will use CFA to test the reliability and validity of at least some of the constructs and survey items. Findings from CFA are more psychometrically robust than findings based on Cronbach’s alpha alone, as CFA can be used to calculate a McDonald’s omega for each construct, providing another measure of reliability (Hayes & Coutts, 2020). McDonald’s omega corrects for the amount of measurement error associated with each survey item and thus offers a more accurate assessment of reliability than Cronbach’s alpha. In addition, Cronbach’s alpha is positively affected by the number of items in a scale, whereas McDonald’s omega does not depend on the number of items.
The study team will also use CFA to confirm the structure of the constructs. For example, CFA could help indicate whether two sets of related survey questions load on two separate factors, or if they load together on one factor. If the study team finds that items generally load on one factor, it may recommend combining items across the two groups into one construct. The factor loadings could also provide information about which items might be the best candidates to drop, if there are opportunities to pare down the items while still fully capturing the construct(s).
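As an illustration of the reliability estimate described above, the sketch below applies the standard McDonald’s omega formula for a one-factor model with standardized loadings and uncorrelated errors; the loadings are hypothetical, not study results.

```python
import numpy as np

def mcdonald_omega(loadings: np.ndarray) -> float:
    """McDonald's omega from standardized loadings of a one-factor CFA model.

    Assumes uncorrelated errors; each item's error variance is 1 - loading^2.
    """
    common = loadings.sum() ** 2       # variance explained by the factor
    error = (1 - loadings ** 2).sum()  # summed unique (error) variances
    return common / (common + error)

# Hypothetical standardized loadings for a six-item construct
loadings = np.array([0.72, 0.68, 0.75, 0.60, 0.70, 0.65])
print(round(mcdonald_omega(loadings), 2))  # ~0.84 for these illustrative loadings
```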
Pearson’s product moment correlation coefficient. To test convergent validity, which is the degree to which constructs are related to other similar variables and measures, the study team will calculate Pearson’s product moment correlation coefficients (Brennan et al., 2017) between measures. For example, the study team may examine the correlation between six items it developed based on existing URE literature to measure respondents’ confidence in ability to use research evidence and two existing SFS items that measure respondents’ capacity to use research evidence and data to inform decisions.
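A minimal sketch of this convergent validity check, using hypothetical construct scores; the variable names are illustrative, not the actual survey scales.

```python
import numpy as np
from scipy import stats

# Hypothetical per-respondent construct scores (e.g., the mean of each
# construct's items), for illustration only
confidence_score = np.array([3.2, 4.1, 2.8, 4.5, 3.9, 3.0, 4.4, 2.5])
sfs_capacity_score = np.array([3.0, 4.3, 2.6, 4.4, 3.5, 3.2, 4.6, 2.9])

# A high positive correlation would support convergent validity
r, p_value = stats.pearsonr(confidence_score, sfs_capacity_score)
print(f"r = {r:.2f}, p = {p_value:.3f}")
```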
Construct validity. To test construct validity, the study team will conduct follow-up interviews with a small subset of the study sample to test whether survey responses are consistent with what respondents actually do in practice. This will confirm whether the survey items are a proxy for actual behaviors. For example, the study team could identify a subset of respondents who indicate in the survey that they are planning to use their REL’s study findings to inform improvements to policy or practice. The study team could then follow up with these respondents a few months later to see whether they followed through with that plan or still intend to. The study team will determine which constructs to follow up on, based on the results of the survey.
Descriptive statistics. In addition to testing the reliability and validity of constructs, the study team will also calculate descriptive statistics of the pilot study data, including means, standard deviations, and ranges for each construct. The study team will present descriptive statistics across the full sample and for key organizational subgroups that IES is interested in (school-level, district-level, and state-level staff; core and non-core partners). As requested by IES, the study team will also produce descriptive statistics for each participating REL region or partnership.
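A minimal sketch of these descriptive tabulations, using hypothetical scores and subgroup labels:

```python
import pandas as pd

# Hypothetical construct scores with subgroup labels, for illustration only
df = pd.DataFrame({
    "construct_score": [3.4, 4.0, 2.9, 4.2, 3.6, 3.1],
    "org_level": ["School", "District", "State", "School", "District", "State"],
    "partner_type": ["Core", "Non-core", "Core", "Non-core", "Core", "Non-core"],
})

# Mean, standard deviation, and range across the full sample...
print(df["construct_score"].agg(["mean", "std", "min", "max"]))

# ...and separately for each key subgroup
print(df.groupby("partner_type")["construct_score"].agg(["mean", "std", "min", "max"]))
```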
In summer 2025, the study team will provide findings and recommendations from the study in a memo and presentation to IES. The memo will describe reliable and valid survey items RELs can use to learn about URE in their partnerships, drawing on all the data collected. In spring 2026, the REL Peer Review (RPR) team will publish a public report to share study findings and recommendations, aimed at future REL offerors and the URE research community. A timetable for each key study activity is below:
Feb 2024-June 2024: Web survey data collection period
May 2024-October 2024: Follow-up interview data collection period
Fall 2024-Spring 2025: Data processing and analysis
Summer 2025: Submit internal memo and presentation to IES
Spring 2026: Release public report
A17. Approval not to display the expiration date for OMB approval
IES is not requesting a waiver for the display of the OMB approval number and expiration date. The study will display the OMB expiration date.
A18. Exception to the certification statement

The study does not request or require any exceptions to the certification statement.
References

Brennan, S. E., McKenzie, J. E., Turner, T., Redman, S., Makkar, S., Williamson, A., Haynes, A., & Green, S. E. (2017). Development and validation of SEER (Seeking, Engaging with and Evaluating Research): A measure of policymakers’ capacity to engage with and use research. Health Research and Policy Systems, 15(1). https://doi.org/10.1186/s12961-016-0162-8
Farley-Ripple, E. N., Oliver, K., & Boaz, A. (2020). Mapping the community: Use of research evidence in policy and practice. Humanities and Social Sciences Communications, 7, 83. https://doi.org/10.1057/s41599-020-00571-2
Gitomer, D., & Crouse, K. (2019). Studying the use of research evidence: A review of methods. William T. Grant Foundation. https://wtgrantfoundation.org/wp-content/uploads/2019/02/A-Review-of-Methods-FINAL003.pdf
Lysenko, L. V., Abrami, P. C., Bernard, R. M., Dagenais, C., & Janosz, M. (2014). Educational research in educational practice: Predictors of use. Canadian Journal of Education/Revue Canadienne De l’éducation, 37(2), 1–26. https://journals.sfu.ca/cje/index.php/cje-rce/article/view/1477
May, H., Blackman, H., Van Horne, S., Tilley, K., Farley-Ripple, E. N., Shewchuk, S., Agboh, D., & Micklos, D. A. (2022). Survey of Evidence in Education for Schools (SEE-S) technical report. The Center for Research Use in Education (CRUE) & the Center for Research in Education and Social Policy (CRESP), University of Delaware.
Penuel, W. R., Briggs, D. C., Davidson, K. L., Herlihy, C., Sherer, D., Hill, H. C., Farrell, C., & Allen, A. (2017). How school and district leaders access, perceive, and use research. AERA Open, 3(2). https://files.eric.ed.gov/fulltext/EJ1194150.pdf
Rickinson, M., Cirkony, C., Walsh, L., Gleeson, J., Cutler, B., & Salisbury, M. (2022). A framework for understanding the quality of evidence use in education. Educational Research, 64(2), 133–158. https://doi.org/10.1080/00131881.2022.2054452
Rickinson, M., Gleeson, J., Walsh, L., Cutler, B., Cirkony, C., & Salisbury, M. (2021). Research and evidence use in Australian schools: Survey, analysis and key findings. Q Project, Monash University. https://bridges.monash.edu/articles/report/Research_and_Evidence_Use_in_Australian_Schools_Q_Project_Survey_analysis_and_key_findings/14445663
U.S. Bureau of Labor Statistics. (2022). May 2022 occupation profiles. https://www.bls.gov/oes/current/oes_stru.htm
U.S. Department of Education. (2022). Institute of Education Sciences: Fiscal year 2022 budget request. https://www2.ed.gov/about/overview/budget/budget22/justifications/x-ies.pdf
Mathematica Inc.
Our employee-owners work nationwide and around the world.
Find us at mathematica.org and edi-global.com.
Mathematica, Progress Together, and the “spotlight M” logo are registered trademarks of Mathematica Inc.
a Each REL partnership is required to co-develop a logic model that outlines how the REL’s supports will lead to these short-, medium-, and long-term outcomes; see here for one example of such a logic model, and how SFS items are used to measure outcomes.