Mathematica Policy Research

Appendix T: Specific Evaluation Questions for All Categories
Category A: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.

Was the grantee able to collect and report on the full set of core measures?
- To what extent were pediatric quality measures already being collected through an existing state health information exchange or other means?
- Which, if any, measures was the grantee unable to collect and report on? Why? What would need to happen to enable collecting and reporting on these measures?
- How complete and faithful to CMS specifications were the core measures that were collected and reported on?
- How often were the core measures collected?
- What was the cost of collecting and reporting on core measures (including provider as well as State costs)?
- To what extent is the collection of core measures sustainable after the grant's end and replicable by others?
- What does the program look like in a steady state?
- What is required to maintain the program?
- What are the prerequisites for successful replication?
- Which elements of the HIT are essential to achieving the same results?

How did the grantee collect data for and generate the core measures?
- How did the state implement Category A?
- Which resources (e.g., cross-state workgroups) played the most critical role?
- To what extent did stakeholders have input on how Category A was implemented?
- Were data use and end user data agreements established for all necessary providers/sites?
- Which, if any, measures were easy to collect, and why?
- What problems were encountered in collecting and reporting on the core measures? How were they addressed? How could other States avoid such problems?
- What data infrastructure limitations/barriers were encountered in testing the measures and how were they overcome?
- What changes to data infrastructure or reporting systems were needed?
- What kind of technical assistance was obtained? From whom? Who received the technical assistance (e.g., State personnel, providers)? What technical assistance was critical to the ability to collect and report on the core measures?
- Did the grantee integrate data collection for core measures with other data collection activities, and if so, how?
- Did the collection and reporting of core measures displace any pre-existing quality measurement or improvement activities?

How did stakeholders use core measures?
- How are HEDIS quality measure and/or CAHPS patient satisfaction data currently used, and by whom?
- Who used the core measures and how were they used?
- Did all stakeholders endorse measures and associated reports?
- Which stakeholders used which reports for what purpose? With what measurable impact?
- Who prepared the reports?
- What did they report on?
- What audiences were the reports tailored to?
- To whom were the reports distributed?
- How did the target audience respond to the reports?
- Who analyzed the core measures and decided what quality improvement activities to undertake?
- What quality improvement activities were undertaken?
- Who implemented the quality improvement activities?
- How were the core measures used to construct the incentive?
- How was it decided which of the core measures to use in constructing the incentive and how to weight them?
- What were providers' reactions to this use of the core measures? Was consensus achieved, and if so, how?

What is the impact of the core measures on improving the child health care delivery system?
- Did the core measures inform the state's quality strategies related to children's health care, and if so, how?
- Did the collection and reporting of the core measures have an impact on any other quality measurement activities? If so, what was the impact?
- How useful were the core measures in assessing program quality and managing the Medicaid and/or CHIP program?
- Were they useful to measure improvement over time?
- Were they useful to compare provider performance?
- Were they useful to compare with other payers or States?
- What would make them more useful?
- Did the core measures increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?
- Did the core measures meet stakeholders' needs for reporting on child health access and quality in Medicaid and CHIP? If not, how could these needs be met?
- What has been the impact on child health access and quality of any reporting, payment, quality improvement, or other activities based on the core measures?
- What was the impact of the core measures on health care for children not enrolled in Medicaid or CHIP?
- What were the unanticipated impacts, if any, of collecting and using the core measures (e.g., decreased provider participation in Medicaid and/or CHIP)?
Category B: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.

What kind of HIT or HIT enhancements was designed to improve the quality of children's health care and/or reduce costs?
- What hardware and software was used?
- What was the functionality of the HIT?
- What systems were connected by the HIT?
- What providers had access to the HIT?
- What kind of information was communicated?
- Did the design specifically take into consideration children with special health care needs?
- To what extent is the HIT sustainable after the grant's end and replicable by others?
- What does the program look like in a steady state?
- What is required to maintain the program?
- What are the prerequisites for successful replication?
- Which elements of the HIT are essential to achieving the same results?

How did the grantee, its partners, and its sub-contractors implement the HIT?
- Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?
- To what extent did previous knowledge, skills, or experience related to HIT provide support for the current project?
- To what extent did the Governor, Medicaid director, HHS director, and key stakeholders provide support for the current project?
- What were the critical baseline features of the state's delivery system for children in Medicaid/CHIP?
- What were the critical baseline features of the state's HIT infrastructure?
- How did interventions under other grant categories contribute to the success of the Category B HIT intervention?
- How many and what types of practices participated in the HIT effort under this category?
- How many and what types of children had clinical data made available to participating providers?
- What were the start-up and ongoing costs associated with HIT implementation (including provider as well as State costs)?
- What incentives, if any, were used to promote adoption and use of the HIT? Were they effective?
- How else was adoption and use of the HIT promoted? How did actual HIT adoption and use compare with the demonstration project's goals?
- What kind of technical assistance or training was obtained? From whom? In what quantity? Who received the technical assistance or training? What technical assistance or training was critical to implementation of the HIT?
- How was implementation of the HIT monitored? What measures were used and what was learned from them?
- What systems of quality assurance were used? Were they effective?
- Did the grantee integrate the HIT with other state or provider HIT activities, and if so, how?
- Was the HIT implemented as planned? If not, why not? What kind of adjustments had to be made?
- What implementation problems were encountered (e.g., delays, incompatible systems or other technical problems, privacy issues, cost overruns)?
- Were any of these problems unique to the pediatric setting?
- How were they addressed?
- How could other States avoid such problems?
- Was the implementation plan adequate? If not, what elements were missing?

How was the HIT used?
- Who conducted data entry?
- Who aggregated/analyzed data?
- With whom were data or analyses shared?
- Who used the data or analyses?
- What was the goal of using the data or analyses (e.g., reducing errors or duplication; increasing access, continuity, or coordination)?
- How were they in fact used?
- Were the data accurate and complete enough to use them for their intended purpose?
- What quality improvement/cost containment activities were undertaken as a result of the HIT?
- Who implemented the quality improvement/cost containment activities?
- How were the results of quality improvement/cost containment activities monitored? What measures were used and what was learned from them?

What was the impact of the HIT on the health care quality of children enrolled in Medicaid or CHIP?
- Did partnering providers gain the knowledge and skills to use the new HIT tools and system linkages?
- Did partnering providers actually use the new HIT tools and system linkages in the development and sharing of care plans?
- Did patients and families become more satisfied with the care received?
- Did the project improve the comprehensiveness of patient records (e.g., increase the number of patients that had ER data in their provider's EMR)?
- Did the project improve children's access to health care?
- Did the project reduce the chances of children experiencing a medical error?
- Did the project improve the timeliness of children's health care?
- Did the project increase the delivery of effective children's health care?
- Did the project increase rates of behavioral health screening and visits to mental health specialists (if applicable)? Did it decrease the time elapsed between the referral and the visit?
- Did the project reduce hospital admissions, ED use, and/or hospitalizations for ambulatory care-sensitive conditions?
- Did the project reduce redundant tests?
- Did the project improve the patient-/family-centeredness of children's health care?
- Did the project improve the coordination of care (e.g., increase the number of providers who were informed of care a child received from another provider)?
- Did the project have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?
- Was the cost of care per participating child reduced?
- Did the HIT result in cost savings, and if so, who received the benefit?
- Was it sufficient to offset the cost of implementing the HIT?
- What elements of the model were responsible for the cost savings?
- Did the project reduce health care disparities?
- Did the project increase evidence-based decision-making by consumers, payers, providers, the State, or other stakeholders?
- What was the impact of the HIT on health care for children not enrolled in Medicaid or CHIP?
- What were the unanticipated impacts, if any, of the HIT?
- Which aspects of the HIT were largely responsible for its impact? Which aspects are essential to achieving the same results?
- How long must the HIT be in effect to begin demonstrating results?

Did the model HIT increase transparency and consumer (youth/family) choice? (For consumer-facing HIT only.)
- Did consumers use the HIT?
- What proportion of consumers who had the HIT available to them used it?
- What were the characteristics of consumers who used the HIT? Of those who did not?
- For what purpose did consumers use the HIT?
- Did consumers find the HIT useful? Was the EHR/PHR easy to use?
- Did consumers make better-informed decisions based on information from the HIT?
Category C: Evaluation Questions

Note: Potential evaluation questions from the HHS solicitation are included in regular font; additional evaluation questions developed by the national evaluation team are italicized.

What was the provider-based model of care that was implemented?
- Who was involved in planning the provider-based model of care? Over what period of time?
- What was the level of cooperation among stakeholders, and how was it maintained?
- Was a stakeholder collaborative framework used to design, implement, and sustain the provider-based model of care?
- What practices work best to encourage provider participation as well as collaboration among participating providers, payers, and stakeholders?
- What were the most common benefits resulting from such collaboration, and did these benefits extend beyond the particular provider-based model under review?
- What specific strategies were planned to improve quality?
- What provider-based model was implemented (e.g., detailed description, including the PCMH definition/standards used)?
- What types and amounts of payments were offered to participating providers? How did this differ from the prior payment approach?
- What types of resources and technical assistance were available to practices participating in the Learning Collaborative (if applicable)?

How was the provider-based model to improve health care quality implemented?
- Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?
- To what extent did previous knowledge, skills, or experience related to provider-based models (e.g., PCMH) and/or other quality improvement approaches provide support for the current project?
- To what extent did the Governor, Medicaid director, HHS director, and key stakeholders provide support for the current project?
- What were the critical baseline features of the state's delivery system for children in Medicaid/CHIP?
- How did interventions under other grant categories contribute to the success of the Category C intervention?
- How many and what types of practices implemented the provider-based model (e.g., implemented a PCMH, participated in the learning collaborative, received technical assistance)?
- How many and what types of children received services through the new provider-based model?
- What incentives, if any, were used to promote implementation? How else was implementation promoted?
- What kind of technical assistance or training was obtained? From whom? In what quantity? Who received the technical assistance or training? What technical assistance or training was critical to implementation of the provider-based model?
- How many and what types of practices participated in the Learning Collaborative? What types of content were delivered? How many sessions were held? (if applicable)
- How was implementation of the provider-based model monitored? What measures were used and what was learned from them?
- What were the start-up and ongoing costs associated with implementation of the provider-based model (including provider as well as State costs)?
- Did the grantee integrate the provider-based model with other state or provider quality improvement activities, and if so, how?
- Was the provider-based model implemented as planned? If not, why not? What kind of adjustments had to be made?
- What implementation problems were encountered? Were any of these problems unique to the pediatric setting? How were they addressed? How could other States avoid such problems?
- Was the implementation plan adequate? If not, what elements were missing?
- To what extent is the provider-based model sustainable after the grant's end and replicable by others?
- What does the program look like in a steady state?
- To what extent is the State prepared to expand implementation of this model?
- What is required to maintain the program?
- What are the prerequisites for successful replication?
- Which elements of the model are essential to achieving the same results?

What was the impact of the provider-based model on children's health care quality?
- Did partnering providers gain knowledge of the provider-based interventions being implemented (e.g., the PCMH concept, technical assistance availability)?
- Did partnering providers believe that PCMHs could improve quality of care for children?
- Did partnering providers gain the skills to implement the model (e.g., competencies in population management tools, care coordination, evidence-based care, systems-based quality and safety, leadership, family and community engagement, advocacy, and increasing access to care)?
- Did partnering providers have the motivation to implement the model?
- Did partnering providers believe that sufficient incentives were offered to cover the cost of making practice transformations?
- Did partnering providers increase their "medical homeness"?
- Did partnering providers report high satisfaction with the learning collaborative and/or technical assistance received?
- Did partnering providers make changes as a result of participation in the learning collaborative (if applicable)?
- Did partnering providers use practice-level data for quality improvement?
- Did partnering providers have the infrastructure to track the provider-based model's impact on quality and health outcomes?
- Did patients and families believe that the provider-based model could improve the quality of care?
- Did patients and families perceive changes related to the new provider-based model (e.g., changes in values, systems, principles, and operating characteristics in line with medical home concepts, and increased shared decision-making, clinician compassion, coordination of care, cultural sensitivity, and access)?
- Did patients and families develop a better understanding of their conditions and how to manage them?
- Did patients and families receive help arranging care or other services?
- Did patients and families become more involved in care decisions?
- Did patients and families become more satisfied with the care received?
- Did the project improve children's access to health care?
- Did the project reduce the chances of children experiencing a medical error?
- Did the project improve the timeliness of children's health care?
- Did the project increase the delivery of effective children's health care?
- Did the project increase EPSDT rates (if applicable)?
- Did the project increase immunization rates (if applicable)?
- Did the project increase rates of behavioral health screening and visits to mental health specialists (if applicable)? Did it decrease the time elapsed between the referral and the visit?
- Did the project reduce hospital admissions, ED use, and/or hospitalizations for ambulatory care-sensitive conditions?
- Did the project reduce redundant tests?
- Did the project increase the use of community-based services and social services (if applicable)?
- Did the project decrease the rate of prescriptions for psychotropic drugs (for states targeting children with serious emotional disturbances)?
- Did the project improve the patient-/family-centeredness of children's health care?
- Did the project improve the coordination of care (e.g., increase the number of providers who were informed of care a child received from another provider)?
- Did the project have an impact on efficiency (e.g., decrease inappropriate health services and psychotropic drug use, if applicable; decrease duplication of services)?
- Was the cost of care per participating child reduced?
- Did the provider-based model result in cost savings, and if so, who received the benefit?
- Was it sufficient to offset the cost of implementing the provider-based model?
- What elements of the model were responsible for the cost savings?
- Did the project reduce health care disparities?
- Did the project increase evidence-based decision-making by consumers, payers, providers, the State, or other stakeholders?
- What was the impact of the provider-based model on health care for children not enrolled in Medicaid or CHIP?
- What were the unanticipated impacts, if any, of the provider-based model?
- Which aspects of the provider-based model were largely responsible for its impact? Which aspects are essential to achieving the same results?
- What practice barriers and facilitators affect the process of transformation into a medical home?
- How long must the provider-based model be in effect to begin demonstrating results?
Category D: Evaluation Questions

Was the grantee able to get the model pediatric EHR adopted and used?
- How did the pediatric EHR intersect with CHIPRA demonstration activities in other categories?
- Who was involved in promoting adoption and use of the model pediatric EHR? What roles did they play? Were all the important stakeholders included?
- What incentives, if any, were used to promote adoption and use of the model pediatric EHR? Were they effective?
- How else was adoption and use of the model pediatric EHR promoted?
- How was adoption and use of the model pediatric EHR monitored? What measures were used and what was learned from them?
- How did actual model pediatric EHR adoption and use compare with the demonstration project's goals?
- Who adopted the model pediatric EHR?
- What were the characteristics of providers who adopted and who chose not to adopt the model pediatric EHR?
- Did any providers that decided to adopt the model pediatric EHR fail to implement it, and if so, why?
- What were the start-up and ongoing costs associated with promoting the adoption and use of the model pediatric EHR?
- To what extent is the model pediatric EHR sustainable after the grant's end and replicable by others?
- What is required to maintain the program?
- What are the prerequisites for successful replication?

How was the model pediatric EHR implemented by providers?
- What hardware and software was used?
- What aspects of the model pediatric EHR, if any, did not get implemented?
- What systems were connected to the model pediatric EHR?
- Was the implementation plan adequate? If not, what elements were missing?
- Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?
- How was implementation of the model pediatric EHR monitored? What measures were used and what was learned from them?
- Was the model pediatric EHR implemented as planned? If not, why not? What kind of adjustments had to be made?
- What implementation problems were encountered (e.g., delays, incompatible systems or other technical problems, privacy issues, cost overruns)?
- Were any of these problems unique to the pediatric setting?
- How were they addressed?
- How could other States avoid such problems?
- What kind of technical assistance was obtained? From whom? Who received the technical assistance? What technical assistance was critical to implementation of the model pediatric EHR?
- What systems of quality assurance were used? Were they effective?
- What were the start-up and ongoing costs associated with the model pediatric EHR implementation?
- Did the grantee integrate the model pediatric EHR with other State or provider Health IT systems or activities, and if so, how?

How were data from the model pediatric EHR used?
- What quality improvement/cost containment/consumer empowerment activities were undertaken as a result of the model pediatric EHR?
- Were data from the EHR used to report on the core quality measure set for children's health care or to demonstrate meaningful use for the Recovery Act Medicaid Health IT incentive payments?
- Who implemented the quality improvement/cost containment/consumer empowerment activities?
- How were the results of quality improvement/cost containment/consumer empowerment activities monitored? What measures were used and what was learned from them?

What was the impact of the model pediatric EHR on children's health care quality?
- Did the model pediatric EHR improve children's access to health care?
- Did the model pediatric EHR reduce the chances of children experiencing a medical error?
- Did the model pediatric EHR improve the timeliness of children's health care?
- Did the model pediatric EHR increase the delivery of effective children's health care?
- Did the model pediatric EHR improve the patient-/family-centeredness of children's health care?
- Did the model pediatric EHR have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?
- Did the model pediatric EHR result in cost savings, and if so, who received the benefit?
- Was it sufficient to offset the cost of implementing the model pediatric EHR?
- What elements of the model were responsible for the cost savings?
- Did the model pediatric EHR increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?
- What were the unanticipated impacts, if any, of the model pediatric EHR?
- Which aspects of the model pediatric EHR were largely responsible for its impact? Which aspects are essential to achieving the same results?
- How long after implementation of the model pediatric EHR will demonstrable results begin to appear?

Did the model pediatric EHR increase transparency and consumer (youth/family) choice?
- If the model pediatric EHR contains a personal health record portal: what are the characteristics of those who used it, how was it used, and how was it perceived?
- Did consumers make decisions based on information from their EHR?
Category E: Evaluation Questions

What was the model that was implemented?
- Who was involved in planning the model? Over what period of time?
- What was the level of cooperation among stakeholders, and how was it maintained?
- Was a stakeholder collaborative framework used to design, implement, and sustain the model?
- What practices work best to encourage provider participation as well as collaboration among participating providers, payers, and stakeholders?
- What were the most common benefits resulting from such collaboration, and did these benefits extend beyond the particular model under review?
- What specific strategies were planned to improve quality?

How was the model to improve health care quality implemented?
- Was the implementation plan adequate? If not, what elements were missing?
- Who was involved in the implementation effort? What roles did they play? Were all the important stakeholders included?
- What incentives, if any, were used to promote implementation? How else was implementation promoted?
- Was the model implemented as planned? If not, why not? What kind of adjustments had to be made?
- What implementation problems were encountered? Were any of these problems unique to the pediatric setting? How were they addressed? How could other States avoid such problems?
- What kind of technical assistance was obtained? From whom? Who received the technical assistance? What technical assistance was critical to implementation of the model?
- How was implementation monitored? What measures were used and what was learned from them?
- What were the start-up and ongoing costs associated with implementation (including stakeholder as well as State costs)?
- Did the grantee integrate the program with other State or provider quality improvement activities, and if so, how?
- To what extent is the model sustainable after the grant's end and replicable by others?
- What does the program look like in a steady state?
- To what extent is the State prepared to expand implementation of this model?
- What is required to maintain the model?
- What are the prerequisites for successful replication?
- Which elements of the model are essential to achieving the same results?

What was the impact of the model on children's health care quality?
- Did the model increase knowledge of providers or stakeholders?
- Did the model help providers or stakeholders acquire new skills?
- How did Category E initiatives contribute to the overall impacts achieved by a state?
- Did the model improve children's access to health care?
- Did the model reduce the chances of children experiencing a medical error?
- Did the model improve the timeliness of children's health care?
- Did the model increase the delivery of effective children's health care?
- Did the model improve the patient-/family-centeredness of children's health care?
- Did the model have an impact on efficiency (e.g., decrease inappropriate health services, decrease duplication of services)?
- Did the program model result in cost savings, and if so, who received the benefit?
- Was it sufficient to offset the cost of implementing the model?
- What elements of the model were responsible for the cost savings?
- Did the model reduce health care disparities?
- Did the model increase evidence-based decision making by consumers, payers, providers, the State, or other stakeholders?
- What was the impact of the model on health care for children not enrolled in Medicaid or CHIP?
- What were the unanticipated impacts, if any, of the program?
- Which aspects of the model were largely responsible for its impact? Which aspects are essential to achieving the same results?
- How long must the program be in effect to begin demonstrating results?
File Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Author: Sharon D. Clark
File Created: 2021-01-28