Assessment: Discussion stating the problem and research question
Assessment type: Discussion
Word limit/length: 700 words (inclusive of in-text citations and reference list)
Overview
Choose a topic, develop a problem statement and formulate a research question that critically examines an area of personal interest in Mental Health located within your discipline or course specialisation. The format of the research question will fit the type of critical review that suits your topic and problem statement. You will discuss this process.
Learning Outcomes
This assessment task is aligned with the following learning outcomes:
1. Develop a research question and problem statement to guide critical examination of an area of personal interest in Mental Health, located within your discipline or course specialisation.
Instructions
Use the associated readings 1.2a-e and the Assignment rubric (over page) as guidance:
Write your assignment as a discussion, including:
1. Problem statement (significance of the research problem);
2. The systematic review type that best fits the problem;
3. Structured question;
4. The question format (e.g. ‘PICO’, ‘PICo’, ‘PEO’) and how the question fits the format;
5. Justification of the question format, referring to Reading 1.2 by Munn et al. (2018) and your problem statement;
6. Reference/s (a minimum of 5 references).
Assignment Rubric – CRH
Grade: High Distinction
Evidence: Assignment instructions have been precisely followed and no unnecessary material has been included. Your persuasive yet concise problem statement perfectly suits the chosen review type and is explicitly relevant to your research question. The outstanding question is flawlessly structured and its relationship to the format is skilfully shown. An excellent, suitably brief justification for the question format coherently refers to the problem statement and includes sound referencing in the correct style.

Grade: Distinction
Evidence: Assignment instructions have been followed and no unnecessary material has been included. Your problem statement is coherent, concise, and strong; it is suitable for the chosen review type and relevant to the research question. The question is well-structured, and the parts of its format are very appropriately specified. A thorough, suitably brief justification is given for the question format, refers to the problem statement, and includes at least one correctly cited reference in the recommended style.
(a.) Problem Statement
Staff morale, or workplace culture, is a workforce phenomenon that challenges every organisation at some point (Day, Minichiello, & Madison, 2006). Morale is a strong indicator of organisational well-being and efficiency (Brode, 2012), and its consideration is imperative because it can have substantial and widespread consequences for an organisation (Day et al., 2006). Organisational culture is often deep-seated and challenging to shift (Brunges & Foley-Brinza, 2014), and attaining, supporting, and maintaining a positive workforce culture is one of the many challenges associated with leadership (Brode, 2012).
Both intrinsic and extrinsic factors can influence staff morale. Professional support, leadership traits, and management styles, which are all extrinsic factors, are the leading themes in poor morale amongst nurses and healthcare workers. Conducting a literature review allows these factors to be explored and analysed. The significance of morale has been thoroughly documented in the nursing literature; however, there are no systematic reviews pertaining to how differing factors of leadership shape workplace culture (Stapleton et al., 2007).
(b.) The Systematic Review Type
The most appropriate systematic review type for this research topic is an experiential (qualitative) review, with the emphasis on evaluating nurses' perceptions of the leadership traits that influence staff morale. Munn, Stern, Aromataris, Lockwood, and Jordan (2018) note that the question format guides the question's development, thereby influencing the type of review required. As the research question specifically examines the subjective experience of nurses' perceptions, a non-positivist approach is best suited (Munn et al., 2018).
(c.) The Structured Question
What factors of leadership do nurses perceive as influential on staff morale?
(d.) The Question Format
The PICo format, in this instance, is employed to drive the formation of the question and examine the
population’s subjective perception of the phenomenon of significance within a particular environment
(Munn et al., 2018).
Population – nurses
Phenomenon of Interest – factors of leadership
Context – influential on staff morale
(e.) Justification of Question Format
Ensuring at the outset that an appropriate question is asked, and that it is linked to the problem, lays the foundation for retrieving material from diverse sources (Munn et al., 2018). Formulating the question using the PICo technique provides a methodical formula that reflects the problem statement and ensures all sections of the question support evidence-based searching of the literature (Milner & Cosme, 2017). Munn et al. (2018) highlight the importance of generating a well-structured, precise question before collecting applicable literature on a subject for further study or implementing a practice change or guideline. The question criteria of population, phenomenon of interest, and context ensure that the search of the primary literature is thorough and that bias is recognised while developing the systematic review (Pollock & Berge, 2018). The aim of this critical literature review is therefore to identify and explore the factors of leadership that influence the morale of nurses.
(f.) Reference List
Brode, A. M. (2012). The leadership role in organizational morale: A case study (Order No. 3490498). Available from ProQuest Central. (916923673).
Brunges, M., & Foley-Brinza, C. (2014). Projects for increasing job satisfaction and creating a healthy work environment. AORN Journal, 100(6), 670-681. doi:10.1016/j.aorn.2014.01.029
Day, G. E., Minichiello, V., & Madison, J. (2006). Nursing morale: What does the literature reveal? Australian Health Review, 30(4), 516-524.
Hoffmann, T., Bennett, S., & Del Mar, C. (Eds.). (2017). Evidence-based practice across the health professions (3rd ed.). Chatswood, NSW: Elsevier.
Milner, K. A., & Cosme, S. (2017). The PICO game: An innovative strategy for teaching step 1 in evidence-based practice. Worldviews on Evidence-Based Nursing, 14(6), 514-516. doi:10.1111/wvn.12255
Munn, Z., Stern, C., Aromataris, E., Lockwood, C., & Jordan, Z. (2018). What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Medical Research Methodology, 18(1), 1-9. doi:10.1186/s12874-017-0468-4
Pollock, A., & Berge, E. (2018). How to do a systematic review. International Journal of Stroke, 13(2), 138-156. doi:10.1177/1747493017743796
Stapleton, P., Henderson, A., Creedy, D. K., Cooke, M., Patterson, E., Alexander, H., Haywood, A., & Dalton, M. (2007). Boosting morale and improving performance in the nursing setting. Journal of Nursing Management, 15, 811-816. doi:10.1111/j.1365-2934.2007.00745.x
CORRESPONDENCE Open Access
What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences
Zachary Munn*, Cindy Stern, Edoardo Aromataris, Craig Lockwood and Zoe Jordan
Abstract
Background: Systematic reviews have been considered as the pillar on which evidence-based healthcare rests. Systematic review methodology has evolved and been modified over the years to accommodate the range of questions that may arise in the health and medical sciences. This paper explores a concept still rarely considered by novice authors and in the literature: determining the type of systematic review to undertake based on a research question or priority.
Results: Within the framework of the evidence-based healthcare paradigm, defining the question and type of systematic review to conduct is a pivotal first step that will guide the rest of the process and has the potential to impact on other aspects of the evidence-based healthcare cycle (evidence generation, transfer and implementation). It is something that novice reviewers (and others not familiar with the range of review types available) need to take account of but frequently overlook. Our aim is to provide a typology of review types and describe key elements that need to be addressed during question development for each type.
Conclusions: In this paper a typology is proposed of various systematic review methodologies. The review types are defined and situated with regard to establishing corresponding questions and inclusion criteria. The ultimate objective is to provide clarified guidance for both novice and experienced reviewers and a unified typology with respect to review types.
Keywords: Systematic reviews, Evidence-based healthcare, Question development
Background
Systematic reviews are the gold standard to search for, collate, critique and summarize the best available evidence regarding a clinical question [1, 2]. The results of systematic reviews provide the most valid evidence base to inform the development of trustworthy clinical guidelines (and their recommendations) and clinical decision making [2]. They follow a structured research process that requires rigorous methods to ensure that the results are both reliable and meaningful to end users. Systematic reviews are therefore seen as the pillar of evidence-based healthcare [3–6]. However, systematic review methodology, and the language used to express that methodology, has progressed significantly since their appearance in healthcare in the 1970s and 80s [7, 8]. The diachronic nature of this evolution has caused, and continues to cause, great confusion for both novice and experienced researchers seeking to synthesise various forms of evidence. Indeed, it has already been argued that the current proliferation of review types is creating challenges for the terminology for describing such reviews [9]. These fundamental issues primarily relate to a) the types of questions being asked and b) the types of evidence used to answer those questions.
Traditionally, systematic reviews have been predominantly conducted to assess the effectiveness of health interventions by critically examining and summarizing the results of randomized controlled trials (RCTs) (using
* Correspondence: Zachary.Munn@adelaide.edu.au
The Joanna Briggs Institute, The University of Adelaide, 55 King William Road, North Adelaide, South Australia 5005, Australia
© The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver
(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Munn et al. BMC Medical Research Methodology (2018) 18:5
DOI 10.1186/s12874-017-0468-4
meta-analysis where feasible) [4, 10]. However, health professionals are concerned with questions other than whether an intervention or therapy is effective, and this is reflected in the wide range of research approaches utilized in the health field to generate knowledge for practice. As such, Pearson and colleagues have argued for a pluralistic approach when considering what counts as evidence in health care; suggesting that not all questions can be answered from studies measuring effectiveness alone [4, 11]. As the methods to conduct systematic reviews have evolved and advanced, so too has the thinking around the types of questions we want and need to answer in order to provide the best possible, evidence-based care [4, 11].
Even though most systematic reviews conducted today still focus on questions relating to the effectiveness of medical interventions, many other review types which adhere to the principles and nomenclature of a systematic review have emerged to address the diverse information needs of healthcare professionals and policy makers. This increasing array of systematic review options may be confusing for the novice systematic reviewer, and in our experience as educators, peer reviewers and editors we find that many beginner reviewers struggle to achieve conceptual clarity when planning for a systematic review on an issue other than effectiveness. For example, reviewers regularly try to force their question into the PICO format (population, intervention, comparator and outcome), even though their question may be an issue of diagnostic test accuracy or prognosis; attempting to define all the elements of PICO can confound the remainder of the review process. The aim of this article is to propose a typology of systematic review types aligned to review questions to assist and guide the novice systematic reviewer and editors, peer-reviewers and policy makers. To our knowledge, this is the first classification of types of systematic reviews foci conducted in the medical and health sciences into one central typology.
Review typology
For the purpose of this typology a systematic review is defined as a robust, reproducible, structured critical synthesis of existing research. While other approaches to the synthesis of evidence exist (including but not limited to literature reviews, evidence maps, rapid reviews, integrative reviews, scoping and umbrella reviews), this paper seeks only to include approaches that subscribe to the above definition. As such, ten different types of systematic review foci are listed below and in Table 1. In this proposed typology, we provide the key elements for formulating a question for each of the 10 review types.
1. Effectiveness reviews [12]
2. Experiential (Qualitative) reviews [13]
3. Costs/Economic Evaluation reviews [14]
4. Prevalence and/or Incidence reviews [15]
5. Diagnostic Test Accuracy reviews [16]
6. Etiology and/or Risk reviews [17]
7. Expert opinion/policy reviews [18]
8. Psychometric reviews [19]
9. Prognostic reviews [20]
10. Methodological systematic reviews [21, 22]
Effectiveness reviews
Systematic reviews assessing the effectiveness of an intervention or therapy are by far the most common. Essentially effectiveness is the extent to which an intervention, when used appropriately, achieves the intended effect [11]. The PICO approach (see Table 1) to question development is well known [23] and comprehensive guidance for these types of reviews is available [24]. Characteristics regarding the population (e.g. demographic and socioeconomic factors and setting), intervention (e.g. variations in dosage/intensity, delivery mode, and frequency/duration/timing of delivery), comparator (active or passive) and outcomes (primary and secondary including benefits and harms, how outcomes will be measured including the timing of measurement) need to be carefully considered and appropriately justified.
Experiential (qualitative) reviews
Experiential (qualitative) reviews focus on analyzing human experiences and cultural and social phenomena. Reviews including qualitative evidence may focus on the engagement between the participant and the intervention; as such a qualitative review may describe an intervention, but its question focuses on the perspective of the individuals experiencing it as part of a larger phenomenon. They can be important in exploring and explaining why interventions are or are not effective from a person-centered perspective. Similarly, this type of review can explain and explore why an intervention is not adopted in spite of evidence of its effectiveness [4, 13, 25]. They are important in providing information on the patient's experience, which can enable the health professional to better understand and interact with patients. The mnemonic PICo can be used to guide question development (see Table 1). With qualitative evidence there is no outcome or comparator to be considered. A phenomenon of interest is the experience, event or process occurring that is under study, such as response to pain or coping with breast cancer; it differs from an intervention in its focus. Context will vary depending on the objective of the review; it may include consideration of cultural factors such as geographic location, specific racial or gender based interests, and details about the setting such as acute care, primary healthcare, or the community [4, 13, 25]. Reviews assessing the experience of a phenomenon may opt to use a mixed methods approach and also include quantitative data, such as that from surveys. There are reporting guidelines available for qualitative reviews, including the 'Enhancing transparency in reporting the synthesis of qualitative research' (ENTREQ) statement [26] and the newly proposed meta-ethnography reporting guidelines (eMERGe) [27].
Costs/economic evaluation reviews
Costs/Economics reviews assess the costs of a certain intervention, process, or procedure. In any society, resources available (including dollars) have alternative uses. In order to make the best decisions about alternative courses of action evidence is needed on the health benefits and also on the types and amount of resources needed for these courses of action. Health economic evaluations are particularly useful to inform health policy decisions attempting to achieve equality in healthcare provision to all members of society and are commonly used to justify the existence and development of health services, new health technologies and also, clinical guideline development [14]. Issues of cost and resource use may be standalone reviews or components of effectiveness reviews [28]. Cost/Economic evaluations are examples of a quantitative review and as such can follow the PICO mnemonic (see Table 1). Consideration should be given to whether the entire world/international population is to be considered or only a population (or sub-population) of a particular country. Details of the intervention and comparator should include the nature of services/care delivered, time period of delivery, dosage/intensity, co-interventions, and personnel undertaking delivery. Consider if outcomes will only focus on
Table 1 Types of reviews

Review Type: Effectiveness
Aim: To evaluate the effectiveness of a certain treatment/practice in terms of its impact on outcomes
Question Format: Population, Intervention, Comparator/s, Outcomes (PICO) [23]
Question Example: What is the effectiveness of exercise for treating depression in adults compared to no treatment or a comparison treatment? [69]

Review Type: Experiential (Qualitative)
Aim: To investigate the experience or meaningfulness of a particular phenomenon
Question Format: Population, Phenomena of Interest, Context (PICo) [13]
Question Example: What is the experience of undergoing high technology medical imaging (such as Magnetic Resonance Imaging) in adult patients in high income countries? [70]

Review Type: Costs/Economic Evaluation
Aim: To determine the costs associated with a particular approach/treatment strategy, particularly in terms of cost effectiveness or benefit
Question Format: Population, Intervention, Comparator/s, Outcomes, Context (PICOC) [14]
Question Example: What is the cost effectiveness of self-monitoring of blood glucose in type 2 diabetes mellitus in high income countries? [71]

Review Type: Prevalence and/or Incidence
Aim: To determine the prevalence and/or incidence of a certain condition
Question Format: Condition, Context, Population (CoCoPop) [15]
Question Example: What is the prevalence/incidence of claustrophobia and claustrophobic reactions in adult patients undergoing MRI? [72]

Review Type: Diagnostic Test Accuracy
Aim: To determine how well a diagnostic test works in terms of its sensitivity and specificity for a particular diagnosis
Question Format: Population, Index Test, Reference Test, Diagnosis of Interest (PIRD) [16]
Question Example: What is the diagnostic test accuracy of nutritional tools (such as the Malnutrition Screening Tool) compared to the Patient Generated Subjective Global Assessment amongst patients with colorectal cancer to identify undernutrition? [73]

Review Type: Etiology and/or Risk
Aim: To determine the association between particular exposures/risk factors and outcomes
Question Format: Population, Exposure, Outcome (PEO) [17]
Question Example: Are adults exposed to radon at risk for developing lung cancer? [74]

Review Type: Expert opinion/policy
Aim: To review and synthesize current expert opinion, text or policy on a certain phenomena
Question Format: Population, Intervention or Phenomena of Interest, Context (PICo) [18]
Question Example: What are the policy strategies to reduce maternal mortality in pregnant and birthing women in Cambodia, Thailand, Malaysia and Sri Lanka? [75]

Review Type: Psychometric
Aim: To evaluate the psychometric properties of a certain test, normally to determine the reliability and validity of a particular test or assessment
Question Format: Construct of interest or the name of the measurement instrument(s), Population, Type of measurement instrument, Measurement properties [31, 32]
Question Example: What is the reliability, validity, responsiveness and interpretability of methods (manual muscle testing, isokinetic dynamometry, hand held dynamometry) to assess muscle strength in adults? [76]

Review Type: Prognostic
Aim: To determine the overall prognosis for a condition, the link between specific prognostic factors and an outcome and/or prognostic/prediction models and prognostic tests
Question Format: Population, Prognostic Factors (or models of interest), Outcome (PFO) [20, 34–36]
Question Example: In adults with low back pain, what is the association between individual recovery expectations and disability outcomes? [77]

Review Type: Methodology
Aim: To examine and investigate current research methods and potentially their impact on research quality
Question Format: Types of Studies, Types of Data, Types of Methods, Outcomes (SDMO) [39]
Question Example: What is the effect of masked (blind) peer review for quantitative studies in terms of the study quality as reported in published reports? (question modified from Jefferson 2007) [40]
resource usage and costs of the intervention and its comparator(s) or additionally on cost-effectiveness. Context (including perspective) can also be considered in these types of questions e.g. health setting(s).
Prevalence and/or incidence reviews
Essentially prevalence or incidence reviews measure disease burden (whether at a local, national or global level). Prevalence refers to the proportion of a population who have a certain disease whereas incidence relates to how often a disease occurs. These types of reviews enable governments, policy makers, health professionals and the general population to inform the development and delivery of health services and evaluate changes and trends in diseases over time [15, 29]. Prevalence or incidence reviews are important in the description of geographical distribution of a variable and the variation between subgroups (such as gender or socioeconomic status), and for informing health care planning and resource allocation. The CoCoPop framework can be used for reviews addressing a question relevant to prevalence or incidence (see Table 1). Condition refers to the variable of interest and can be a health condition, disease, symptom, event or factor. Information regarding how the condition will be measured, diagnosed or confirmed should be provided. Environmental factors can have a substantial impact on the prevalence or incidence of a condition so it is important that authors define the context or specific setting relevant to their review question [15, 29]. The population or study subjects should be clearly defined and described in detail.
Diagnostic test accuracy reviews
Systematic reviews assessing diagnostic test accuracy provide a summary of test performance and are important for clinicians and other healthcare practitioners in order to determine the accuracy of the diagnostic tests they use or are considering using [16]. Diagnostic tests are used by clinicians to identify the presence or absence of a condition in a patient for the purpose of developing an appropriate treatment plan. Often there are several tests available for diagnosis. The mnemonic PIRD is recommended for question development for these types of systematic reviews (see Table 1). The population is all participants who will undergo the diagnostic test while the index test(s) is the diagnostic test whose accuracy is being investigated in the review. Consider if multiple iterations of a test exist and who carries out or interprets the test, the conditions the test is conducted under and specific details regarding how the test will be conducted. The reference test is the 'gold standard' test to which the results of the index test will be compared. It should be the best test currently available for the diagnosis of the condition of interest. Diagnosis of interest relates to what diagnosis is being investigated in the systematic review. This may be a disease, injury, disability or any other pathological condition [16].
Etiology and/or risk reviews
Systematic reviews of etiology and risk are important for informing healthcare planning and resource allocation, and are particularly valuable for decision makers when making decisions regarding health policy and prevention of adverse health outcomes. The common objective of many of these types of reviews is to determine whether and to what degree a relationship exists between an exposure and a health outcome. Use of the PEO mnemonic is recommended (see Table 1). The review question should outline the exposure, disease, symptom or health condition of interest, the population or groups at risk, as well as the context/location, the time period and the length of time where relevant [17]. The exposure of interest refers to a particular risk factor or several risk factors associated with a disease/condition of interest in a population, group or cohort who have been exposed to them. It should be clearly reported what the exposure or risk factor is, and how it may be measured/identified including the dose and nature of exposure and the duration of exposure, if relevant. Important outcomes of interest relevant to the health issue and important to key stakeholders (e.g. knowledge users, consumers, policy makers, payers etc.) must be specified. Guidance now exists for conducting these types of reviews [17]. As these reviews rely heavily on observational studies, the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) [30] reporting guidelines should be referred to in addition to the PRISMA guidelines.
Expert opinion/policy reviews
Expert opinion and policy analysis systematic reviews focus on the synthesis of narrative text and/or policy. Expert opinion has a role to play in evidence-based healthcare, as it can be used to either complement empirical evidence or, in the absence of research studies, stand alone as the best available evidence. The synthesis of findings from expert opinion within the systematic review process is not well recognized in mainstream evidence-based practice. However, in the absence of research studies, the use of a transparent systematic process to identify the best available evidence drawn from text and opinion can provide practical guidance to practitioners and policy makers [18]. While a number of mnemonics have been discussed previously that can be used for opinion and text, not all elements necessarily apply to every text or opinion-based review, and use of mnemonics should be considered a guide rather than a policy. Broadly PICo can be used where I can refer to either the intervention or a phenomena of interest (see Table 1). Reviewers will need to describe the population, giving attention to whether specific characteristics of interest, such as age, gender, level of education or professional qualification are important to the question. As with other types of reviews, interventions may be broad areas of practice management, or specific, singular interventions. However, reviews of text or opinion may also reflect an interest in opinions around power, politics or other aspects of health care other than direct interventions, in which case, these should be described in detail. The use of a comparator and specific outcome statement is not necessarily required for a review of text and opinion based literature. In circumstances where they are considered appropriate, the nature and characteristics of the comparator and outcomes should be described [18].
Psychometric reviews
Psychometric systematic reviews (or systematic reviews of measurement properties) are conducted to assess the quality/characteristics of health measurement instruments to determine the best tool for use (in terms of its validity, reliability, responsiveness etc.) in practice for a certain condition or factor [31–33]. A psychometric systematic review may be undertaken on a) the measurement properties of one measurement instrument, b) the measurement properties of the most commonly utilized measurement instruments measuring a specific construct, c) the measurement properties of all available measurement instruments to measure a specific construct in a specific population or d) the measurement properties of all available measurement instruments in a specific population that does not specify the construct to be measured. The COnsensus-based Standards for the selection of health Measurement Instruments (COSMIN) group have developed guidance for conducting these types of reviews [19, 31]. They recommend firstly defining the type of review to be conducted as well as the construct or the name(s) of the outcome measurement instrument(s) of interest, the target population, the type of measurement instrument of interest (e.g. questionnaires, imaging tests) and the measurement properties which the review investigates (see Table 1).
Prognostic reviews
Prognostic research is of high value as it provides clinicians and patients with information regarding the course of a disease and potential outcomes, in addition to potentially providing useful information to deliver targeted therapy relating to specific prognostic factors [20, 34, 35]. Prognostic reviews are complex and methodology for these types of reviews is still under development, although a Cochrane methods group exists to support this approach [20]. Potential systematic reviewers wishing to conduct a prognostic review may be interested in determining the overall prognosis for a condition, the link between specific prognostic factors and an outcome and/or prognostic/prediction models and prognostic tests [20, 34–37]. Currently there is little information available to guide the development of a well-defined review question; however the Quality in Prognosis Studies (QUIPS) tool [34] and the Checklist for critical appraisal and data extraction for systematic reviews of prediction modelling studies (CHARMS Checklist) [38] have been developed to assist in this process (see Table 1).
Methodology systematic reviews
Systematic reviews can be conducted for methodological purposes [39], and examples of these reviews are available in the Cochrane Database [40, 41] and elsewhere [21]. These reviews can be performed to examine any methodological issue relating to the design, conduct and review of research studies, as well as evidence syntheses. There is limited guidance for conducting these reviews, although an appendix in the Cochrane Handbook focuses specifically on methodological reviews [39]. It suggests following the SDMO (Studies, Data, Methods, Outcomes) approach: the types of studies should define all eligible study designs as well as any thresholds for inclusion (e.g. RCTs and quasi-RCTs); the types of data should detail the raw material for the methodology studies (e.g. original research submitted to biomedical journals); the comparisons of interest should be described under types of methods (e.g. blinded peer review versus unblinded peer review); and lastly, both primary and secondary outcome measures should be listed (e.g. quality of the published report) [39] (see Table 1).
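Purely as an illustrative sketch, the SDMO elements can be summarised as a plain mapping; the keys follow the four SDMO headings and the example values are those given in the text above. The mapping and helper function are hypothetical conveniences, not part of the Cochrane guidance.

```python
# Hypothetical sketch of an SDMO question for a methodology review; the
# example values are taken from the worked example in the text above.
sdmo_question = {
    "studies":  "RCTs and quasi-RCTs",  # eligible designs and thresholds
    "data":     "original research submitted to biomedical journals",
    "methods":  "blinded peer review versus unblinded peer review",
    "outcomes": ["quality of published report"],  # primary and secondary
}

def is_complete(question):
    """Check that all four SDMO elements have been specified."""
    return {"studies", "data", "methods", "outcomes"} <= set(question)
```

All four elements need to be pinned down before the search strategy and inclusion criteria can be drawn from them.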
Munn et al. BMC Medical Research Methodology (2018) 18:5 Page 5 of 9

Discussion
The need to establish a specific, focussed question that can be used to define the search terms, inclusion and exclusion criteria and interpretation of data within a systematic review is an ongoing issue [42]. This paper provides an up-to-date typology for systematic reviews which reflects the current state of systematic review conduct; it is now possible to subject almost any question to the process of systematic review. However, it can be daunting and difficult for the novice researcher to determine what type of review they require and how they should conceptualize and phrase their review question, inclusion criteria and the appropriate methods for analysis and synthesis [23]. Ensuring that the review question is well formed is of the utmost importance: question design has the most significant impact on the conduct of a systematic review, as the subsequent inclusion criteria are drawn from the question and provide the operational framework for the review [23]. In this proposed typology, we provide the key elements for formulating a question for each of the 10 review types.
When structuring a systematic review question, some of these key elements are universally agreed (such as PICO for effectiveness reviews) whilst others are more novel. For example, the use of PIRD for diagnostic reviews contrasts with other mnemonics, such as PITR [43], PPP-ICP-TR [44] or PIRATE [45]. Qualitative reviews have sometimes been guided by the mnemonic SPIDER; however, this has been recommended against for guiding searching because it fails to identify relevant papers [46]. Variations on our guidance also exist, with the additional question elements of ‘time’ (PICOT) and study type (PICOS). Reviewers are advised to consider these elements when crafting their question to determine whether they are relevant for their topic. We believe that, based on the guidance included in this typology, constructing a well-built question for a systematic review is a skill that can be mastered even by the novice reviewer.
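The correspondence between review type and question format can be shown as a small lookup table; the element expansions follow this paper's abbreviations list, but the dictionary itself is a hypothetical reading aid, not a published tool, and covers only a subset of the 10 review types.

```python
# Hypothetical lookup from review type to question-format mnemonic and its
# elements; the expansions follow the abbreviations list of this paper.
QUESTION_FORMATS = {
    "effectiveness": ("PICO", ["Population", "Intervention", "Comparator",
                               "Outcome"]),
    "qualitative":   ("PICo", ["Population", "Phenomena of Interest",
                               "Context"]),
    "etiology/risk": ("PEO",  ["Population", "Exposure", "Outcome"]),
    "diagnostic":    ("PIRD", ["Population", "Index Test", "Reference Test",
                               "Diagnosis of Interest"]),
    "prognostic":    ("PFO",  ["Population", "Prognostic Factors", "Outcome"]),
}

def question_elements(review_type):
    """Return (mnemonic, elements) for a review type in the typology."""
    return QUESTION_FORMATS[review_type]
```

For instance, `question_elements("effectiveness")` returns the PICO mnemonic with its four elements, which can then be mapped one-to-one onto the parts of a draft question.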
Related to this discussion of a typology for systematic reviews is the issue of how to distinguish a systematic review from a literature review. When searching the literature, you may come across papers referred to as ‘systematic reviews’ that do not, in reality, fit this description [21]. This is of significant concern given the common acceptance of systematic reviews as ‘level 1’ evidence and the best study design to inform practice; many of these reviews are simply literature reviews masquerading as the ideal product. It is therefore important to have a critical eye when assessing publications identified as systematic reviews. Today, the methodology of systematic reviews continues to evolve. However, there is general acceptance that certain steps are required in a systematic review of any evidence type [2], and these should be used to distinguish between a literature review and a systematic review. The following can be viewed as the defining features of a systematic review and its conduct [1, 2]:
1. Clearly articulated objectives and questions to be addressed
2. Inclusion and exclusion criteria, stipulated a priori (in a protocol), that determine the eligibility of studies
3. A comprehensive search to identify all relevant studies, both published and unpublished
4. A process of study screening and selection
5. Appraisal of the quality of included studies/papers (risk of bias) and assessment of the validity of their results/findings/conclusions
6. Analysis of data extracted from the included research
7. Presentation and synthesis of the results/findings extracted
8. Interpretation of the results, potentially establishing the certainty of the results and their implications for practice and research
9. Transparent reporting of the methodology and methods used to conduct the review
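A minimal sketch of the defining features above as an ordered checklist; the condensed step wording is an editorial paraphrase, and the helper function is purely illustrative.

```python
# The nine defining features of a systematic review, condensed into an
# ordered checklist (wording abbreviated from the list above).
SYSTEMATIC_REVIEW_STEPS = [
    "articulate objectives and questions",
    "stipulate inclusion/exclusion criteria a priori",
    "search comprehensively, published and unpublished",
    "screen and select studies",
    "appraise quality and risk of bias",
    "analyse extracted data",
    "present and synthesise findings",
    "interpret results and implications",
    "report methods transparently",
]

def undocumented_steps(protocol_steps):
    """Return the defining steps a draft protocol has not yet documented."""
    done = set(protocol_steps)
    return [step for step in SYSTEMATIC_REVIEW_STEPS if step not in done]
```

A review missing any of these steps is better described as a literature review, which is the distinction the text above draws.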
Prior to deciding what type of review to conduct, the reviewer should be clear that a systematic review is the best approach. A systematic review may be undertaken to confirm whether current practice is based on evidence (or not) and to address any uncertainty or variation in practice that may be occurring. Conducting a systematic review also identifies where evidence is not available and can help categorize future research in the area. Most importantly, systematic reviews are used to produce statements to guide decision-making. Indications for a systematic review include:
1. uncovering the international evidence
2. confirming current practice/addressing any variation
3. identifying areas for future research
4. investigating conflicting results
5. producing statements to guide decision-making
The popularity of systematic reviews has resulted in the creation of various evidence review processes over the last 30 years. These include integrative reviews, scoping reviews [47], evidence maps [48], realist syntheses [49], rapid reviews [50], umbrella reviews (systematic reviews of reviews) [51], mixed methods reviews [52], concept analyses [53] and others. Useful typologies of these diverse review types can serve as a reference for researchers, policy makers and funders when discussing a review approach [54, 55]. It was not the purpose of this article to describe and define each of these diverse evidence synthesis methods, as our focus was purely on systematic review questions. Depending on the researcher, their question/s and the resources at hand, one of these approaches may be the best fit for answering a particular question.
Gough and colleagues [9] provided clarification between different review designs and methods but stopped short of providing a taxonomy of review types. The rationale for this was that in the field of evidence synthesis ‘the rate of development of new approaches to reviewing is too fast and the overlap of approaches too great for that to be helpful’ [9]. They instead provide a useful description of how reviews may differ and, more importantly, why this may be the case. It is also our view that evidence synthesis methodology is a rapidly developing field, and that even within the review types classified here (such as effectiveness [56] or experiential [qualitative] [57]) there may be many different subsets and complexities that need to be addressed. Essentially, the classifications listed above may be just the initial level of a much larger family tree. We believe that this typology will provide a useful contribution to efforts to sort and classify evidence review approaches, and we recognise the need for it to be updated over time. A useful next step might be the development of a comprehensive taxonomy to further guide reviewers in determining the most appropriate evidence synthesis product to undertake for a particular purpose or question.
Systematic reviews of animal studies (or preclinical systematic reviews) have not been common practice in the past (compared with clinical research), although this is changing [58–61]. Systematic reviews of these types of studies can be useful to inform the design of future experiments (both preclinical and clinical) [59] and address an important gap in translation science [5, 60]. Guidance for these types of reviews is now emerging [58, 60, 62–64]. These review types, which are often hypothesis generating, were excluded from our typology as they are only very rarely used to answer a clinical question.
Systematic reviews are clearly an indispensable component in the chain of scientific enquiry, in a much broader sense than simply informing policy and practice, and ensuring that they are designed rigorously and address appropriate questions driven by clinical and policy needs is therefore essential. With the ever-increasing global investment in health research, it is imperative that the needs of health service providers and end users are met. It has been suggested that one way to ensure this is to precede any research investment with a systematic review of existing research [65]. However, such a strategy will only be effective if all reviews are conducted with due rigour.
It has been argued recently that there is mass production of reviews that are often unnecessary, misleading and conflicted, with most having weak or insufficient evidence to inform decision making [66]. Indeed, asking has been identified as a core functional competency associated with obtaining and applying the best available evidence [67]. Fundamental to the tenets of evidence-based healthcare and, in particular, evidence implementation is the ability to formulate a question that is amenable to obtaining evidence, and “structured thinking” around question development is critical to its success [67]. The application of evidence can be significantly hampered when existing evidence does not correspond to the situations that practitioners (or guideline developers) are faced with. Hence, determining the appropriate review types that respond to relevant clinical and policy questions is essential.
The revised JBI Model of Evidence-Based Healthcare clarifies the conceptual integration of evidence generation, synthesis, transfer and implementation, “linking how these occur with the necessarily challenging dynamics that contribute to whether translation of evidence into policy and practice is successful” [68]. Fundamental to this approach is the recognition that the process of evidence-based healthcare is not prescriptive or linear, but bi-directional, with each component having the potential to affect what occurs on either side of it. Thus, a systematic review can impact not only the types of primary research that are generated as a result of recommendations produced in the review (evidence generation) but also the success of their uptake in policy and practice (evidence implementation). It is therefore critical for those undertaking systematic reviews to have a solid understanding of the type of review required to respond to their question.
For novice reviewers, or those unfamiliar with the broad range of review types now available, access to a typology to inform their question development is timely. The typology described above provides a framework that indicates the antecedents and determinants of undertaking a systematic review. Several factors may lead an author to conduct a review, and these may or may not start with a clearly articulated clinical or policy question. Having a better understanding of the review types available, and the questions that these review types lend themselves to answering, is critical to the success or otherwise of a review. Given the significant resources required to undertake a review, this first step is critical, as it will impact what occurs in both evidence generation and evidence implementation. Thus, enabling novice and experienced reviewers to ensure that they are undertaking the “right” review to respond to a clinical or policy question appropriately has strategic implications from a broader evidence-based healthcare perspective.
Conclusion
Systematic reviews are the ideal method to rigorously collate, examine and synthesize a body of literature. Systematic review methods now exist for most questions that may arise in healthcare. This article provides a typology for systematic reviewers when deciding on their approach, in addition to guidance on structuring their review question. This proposed typology is the first known attempt to sort and classify systematic review types and their question development frameworks, and it can therefore be a useful tool for researchers, policy makers and funders when deciding on an appropriate approach.
Abbreviations
CHARMS: CHecklist for critical Appraisal and data extraction for systematic Reviews of prediction Modelling Studies; CoCoPop: Condition, Context, Population; COSMIN: COnsensus-based Standards for the selection of health Measurement Instruments; EBHC: Evidence-based healthcare; eMERGe: Meta-ethnography reporting guidelines; ENTREQ: Enhancing transparency in reporting the synthesis of qualitative research; JBI: Joanna Briggs Institute; MOOSE: Meta-analysis Of Observational Studies in Epidemiology; PEO: Population, Exposure, Outcome; PFO: Population, Prognostic Factors (or models of interest), Outcome; PICO: Population, Intervention, Comparator, Outcome; PICo: Population, Phenomena of Interest, Context; PICOC: Population, Intervention, Comparator/s, Outcomes, Context; PIRD: Population, Index Test, Reference Test, Diagnosis of Interest; QUIPS: Quality In Prognosis Studies; SDMO: Studies, Data, Methods, Outcomes
Acknowledgements
None
Funding
No funding was provided for this paper.
Availability of data and materials
Not applicable
Authors’ contributions
ZM: Led the development of this paper and conceptualised the idea for a systematic review typology. Provided final approval for submission. CS: Contributed conceptually to the paper and wrote sections of the paper. Provided final approval for submission. EA: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided final approval for submission. CL: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided final approval for submission. ZJ: Contributed conceptually to the paper and reviewed and provided feedback on all drafts. Provided approval and encouragement for the work to proceed. Provided final approval for submission.
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Competing interests
All the authors are members of the Joanna Briggs Institute, an evidence-based healthcare research institute which provides formal guidance regarding evidence synthesis, transfer and implementation. The authors have no other competing interests to declare.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Received: 29 May 2017 Accepted: 28 December 2017
References
1. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.
2. Aromataris E, Pearson A. The systematic review: an overview. AJN. Am J
Nurs. 2014;114(3):53–8.
3. Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing
confidence in the output of qualitative research synthesis: the ConQual
approach. BMC Med Res Methodol. 2014;14:108.
4. Pearson A. Balancing the evidence: incorporating the synthesis of
qualitative data into systematic reviews. JBI Reports. 2004;2:45–64.
5. Pearson A, Jordan Z, Munn Z. Translational science and evidence-based
healthcare: a clarification and reconceptualization of how knowledge is
generated and used in healthcare. Nursing research and practice. 2012;2012:
792519.
6. Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical
practice guidelines we can trust: National Academies Press 2011.
7. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic
reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.
8. Chalmers I, Hedges LV, Cooper HA. Brief history of research synthesis. Eval
Health Prof. 2002;25(1):12–37.
9. Gough D, Thomas J, Oliver S. Clarifying differences between review designs
and methods. Systematic Reviews. 2012;1:28.
10. Munn Z, Tufanaru C, Aromataris E. JBI’s systematic reviews: data extraction and synthesis. Am J Nurs. 2014;114(7):49–54.
11. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-
based healthcare. International Journal of Evidence-Based Healthcare. 2005;
3(8):207–15.
12. Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects
meta-analysis? Common methodological issues in systematic reviews of
effectiveness. Int J Evid Based Healthc. 2015;13(3):196–207.
13. Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological
guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based
Healthc. 2015;13(3):179–87.
14. Gomersall JS, Jadotte YT, Xue Y, Lockwood S, Riddle D, Preda A. Conducting
systematic reviews of economic evaluations. Int J Evid Based Healthc. 2015;
13(3):170–8.
15. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for
systematic reviews of observational epidemiological studies reporting
prevalence and cumulative incidence data. Int J Evid Based Healthc. 2015;
13(3):147–53.
16. Campbell JM, Klugar M, Ding S, et al. Diagnostic test accuracy: methods for
systematic review and meta-analysis. Int J Evid Based Healthc. 2015;13(3):
154–62.
17. Moola S, Munn Z, Sears K, et al. Conducting systematic reviews of association
(etiology): the Joanna Briggs Institute’s approach. Int J Evid Based Healthc.
2015;13(3):163–9.
18. McArthur A, Klugarova J, Yan H, Florescu S. Innovations in the systematic
review of text and opinion. Int J Evid Based Healthc. 2015;13(3):188–95.
19. Mokkink LB, Terwee CB, Patrick DL, et al. The COSMIN checklist for assessing
the methodological quality of studies on measurement properties of health
status measurement instruments: an international Delphi study. Qual Life
Res. 2010;19(4):539–49.
20. Dretzke J, Ensor J, Bayliss S, et al. Methodological issues and recommendations
for systematic reviews of prognostic studies: an example from cardiovascular
disease. Systematic reviews. 2014;3(1):1.
21. Campbell JM, Kavanagh S, Kurmis R, Munn Z. Systematic reviews in burns care: poor quality and getting worse. J Burn Care Res. Published ahead of print.
22. France EF, Ring N, Thomas R, Noyes J, Maxwell M, Jepson R. A methodological systematic review of what’s wrong with meta-ethnography reporting. BMC Med Res Methodol. 2014;14(1):1.
23. Stern C, Jordan Z, McArthur A. Developing the review question and
inclusion criteria. Am J Nurs. 2014;114(4):53–6.
24. Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011.
25. Hannes K, Lockwood C, Pearson A. A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qual Health Res. 2010;20(12):1736–43.
26. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in
reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol.
2012;12:181.
27. France EF, Ring N, Noyes J, et al. Protocol-developing meta-ethnography
reporting guidelines (eMERGe). BMC Med Res Methodol. 2015;15:103.
28. Shemilt I, Mugford M, Byford S, et al. Chapter 15: Incorporating economics evidence. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. The Cochrane Collaboration; 2011.
29. Munn Z, Moola S, Riitano D, Lisy K. The development of a critical appraisal
tool for use in systematic reviews addressing questions of prevalence. Int J
Health Policy Manag. 2014;3(3):123–8.
30. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies
in epidemiology: a proposal for reporting. Meta-analysis of observational
studies in epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–12.
31. COSMIN: COnsensus-based Standards for the selection of health
Measurement INstruments. Systematic reviews of measurement
properties. [cited 8th December 2016]; Available from: http://www.
cosmin.nl/Systematic%20reviews%20of%20measurement%20properties.html
32. Terwee CB, de Vet HCW, Prinsen CAC, Mokkink LB. Protocol for systematic reviews of measurement properties. COSMIN: Knowledgecenter Measurement Instruments; 2011.
33. Mokkink LB, Terwee CB, Stratford PW, et al. Evaluation of the methodological
quality of systematic reviews of health status measurement instruments. Qual
Life Res. 2009;18(3):313–33.
34. Hayden JA, van der Windt DA, Cartwright JL, Côté P, Bombardier C. Assessing bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.
35. The Cochrane Collaboration. Cochrane Methods Prognosis. 2016 [cited 7th
December 2016]; Available from: http://methods.cochrane.org/prognosis/
scope-our-work.
36. Rector TS, Taylor BC, Wilt TJ. Chapter 12: systematic review of prognostic
tests. J Gen Intern Med. 2012;27(Suppl 1):S94–101.
37. Peters S, Johnston V, Hines S, Ross M, Coppieters M. Prognostic factors for
return-to-work following surgery for carpal tunnel syndrome: a systematic
review. JBI Database of Systematic Reviews and Implementation Reports.
2016;14(9):135–216.
38. Moons KG, de Groot JA, Bouwmeester W, et al. Critical appraisal and data
extraction for systematic reviews of prediction modelling studies: the
CHARMS checklist. PLoS Med. 2014;11(10):e1001744.
39. Clarke M, Oxman AD, Paulsen E, Higgins JP, Green S. Appendix A: Guide to the contents of a Cochrane Methodology protocol and review. In: Higgins JP, Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0. The Cochrane Collaboration; 2011.
40. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for
improving the quality of reports of biomedical studies. Cochrane Database
Syst Rev. 2007;2:MR000016.
41. Djulbegovic B, Kumar A, Glasziou PP, et al. New treatments compared to
established treatments in randomized trials. Cochrane Database Syst Rev.
2012;10:MR000024.
42. Thoma A, Eaves FF 3rd. What is wrong with systematic reviews and meta-
analyses: if you want the right answer, ask the right question! Aesthet Surg
J. 2016;36(10):1198–201.
43. Deeks JJ, Wisniewski S, Davenport C. Chapter 4: Guide to the contents of a Cochrane diagnostic test accuracy protocol. In: Deeks JJ, Bossuyt PM, Gatsonis C, editors. Cochrane Handbook for Systematic Reviews of Diagnostic Test Accuracy. The Cochrane Collaboration; 2013.
44. Bae J-M. An overview of systematic reviews of diagnostic tests accuracy.
Epidemiology and Health. 2014;36:e2014016.
45. White S, Schultz T, Enuameh YAK. Synthesizing evidence of diagnostic accuracy. Lippincott Williams & Wilkins; 2011.
46. Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO, PICOS and SPIDER: a comparison study of specificity and sensitivity in three search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.
47. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB.
Guidance for conducting systematic scoping reviews. International journal
of evidence-based healthcare. 2015;13(3):141–6.
48. Hetrick SE, Parker AG, Callahan P, Purcell R. Evidence mapping: illustrating
an emerging methodology to improve evidence-based practice in youth
mental health. J Eval Clin Pract. 2010;16(6):1025–30.
49. Wong G, Greenhalgh T, Westhorp G, Pawson R. Development of
methodological guidance, publication standards and training materials for
realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative
Evidence Syntheses – Evolving Standards) project. Southampton UK:
Queen’s Printer and Controller of HMSO 2014. This work was produced by
Wong et al. under the terms of a commissioning contract issued by the
secretary of state for health. This issue may be freely reproduced for the
purposes of private research and study and extracts (or indeed, the full
report) may be included in professional journals provided that suitable
acknowledgement is made and the reproduction is not associated with any
form of advertising. Applications for commercial reproduction should be
addressed to: NIHR journals library, National Institute for Health Research,
evaluation, trials and studies coordinating Centre, alpha house, University of
Southampton Science Park, Southampton SO16 7NS, UK. 2014.
50. Munn Z, Lockwood C, Moola S. The development and use of evidence
summaries for point of care information systems: a streamlined rapid review
approach. Worldviews Evid-Based Nurs. 2015;12(3):131–8.
51. Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P.
Summarizing systematic reviews: methodological development, conduct
and reporting of an umbrella review approach. Int J Evid Based Healthc.
2015;13(3):132–40.
52. Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick PA.
Mixed-methods approach to systematic reviews. Int J Evid Based Healthc.
2015;13(3):121–31.
53. Draper P. A critique of concept analysis. J Adv Nurs. 2014;70(6):1207–8.
54. Grant MJ, Booth A. A Typology of reviews: an analysis of 14 review types
and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.
55. Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J
Clin Epidemiol. 2011;64(1):11–20.
56. Bender R. A practical taxonomy proposal for systematic reviews of
therapeutic interventions. 21st Cochrane Colloquium Quebec, Canada 2013.
57. Kastner M, Tricco AC, Soobiah C, et al. What is the most appropriate
knowledge synthesis method to conduct a review? Protocol for a scoping
review. BMC Med Res Methodol. 2012;12:114.
58. Leenaars M, Hooijmans CR, van Veggel N, et al. A step-by-step guide to
systematically identify all relevant animal studies. Lab Anim. 2012;46(1):24–31.
59. de Vries RB, Wever KE, Avey MT, Stephens ML, Sena ES, Leenaars M. The
usefulness of systematic reviews of animal experiments for the design of
preclinical and clinical studies. ILAR J. 2014;55(3):427–37.
60. Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of
animal studies to improve translational research. PLoS Med. 2013;10(7):
e1001482.
61. Mignini LE, Khan KS. Methodological quality of systematic reviews of animal
studies: a survey of reviews of basic research. BMC Med Res Methodol. 2006;
6:10.
62. van Luijk J, Bakker B, Rovers MM, Ritskes-Hoitinga M, de Vries RB, Leenaars
M. Systematic reviews of animal studies; missing link in translational research?
PLoS One. 2014;9(3):e89981.
63. Vesterinen HM, Sena ES, Egan KJ, et al. Meta-analysis of data from animal
studies: a practical guide. J Neurosci Methods. 2014;221:92–102.
64. CAMARADES. Collaborative Approach to Meta-Analysis and Review of
Animal Data from Experimental Studies. 2014 [cited 8th December 2016];
Available from: http://www.dcn.ed.ac.uk/camarades/default.htm#about
65. Moher D, Glasziou P, Chalmers I, et al. Increasing value and reducing waste
in biomedical research: who’s listening? Lancet. 2016;387(10027):1573–86.
66. Ioannidis J. The mass production of redundant, misleading, and conflicted
systematic reviews and meta-analyses. The Milbank Quarterly. 2016;94(3):
485–514.
67. Rousseau DM, Gunia BC. Evidence-based practice: the psychology of EBP
implementation. Annu Rev Psychol. 2016;67:667–92.
68. Jordan Z, Lockwood C, Aromataris E, Munn Z. The updated JBI model for evidence-based healthcare. The Joanna Briggs Institute; 2016.
69. Cooney GM, Dwan K, Greig CA, et al. Exercise for depression. Cochrane
Database Syst Rev. 2013;9:CD004366.
70. Munn Z, Jordan Z. The patient experience of high technology medical
imaging: a systematic review of the qualitative evidence. JBI Libr. Syst Rev.
2011;9(19):631–78.
71. de Verteuil R, Tan WS. Self-monitoring of blood glucose in type 2 diabetes
mellitus: systematic review of economic evidence. JBI Libr. Syst Rev. 2010;
8(7):302–42.
72. Munn Z, Moola S, Lisy K, Riitano D, Murphy F. Claustrophobia in magnetic
resonance imaging: a systematic review and meta-analysis. Radiography.
2015;21(2):e59–63.
73. Hakonsen SJ, Pedersen PU, Bath-Hextall F, Kirkpatrick P. Diagnostic test
accuracy of nutritional tools used to identify undernutrition in patients with
colorectal cancer: a systematic review. JBI Database System Rev Implement
Rep. 2015;13(4):141–87.
74. Cancer Australia. Risk factors for lung cancer: a systematic review. Surry Hills, NSW; 2014.
75. McArthur A, Lockwood C. Maternal mortality in Cambodia, Thailand,
Malaysia and Sri Lanka: a systematic review of local and national policy and
practice initiatives. JBI Libr Syst Rev. 2010;8(16 Suppl):1–10.
76. Peek K. Muscle strength in adults with spinal cord injury: a systematic
review of manual muscle testing, isokinetic and hand held dynamometry
clinimetrics. JBI Database of Systematic Reviews and Implementation
Reports. 2014;12(5):349–429.
77. Hayden JA, Tougas ME, Riley R, Iles R, Pincus T. Individual recovery
expectations and prognosis of outcomes in non-specific low back pain:
prognostic factor exemplar review. Cochrane Libr. 2014. http://onlinelibrary.
wiley.com/doi/10.1002/14651858.CD011284/full.
Editorial
What makes a good title?
Abstract
The chances are the first thing you write when you set out to write an article is the title. But what factors transform a mediocre title into a good title? Firstly, it should be both informative and specific, using words or phrases likely to be used when searching for information, for example ‘nurse education’ rather than simply ‘nurse’. Secondly, it should be concise yet convey the main ideas clearly; articles with short titles reporting study findings have been found to attract higher numbers of views and citations. Thirdly, it should provide details of the study design to assist the reader in making an informed choice about the type of project your article is reporting.
By taking these small steps when developing your title, you can present a more concise, retrievable and clear articulation of your article.
Keywords: Publishers and publishing, Writing
What’s the first thing you write when you set out to write an article? The chances are that it is a title, to get you over the hurdle of the blank page, and having a strong ‘working title’ can help you stay focused during the writing process.
But titles are not only about getting started, and it is important to consider the wider purpose of a title, because choosing the right title can be crucial on a number of levels. A well-written title can help someone searching for an article on your topic area to find your paper, and provides a clear statement to the reader of what to expect.
So what makes a good title? First and foremost, the title should be informative. In her analysis of article titles, Cynthia Whissell1 notes that while the use of emotive or abstract language varies over time, there has been a consistent trend towards more concrete and definitive titles since the mid-1980s. This trend could partly be explained by the rise of Internet searches to locate the literature, with authors considering the likely words or phrases used to identify papers on their subject. Being specific in your title can aid its retrieval so, for example, instead of searching simply for papers on 'education' or 'libraries', someone is more likely to search for a particular type of education or library, for example 'nurse education' or 'health libraries', something which can easily be reflected in your article's title.
Secondly, be concise. Most journals will have a word or character limit for titles and may well use a shortened version of the title as a heading across all pages of the article, so conveying a shortened yet comprehensive version of the main ideas discussed clearly and briefly is imperative. Interestingly, a recent study of publication metrics also found that articles with short titles, particularly those describing results, are associated with higher numbers of views and citations.2
Thirdly, where appropriate, give details of the research design. As noted above, a key role for a title is to be informative while being concise, and colons can assist in this process. For example, the title 'Cost-effective ways of delivering enquiry services: a rapid review'3 immediately informs the reader that rather than being merely a discursive piece, this article is a synthesis of published evidence, thereby adding potential value and significance for someone seeking evidence on how to develop their own enquiry service. Depending on the discipline, Hartley has also reported that some groups of readers actually prefer titles with colons to titles without them.4
While we may consider some or even all of these features when we first put pen to paper, ideas tend to evolve during the writing process, so the title you started with may not be the one you end up submitting with your article. When you have finished writing, check, just as you would with the abstract, that the title you are using still reflects the core message of your writing and, if it does not, change it! And bear in mind that titles are usually read in conjunction with an abstract, so it is important that they are complementary and convey the same point. This may seem obvious, but you would be surprised how many people forget this simple fact.
© 2013 The authors. Health Information and Libraries Journal © 2013 Health Libraries Group
Health Information & Libraries Journal, 30, pp. 259–260 259
DOI: 10.1111/hir.12049
By taking these small steps when developing your title, you can present a more concise, retrievable and clear articulation of your article.
Maria J. Grant
Editor, Health Information and Libraries Journal
Email: m.j.grant@salford.ac.uk
Twitter: @MariaJGrant @HILJnl #hilj
Facebook: http://on.fb.me/ovBuiM
http://wileyonlinelibrary.com/journal/hilj
References
1 Whissell, C. The trend towards more attractive and informative titles: American Psychologist 1946–2010. Psychological Reports 2012, 110, 427–44.
2 Paiva, C. E., Lima, J. P. & Paiva, B. S. Articles with short
titles describing the results are cited more often. Clinics
2012, 67, 509–13.
3 Sutton, A. & Grant, M. J. Cost-effective ways of delivering
enquiry services: a rapid review. Health Information and
Libraries Journal 2011, 28, 249–63.
4 Hartley, J. Planning that title: practices and preferences for
titles with colons in academic articles. Library & Information
Science Research 2007, 29, 553–568.
In this issue…
In this issue of the Health Information and Libraries Journal, authors investigate the information seeking behaviour1,2 and satisfaction3 of the public1 and healthcare workers,2 consider enhanced methods of data collection4 and tools to accelerate the adoption of research into practice.5
The delays in getting research into practice are well known, and Mairs et al. seek to expedite the transition by conducting a review of online technologies available to facilitate health-related communication and knowledge translation, identifying great potential in the diversity of tools (wikis, forums, blogs, virtual communities of practice and conferencing technology) available.5 While speed is important in dissemination, the quality of that evidence is essential, and Urquhart et al., using the National Minimum Dataset for Social Care (NMDS-SC) as an example, discuss a novel wide-ranging bibliometric approach in which interviews are conducted with key informants to provide a more rounded picture of the impact of a data set.4
With access to information in mind, Austvoll-Dahlgren et al. describe the development of a structured set of tools seeking to improve the health literacy skills of the general public,1 while the existing information seeking behaviour of students and physicians in low and middle income countries is explored by Gavino et al.2 They present the findings of their survey in relation to three broad areas: therapy and management questions (PubMed), diagnostic dilemmas (a colleague) and medication queries (a formulary).
With the need to positively demonstrate the impact of our services, Mairaj et al.3 round off this year's final issue by considering the perennial question of user satisfaction with a teaching hospital library service.
Remember that you can receive updates on all forthcoming papers published in the Health Information and Libraries Journal, together with news items and a weekly writing tip, via the @HILJnl twitter account and my Facebook account at http://on.fb.me/ovBuiM
Maria J. Grant
Editor, Health Information and Libraries Journal
Email: m.j.grant@salford.ac.uk
Twitter: @MariaJGrant @HILJnl #hilj
Facebook: http://on.fb.me/ovBuiM
http://wileyonlinelibrary.com/journal/hilj
References
1 Austvoll-Dahlgren, A., Danielsen, S., Opheim, E., Bjorndal,
A., Reinar, L. M., Flottorp, S. A., Oxman, A. D. & Helseth,
S. Development of a complex intervention to improve health
literacy skills. Health Information and Libraries Journal
2013, 30, 278–293.
2 Gavino, A., Ho, B. L., Wee, P. A., Marcelo, A. & Fontelo, P. Information-seeking trends of medical professionals and students from middle-income countries: a focus on the Philippines. Health Information and Libraries Journal 2013, 30, 303–317.
3 Mairaj, M. I. & Mirza, M. N. Library services and user satis-
faction in developing countries: a case study. Health Informa-
tion and Libraries Journal 2013, 30, 318–326.
4 Urquhart, C. & Dunn, S. A bibliometric approach demon-
strates the impact of a social care data set on research and
policy. Health Information and Libraries Journal 2013, 30,
294–302.
5 Mairs, K., McNeil, H., McLeod, J., Prorok, J. & Stolee, P. Online strategies to facilitate health-related knowledge transfer: a systematic search and review. Health Information and Libraries Journal 2013, 30, 261–277.
Best Practice & Research Clinical Rheumatology 27 (2013) 295–306
Contents lists available at SciVerse ScienceDirect
Best Practice & Research Clinical
Rheumatology
journal homepage: www.elsevierheal th.com/berh
Else Marie Bartels, PhD, DSc, Research Librarian DB *
The Parker Institute, Department of Rheumatology, Copenhagen University Hospital Frederiksberg and
Bispebjerg, Ndr. Fasanvej 57, 2000 Frederiksberg, Denmark
Keywords:
Bibliographic databases
Evidence-based medicine
Information literacy
Information services
Internet
Literature
* Tel.: +45 38164168; fax: +45 38164159.
E-mail address: else.marie.bartels@regionh.dk.
1521-6942/$ – see front matter © 2013 Elsevier Ltd. All rights reserved.
http://dx.doi.org/10.1016/j.berh.2013.02.001
All medical practice and research must be evidence-based, as far as this is possible. With medical knowledge constantly growing, it has become necessary to possess a high level of information literacy to stay competent and professional. Furthermore, as patients can now search for information on the Internet, clinicians must be able to respond to this type of information in a professional way, when needed. Here, the development of viable systematic search strategies for journal articles, books, book chapters and other sources, the selection of appropriate databases, search tools and selection methods are described and illustrated with examples from rheumatology. The upkeep of skills over time, and the acquisition of localised information sources, are discussed.
© 2013 Elsevier Ltd. All rights reserved.
Introduction
Medical information, mainly in the form of scientific papers but also as books and other types of resources (mostly as Internet sites), is growing at a remarkable rate. One result of this is that a high level of information literacy is required by all who wish to keep up-to-date in their field. Another part of information literacy is being able to trace the information patients have found on the Internet, and to assess this in a professional way, in order to keep a good patient–doctor relationship, where the patient has confidence in the doctor's knowledge and skills.
Although general-purpose searching via a search engine (e.g., using Google [1]) or a search-engine-type search in Medline via PubMed [2] (see later) may cover your information need to some degree, it is important to be disciplined and focussed and to know the available information sources if you wish to practise your daily work in an evidence-based way [3,4]. It is only when you wish to find "something about a subject" that you might try your luck with a general-purpose search; and in this case you still
have to make sure that you have found valid information, at least in the form of a review article from a
peer-reviewed journal or a textbook chapter of an acceptable standard and level. For efficient up-keep
of the needed level of professional knowledge at any given time, you need to be able to carry out a
proper systematic search and make a correct choice of information sources.
There are four steps towards reaching a level of information literacy that will make keeping up with the medical literature manageable: (1) learn to define your questions in a meaningful way; (2) get to grips with the ins and outs of literature searching; (3) make a time schedule for necessary searches; and (4) update yourself on new information sources at least once a year.
Literature search
Scientific papers in peer-reviewed journals
Most of the new medical literature appears as papers in peer-reviewed journals. To keep up with
this part of the information flow, you have to follow steps 1–8, below:
1. Define your problem.
2. Create a search strategy.
3. Select the right bibliographic databases.
4. Search.
5. Select suitable references from those that have been retrieved.
6. Assess whether the search was satisfactory.
7. Redesign the search strategy and/or choose other databases/search tools, where needed.
8. Repeat steps 2–6, if necessary.
Define your problem
A successful search is based on looking for the key issues, but how do you ensure that you do exactly that? As an illustration, suppose that you wish to be updated on the effects of biologics on rheumatoid arthritis (RA). This is not exactly a well-defined problem in search terms. There are three important key issues: effects, biologics and RA.
Starting with RA, this might very well be as fully defined as it should be, but ask yourself whether a
further specification is needed. Is it a particular patient group in terms of age, gender, genetics or
similar? It could be that the group concerned is ‘young women with RA during pregnancy’ or another
specific group of RA patients.
Effects must also be specified – effects measured in what manner and compared to what? An
example of the question that has to be addressed could be: What is the effect of treatment with bi-
ologics compared to non-steroidal anti-inflammatory drugs (NSAIDs) treatment or to steroids?
Furthermore, is it the effect of a specific type of biologics compared with a specific type of NSAIDs? The
other part of the question is which effect am I looking at? Is it DAS28, pain reduction, joint destruction,
function or quality of life? There are many types of outcome measures, and usually you will have a fair
idea of the important ones for a particular patient group or for a specific treatment.
The last step towards a clear definition of the problem in question is to define biologics. How broad a definition is allowed, and which biologics are the most important ones to include? Is it really a comparison between biologics in general and another defined type of treatment, or is it a specific biologic treatment that you have in mind?
The definition of the problem about which you want to find information is the base on which the
whole procedure is built, and more experienced practitioners and researchers will have a great
advantage here, being able to write out the problem of interest quickly. Often, it will be necessary to
break down the problem into sub-questions to create clear search strategies, which will lead to better
results. You must also decide if your question asks for an epidemiological approach, where you are
looking at effects of the past in awhole population and therefore cannot ask for randomised controlled
trials (RCTs), or if you aremainly interested in looking for designed studies in controlled and, if possible,
randomised studies. In all this, you also have to think clearly, and make the best use of the material
available. Nearly all high-powered RCTs started as pilot studies. Many large epidemiological studies
began as more humble studies of smaller groups, which provided the ideas for the full-scale studies. It
is important to understand what type of studies you are looking for to get a valid answer to your
question [5].
Create a search strategy
Having delineated a well-defined question, it is possible to create a search strategy based on a search table. Instead of rushing into a search by typing in the first words that come to mind, it is worthwhile working out a search table. If the question is the effect of exercise on physical function in juvenile idiopathic arthritis, a search table could look like Fig. 1.
Juvenile idiopathic arthritis has several names, and it is wise to check with the MeSH database in PubMed [2] or with the keywords in EMBASE (Excerpta Medica) [6] (see below) to get further ideas concerning the various names used for the same condition by different authors.
For juvenile idiopathic arthritis patients, exercise will be limited, and it might be carried out under the supervision of a physiotherapist. Keeping this in mind when working out the search table, a list of suggestions covering exercise, again using some known keywords from the medical databases, is given, and more possibilities may be added. For the last term, physical function, a set of known outcome measures is given. There are more of these, and the choices here are scales that are used specifically for children, because this is a child-specific disease. In each column, each representing a key issue, at least one of the given terms has to be found in a reference for it to be included in the retrieved references. The terms in each column are therefore combined with 'OR' when searching. This will give rise to three sets of results, one from each column.
As all of the three main issues in this example (other problems could have more main issues) have to be included in the total search, the results of the three searches, one for each column, have to be combined with 'AND'. The end search will be: (juvenile idiopathic arthritis OR juvenile rheumatoid arthritis OR juvenile chronic arthritis OR juvenile onset Still's disease) AND (exercise* OR physical therapy OR jogging OR swimming OR pool therapy OR dancing) AND (physical function OR exercise test OR CHAQ OR JASI OR JAFAS OR joint range of motion).
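The OR-within-columns, AND-across-columns rule can be sketched programmatically. This is an illustrative sketch only; the column labels and the helper name are invented here, not part of the original example:

```python
# Sketch: assemble the end search string from a search table.
# The three columns mirror the Fig. 1 example; the keys are invented labels.
search_table = {
    "condition": ["juvenile idiopathic arthritis",
                  "juvenile rheumatoid arthritis",
                  "juvenile chronic arthritis",
                  "juvenile onset Still's disease"],
    "exercise":  ["exercise*", "physical therapy", "jogging",
                  "swimming", "pool therapy", "dancing"],
    "function":  ["physical function", "exercise test", "CHAQ",
                  "JASI", "JAFAS", "joint range of motion"],
}

def build_query(table):
    """Combine terms within a column with OR, then the columns with AND."""
    blocks = ["(" + " OR ".join(terms) + ")" for terms in table.values()]
    return " AND ".join(blocks)

print(build_query(search_table))
```

Extending a column of the table then extends the corresponding OR block without changing the overall AND structure.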
The above is a simple example. The search can usefully be extended much further, and the list given here is a mixture of 'free text words', words appearing anywhere in a reference in the searched database, and defined keywords (MeSH words), which might be specific to the particular database. It is necessary to search both terms that are given keywords in the database, if a keyword covering the term exists, and other words covering the term in question. Although it is important to search the keywords/MeSH words when available, they should in general be searched as both keywords and as
Juvenile idiopathic arthritis     Exercise*          Physical function
Juvenile rheumatoid arthritis     Physical therapy   Exercise test
Juvenile chronic arthritis        Jogging            CHAQ
Juvenile-onset Still's disease    Running            JASI
                                  Swimming           JAFAS
                                  Aquatic therapy    Joint range of motion
                                  Dancing
                                  Cycling

(Terms within a column are combined with OR; the three columns are combined with AND.)

Fig. 1. A possible search table for the question 'Effect of exercise on physical function in juvenile idiopathic arthritis'. The table may be developed further, especially in the Exercise and in the Physical function columns.
free text words. This will be the case if all search words are searched as 'free text words'. The reason for this is that a keyword may only have been in use for some of the years covered by a database, and may not always have been used to index a relevant reference during those years. In the example, it should be noted that exercise has an * after the word. This causes the search to include any term that starts with 'exercise'. This is called truncation, or adding a wild card to the search term.
Truncation is also used to account for different types of English spelling, for instance *edema for edema or oedema (American and British English spellings).
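As a rough illustration of how truncation behaves, a wild card such as 'exercise*' or '*edema' can be emulated with a regular expression. This is a sketch with an invented helper; real databases implement this matching internally:

```python
import re

def truncation_to_regex(term):
    """Turn a database-style wild card into a regex: '*' matches any
    run of word characters at that position in the word."""
    pattern = re.escape(term).replace(r"\*", r"\w*")
    return re.compile(r"\b" + pattern + r"\b", re.IGNORECASE)

# Trailing * catches exercise, exercises, exercised, exercising...
exercise = truncation_to_regex("exercise*")
# Leading * catches both the American and British spellings.
edema = truncation_to_regex("*edema")

print(bool(exercise.search("Supervised exercises improved function")))
print(bool(edema.search("Peripheral oedema was reduced")))
```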
The broader you search, in terms of keeping each term less defined, the more 'noise' (useless/unwanted references) will come out of the search. As an example, you could search osteoarthritis without limiting your search to a joint such as the knee. You will get several studies on joints other than the knee. However, you will also catch some studies of knee osteoarthritis that would not appear in your narrower search. You will therefore achieve a higher sensitivity (a better coverage of the literature) with the broader search, but the cost is the high number of retrieved references you have to assess for inclusion, where several do not give an answer to your request.
The alternative is a very specific, narrow definition of the problem in question in your designed search strategy. This will give a high specificity (more or less all of the retrieved references will be relevant), but the coverage of the problem will most likely not be optimal. Depending on your information needs, you should aim at a search strategy that will give 'enough information' for your purpose. For updating, a more specific search may be preferred, whereas you may need several broader searches for research purposes.
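The sensitivity trade-off in the knee osteoarthritis example can be made concrete with a toy corpus. The titles and the 'relevant' set below are invented for illustration:

```python
# Four invented titles; only the first two actually concern knee osteoarthritis.
corpus = [
    "Exercise in knee osteoarthritis",
    "Gait analysis in osteoarthritis of the knee",
    "Hip osteoarthritis progression",
    "Biologics in rheumatoid arthritis",
]
relevant = {0, 1}  # references that answer the question being asked

def hits(query):
    """Crude free-text search: which titles contain the query string?"""
    return {i for i, title in enumerate(corpus) if query in title.lower()}

broad = hits("osteoarthritis")        # also retrieves the hip study (noise)
narrow = hits("knee osteoarthritis")  # misses the 'osteoarthritis of the knee' title

def sensitivity(found):
    """Fraction of the relevant references the search retrieved."""
    return len(found & relevant) / len(relevant)

print(sensitivity(broad), sensitivity(narrow))   # 1.0 0.5
```

The broad query retrieves every relevant reference at the cost of extra screening; the narrow one screens less but misses a relevant paper whose title uses different wording.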
Select the right bibliographic databases
When the question is clearly defined and a search strategy created, it is time to choose the right
database(s) or other search tools. For medical literature, there is a good range of bibliographic
databases.
Bibliographic databases cover only a certain chosen number of journals, and the selection varies from database to database. This is why you have to consider which databases are the ones to search for your particular subject area. Furthermore, it is important to remember that although databases may seem alike, they are not. Two databases may have the same structure, but their keywords and names for the various fields ('tags' such as author, address and abstract) are very often different. The way one database searches may also differ from all the others. When designing a search strategy, it is important to look for possible keywords that define your search terms and make sure the meaning of these words really is the same as your understanding of the words. You have to understand that a term used in your local clinical or laboratory setting is not necessarily used for the same notion in a database, and that a term used in a database might have a different meaning than the one you would normally expect. Further, the terminology and spelling will vary between countries and languages (e.g., between American English and British English). You will, as mentioned, take this into account by using truncation/wild cards.
In rheumatology there will be a need to search a set of bibliographic databases, depending on the
area of interest. For a quick update, you will get by with a search in Medline [2] and/or EMBASE [6], but
if you want to get a complete update of your field, you will probably need to search more than these
two databases.
Table 1 shows a selection of bibliographic databases of interest, with a short explanation and a suggestion of what to search where. It is worth knowing that new research areas will first appear as meeting abstracts, and these are mainly found in Web of Science [7], Biosis Previews [8] and EMBASE [6]. For special areas, a set of smaller databases is also shown in Table 1.
EMBASE [6] and – to a certain degree – Medline [2] cover some journals written in languages other than English, but if you wish to search articles in other languages you may find that the country speaking the language provides a bibliographic database in which you can search in the given language.
Bibliographic databases come in various wrappings, depending on who delivers them to the user. The delivery firm is called the 'database host', and one host can give access to a wide variety of databases. What the host provides is the design of the search page. PubMed is really the host for several databases, apart from Medline. Other examples of hosts are STN, OVID and EBSCO. A database will
Table 1
List of databases of interest for rheumatologists.
Name [Ref.] Description Seen from the rheumatologist’s point of
view, good for searching:
Medline/PubMed [2] Medicine, human biology, general physiology,
cell biology. Medline is provided from most
database hosts; the PubMed version is the
hosting service from US National Library of
Medicine, which is the creator of the database
Medline.
All clinical and physiological questions.
Good coverage of accreditation and
management.
EMBASE [6] Medicine, human biology, general physiology,
cell biology. Abstracts from some larger
medical conferences.
All clinical and physiological questions.
Very good coverage of musculoskeletal
diseases and their treatment.
Strong in the area of Pharmacology.
Broader coverage of European journals
than Medline.
Psycinfo [9] Psychology. Includes books and book chapters,
as well as journal articles.
References include reference lists.
Human psychology such as patient–
doctor relationships and cognitive therapy.
Useful references from reference lists
of found references.
Cochrane Library [10] The database consists of Cochrane Database of
Systematic Reviews (CDSR), Database of
Abstracts of Reviews of Effectiveness (DARE)
and Cochrane Controlled Trials Register (CCTR).
A small limited database of high quality.
Systematic reviews with meta-analyses,
and RCTs, concerning clinical treatment
approaches as a base for evidence-based
treatment.
Web of Science [7] Science, technology, health sciences, sociology,
and humanities. Abstracts from some larger
medical conferences.
Coverage of a wide variety of subjects.
Not so strong on a specific search of a
subject area, but excellent for catching
references from interdisciplinary areas,
as well as conference abstracts.
Biosis Previews [8] Microbiology, genetics, cell biology, general
physiology and biochemistry, behaviour,
botany, ecology. Some conference abstracts.
Cell biology, genetics, and other basic
bio-medical areas. Includes a good
selection of conference abstracts.
Chemical Abstracts [11] Chemistry and biochemistry. Drugs and drug treatment.
Search at very high level due to use of
CAS numbers which will relate to any
name given to a particular drug.
Toxnet [12] NLMs special entrance to free databases
concerning toxicology.
LactMed for drug effects during breast
feeding.
TOXLINE for adverse effects of drugs.
PEDro [13] PEDro covers physiotherapy and includes
references to systematic reviews, RCTs, and
clinical practice guidelines. Small, limited,
high-quality database. Very basic search system.
Evidence-based physiotherapy.
Only very basic searches can be
performed
CINAHL [14] Nursing and allied health research database.
References include reference lists.
Useful when searching for nursing and
physiotherapy information.
Reference lists of found references may
give some useful guidelines etc.
AMED [15] Allied and complementary medicine. Physiotherapy, palliative care,
occupational therapy
MANTIS� [16] Manual, alternative, and natural therapy
index system (earlier chirolars)
Manual medicine, chiropractic,
osteopathy. Generally strong on
rehabilitation.
ClinicalTrials.gov [17] A registry and results database of publicly and
privately supported clinical studies of human
participants conducted around the world.
Clinical trials which have happened or
are on the way.
PROSPERO (International
Prospective Register of
Systematic Reviews) [18]
Prospective register of systematic reviews. Place for registering protocols for
systematic reviews and meta-analyses.
(continued on next page)
Table 1 (continued )
Name [Ref.] Description Seen from the rheumatologist’s point of
view, good for searching:
CrossRef [19] CrossRef classifies itself as the citation linking backbone for all scholarly information in electronic form. Via CrossRef Digital Object Identifiers (CrossRef DOIs), it links a reference to the electronic full-text version, such that the DOI number is unique for the particular reference.
Searching references via DOI number,
or finding DOI-number for a reference.
Derwent Innovation
Index [20]
Patents. Useful if one wishes to patent a
treatment and need to find what is
already patented in the area.
Journal Citation
Reports [21]
Gives impact at the journal and category
levels, as well as presenting the relationship
between citing and cited journals.
Useful for finding impact factors
(remember these change every year),
as well as cited and citing half life.
therefore look different on different hosts, but – despite this – the database behind will be exactly the
same, the search system will be the same and you should not search the same database twice by
searching the same database via two hosts. For example, it is only necessary to search Medline either
via PubMed or via OVID, despite the different appearances of this database in the two hosting systems.
Another problem can occur when searching several databases provided by the same host in one search. This will not give the best and most professional results, because the databases behind the common search interface are different in structure, and the full benefit of your selection of keywords, publication types, etc. will not be obtained. It is valuable to search each database separately. This will also allow you to download chosen references from your searches for import into reference handling systems (Reference Manager [22], EndNote [23], ProCite [24] and others), where knowledge of both database and host is needed for successful import into your own reference databases.
Search
There is a continuous development in the ways search systems search databases.
Artificial intelligence is part of many search systems, but its quality varies. Occasionally you will get great benefits from the artificial intelligence and get a better search, but at other times you will find some search results that seem very far from your intended search. It all depends on the way you approach your database in the form it is made available to you. In Medline searched via PubMed [2], which uses artificial intelligence, you can search by introducing the whole search string (the end search). You can see how the database has been searched by looking at 'Search details' (the box at the right-hand side of the screen), which explains why your search gives the references retrieved. Instead of relying on the artificial-intelligence approach, you could choose to search each term alone. When you have carried out the single-term searches, you can then go to 'Advanced', where from 'History' you combine the terms with AND or OR as appropriate; each searched term will be represented by #1, #2, etc. Your further search will look something like this: (#1 OR #3) AND (#4 OR #5).
In the PubMed version of Medline, it is easier to handle the total search string created from your search strategy once you feel confident with the whole search technique and have designed a good search strategy for updating. The advantage of searching one term at a time to start with is, on the other hand, that you are able to see whether your choice of search terms is satisfactory. If you have only two hits on one of your search terms, and you know the area is well described, you know you have to find another word for your term.
Apart from the operators AND and OR, you can use NOT, although this operator should be used carefully and only in situations in which you are completely certain about what to exclude. For instance, you can use NOT when you have carried out two different searches with different search strategies and you want to exclude from the second set of results the references that have already appeared in the first set. If your first set of results comes from search #15 and your second set comes from search #26, you will get the results appearing only in #26 and not in #15 by searching: #26 NOT #15.
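Treating each numbered search as a set of retrieved reference IDs makes these Boolean combinations easy to reason about. The IDs below are invented for illustration:

```python
# Invented reference IDs returned by four saved searches.
s1, s3 = {101, 102, 103}, {103, 104}
s4, s5 = {102, 104, 105}, {101, 106}

# (#1 OR #3) AND (#4 OR #5): union within each pair, then intersection.
combined = (s1 | s3) & (s4 | s5)

# '#26 NOT #15': keep only references not already seen in the first search.
s15, s26 = {201, 202, 203}, {202, 203, 204, 205}
new_only = s26 - s15

print(sorted(combined), sorted(new_only))   # [101, 102, 104] [204, 205]
```

OR corresponds to set union, AND to intersection, and NOT to set difference, which is why NOT must be applied with care: anything excluded is gone even if it would also have satisfied the rest of the query.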
The consequences of using the Boolean operators AND, OR or NOT are shown in Fig. 2.
The main medical databases, such as Medline and EMBASE, have keywords organised in a tree structure, with main headings and subheadings. In PubMed [2], you can find these in the MeSH Database (choose MeSH in the search field next to the PubMed logo). If you search the term 'osteoarthritis' you will get an explanation of the use of this term in Medline. By going one step deeper, you will get subheadings and the tree structure (where this term appears in the Medline hierarchy of keywords):
All MeSH categories
  Disease category
    Musculoskeletal diseases
      Joint diseases
        Arthritis
          Osteoarthritis
            Osteoarthritis, hip
            Osteoarthritis, knee
            Osteoarthritis, spine
All MeSH categories
  Disease category
    Musculoskeletal diseases
      Rheumatic diseases
        Osteoarthritis
          Osteoarthritis, hip
          Osteoarthritis, knee
          Osteoarthritis, spine
You can see that osteoarthritis appears in two parts of the tree: rheumatic diseases and joint
diseases.
In Medline you will search the term in all parts of the tree, and you will also search all narrower terms
belonging to the particular MeSH heading. If you search for ‘osteoarthritis’, you will automatically search
references with osteoarthritis; osteoarthritis, hip; osteoarthritis, knee; and osteoarthritis, spine. However,
if you wish to limit your search to, for instance, diagnosis, you may do so in the PubMed MeSH database
by choosing the subheading Diagnosis, sending it to the search box with AND, and viewing the results.
In Medline searches via hosts other than PubMed, and in EMBASE, there might be a choice of
whether or not to ‘explode’ the search. This will achieve exactly what it describes: ‘explode search’ will
include all references with the search term or with any of its subheadings; by not exploding, you will
limit yourself to the chosen term.
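The effect of exploding a term can be sketched as a walk over a MeSH-like tree. The nested dict below mirrors the osteoarthritis branch shown above, and the function names are illustrative only:

```python
# A minimal sketch of an 'explode' search over a MeSH-like tree.
# The nested dict mirrors the osteoarthritis branch shown above.
tree = {
    "Osteoarthritis": {
        "Osteoarthritis, hip": {},
        "Osteoarthritis, knee": {},
        "Osteoarthritis, spine": {},
    }
}

def _all_terms(node):
    """Collect every term in a subtree, depth-first."""
    terms = []
    for name, children in node.items():
        terms.append(name)
        terms.extend(_all_terms(children))
    return terms

def explode(term, node):
    """Return the term plus all narrower terms beneath it."""
    terms = []
    for name, children in node.items():
        if name == term:
            terms.append(name)
            terms.extend(_all_terms(children))
        else:
            terms.extend(explode(term, children))
    return terms

print(explode("Osteoarthritis", tree))
# ['Osteoarthritis', 'Osteoarthritis, hip', 'Osteoarthritis, knee', 'Osteoarthritis, spine']
```

Not exploding corresponds to matching only the exact node; exploding pulls in the whole subtree of narrower terms.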
Select suitable references from the retrieved ones
Depending on how much you choose to limit your search, you are likely to find several (and perhaps
many) references which are not relevant for your purpose. As mentioned above, you have to find the
right balance between sensitivity and specificity in the given situation.

Fig. 2. The Boolean operators.

Around 200 references or fewer is a reasonable number to skim through by looking at titles and
abstracts, where these are available. Often,
the title will help you decide whether to read further. Many databases offer a ‘related articles’
feature and, if you go just one layer into the ‘related articles’ for the chosen references, you will get
a useful supplement and a more complete coverage of the literature with your search.
The last step is to go through the reference lists of your chosen references to determine whether
there are other references you have missed, either because they have not been included in the data-
bases searched or because your search has not been able to catch them. You need to be aware that the
electronic versions of the bibliographic databases start at different times; some go back a long way
whereas others start around 1980. Reference lists are good sources for covering important, older
literature, as well as covering important book chapters and conference proceedings.
Assess whether the search was acceptable
When your initial set of relevant references has been selected, it is time to check whether something
is missing. A person who is new to a particular field will have a problem in assessing the expected
number of references, because it may not be easy to see whether one should expect five studies or
1000. If the subject is a ‘hot topic’, a lot of studies ought to appear in that area. However, if the subject is
brand new, there might not be more than – say – three conference abstracts. If you already know about
one or two important studies, these ought to appear as part of your retrieved references. If they do not,
your search strategies are not good enough, or you have not chosen the right combination of databases,
or the references in question are not included in any of the available databases and have to be found
elsewhere.
Redesign the search strategy and/or choose other databases/search tools
If your coverage of the literature was found to be unsatisfactory, you have to go back to step 1 and
start again. You may get some help by looking at the references you know were missing, to see whether
you can get a clue about which additional search terms could usefully be added. In addition, look at
Table 1 to see whether a search in another database would be the answer. You will then have to repeat
steps 2–6. Otherwise, try other search tools.
Keeping up with the peer-reviewed journal articles
When you are satisfied with your search result, save your search strategy for use in later updates.
Every time you update your knowledge in a field, you have to consider whether it is necessary to
improve your search strategies. This is because your field develops all the time and you have to follow
the various ways diseases and treatments change names over time to be able to include the new terms
in your future searches. You will also find that keywords develop over time. As an example,
fibromyalgia appeared as a MeSH term in 1989. If you want to search a term as a keyword from before
the keyword appears in a database, you can try searching the keyword one level above it in the hierarchy/tree.
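One way to build updates into a routine is to wrap the saved query in a date filter so that only records from the last interval are retrieved. The `[PDAT]` publication-date tag and the range syntax below are an assumption based on PubMed conventions; check the exact syntax for your own database:

```python
from datetime import date

# Sketch of limiting a saved search to the period since the last update.
# The '[PDAT]' tag and the date-range syntax are assumptions based on
# PubMed conventions; verify against your own database's documentation.
def update_filter(saved_query, last_run, today):
    fmt = "%Y/%m/%d"
    return (f"({saved_query}) AND "
            f'("{last_run.strftime(fmt)}"[PDAT] : "{today.strftime(fmt)}"[PDAT])')

print(update_filter("fibromyalgia", date(2013, 1, 1), date(2013, 6, 30)))
# (fibromyalgia) AND ("2013/01/01"[PDAT] : "2013/06/30"[PDAT])
```

Keeping the saved query separate from the date window makes it easy to revise the strategy itself when terminology changes.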
Most rheumatologists will have access to a variety of bibliographic databases via their workplace or
research library. The licence fees for many of these databases are high, and you cannot expect to have
access to all databases shown in Table 1. For anybody who has no access to databases paid by their
study or workplace, the US National Library of Medicine (NLM) provides free access to Medline [2] and
some other databases via PubMed, and PEDro [13] is also free.
A major part of the medical profession will have access to the full-text versions of the journals
through their hospital or research libraries. For those who do not have access to all licenced scientific
journals, it is important to be aware that more and more journals are becoming ‘open access’. This
means that it is free to access the published papers, and that the author or the author’s employer has
paid a fee to make the paper freely available. Today, many grant-giving bodies demand that the
results of the research funded by their grants be published as open-access papers. Several of the
established journals will therefore now provide an open-access option, if the authors or the grant-giving body
will pay for this. This has suddenly given a much wider free access to electronic journals. A large
selection of open-access papers in health science can be searched in PubMed Central, which is a part
of PubMed. To get to PubMed Central, choose PMC in the box for choosing database at the top of the
front page next to the PubMed logo. When you open PubMed, this box shows ‘PubMed’, which is the
entrance to Medline via PubMed. There is a choice of several databases you can search via the search
host PubMed.
Some open-access journals, which may be too new to fulfil the entry criteria for bibliographic
databases such as Medline or EMBASE, will be acceptable if they are at least searchable via CrossRef [19]
and have a digital object identifier (DOI). This guarantees peer review and some scientific
standard. All electronic scientific papers should have a DOI registration number, which is presented on
the front page of the paper.
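A DOI has a simple overall shape (a `10.` prefix, a registrant code, a slash and a suffix), so a rough format check is easy to script. The regex below is a loose sanity check of my own, not the official Crossref syntax rules, and it cannot tell you whether the DOI actually resolves:

```python
import re

# Rough DOI shape: '10.' + 4-9 digit registrant code + '/' + suffix.
# A loose sanity check only; it does not guarantee the DOI resolves.
DOI_PATTERN = re.compile(r"^10\.\d{4,9}/\S+$")

def looks_like_doi(text):
    return bool(DOI_PATTERN.match(text.strip()))

print(looks_like_doi("10.1111/wvn.12134"))   # True
print(looks_like_doi("not-a-doi"))           # False
```

To confirm registration, the DOI would still need to be looked up via CrossRef itself.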
Other search tools
Bibliographic databases are the ‘safe’ places to search because the references included in these
databases are from peer-reviewed journals or, in cases where books or book chapters are included,
from recognised scientific publishers. However, it should be recognised that there are limits to
this ‘safe strategy’, because a useful part of the available information is published in journals that are
not included in the described databases, and these journals may also be peer reviewed and of perfectly
acceptable quality. The major bibliographic databases concentrate on covering publications from
publishers in Europe and the United States, and to some degree Japan, Australia and New Zealand, so
valuable information could still be missed by searching these major bibliographic databases. The other
issue is that many patients, and their relatives, now get their information about illnesses and treatments
from the Internet. It is therefore also important to keep up with what the layman’s sources of
knowledge are communicating in your field, in order to stay ahead of your patients and to supplement
your more scientific sources of medical information, before you are asked questions about a certain
condition, therapy or drug treatment.
Search engines
For scientific searches via Internet search engines, it is important to understand how the search
techniques of these engines differ from searching bibliographic databases. When using a search engine
such as Google [1] or Ixquick [25], you have to think of the main words and put them in order of
importance in the search field. Different search engines search in different ways, but most give high
importance to words appearing in headings and in the first paragraph of the text. The word placed
first in a search will be counted as more important than the second or the third. Although it is
possible to perform an ‘advanced search’, this does not make a great difference to the result, and you
cannot combine searches in the way described for bibliographic databases.
For scientific questions, it is recommended to search either Google Scholar [26] or Scirus [27], which
aim to provide high-quality answers. Another choice could be a multi-search engine such as Ixquick
[25] (this type of tool harvests results from searches in several search engines and presents them in one
list) due to its large coverage.
Through search engines you will find many homepages on rheumatology subjects, including
everything from stories of individual patients or from a next of kin to the information pages of patient
organisations, homepages of learned societies and lectures from university courses. You can also find
online encyclopaedias such as Wikipedia [28]. Wikipedia is not peer reviewed, but because it is free on
the Internet and anybody can correct the articles, you can often find solid and up-to-date information.
However, it is also possible for the information to be incomplete or even sometimes incorrect, and you
therefore should consider yourself responsible for evaluating any information that you take from
Wikipedia or similar sources.
There are some important sites on the Internet if you need health statistics. The main ones are the
World Health Organization’s (WHO) extended homepage [29] and the free statistics pages of
various countries’ national boards of health. These sources can be considered reliable and need no
evaluation.
Another important issue, well covered by the Internet, is bioethics. If you need up-to-date advice on
bioethics, you can find the legal documents or interpretations of these under NLM’s Bioethics Infor-
mation Resources [30] or under Council of Bioethics Europe Division [31].
Evaluation of Internet resources where peer-review is not applied
It is necessary to evaluate any useful resource you find before taking the information it provides
into account. The evaluation is simple and builds on common sense:
1. Does the page cover the information you are looking for at an acceptable level?
2. What is the URL (Uniform Resource Locator) address?
3. Is the information given sufficiently complete, correct and precise?
4. Who is the creator of this page?
5. Has the page an acceptable structure?
6. If there are links, what quality are these?
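Point 2 of the checklist (inspecting the URL) can be given a crude first pass in code. The helper below is hypothetical and only sorts hosts into rough categories; the human judgement described in the text is still required:

```python
from urllib.parse import urlparse

# Hypothetical helper sketching point 2 of the checklist: a crude
# first pass at classifying who is behind a URL. Real evaluation
# still needs the human judgement described in the text.
TRUSTED_HINTS = (".gov", ".edu", ".ac.uk", ".who.int")

def classify_source(url):
    host = urlparse(url).hostname or ""
    if any(host.endswith(hint) for hint in TRUSTED_HINTS):
        return "institutional"
    if host.endswith(".com"):
        return "possibly commercial"
    return "unknown - evaluate manually"

print(classify_source("https://www.who.int/en/"))  # institutional
```

The suffix list is deliberately incomplete; a `.com` address is not automatically untrustworthy, nor a `.org` address automatically reliable, which is exactly why the remaining checklist points matter.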
If you find that a page gives you the desired information, you can check whether the address
indicates commercial interests or a private person, or whether it takes you to a well-known university,
hospital site or a similar acceptable institution. Although the medical industry has commercial
interests, research laboratories may post acceptable information on the Internet. You just have to use
your judgement about the firms’ main marketing objectives when assessing the information given.
If a page gives any information you know is incorrect, you should not use any of the information
given there. You should also require some recognisable expertise of the creator of any page that you will
accept as an information source. Today, it is fairly easy to create a well-structured homepage, so you
may ask yourself whether you can trust a page where it is very difficult to find the information. Assessing
links is usually easy at sites of interest for rheumatologists. There will most often be only a couple of
references to peer-reviewed journals or to health-care institutions, and there will be very few others. Only
commercial pages will carry advertising and the like, and these must be looked upon with some apprehension.
Useful sites include homepages of patients’ organisations which, in many cases, will have professional
staff providing the information on their homepages.
Books and book chapters
During your professional life, it is important to keep up with the development of the subjects in
your areas, and this is often most easily done at any given time by reading the newest textbooks in the
field. Another source of useful books is the many doctoral theses. To follow the books published in your
field over time, you can search any medical research library’s catalogue. You may not find all published
literature of interest, but you will, if you search at yearly intervals, be made aware when it is time to
bring your knowledge up-to-date. If there are a couple of theses and two new textbooks published
during the year, it is time to read up on the subject.
Open access has also reached the book market. You may therefore find several new subject-specific
books in monograph series such as for instance InTech Press [32] or Future Medicine [33], where all
books are edited by specialists and all are free to access. Older handbooks may also appear on the
Internet for free. With these, you must be aware that the freely available version may often be an older
edition or, for book chapters, a pre-proof version with yet-to-be-corrected mistakes.
Searching for a systematic review
The most advanced type of literature search is the search necessary to write a systematic
review. A systematic review has to qualify as ‘systematic’ and not be an editorial or similar,
which means that it has to cover ‘all’ relevant published studies in the area. The review has to be based on
a protocol, and this protocol has to be published via sites such as PROSPERO [34] prior to starting the
search. The Cochrane Handbook [35] gives guidance on how such a review – and a possible planned
meta-analysis – has to be structured, and the Preferred Reporting Items for Systematic Reviews
and Meta-Analyses (PRISMA) statement [36] must also be considered, but this is outside the scope of this paper.
The main message when preparing a search for a systematic review is:
– Define the aim and objectives of the systematic review clearly; you are not just writing an essay.
– Work hard on the search strategy; it has to be good and comprehensive, as you are bound to follow
it after publishing your protocol.
– Choose all the bibliographic databases you can imagine.
– Search meeting abstracts, as well as databases covering study protocols, such as ClinicalTrials.gov [17].
– Scrutinise the reference lists of chosen studies and major reviews.
Summary
Keeping up with the literature is necessary in all medical practice and research to provide up-to-
date diagnosis and treatment. Possessing the latest knowledge is essential if you are to find the best
way of achieving the highest level of competence in your field. Defining a subject for the purpose of
carrying out a literature search helps to clarify how everybody sees the problem, both broadly and in
detail, and this is useful in itself. All soundly based research and innovation projects start with an
information search, supplemented with further searches when new aspects appear.
The virtual library, with its many electronic search tools, may look as if it changes all the time and
may deter a busy professional from carrying out thorough searches. As a general rule, there should be
no worries in terms of searching: the basic design remains the same. You just have to find where
everything is located. Whatever the appearance, the aim seen from the user’s point of view is the same:
“Find me the relevant information in my subject over a defined period of time.” No search tool is so
complicated that a person with basic information-literacy skills and an education in the health sciences
cannot work out how to use it. However, courses in information literacy are available and important if
you wish to be highly competent in this area, especially if you intend to carry out a systematic review.
To keep up with the literature, you must build searching into your work routines, and it is good
practice to search at least every 6 months to keep up with your field.
Practice points
– Search the literature on a regular basis, at least every 6 months.
– Define your subject using a search table.
– Choose relevant databases.
– Create a search strategy adjusted to the database(s) you need to search.
– Search each database separately.
– Search the Internet for supplementary information.
– Remember to evaluate all resources acquired via an Internet search.
Conflict of interest statement
The author had no conflicts of interest concerning this work.
Acknowledgement
This work was supported by the OAK Foundation.
References
[1] Google. http://www.google.com [accessed 17.01.2013].
*[2] PubMed. http://www.ncbi.nlm.nih.gov/pubmed [accessed 17.01.2013].
*[3] Grant MJ. How does your searching grow? A survey of search preferences and the use of optimal search strategies in the
identification of qualitative research. Health Information & Libraries Journal 2004;21:21–32.
*[4] Glasziou P, Vandenbroucke J, Chalmers I. Assessing the quality of research. British Medical Journal 2004;328:39–41.
[5] Hacksaw A. A concise guide to clinical trials. Oxford, UK: Wiley-Blackwell; 2009.
*[6] EMBASE. http://www.embase.com [accessed 17.01.2013].
[7] Web of Science. http://thomsonreuters.com/products_services/science/science_products/a-z/web_of_science/ [accessed
17.01.2013].
[8] BiosisPreviews. http://thomsonreuters.com/products_services/science/science_products/a-z/biosis_previews/ [accessed
17.01.2013].
[9] Psycinfo. http://www.apa.org/pubs/databases/psycinfo/index.aspx [accessed 17.01.2013].
[10] The Cochrane Library. http://www.cochranelibrary.com/view/o/index.html [accessed 17.01.2013].
[11] Chemical Abstracts. http://cas.org [accessed 17.01.2013].
[12] TOXNET. http://toxnet.nlm.nih.gov/ [accessed 19.07.2008].
[13] PEDro. http://www.pedro.org.au [accessed 17.01.2013].
[14] CINAHL. http://www.ebscohost.com/academic/cinahl-plus-with-full-text/ [accessed 17.01.2013].
[15] AMED. http://www.library.nhs.uk/help/resource/amed [accessed 17.01.2013].
[16] MANTIS. http://healthindex.com [accessed 17.01.2013].
*[17] Clinical Trials.gov. http://www.clinicaltrials.gov/ [accessed 17.01.2013].
*[18] Prospero. http://www.crd.york.ac.uk/PROSPERO/ [accessed 17.01.2013].
[19] CrossRef. http://www.crossref.org [accessed 17.01.2013].
[20] Derwent Innovations Index. http://thomsonreuters.com/products_services/legal/legal_products/a-z/derwent_
innovations_index/ [accessed 17.01.2013].
[21] Journal Citation Reports. http://thomsonreuters.com/products_services/science/science_products/a-z/journal_citation_
reports/ [accessed 17.01.2013].
[22] Reference Manager. http://www.refman.com/ [accessed 17.01.2013].
[23] Endnote. http://www.endnote.com/ [accessed 17.01.2013].
[24] Procite. http://www.procite.com/ [accessed 17.01.2013].
[25] Ixquick. http://www.ixquick.com/ [accessed 17.01.2013].
[26] Google Scholar. http://scholar.google.com [accessed 17.01.2013].
*[27] Scirus. http://www.scirus.com/ [accessed 17.01.2013].
[28] Wikipedia. http://www.wikipedia.org/ [accessed 17.01.2013].
*[29] WHO. http://www.who.int/en/ [accessed 17.01.2013].
*[30] NLM’s Bioethics Information Resources. http://www.nlm.nih.gov/bsd/bioethics.html [accessed 17.01.2013].
*[31] Council of Bioethics Europe Division. http://hub.coe.int/what-we-do/health/bioethics [accessed 17.01.2013].
[32] InTech Press. http://www.intechopen.com/ [accessed 17.01.2013].
[33] Future Science Group. http://www.future-science-group.com/ [accessed 17.01.2013].
[34] PROSPERO. http://www.crd.york.ac.uk/PROSPERO/ [accessed 17.01.2013].
[35] The Cochrane Handbook for Systematic Reviews. http://www.cochrane.org/training/cochrane-handbook [accessed 17.01.2013].
[36] Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting sys-
tematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration.
Journal of Clinical Epidemiology 2009;62(10):e1–34.
How to perform a systematic search
Original Article
A Guide to Writing a Qualitative Systematic
Review Protocol to Enhance Evidence-Based
Practice in Nursing and Health Care
Ashleigh Butler, MNurs, BNurs, RN • Helen Hall, PhD, MMid, RN, ND •
Beverley Copnell, PhD, RN
Keywords: systematic review protocol, qualitative, meta synthesis, guidelines
ABSTRACT
Background: The qualitative systematic review is a rapidly developing area of nursing research.
In order to present trustworthy, high-quality recommendations, such reviews should be based on
a review protocol to minimize bias and enhance transparency and reproducibility. Although there
are a number of resources available to guide researchers in developing a quantitative review
protocol, very few resources exist for qualitative reviews.
Aims: To guide researchers through the process of developing a qualitative systematic review
protocol, using an example review question.
Methodology: The key elements required in a systematic review protocol are discussed, with
a focus on application to qualitative reviews: Development of a research question; formulation
of key search terms and strategies; designing a multistage review process; critical appraisal
of qualitative literature; development of data extraction techniques; and data synthesis. The
paper highlights important considerations during the protocol development process, and uses a
previously developed review question as a working example.
Implications for Research: This paper will assist novice researchers in developing a qualitative
systematic review protocol. By providing a worked example of a protocol, the paper encourages
the development of review protocols, enhancing the trustworthiness and value of the completed
qualitative systematic review findings.
Linking Evidence to Action: Qualitative systematic reviews should be based on well planned,
peer reviewed protocols to enhance the trustworthiness of results and thus their usefulness in
clinical practice. Protocols should outline, in detail, the processes which will be used to undertake
the review, including key search terms, inclusion and exclusion criteria, and the methods used for
critical appraisal, data extraction and data analysis to facilitate transparency of the review process.
Additionally, journals should encourage and support the publication of review protocols, and
should require reference to a protocol prior to publication of the review results.
INTRODUCTION
The qualitative systematic review is a newly emerging area of
health care research. Qualitative reviews differ from their quan-
titative counterparts in that they aim to present a comprehen-
sive understanding of participant experiences and perceptions,
rather than assess the effectiveness of an intervention (Stern,
Jordan, & McArthur, 2014). However, their goal remains the
same: to produce high-quality recommendations for patient
care based on a scrupulous review of the best available evi-
dence at the time (Aromataris & Pearson, 2014; Risenberg &
Justice, 2014a). In order to achieve this, the review process
must be well developed and preplanned to reduce researcher
bias and eliminate irrelevant or low quality studies. Typically,
a systematic review is planned by developing a protocol, which
forms the foundation of the entire process.
Developing the protocol before undertaking the review en-
sures that all methodological decisions, from identifying search
terms to data extraction and synthesis processes, are carefully
considered and justified, enhancing the integrity and trustworthiness of the results (Moher et al., 2015;
Risenberg & Justice, 2014a). Additionally, it encourages consistency between reviewers, reduces the
ambiguity of what constitutes “data,” and ensures the data extraction and synthesis processes are
not arbitrary (Moher et al., 2015).
Although the processes used in quantitative systematic re-
views are well developed, with many guidelines available to
assist novice researchers, there are very few examples of a qual-
itative systematic review protocol available. This paper aims to
guide readers through the process of developing a qualitative
systematic review protocol, using a meta synthesis protocol
Worldviews on Evidence-Based Nursing, 2016; 13:3, 241–249. 241
C© 2016 Sigma Theta Tau International
17416787, 2016, 3. Downloaded from https://sigmapubs.onlinelibrary.wiley.com/doi/10.1111/wvn.12134 by Southern Cross University, Wiley Online Library on [07/01/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License.
The Qualitative Systematic Review Protocol
entitled “The family experience of the death of a child in the Pediatric Intensive Care Unit (PICU)” as an example.

Table 1. Example SR: Modified PICO
Population: Parents, family, siblings (deceased child)
Context: Death of a child in PICU
Outcome: Family experiences
Where to Start: Choose a Topic and Aim
Systematic reviews aim to answer a specific question, rather
than provide a simple overview of the evidence (Aromataris
& Pearson, 2014). It is important to have a well-developed
question from the outset, as it will form the basis for the entire
review protocol, guiding the formation of the search strategy,
inclusion criteria, and data extraction (Bettany-Saltikov, 2012).
However, developing a focused, answerable question for a
review can be challenging for novice researchers. There are
numerous frameworks to aid in designing a question for quali-
tative studies: Population, Exposure, Outcomes (PEO); Sample,
Phenomena of Interest, Design, Evaluation, Research type
(SPIDER); and Setting, Perspective, Intervention, Comparison,
Evaluation (SPICE). The acronym PICO, (Population, Interven-
tion, Comparison, Outcome) developed for quantitative review
questions, (Bettany-Saltikov, 2012; Risenberg & Justice, 2014a;
Stern et al., 2014) can also be modified to Population, Context,
Outcome (PCO) or Population, Interest, Context (PICo), to
more appropriately suit a qualitative methodology (Risenberg
& Justice, 2014a; Stern et al., 2014). For example, the question
“What is the experience of the family when a child dies in the
PICU?” was designed using the modified PCO framework (see
Table 1).
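The modified PCO framework is essentially a small data structure; a minimal sketch holding the Table 1 values might look like this:

```python
from dataclasses import dataclass, asdict

# Minimal sketch of the modified PCO framework as a data structure,
# filled with the example values from Table 1.
@dataclass
class PCOQuestion:
    population: str
    context: str
    outcome: str

question = PCOQuestion(
    population="Parents, family, siblings (deceased child)",
    context="Death of a child in PICU",
    outcome="Family experiences",
)

for part, value in asdict(question).items():
    print(f"{part}: {value}")
```

Making the parts explicit fields, rather than free text, later helps when each part has to be turned into keywords and synonyms for the search strategy.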
The review question is used to design the overall study aim.
The aim should be a clear expression of the intention of the
review, and is typically phrased as a statement. For the above
example, the aim would be stated as follows: “The aim of this
review is to synthesize the best available evidence exploring
the experiences of the death of a child in the PICU, from the
perspective of the child’s family.”
Locating the Literature
Once a focused question has been developed and the aim writ-
ten, the search strategy must be designed. This is one of the
most important parts of the systematic review protocol, because
it outlines a priori the strategies reviewers will use to find, se-
lect, appraise and utilize the data. It is advisable to conduct
a brief search of the literature before planning the review, to
ensure it has not previously been done. Consulting an expert
librarian at this stage may also provide valuable assistance in
identifying keywords and appropriate databases, and develop-
ing a robust search strategy.
Stage One: Developing a Search Strategy
Keywords and search terms. The next step in writing a
qualitative systematic review protocol is developing the key-
words and search terms. The PICO framework can be used
to identify the keywords in the review question. The example
from Table 1 outlines five main keywords: Population-Family,
Context-Death, Context-Child, Context-PICU, and Outcome-
Experiences. Once the keywords are ascertained, a table listing
all of the synonyms can be developed to guide the search, such
as in Table 2. This table of synonyms will then form the ba-
sis of the search strategy. Examining some of the key studies
on the topic can help to uncover commonly used synonyms
and keywords in the literature and help to focus the search
terms. Familiarity with the truncation or wildcard operators for
each database will enable searching for all alternative spellings
or endings to a word, ensuring all possibilities are captured.
Plans to use relevant MeSH headings or similar should also be
documented.
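The steps above can be sketched as a small routine that turns a synonym table into a single search string: OR within each concept, AND between concepts, with multi-word phrases quoted. The synonyms and truncation characters below are illustrative, not the authors' actual strategy:

```python
# Sketch of turning a synonym table into one search string: OR within
# each concept, AND between concepts, quoting multi-word phrases.
# The synonyms and truncation ('*') are illustrative only, not the
# authors' actual strategy.
synonyms = {
    "family": ["family", "parent*", "sibling*"],
    "death": ["death", "dying", "bereave*"],
    "setting": ["PICU", "pediatric intensive care"],
}

def build_query(table):
    groups = []
    for terms in table.values():
        quoted = [t if " " not in t else f'"{t}"' for t in terms]
        groups.append("(" + " OR ".join(quoted) + ")")
    return " AND ".join(groups)

print(build_query(synonyms))
```

Generating the string from the table, rather than typing it by hand, makes it easy to rerun exactly the same search in each database and to document the strategy verbatim in the protocol.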
Determining inclusion and exclusion criteria. The inclusion
criteria provide boundaries for the review, defining which stud-
ies will be potentially included, and which ones are irrelevant
to the topic (Stern et al., 2014). Additionally, inclusion criteria
help to mitigate any personal bias of the reviewer; they ensure
that studies are selected only on the basis of predefined, jus-
tified criteria, rather than because they are of interest to the
reviewer, fit into a preconceived framework, or match emerg-
ing findings (Aromataris & Pearson, 2014). The researcher
must negotiate the fine balance between having too narrow
or specific inclusion criteria, where there is a risk of eliminat-
ing relevant papers, and having too few or too broad criteria,
capturing a large number of irrelevant papers. Commonly, in-
clusion criteria consist of aspects such as type of study, type
of data (qualitative or quantitative), phenomena under study,
date of study and age or sex of participants (Stern et al., 2014).
Excluding papers based on language may introduce a language
bias into the review, limiting the transferability of the results;
however, this may be difficult to avoid as translating papers
is often not possible. Whatever the inclusion criteria, they
should be justifiable based on the requirements of the review,
and clearly documented in the protocol. The inclusion criteria
used for the example question are outlined in Table 3, and pro-
vide an illustration of the typical types of justifications used in
a qualitative systematic review protocol.
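Applied mechanically, inclusion criteria such as those in Table 3 act as a conjunction of predicates: a study is retained only if every criterion holds. The sketch below is illustrative only; the field names are hypothetical, and real screening of course requires reviewer judgment rather than exact string matches.

```python
# Illustrative only: encoding Table 3-style inclusion criteria as a single
# predicate. The dictionary keys (year, language, setting, data_type) are
# hypothetical, not part of the published protocol.

def meets_inclusion_criteria(study: dict) -> bool:
    """Return True only if a candidate study satisfies every criterion."""
    checks = [
        1990 <= study["year"] <= 2014,        # publication window
        study["language"] == "English",       # translation resources unavailable
        study["setting"] == "PICU",           # NICU-only studies excluded
        study["data_type"] == "qualitative",  # original qualitative data only
    ]
    return all(checks)

candidate = {"year": 2005, "language": "English",
             "setting": "PICU", "data_type": "qualitative"}
print(meets_inclusion_criteria(candidate))  # True
```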
Designing the search strategy. A systematic review requires
a comprehensive search of multiple databases, using the
same search strategy for each database. It is important that
the protocol clearly outlines the planned search strategy; it
ensures the search is undertaken in exactly the same way
each time, and also allows the search to be replicated by other
researchers in the future with the same results (Aromataris &
Riitano, 2014). Ideally, the search will contain three parts: the
242 Worldviews on Evidence-Based Nursing, 2016; 13:3, 241–249.
© 2016 Sigma Theta Tau International
17416787, 2016, 3. Downloaded from https://sigmapubs.onlinelibrary.wiley.com/doi/10.1111/wvn.12134 by Southern Cross University, Wiley Online Library on [07/01/2023]. See the Terms and Conditions (https://onlinelibrary.wiley.com/terms-and-conditions) on Wiley Online Library for rules of use; OA articles are governed by the applicable Creative Commons License.
Original Article
Table 2. Example SR PICO Search Terms
Population: Mother, Father, Grandparent, Grandmother, Grandfather, Sibling, Brother, Sister, Famil*, Parent*
Context-death: Death, Die, Dead, Deceased, Dying, Loss, ‘Passed away’, Bereav*, ‘End of life’
Context-child: Child*, Daughter, Son, P*ediatric
Context-PICU: PICU, P*ediatric ICU, P*ediatric intensive care, P*ediatric critical care, Intensive therapy unit
Outcome: Experience, Perception, Perspective, View, Need
Note. The * is used as a truncation indicator.
Table 3. Example SR Question: Inclusion Criteria
Criterion: Conducted between 1990 and 2014.
Justification: The development of a formal definition of family-centred care in 1987 (Shelton, Jeppson, & Johnson, 1987) led to a change in the way pediatric departments recognize and incorporate parents and family members into a child’s care delivery. Studies published before 1990 will be excluded, to ensure the review examines current practice and philosophical standpoints.
Criterion: Examines family member experiences, perspectives or needs as a primary aim.
Justification: Family experiences and needs surrounding child death in PICU must be a primary aim of each study. Studies examining family experiences of organ donation, bereavement follow up or family presence during resuscitation will be excluded, owing to the expansive number of reviews on each topic.
Criterion: Relates to the death of a child aged less than 18 years in a PICU setting.
Justification: The child’s death must have occurred in a PICU setting. Any studies which focus on the death of a child in the neonatal ICU (NICU) will be excluded, due to the difference in the philosophy of care delivery. Studies which examine data from both NICU and PICU settings will be included if the data from PICU parents is reported separately.
Criterion: Original qualitative data.
Justification: The review will focus on the experiences, needs or perspectives of family members, which is most appropriately answered through qualitative research. Any study which utilizes survey data or statistical reporting of results will be excluded, as will commentaries or discussions on the subject. Qualitative data from a mixed methods study will be included.
Criterion: Published in the English language.
Justification: Due to limited resources, studies published in languages other than English are unable to be translated and included in the review.
databases, the reference lists and hand searching, and the grey
literature sources.
Identifying the most appropriate databases for the review
topic is crucial. Searching inappropriate databases leads to
inappropriate results, which may impact on the overall review
findings. Librarians are often well positioned to identify
the most useful databases for the area under study. Typical
nursing databases include CINAHL Plus, PubMed, OVID
Medline, and Scopus. These databases, alongside PsycINFO
and EMBASE, were proposed in the example review protocol,
due to their relevance to the review question.
Once the databases are identified, the search strategy should
be developed. The protocol should document who will under-
take the search, how the search terms will be combined and
used, and whether any limits will be applied.
The search strategy used to answer the example question is
outlined in Figure 1, and was based on the recommendations
given by Bettany-Saltikov (2012) and Aromataris and Riitano
The Qualitative Systematic Review Protocol
Each database will be searched by the research student, in consultation with an expert librarian, based on the following strategy. Each column in Table 2 contains a set of synonyms for the key search terms. Each term in the column will be entered into the database and will be truncated where appropriate. All individual searches for that column will be combined using the “OR” Boolean operator into a single group. Each overall group will then be combined using the “AND” function to produce a final list of citations, which will be saved into Endnote, and screened for duplicates. Records of all searches in each database will be maintained.
Figure 1. Example SR question: search strategy.
(2014). It provides a systematic way to search each database,
minimizing the impact of the researcher on the outcome of the
search.
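The combination logic described in Figure 1 (OR within each synonym column, AND across the columns) can be sketched in code. This is an illustrative aid only; the term lists are abbreviated from Table 2, and each database has its own syntax for truncation, phrases, and field tags.

```python
# Sketch of the Figure 1 strategy: synonyms within a PICO column are
# combined with OR; the resulting groups are combined with AND.
# Term lists are abbreviated from Table 2; real database syntax varies.

SEARCH_TERMS = {
    "population": ["Mother", "Father", "Famil*", "Parent*"],
    "context_death": ["Death", "Dying", "Bereav*", '"End of life"'],
    "context_child": ["Child*", "Daughter", "Son"],
    "context_picu": ["PICU", '"P*ediatric intensive care"'],
    "outcome": ["Experience", "Perception", "Perspective", "Need"],
}

def build_query(terms_by_group: dict) -> str:
    """OR each column's synonyms, then AND the column groups together."""
    groups = ["(" + " OR ".join(terms) + ")" for terms in terms_by_group.values()]
    return " AND ".join(groups)

print(build_query(SEARCH_TERMS))
```

Generating the string programmatically also gives an exact, reproducible record of the query run against each database.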
It is important that thorough records of all searches are
maintained for future reference, as this provides an audit trail
and enhances trustworthiness of the review findings. Addition-
ally, use of a PRISMA flowchart is recommended as a pictorial
representation of the search process (Moher, Liberati, Tetzlaff,
& Altman, 2009).
Another common search strategy is examination of refer-
ence lists, or hand searching key journals in the area of interest.
The reference lists of relevant papers, especially other literature
reviews on the topic, may identify citations which did not appear during a database search. The protocol should outline whether this type of search will be undertaken; if key journals will be manually searched for potentially relevant articles, these journals should also be identified.
Lastly, the protocol should also outline whether or not
grey literature will be sourced, and which databases will be
searched. Grey literature is the term given to unpublished
studies, theses, conference proceedings, presentations, gov-
ernment documents, or any other relevant documents that are
not published in journals and will not appear in a database
search (Aromataris & Riitano, 2014; Bellefontaine & Lee,
2014). The inclusion of grey literature helps to reduce publi-
cation bias—the notion that studies with limited, negative, or
neutral outcomes are less likely to be published (Aromataris
& Riitano, 2014; Pappas & Williams, 2011). Grey literature
can be obtained from government websites, Google Scholar,
theses databases (such as trove.nla.gov.au; worldcat.org), or
grey literature databases (such as opengrey.eu; greylit.org).
Stage Two: Reviewing the Literature
In order to uncover the studies most relevant to the review, a
multistage process for reviewing and selecting citations must
be developed. The protocol should stipulate how many review-
ers will undertake the review, how many stages there are, and
what each stage will encompass.
How many reviewers? A systematic review requires at least
two independent reviewers (Aromataris & Pearson, 2014;
Porritt, Gomersall, & Lockwood, 2014; Risenberg & Justice,
2014b). Having more than one reviewer at each stage increases
the trustworthiness of the review findings by removing per-
sonal bias from the review process, and minimizing the poten-
tial for error. The protocol should clearly stipulate what each
reviewer’s role will be in each stage of the review, such as in
Figure 2.
How many stages? Typically, the review process is under-
taken in a series of stages, with articles moving through
screening based on title and abstract, and then full text review.
Only those with titles and abstracts that meet inclusion criteria
are retrieved and included for full text review (Aromataris &
Pearson, 2014; Porritt et al., 2014). The protocol should outline
how many review stages each article will undergo, what each
stage involves, and how many reviewers will be included at
each stage. The protocol should also clearly document what
will occur if reviewers disagree. Generally, reviewers err
on the side of caution and include any citations that are
unclear when screening based on title and abstract, and then
utilize a third reviewer if reviewers disagree during full text
review (Porritt et al., 2014). The protocol should also discuss
what will occur if there is insufficient or unclear information
in an article. Many reviewers will attempt to contact the author
for clarification; however, the protocol should stipulate a
timeframe for reply before the article is excluded on the basis
of insufficient information. An outline of the review process
for the example SR question can be viewed in Figure 3.
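The stage logic described above and in Figure 3 amounts to a small set of decision rules. The sketch below is a hypothetical rendering; the decision labels and function names are ours, not the protocol's, and the example review actually distributed citations across four reviewers.

```python
# Hypothetical sketch of the two-stage screening rules in Figure 3.
# Decisions are "include", "exclude", or "unsure".

def title_abstract_decision(r1: str, r2: str) -> str:
    """Stage 1: only citations both reviewers exclude are dropped;
    anything uncertain proceeds to full text review."""
    if r1 == "exclude" and r2 == "exclude":
        return "exclude"
    return "full_text_review"

def full_text_decision(r1: str, r2: str, third_reviewer) -> str:
    """Stage 2: agreement stands; unresolved disagreement is referred
    to a third reviewer for a decision."""
    if r1 == r2:
        return r1
    return third_reviewer(r1, r2)

print(title_abstract_decision("include", "unsure"))  # full_text_review
print(full_text_decision("include", "exclude", lambda a, b: "include"))  # include
```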
The Critical Appraisal
The aim of critical appraisal in a systematic review is to assess the potential studies for rigor, and ensure they are
free from significant methodological issues which may impact
on the quality of the review findings (Bettany-Saltikov, 2012;
Korhonen, Hakulinen-Viitanen, Jylha, & Holopainen, 2013).
Whilst the more traditional qualitative literature provides ample guidance on what constitutes rigor in the various qualitative methodologies (Charmaz, 2006, 2014; Corbin & Strauss,
2008; Holloway & Wheeler, 2010; Lincoln & Guba, 1985; Polit
& Beck, 2010; Sandelowski, 1986; Thomas & Magilvy, 2011;
Whittemore, Chase, & Mandle, 2001), very few of these guide-
lines have been incorporated into critical appraisal tools. Thus,
critical appraisal of qualitative studies remains a contentious is-
sue, with little consensus on what makes a good study, whether
critical appraisal should be undertaken at all, and if so, what
The review process will use four reviewers – one research student, and three supervisors. Articles will be distributed across the four reviewers in such a way that the research student reviews each citation, and the three supervisors independently review one third of the total citations at each stage.
Figure 2. Example SR question: reviewer roles.
All potential articles will undergo a two-stage screening process based on the inclusion criteria, and undertaken by four reviewers, as outlined in Figure 2.
Stage 1: All citations will be screened based on title and abstract. Reviewers will meet to discuss results. All uncertain citations will be included for full text review.
Stage 2: Full text of each included citation will be obtained. Each study will be read in full and assessed for inclusion. Any discrepancies which cannot be resolved through discussion will be sent to a third reviewer for a decision. Authors will be contacted for missing or incomplete information. If there is no response within 2 weeks, the article may be excluded on the basis of missing information.
Figure 3. Example SR question: screening and review.
should be done with the findings (Dixon-Woods et al., 2006;
Downe, 2008; Porritt et al., 2014; Thomas & Harden, 2008;
Toye et al., 2014). To further complicate the issue, there are
a number of different tools available to aid in the critical ap-
praisal of qualitative research, with ongoing debate over which
is most suitable for use in systematic reviews (Dixon-Woods
et al., 2006; Downe, 2008; Toye et al., 2014).
In light of these issues, there are a number of aspects
the protocol must consider and discuss in relation to critical
appraisal:
� Whether critical appraisal will be carried out, and by
whom. The protocol should provide justification if no
appraisal will occur.
� Which appraisal tool will be used, and why. The pro-
tocol should also outline any information or instruc-
tions for reviewers when using the tool.
� Whether the papers will be scored or ranked, and how
this will occur. Generally, most critical appraisal tools
provide a checklist for reviewers, but do not provide
any guidance as to what constitutes a high or low
quality study. The protocol should therefore clearly
document any scoring system which will be imple-
mented, and what will happen if reviewers disagree
during this process.
� How the results of the appraisal will be used. This
decision will depend largely on the purpose of the
review: those which aim to present an overview of
findings may opt to include all studies, whilst those
reviews which aim to inform practice or policy may
omit lower quality studies to enhance trustworthiness.
The protocol should outline the definition of a low- or
high-quality article, and discuss whether any studies
will be excluded and why. It is wise to trial the tool
and scoring system on a small sample of papers from
the initial scoping literature review during this stage
of protocol design, to examine the scores provided
and inform development of an appropriate ranking
system and cut-off point.
For the example systematic review, the researchers took
the view that the use of critical appraisal was necessary to
assess the extent to which the authors’ findings represent the
participants’ experiences or views, and decided that studies
would be excluded based on quality. The Critical Appraisal
Skills Programme (CASP; CASP International Network, 2013)
qualitative checklist was used for critical appraisal, which had
been widely used in recent similar reviews. The tool allows for
appraisal of all types of qualitative data, and contains
only 10 questions, facilitating rapid evaluation; however, it does
not provide a scoring system. Based on previous experience, the
scoring system outlined in Table 4 was designed, and was used
without issue.
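The scoring system in Table 4 can be expressed as a short calculation. Note that the published cut-points share their boundary values (9 and 7.5), so the sketch below assumes one convention: a score of exactly 9 counts as high quality and exactly 7.5 as moderate.

```python
# Sketch of the Table 4 scoring system: each of the 10 CASP questions is
# answered Yes (1 point), Unsure (0.5) or No (0), and the total maps to a
# quality band. Boundary handling at 9 and 7.5 is our assumption.

POINTS = {"yes": 1.0, "unsure": 0.5, "no": 0.0}

def casp_quality(answers: list) -> tuple:
    """Score ten CASP answers and map the total to a quality band."""
    score = sum(POINTS[a.lower()] for a in answers)
    if score < 6:
        band = "exclude"    # Table 4: less than 6 -> exclude
    elif score < 7.5:
        band = "low"        # less than 7.5
    elif score < 9:
        band = "moderate"   # 7.5 up to 9
    else:
        band = "high"       # 9-10
    return score, band

answers = ["yes"] * 8 + ["unsure", "no"]  # eight Yes, one Unsure, one No
print(casp_quality(answers))  # (8.5, 'moderate')
```

Trialling such a calculation on a few papers from the scoping search, as suggested above, quickly exposes where cut-points need adjusting.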
Data Extraction
The next step in developing a systematic review protocol is data
extraction. Designing this stage of a qualitative review is often
more difficult than for a quantitative review, because what con-
stitutes data is often unclear. The protocol should clearly outline
what “data” is before outlining how it will be extracted. Com-
monly, qualitative reviews define data as first order constructs
(participants’ quotes), or second order constructs (researcher
interpretation, statements, assumptions and ideas; Toye et al.,
2014). Extracting both forms of data allows the reviewers to
Table 4. Example SR Question: Reviewer Guidelines for Using the CASP Checklist
Question 2 (appropriate for qualitative methodology): Exclude if inappropriate.
Question 3 (research design): Yes = specifically states research design, with justification; Unsure = outline of research design only; No = not discussed, or inappropriate to research question.
Question 5 (data collection): Yes = addresses 4 or more items listed on the CASP checklist; Unsure = addresses 2–3 items listed on the CASP checklist; No = addresses fewer than 2 items.
Question 7 (ethical considerations): Exclude if ethical approval is unclear or unstated.
Question 10 (recommendations): Yes = discusses all of the following: contributions to existing knowledge, areas for future research, and recommendations based on results; Unsure = only 2 items discussed; No = only 1 item discussed.
Scoring system: Yes = 1 point; Unsure = 0.5 points; No = 0 points.
Quality bands: High-quality paper = scores 9–10; Moderate-quality paper = scores 7.5–9; Low-quality paper = less than 7.5; Exclude = less than 6.
view and work with the raw data (quotes) as well as the au-
thors’ interpretations, which we argue helps ensure the review
findings are thoroughly grounded in the original experiences
of the participants.
After the concept of data is well defined, the protocol should
outline how it will be extracted, whether any other informa-
tion will be gathered during the extraction process, and how
many reviewers will be involved, similarly to the example pro-
vided in Figure 4. Generally, data is extracted using a data
extraction tool, which also facilitates the extraction of bibli-
ographic and methodological information about each study,
and ensures that data extraction is consistent amongst all re-
viewers and across all studies (Aromataris & Pearson, 2014;
Bettany-Saltikov, 2012; Risenberg & Justice, 2014b). The ex-
traction tool should be designed by the reviewers based on the
needs of the study, and should be attached as an appendix in
the protocol. Additionally, the protocol should outline whether
the tool will be piloted before use, and how any modifications
will be managed and reported.
Data Synthesis
Developing a plan for data analysis is the final stage of writing a
systematic review protocol. Generally speaking, the aim of data
synthesis or analysis is to assemble the collective findings into
a meaningful statement or set of statements which represent and
explain the phenomena under study (Munn, Tufanaru, & Aro-
mataris, 2014). The meta synthesis of qualitative data has long
been a contentious issue. Many scholars argue that by inter-
preting an interpretation, qualitative synthesis risks losing the
essence of the original studies (Korhonen et al., 2013; Thomas
& Harden, 2008; Toye et al., 2014). However, a well-planned
data synthesis process can help to ensure that the review find-
ings remain firmly grounded in the original data, ensuring the
results reflect the original participants’ experiences.
Several methods exist to guide the synthesis and analysis of
qualitative systematic review data, each with its own strengths
and limitations (Dixon-Woods, Agarwal, Jones, Young, &
Sutton, 2005). The chosen method will depend largely on the
type and purpose of the review being undertaken; for example,
a meta synthesis typically requires reviewers to reinterpret the
qualitative data into a higher level of abstraction and may use
similar thematic analysis techniques to those used in original
studies, whereas a meta summary may only require content
analysis to provide an aggregation of the overall findings
(Dixon-Woods et al., 2005; Korhonen et al., 2013; Sandelowski,
2006). Whatever the chosen method, each step should be
clearly outlined in the protocol (see Figure 5 for an example),
alongside who will undertake the analysis and whether the
A data extraction tool has been developed for the purpose of this review. The tool will be piloted on 2–4 articles prior to use, and will then be modified as required. Data extraction will be undertaken by 4 reviewers as per citation screening.
The following information will be extracted from each article: Bibliographic information; study aims; study design: methodological underpinnings; sample: strategy, size, inclusion/exclusion criteria and participant characteristics; data collection methods; data analysis techniques; ethical considerations and issues; results: themes, quotes, author interpretations or explanations; strengths and limitations; and reviewer comments.
Figure 4. Example SR question: data extraction.
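One way to keep a Figure 4-style extraction tool consistent across reviewers is a fixed record structure, so every study yields the same fields. The sketch below is illustrative; the field names paraphrase the figure and the example values are invented, not from the published protocol.

```python
# Illustrative sketch of a data extraction record mirroring the Figure 4
# fields. Field names are paraphrased; the example values are fabricated
# placeholders, not real study data.

from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    citation: str                 # bibliographic information
    aims: str                     # study aims
    design: str                   # study design and methodological underpinnings
    sample: str                   # strategy, size, criteria, characteristics
    data_collection: str
    data_analysis: str
    ethics: str                   # ethical considerations and issues
    first_order_findings: list = field(default_factory=list)   # participant quotes
    second_order_findings: list = field(default_factory=list)  # author interpretations
    strengths_limitations: str = ""
    reviewer_comments: str = ""

record = ExtractionRecord(
    citation="(hypothetical study)",
    aims="Explore parent experiences",
    design="Phenomenology",
    sample="12 bereaved parents",
    data_collection="Semi-structured interviews",
    data_analysis="Thematic analysis",
    ethics="Hospital ethics committee approval",
)
print(record.citation)
```

Keeping first and second order findings in separate fields reflects the distinction drawn in the Data Extraction section above.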
The extracted data will be analyzed utilizing thematic analysis techniques, allowing clear identification of themes arising from the data, and facilitating higher order abstraction and theory development. The thematic analysis and meta synthesis processes described by J. Thomas and Harden (2008) are outlined below, and will be used to enhance transparency in the review process. Data analysis will primarily be undertaken by the student reviewer, with findings continually discussed in team meetings to ensure they appropriately reflect the original data.
Stage 1: Coding text: Free line-by-line coding of the findings from the primary studies will occur. Data will be examined for meaning and content during the coding. The codes will then be entered into a code book. This process will allow the translation of codes and concepts between studies.
Stage 2: Developing descriptive themes: The codes will then be examined and analyzed for their meanings, and reorganized into related categories. Each category will be analyzed for its properties.
Stage 3: Generating analytical themes: Each category will then be examined and compared to other categories, specifically looking for similarities and differences. Similar categories will be merged into higher level constructs and then themes, going beyond the findings of the original studies into a higher order abstraction of the phenomena.
Figure 5. Example SR question: data synthesis.
findings will be discussed with other reviewers. This not only
allows the results to be reproduced by other researchers, but
also enhances the transparency and overall trustworthiness of
the review findings.
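The bookkeeping behind the three stages in Figure 5 can be illustrated with toy data. The codes, categories, and theme below are invented examples; the genuinely interpretive work of coding and abstraction happens in the reviewers' discussion, not in any script.

```python
# Toy sketch of the three-stage synthesis in Figure 5. All codes,
# categories, and the analytical theme are invented for illustration;
# real synthesis is interpretive, this only mirrors the record-keeping.

from collections import defaultdict

# Stage 1: line-by-line codes extracted from primary study findings.
coded_lines = [
    ("needed honest information", "communication"),
    ("staff explained clearly", "communication"),
    ("wanted to stay at bedside", "presence"),
    ("held child at end of life", "presence"),
]

# Stage 2: reorganize codes into descriptive themes (categories).
descriptive = defaultdict(list)
for code, category in coded_lines:
    descriptive[category].append(code)

# Stage 3: merge related categories into a higher-order analytical theme.
analytical = {"being there and being told": ["communication", "presence"]}

print(dict(descriptive))
```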
Publishing the Protocol
Once completed, the protocol should be made available to
other researchers. Most commonly, this is achieved by regis-
tering the protocol with review databases such as the Joanna
Briggs Institute, The Cochrane Collaboration, or PROSPERO,
although there are also a limited number of nursing journals
which will publish a review protocol (Booth et al., 2011; Moher
et al., 2015). Publication encourages transparency of the
review methodology and enables peer review and feedback
prior to the review being undertaken, improving the quality
and trustworthiness of the subsequent review findings and
recommendations (Aromataris & Pearson, 2014; Booth et al.,
2011; Moher et al., 2015). It also ensures that reviewers adhere
to the predefined review processes, as deviation from the
protocol is easily identifiable and requires justification during
publication of the review findings (Booth et al., 2011; Moher
et al., 2015). Additionally, publication of the review protocol
ensures other researchers are aware that the review is being
undertaken, minimizing the amount of time and resources
wasted on duplicate reviews (Booth et al., 2011). Overall,
the publication or registration of review protocols increases
the trustworthiness of the review findings, ensuring that the
recommendations are based on high-quality review of the best
available evidence at the time.
CONCLUSIONS
The qualitative systematic review remains relatively new to the
discipline of nursing, providing greater insight into the needs
of participants than any single study. The systematic review
should be based on a predeveloped protocol which outlines the
methods and processes which will be used in the review before
it is undertaken, enhancing transparency and trustworthiness
of the review findings. However, given that the techniques
used to design and undertake the qualitative review itself are
still developing, there are very few resources available to guide
nurse researchers through the process of developing a review
protocol. This paper highlights the importance of developing
a systematic review protocol for qualitative reviews, and uses
an example review question to guide researchers through the
protocol development process. By learning to design and im-
plement a systematic review protocol, researchers can help to
ensure that their findings and recommendations are based on
trustworthy, high-quality evidence, improving care delivery to
patients and their families. WVN
LINKING EVIDENCE TO ACTION
� Develop a review protocol prior to undertaking the
review to enhance rigor.
� Utilize a framework (such as PICO) to design an
appropriate and answerable review question.
� Consult an expert librarian for assistance in
developing keywords, identifying appropriate
databases, and designing the search strategy.
� Use two or more reviewers at each stage of the
review to reduce personal bias and minimize po-
tential for error.
� Publish the protocol before undertaking the review
to enhance transparency of the review process and
trustworthiness of the findings.
Author information
Ashleigh Butler, PhD candidate, School of Nursing and Midwifery, Monash University, and Clinical Nurse Specialist, Adult
and Pediatric Intensive Care Unit, Monash Health, Melbourne,
Victoria, Australia; Helen Hall, Lecturer, School of Nursing and
Midwifery, Monash University, Melbourne, Victoria, Australia;
Beverley Copnell, Senior Lecturer, School of Nursing and Mid-
wifery, Monash University, Melbourne, Victoria, Australia
Address correspondence to Ashleigh Butler, C/O Intensive
Care Unit, Monash Medical Centre, 246 Clayton Rd., Clayton,
Victoria, Australia, 3168; aebut2@student.monash.edu
Accepted 14 June 2015
Copyright C© 2016, Sigma Theta Tau International
REFERENCES
Aromataris, E., & Pearson, A. (2014). The systematic review: An
overview. Synthesizing research evidence to inform nursing
practice. American Journal of Nursing, 114(3), 53–58.
Aromataris, E., & Riitano, D. (2014). Constructing a search strategy
and searching the evidence: A guide to the literature search for
systematic review. American Journal of Nursing, 114(5), 49–56.
Bellefontaine, S., & Lee, C. (2014). Between black and white: Exam-
ining grey literature in meta-analyses of psychological research.
Journal of Child & Family Studies, 23(8), 1378–1388.
Bettany-Saltikov, J. (2012). How to do a systematic literature review
in nursing: A step-by-step guide. Berkshire, England: McGraw-Hill
Education.
Booth, A., Clarke, M., Ghersi, D., Moher, D., Petticrew, M., &
Stewart, L. (2011). An international registry of systematic-review
protocols. The Lancet, 377(9760), 108–109.
CASP International Network. (2013). 10 questions to help you
make sense of qualitative research. Retrieved from http://www.caspinternational.org/mod_product/uploads/CASP%20Qualitative%20Research%20Checklist%2031.05.13
Charmaz, K. (2006). Constructing grounded theory: A practical guide
through qualitative analysis. London, England: SAGE Publica-
tions.
Charmaz, K. (2014). Constructing grounded theory (2nd ed.). Lon-
don, England: SAGE Publications.
Corbin, J., & Strauss, A. (2008). Basics of qualitative research: Tech-
niques and procedures for developing grounded theory (3rd ed.). Los
Angeles, CA: SAGE Publications.
Dixon-Woods, M., Agarwal, S., Jones, D., Young, B., & Sutton,
A. (2005). Synthesising qualitative and quantitative evidence: A
review of possible methods. Journal of Health Services Research &
Policy, 10(1), 45–53.
Dixon-Woods, M., Bonas, S., Booth, A., Jones, D. R., Miller, T.,
Sutton, A. J., . . . Young, B. (2006). How can systematic reviews
incorporate qualitative research? A critical perspective. Qualita-
tive Research, 6(1), 27–44.
Downe, S. (2008). Metasynthesis: A guide to knitting smoke.
Evidence-Based Midwifery, 6, 4–8.
Holloway, I., & Wheeler, S. (2010). Qualitative research in nursing
and healthcare (3rd ed.). Oxford, England: Wiley-Blackwell.
Korhonen, A., Hakulinen-Viitanen, T., Jylha, V., & Holopainen,
A. (2013). Meta-synthesis and evidence-based health care—A
method for systematic review. Scandinavian Journal of Caring
Sciences, 27(4), 1027–1034.
Lincoln, Y., & Guba, E. (1985). Naturalistic inquiry. Beverly Hills,
CA: SAGE Publications.
Moher, D., Liberati, A., Tetzlaff, J., & Altman, D. G. (2009). Pre-
ferred reporting items for systematic reviews and meta-analyses:
The PRISMA statement. BMJ, 339(7716), 332–336.
Moher, D., Shamseer, L., Clarke, M., Ghersi, D., Liberati, A., Petticrew, M., . . . PRISMA-P Group. (2015). Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015 statement. Systematic Reviews, 4(1). doi: 10.1186/2046-4053-4-1
Munn, Z., Tufanaru, C., & Aromataris, E. (2014). Data extraction
and synthesis: The steps following study selection in a systematic
review. American Journal of Nursing, 114(7), 49–54.
Pappas, C., & Williams, I. (2011). Grey literature: Its emerging
importance. Journal of Hospital Librarianship, 11(3), 228–234.
CORRESPONDENCE Open Access
What kind of systematic review should I
conduct? A proposed typology and
guidance for systematic reviewers in the
medical and health sciences
Zachary Munn* , Cindy Stern, Edoardo Aromataris, Craig Lockwood and Zoe Jordan
Background: Systematic reviews have been considered the pillar on which evidence-based healthcare rests.
Systematic review methodology has evolved and been modified over the years to accommodate the range of
questions that may arise in the health and medical sciences. This paper explores a concept still rarely considered by
novice authors and in the literature: determining the type of systematic review to undertake based on a research
question or priority.
Results: Within the framework of the evidence-based healthcare paradigm, defining the question and type of systematic
review to conduct is a pivotal first step that will guide the rest of the process and has the potential to impact on other
aspects of the evidence-based healthcare cycle (evidence generation, transfer and implementation). It is something that
novice reviewers (and others not familiar with the range of review types available) need to take account of but frequently
overlook. Our aim is to provide a typology of review types and describe key elements that need to be addressed during
question development for each type.
Conclusions: In this paper a typology of various systematic review methodologies is proposed. The review types are
defined and situated with regard to establishing corresponding questions and inclusion criteria. The ultimate objective is
to provide clarified guidance for both novice and experienced reviewers and a unified typology with respect to review
types.
Keywords: Systematic reviews, Evidence-based healthcare, Question development
Systematic reviews are the gold standard to search for, col-
late, critique and summarize the best available evidence re-
garding a clinical question [1, 2]. The results of systematic
reviews provide the most valid evidence base to inform the
development of trustworthy clinical guidelines (and their
recommendations) and clinical decision making [2]. They
follow a structured research process that requires rigorous
methods to ensure that the results are both reliable and
meaningful to end users. Systematic reviews are therefore
seen as the pillar of evidence-based healthcare [3–6]. However, systematic review methodology, and the language used to express that methodology, have progressed significantly since systematic reviews first appeared in healthcare in the 1970s and 1980s
[7, 8]. The diachronic nature of this evolution has caused,
and continues to cause, great confusion for both novice
and experienced researchers seeking to synthesise various
forms of evidence. Indeed, it has already been argued that
the current proliferation of review types is creating chal-
lenges for the terminology for describing such reviews [9].
These fundamental issues primarily relate to a) the types of
questions being asked and b) the types of evidence used to
answer those questions.
Traditionally, systematic reviews have been predomin-
antly conducted to assess the effectiveness of health in-
terventions by critically examining and summarizing the
results of randomized controlled trials (RCTs) (using
* Correspondence: Zachary.Munn@adelaide.edu.au
The Joanna Briggs Institute, The University of Adelaide, 55 King William Road,
North Adelaide, South Australia 5005, Australia
© The Author(s). 2018 Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0
International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and
reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to
the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver
(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Munn et al. BMC Medical Research Methodology (2018) 18:5
DOI 10.1186/s12874-017-0468-4
meta-analysis where feasible) [4, 10]. However, health
professionals are concerned with questions other than
whether an intervention or therapy is effective, and this
is reflected in the wide range of research approaches uti-
lized in the health field to generate knowledge for prac-
tice. As such, Pearson and colleagues have argued for a
pluralistic approach when considering what counts as
evidence in health care, suggesting that not all questions
can be answered from studies measuring effectiveness
alone [4, 11]. As the methods to conduct systematic re-
views have evolved and advanced, so too has the think-
ing around the types of questions we want and need to
answer in order to provide the best possible, evidence-
based care [4, 11].
Even though most systematic reviews conducted today
still focus on questions relating to the effectiveness of
medical interventions, many other review types which
adhere to the principles and nomenclature of a system-
atic review have emerged to address the diverse informa-
tion needs of healthcare professionals and policy makers.
This increasing array of systematic review options may
be confusing for the novice systematic reviewer, and in
our experience as educators, peer reviewers and editors
we find that many beginner reviewers struggle to achieve
conceptual clarity when planning for a systematic review
on an issue other than effectiveness. For example, re-
viewers regularly try to force their question into the
PICO format (population, intervention, comparator and
outcome), even though their question may be an issue of
diagnostic test accuracy or prognosis; attempting to de-
fine all the elements of PICO can confound the remain-
der of the review process. The aim of this article is to
propose a typology of systematic review types aligned to
review questions to assist and guide the novice system-
atic reviewer and editors, peer-reviewers and policy
makers. To our knowledge, this is the first classification
of types of systematic review foci conducted in the
medical and health sciences into one central typology.
Review typology
For the purpose of this typology a systematic review is
defined as a robust, reproducible, structured critical syn-
thesis of existing research. While other approaches to
the synthesis of evidence exist (including but not limited
to literature reviews, evidence maps, rapid reviews, inte-
grative reviews, scoping and umbrella reviews), this
paper seeks only to include approaches that subscribe to
the above definition. As such, ten different types of sys-
tematic review foci are listed below and in Table 1. In
this proposed typology, we provide the key elements for
formulating a question for each of the 10 review types.
1. Effectiveness reviews [12]
2. Experiential (Qualitative) reviews [13]
3. Costs/Economic Evaluation reviews [14]
4. Prevalence and/or Incidence reviews [15]
5. Diagnostic Test Accuracy reviews [16]
6. Etiology and/or Risk reviews [17]
7. Expert opinion/policy reviews [18]
8. Psychometric reviews [19]
9. Prognostic reviews [20]
10. Methodological systematic reviews [21, 22]
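As an at-a-glance summary, the pairing of each review focus in the list above with its recommended question-format mnemonic (as given in Table 1) can be sketched as a simple lookup table. The code structure below is purely illustrative; only the mnemonics themselves come from the paper:

```python
# Illustrative lookup: review focus -> recommended question-format
# mnemonic, summarized from Table 1. The dictionary/function form is
# a sketch, not part of the published typology itself.
QUESTION_FORMATS = {
    "Effectiveness": "PICO",
    "Experiential (Qualitative)": "PICo",
    "Costs/Economic Evaluation": "PICOC",
    "Prevalence and/or Incidence": "CoCoPop",
    "Diagnostic Test Accuracy": "PIRD",
    "Etiology and/or Risk": "PEO",
    "Expert opinion/policy": "PICo",
    "Psychometric": "Construct, Population, Instrument type, Properties",
    "Prognostic": "PFO",
    "Methodology": "SDMO",
}

def recommended_format(review_type: str) -> str:
    """Return the question-format mnemonic for a given review focus."""
    return QUESTION_FORMATS[review_type]
```

Note that two foci (experiential and expert opinion/policy reviews) share the PICo mnemonic, which is why the review type, not the mnemonic alone, should drive question development.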
Effectiveness reviews
Systematic reviews assessing the effectiveness of an inter-
vention or therapy are by far the most common. Essen-
tially effectiveness is the extent to which an intervention,
when used appropriately, achieves the intended effect [11].
The PICO approach (see Table 1) to question develop-
ment is well known [23] and comprehensive guidance for
these types of reviews is available [24]. Characteristics re-
garding the population (e.g. demographic and socioeco-
nomic factors and setting), intervention (e.g. variations in
dosage/intensity, delivery mode, and frequency/duration/
timing of delivery), comparator (active or passive) and
outcomes (primary and secondary including benefits and
harms, how outcomes will be measured including the tim-
ing of measurement) need to be carefully considered and
appropriately justified.
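To make the anatomy of a PICO question concrete, the sketch below represents the four elements as fields of a record and assembles them into a question sentence. The class, field names, and sentence template are hypothetical illustrations; the worked example adapts the exercise-for-depression question from Table 1:

```python
from dataclasses import dataclass

@dataclass
class PICOQuestion:
    """Hypothetical sketch of a PICO-structured review question.
    Field names follow the mnemonic, not any published tool."""
    population: str
    intervention: str
    comparator: str
    outcome: str

    def as_text(self) -> str:
        # Assemble the four elements into a structured question.
        return (f"In {self.population}, what is the effectiveness of "
                f"{self.intervention} compared with {self.comparator} "
                f"for {self.outcome}?")

# Worked example, adapted from Table 1 (exercise for depression):
q = PICOQuestion(
    population="adults with depression",
    intervention="exercise",
    comparator="no treatment or a comparison treatment",
    outcome="reducing depressive symptoms",
)
```

Spelling out each field in this way forces the reviewer to justify every element (population characteristics, intervention dosage and delivery, active or passive comparator, primary and secondary outcomes) before the review begins.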
Experiential (qualitative) reviews
Experiential (qualitative) reviews focus on analyzing hu-
man experiences and cultural and social phenomena. Re-
views including qualitative evidence may focus on the
engagement between the participant and the intervention; as such, a qualitative review may describe an intervention,
but its question focuses on the perspective of the individ-
uals experiencing it as part of a larger phenomenon. They
can be important in exploring and explaining why inter-
ventions are or are not effective from a person-centered
perspective. Similarly, this type of review can explain and
explore why an intervention is not adopted in spite of evi-
dence of its effectiveness [4, 13, 25]. They are important in
providing information on the patient’s experience, which
can enable the health professional to better understand
and interact with patients. The mnemonic PICo can be
used to guide question development (see Table 1). With
qualitative evidence there is no outcome or comparator to
be considered. A phenomenon of interest is the experi-
ence, event or process occurring that is under study, such
as response to pain or coping with breast cancer; it differs
from an intervention in its focus. Context will vary de-
pending on the objective of the review; it may include
consideration of cultural factors such as geographic loca-
tion, specific racial or gender based interests, and details
about the setting such as acute care, primary healthcare,
or the community [4, 13, 25]. Reviews assessing the ex-
perience of a phenomenon may opt to use a mixed
methods approach and also include quantitative data, such
as that from surveys. There are reporting guidelines avail-
able for qualitative reviews, including the ‘Enhancing
transparency in reporting the synthesis of qualitative re-
search’ (ENTREQ) statement [26] and the newly proposed
meta-ethnography reporting guidelines (eMERGe) [27].
Costs/economic evaluation reviews
Costs/Economics reviews assess the costs of a certain
intervention, process, or procedure. In any society, re-
sources available (including dollars) have alternative
uses. In order to make the best decisions about alterna-
tive courses of action evidence is needed on the health
benefits and also on the types and amount of resources
needed for these courses of action. Health economic
evaluations are particularly useful to inform health
policy decisions attempting to achieve equality in health-
care provision to all members of society and are com-
monly used to justify the existence and development of
health services, new health technologies and also, clin-
ical guideline development [14]. Issues of cost and re-
source use may be standalone reviews or components of
effectiveness reviews [28]. Cost/Economic evaluations
are examples of a quantitative review and as such can
follow the PICO mnemonic (see Table 1). Consideration
should be given to whether the entire world/inter-
national population is to be considered or only a popula-
tion (or sub-population) of a particular country. Details
of the intervention and comparator should include the
nature of services/care delivered, time period of delivery,
dosage/intensity, co-interventions, and personnel undertaking delivery. Consider whether outcomes will focus only on resource usage and the costs of the intervention and its comparator(s), or also on cost-effectiveness. Context (including perspective), e.g. the health setting(s), can also be considered in these types of questions.

Table 1 Types of reviews

Effectiveness
Aim: To evaluate the effectiveness of a certain treatment/practice in terms of its impact on outcomes.
Question format: Population, Intervention, Comparator/s, Outcomes (PICO) [23]
Example: What is the effectiveness of exercise for treating depression in adults compared to no treatment or a comparison treatment? [69]

Experiential (Qualitative)
Aim: To investigate the experience or meaningfulness of a particular phenomenon.
Question format: Population, Phenomena of Interest, Context (PICo) [13]
Example: What is the experience of undergoing high technology medical imaging (such as Magnetic Resonance Imaging) in adult patients in high income countries? [70]

Costs/Economic Evaluation
Aim: To determine the costs associated with a particular approach/treatment strategy, particularly in terms of cost effectiveness or benefit.
Question format: Population, Intervention, Comparator/s, Outcomes, Context (PICOC) [14]
Example: What is the cost effectiveness of self-monitoring of blood glucose in type 2 diabetes mellitus in high income countries? [71]

Prevalence and/or Incidence
Aim: To determine the prevalence and/or incidence of a certain condition.
Question format: Condition, Context, Population (CoCoPop) [15]
Example: What is the prevalence/incidence of claustrophobia and claustrophobic reactions in adult patients undergoing MRI? [72]

Diagnostic Test Accuracy
Aim: To determine how well a diagnostic test works in terms of its sensitivity and specificity for a particular diagnosis.
Question format: Population, Index Test, Reference Test, Diagnosis of Interest (PIRD) [16]
Example: What is the diagnostic test accuracy of nutritional tools (such as the Malnutrition Screening Tool) compared to the Patient Generated Subjective Global Assessment amongst patients with colorectal cancer to identify undernutrition? [73]

Etiology and/or Risk
Aim: To determine the association between particular exposures/risk factors and outcomes.
Question format: Population, Exposure, Outcome (PEO) [17]
Example: Are adults exposed to radon at risk for developing lung cancer? [74]

Expert opinion/policy
Aim: To review and synthesize current expert opinion, text or policy on a certain phenomenon.
Question format: Population, Intervention or Phenomena of Interest, Context (PICo) [18]
Example: What are the policy strategies to reduce maternal mortality in pregnant and birthing women in Cambodia, Thailand, Malaysia and Sri Lanka? [75]

Psychometric
Aim: To evaluate the psychometric properties of a certain test, normally to determine the reliability and validity of a particular test or assessment.
Question format: Construct of interest or the name of the measurement instrument(s), Population, Type of measurement instrument, Measurement properties [31, 32]
Example: What is the reliability, validity, responsiveness and interpretability of methods (manual muscle testing, isokinetic dynamometry, hand held dynamometry) to assess muscle strength in adults? [76]

Prognostic
Aim: To determine the overall prognosis for a condition, the link between specific prognostic factors and an outcome, and/or prognostic/prediction models and prognostic tests.
Question format: Population, Prognostic Factors (or models of interest), Outcome (PFO) [20, 34–36]
Example: In adults with low back pain, what is the association between individual recovery expectations and disability outcomes? [77]

Methodology
Aim: To examine and investigate current research methods and potentially their impact on research quality.
Question format: Types of Studies, Types of Data, Types of Methods, Outcomes (SDMO) [39]
Example: What is the effect of masked (blind) peer review for quantitative studies in terms of the study quality as reported in published reports? (question modified from Jefferson 2007) [40]
Prevalence and/or incidence reviews
Essentially prevalence or incidence reviews measure dis-
ease burden (whether at a local, national or global level).
Prevalence refers to the proportion of a population who
have a certain disease whereas incidence relates to how
often a disease occurs. These types of reviews enable
governments, policy makers, health professionals and
the general population to inform the development and
delivery of health services and evaluate changes and
trends in diseases over time [15, 29]. Prevalence or inci-
dence reviews are important in the description of geo-
graphical distribution of a variable and the variation
between subgroups (such as gender or socioeconomic
status), and for informing health care planning and re-
source allocation. The CoCoPop framework can be used
for reviews addressing a question relevant to prevalence
or incidence (see Table 1). Condition refers to the variable of interest and can be a health condition, disease, symptom, event or factor. Information regarding how
the condition will be measured, diagnosed or confirmed
should be provided. Environmental factors can have a
substantial impact on the prevalence or incidence of a
condition so it is important that authors define the con-
text or specific setting relevant to their review question
[15, 29]. The population or study subjects should be
clearly defined and described in detail.
Diagnostic test accuracy reviews
Systematic reviews assessing diagnostic test accuracy
provide a summary of test performance and are import-
ant for clinicians and other healthcare practitioners in
order to determine the accuracy of the diagnostic tests
they use or are considering using [16]. Diagnostic tests
are used by clinicians to identify the presence or absence
of a condition in a patient for the purpose of developing
an appropriate treatment plan. Often there are several
tests available for diagnosis. The mnemonic PIRD is rec-
ommended for question development for these types of
systematic reviews (see Table 1). The population is all
participants who will undergo the diagnostic test while
the index test(s) is the diagnostic test whose accuracy is
being investigated in the review. Consider if multiple it-
erations of a test exist and who carries out or interprets
the test, the conditions the test is conducted under and
specific details regarding how the test will be conducted.
The reference test is the ‘gold standard’ test to which the
results of the index test will be compared. It should be
the best test currently available for the diagnosis of the
condition of interest. Diagnosis of interest relates to
what diagnosis is being investigated in the systematic re-
view. This may be a disease, injury, disability or any
other pathological condition [16].
Etiology and/or risk reviews
Systematic reviews of etiology and risk are important for
informing healthcare planning and resource allocation,
and are particularly valuable for decision makers when
making decisions regarding health policy and prevention
of adverse health outcomes. The common objective of
many of these types of reviews is to determine whether
and to what degree a relationship exists between an ex-
posure and a health outcome. Use of the PEO
mnemonic is recommended (see Table 1). The review
question should outline the exposure, disease, symptom
or health condition of interest, the population or groups
at risk, as well as the context/location, the time period
and the length of time where relevant [17]. The exposure
of interest refers to a particular risk factor or several risk
factors associated with a disease/condition of interest in
a population, group or cohort who have been exposed to
them. It should be clearly reported what the exposure or
risk factor is, and how it may be measured/identified in-
cluding the dose and nature of exposure and the dur-
ation of exposure, if relevant. Important outcomes of
interest relevant to the health issue and important to key
stakeholders (e.g. knowledge users, consumers, policy
makers, payers etc.) must be specified. Guidance now
exists for conducting these types of reviews [17]. As
these reviews rely heavily on observational studies, the
Meta-analysis Of Observational Studies in Epidemiology
(MOOSE) [30] reporting guidelines should be referred
to in addition to the PRISMA guidelines.
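The PEO elements can be slotted into a question template in the same way. This minimal sketch (the function name and sentence template are illustrative assumptions) reproduces the radon example from Table 1:

```python
# Minimal sketch of a PEO-structured etiology/risk question.
# The function and template are illustrative only; the mnemonic
# elements (Population, Exposure, Outcome) come from the paper.
def peo_question(population: str, exposure: str, outcome: str) -> str:
    return (f"Are {population} exposed to {exposure} "
            f"at risk for developing {outcome}?")

# Worked example from Table 1 (radon and lung cancer):
question = peo_question("adults", "radon", "lung cancer")
# -> "Are adults exposed to radon at risk for developing lung cancer?"
```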
Expert opinion/policy reviews
Expert opinion and policy analysis systematic reviews
focus on the synthesis of narrative text and/or policy.
Expert opinion has a role to play in evidence-based
healthcare, as it can be used to either complement em-
pirical evidence or, in the absence of research studies,
stand alone as the best available evidence. The synthesis
of findings from expert opinion within the systematic re-
view process is not well recognized in mainstream
evidence-based practice. However, in the absence of re-
search studies, the use of a transparent systematic
process to identify the best available evidence drawn
from text and opinion can provide practical guidance to
practitioners and policy makers [18]. While a number of
mnemonics have been discussed previously that can be
used for opinion and text, not all elements necessarily
apply to every text or opinion-based review, and use of
mnemonics should be considered a guide rather than a
policy. Broadly, PICo can be used, where ‘I’ can refer to either the intervention or a phenomenon of interest (see
Table 1). Reviewers will need to describe the population,
giving attention to whether specific characteristics of
interest, such as age, gender, level of education or pro-
fessional qualification are important to the question. As
with other types of reviews, interventions may be broad
areas of practice management, or specific, singular inter-
ventions. However, reviews of text or opinion may also
reflect an interest in opinions around power, politics or
other aspects of health care other than direct interven-
tions, in which case, these should be described in detail.
The use of a comparator and specific outcome statement
is not necessarily required for a review of text and opin-
ion based literature. In circumstances where they are
considered appropriate, the nature and characteristics of
the comparator and outcomes should be described [18].
Psychometric reviews
Psychometric systematic reviews (or systematic reviews
of measurement properties) are conducted to assess the
quality/characteristics of health measurement instru-
ments to determine the best tool for use (in terms of its
validity, reliability, responsiveness etc.) in practice for a
certain condition or factor [31–33]. A psychometric sys-
tematic review may be undertaken on a) the measure-
ment properties of one measurement instrument, b) the
measurement properties of the most commonly utilized
measurement instruments measuring a specific con-
struct, c) the measurement properties of all available
measurement instruments to measure a specific con-
struct in a specific population or d) the measurement
properties of all available measurement instruments in a
specific population that does not specify the construct to
be measured. The COnsensus-based Standards for the
selection of health Measurement Instruments (COS-
MIN) group have developed guidance for conducting
these types of reviews [19, 31]. They recommend firstly
defining the type of review to be conducted as well as
the construct or the name(s) of the outcome measure-
ment instrument(s) of interest, the target population, the
type of measurement instrument of interest (e.g. questionnaires, imaging tests) and the measurement properties that the review investigates (see Table 1).
Prognostic reviews
Prognostic research is of high value as it provides clini-
cians and patients with information regarding the course
of a disease and potential outcomes, in addition to poten-
tially providing useful information to deliver targeted ther-
apy relating to specific prognostic factors [20, 34, 35].
Prognostic reviews are complex and methodology for
these types of reviews is still under development, although
a Cochrane methods group exists to support this ap-
proach [20]. Potential systematic reviewers wishing to
conduct a prognostic review may be interested in
determining the overall prognosis for a condition, the
link between specific prognostic factors and an out-
come and/or prognostic/prediction models and prog-
nostic tests [20, 34–37]. Currently there is little
information available to guide the development of a
well-defined review question; however, the Quality in
Prognosis Studies (QUIPS) tool [34] and the Checklist
for critical appraisal and data extraction for systematic
reviews of prediction modelling studies (CHARMS
Checklist) [38] have been developed to assist in this
process (see Table 1).
Methodology systematic reviews
Systematic reviews can be conducted for methodological
purposes [39], and examples of these reviews are avail-
able in the Cochrane Database [40, 41] and elsewhere
[21]. These reviews can be performed to examine any
methodological issues relating to the design, conduct
and review of research studies and also evidence synthe-
ses. There is limited guidance for conducting these re-
views, although there does exist an appendix in the
Cochrane Handbook focusing specifically on methodo-
logical reviews [39]. They suggest following the SDMO
approach where the types of studies should define all eli-
gible study designs as well as any thresholds for inclusion (e.g. RCTs and quasi-RCTs). Types of data should
detail the raw material for the methodology studies (e.g.
original research submitted to biomedical journals) and
the comparisons of interest should be described under
types of methods (e.g. blinded peer review versus un-
blinded peer review) (see Table 1). Lastly both primary
and secondary outcome measures should be listed (e.g.
quality of published report) [39].
Discussion
The need to establish a specific, focussed question that
can be utilized to define search terms, inclusion and ex-
clusion criteria and interpretation of data within a sys-
tematic review is an ongoing issue [42]. This paper
provides an up-to-date typology for systematic reviews
which reflects the current state of systematic review
conduct. It is now possible that almost any question can
be subjected to the process of systematic review. How-
ever, it can be daunting and difficult for the novice re-
searcher to determine what type of review they require
and how they should conceptualize and phrase their re-
view question, inclusion criteria and the appropriate
methods for analysis and synthesis [23]. Ensuring that
the review question is well formed is of the utmost im-
portance as question design has the most significant im-
pact on the conduct of a systematic review as the
subsequent inclusion criteria are drawn from the ques-
tion and provide the operational framework for the re-
view [23]. In this proposed typology, we provide the key
elements for formulating a question for each of the 10
review types.
When structuring a systematic review question some
of these key elements are universally agreed (such as
PICO for effectiveness reviews) whilst others are more
novel. For example, the use of PIRD for diagnostic re-
views contrasts with other mnemonics, such as PITR
[43], PPP-ICP-TR [44] or PIRATE [45]. Qualitative re-
views have sometimes been guided by the mnemonic SPIDER; however, this has been recommended against for guiding searching because it fails to identify relevant papers [46]. Variations on our guidance exist, with the additional question elements of ‘time’ (PICOT) and study types (PICOS) also in use. Reviewers are advised to consider these elements when crafting their
question to determine if they are relevant for their topic.
We believe that based on the guidance included in this
typology, constructing a well-built question for a system-
atic review is a skill that can be mastered even for the
novice reviewer.
Related to this discussion of a typology for systematic
reviews is the issue of how to distinguish a systematic
review from a literature review. When searching the lit-
erature, you may come across papers referred to as ‘systematic reviews’; however, in reality they do not
necessarily fit this description [21]. This is of significant
concern given the common acceptance of systematic re-
views as ‘level 1’ evidence and the best study design to
inform practice. However, many of these reviews are
simply literature reviews masquerading as the ideal
product. It is therefore important to have a critical eye
when assessing publications identified as systematic re-
views. Today, the methodology of systematic reviews
continues to evolve. However, there is general accept-
ance of certain steps being required in a systematic re-
view of any evidence type [2] and these should be used
to distinguish between a literature review and a system-
atic review. The following can be viewed as the defining
features of a systematic review and its conduct [1, 2]:
1. Clearly articulated objectives and questions to be
addressed
2. Inclusion and exclusion criteria, stipulated a priori
(in a protocol), that determine the eligibility of
studies
3. A comprehensive search to identify all relevant
studies, both published and unpublished
4. A process of study screening and selection
5. Appraisal of the quality of included studies/papers (risk of bias) and assessment of the validity of their results/findings/conclusions
6. Analysis of data extracted from the included research
7. Presentation and synthesis of the results/findings extracted
8. Interpretation of the results, potentially establishing the certainty of the results and drawing implications for practice and research
9. Transparent reporting of the methodology and
methods used to conduct the review
Prior to deciding what type of review to conduct, the
reviewer should be clear that a systematic review is the
best approach. A systematic review may be undertaken
to confirm whether current practice is based on evi-
dence (or not) and to address any uncertainty or vari-
ation in practice that may be occurring. Conducting a
systematic review also identifies where evidence is not
available and can help categorize future research in the
area. Most importantly, systematic reviews are used to
produce statements to guide decision-making. Indications
for a systematic review include the need to:
1. uncover the international evidence
2. confirm current practice/address any variation
3. identify areas for future research
4. investigate conflicting results
5. produce statements to guide decision-making
The popularity of systematic reviews has resulted in
the creation of various evidence review processes over
the last 30 years. These include integrative reviews,
scoping reviews [47], evidence maps [48], realist synthe-
ses [49], rapid reviews [50], umbrella reviews (systematic
reviews of reviews) [51], mixed methods reviews [52],
concept analyses [53] and others. Useful typologies of
these diverse review types can be used as reference for
researchers, policy makers and funders when discussing
a review approach [54, 55]. It was not the purpose of
this article to describe and define each of these di-
verse evidence synthesis methods as our focus was
purely on systematic review questions. Depending on
the researcher, their question/s and their resources at
hand, one of these approaches may be the best fit for
answering a particular question.
Gough and colleagues [9] provided clarification be-
tween different review designs and methods but stopped
short of providing a taxonomy of review types. The ra-
tionale for this was that in the field of evidence synthesis
‘the rate of development of new approaches to reviewing
is too fast and the overlap of approaches too great for
that to be helpful.’ [9] They instead provide a useful de-
scription of how reviews may differ and more import-
antly why this may be the case. It is also our view that
evidence synthesis methodology is a rapidly developing
field, and that even within the review types classified
here (such as effectiveness [56] or experiential [qualita-
tive [57]]) there may be many different subsets and com-
plexities that need to be addressed. Essentially, the
Munn et al. BMC Medical Research Methodology (2018) 18:5 Page 6 of 9
classifications listed above may be just the initial level of
a much larger family tree. We believe that this typology
will provide a useful contribution to efforts to sort and
classify evidence review approaches and understand the
need for this to be updated over time. A useful next step
might be the development of a comprehensive taxonomy
to further guide reviewers in making a determination
about the most appropriate evidence synthesis product
to undertake for a particular purpose or question.
Systematic reviews of animal studies (or preclinical
systematic reviews) have not been common practice in
the past (compared with clinical research), although
this is changing [58–61]. Systematic reviews of these
types of studies can be useful to inform the design of fu-
ture experiments (both preclinical and clinical) [59] and
address an important gap in translation science [5, 60].
Guidance for these types of reviews is now emerging
[58, 60, 62–64]. These review types, which are often hy-
pothesis generating, were excluded from our typology as
they are only very rarely used to answer a clinical question.
Systematic reviews are clearly an indispensable component
in the chain of scientific enquiry, in a much broader
sense than simply informing policy and practice; it is
therefore essential that they are designed rigorously
and address appropriate questions driven by clinical
and policy needs. With the ever-
increasing global investment in health research it is im-
perative that the needs of health service providers
and end users are met. It has been suggested that
one way to ensure this occurs is to precede any re-
search investment with a systematic review of existing
research [65]. However, such a strategy will only be
effective if all reviews are conducted with due rigour.
It has been argued recently that there is mass production
of reviews that are often unnecessary, misleading, and
conflicted, with most having weak or insufficient evidence
to inform decision-making [66]. Indeed, 'asking' has been
identified as a core functional competency associated with
obtaining and applying the best available evidence [67].
Fundamental to the tenets of evidence-based healthcare
and, in particular evidence implementation, is the ability
to formulate a question that is amenable to obtaining evi-
dence and “structured thinking” around question develop-
ment is critical to its success [67]. The application of
evidence can be significantly hampered when existing evi-
dence does not correspond to the situations that practi-
tioners (or guideline developers) are faced with. Hence,
determination of appropriate review types that respond to
relevant clinical and policy questions is essential.
The revised JBI Model of Evidence-Based Healthcare
clarifies the conceptual integration of evidence gener-
ation, synthesis, transfer and implementation, “linking
how these occur with the necessarily challenging dynamics
that contribute to whether translation of evidence into
policy and practice is successful” [68]. Fundamental to
this approach is the recognition that the process of
evidence-based healthcare is not prescriptive or linear,
but bi-directional, with each component having the po-
tential to affect what occurs on either side of it. Thus, a
systematic review can impact upon the types of primary
research that are generated as a result of recommenda-
tions produced in the review (evidence generation) but
also on the success of their uptake in policy and prac-
tice (evidence implementation). It is therefore critical
for those undertaking systematic reviews to have a solid
understanding of the type of review required to respond
to their question.
For novice reviewers, or those unfamiliar with the
broad range of review types now available, access to a
typology to inform their question development is timely.
The typology described above provides a framework that
indicates the antecedents and determinants of undertak-
ing a systematic review. There are several factors that
may lead an author to conduct a review and these may
or may not start with a clearly articulated clinical or pol-
icy question. Having a better understanding of the review
types available and the questions that these review types
lend themselves to answering is critical to the success
or otherwise of a review. Given the significant resource
required to undertake a review, this first step is
critical as it will impact upon what occurs in both evi-
dence generation and evidence implementation. Thus,
enabling novice and experienced reviewers to ensure
that they are undertaking the “right” review to respond
to a clinical or policy question appropriately has stra-
tegic implications from a broader evidence-based health-
care perspective.
Conclusion
Systematic reviews are the ideal method to rigorously col-
late, examine and synthesize a body of literature. System-
atic review methods now exist for most questions that
may arise in healthcare. This article provides a typology
for systematic reviewers when deciding on their approach
in addition to guidance on structuring their review ques-
tion. This proposed typology provides the first known at-
tempt to sort and classify systematic review types and
their question development frameworks and therefore it
can be a useful tool for researchers, policy makers and
funders when deciding on an appropriate approach.
Abbreviations
CHARMS: CHecklist for critical Appraisal and data extraction for systematic
Reviews of prediction Modelling Studies; CoCoPop: Condition, Context,
Population; COSMIN: COnsensus-based Standards for the selection of health
Measurement Instruments; EBHC: Evidence-based healthcare; eMERGe: Meta-
ethnography reporting guidelines; ENTREQ: Enhancing transparency in
reporting the synthesis of qualitative research; JBI: Joanna Briggs Institute;
MOOSE: Meta-analysis Of Observational Studies in Epidemiology;
PEO: Population, Exposure, Outcome; PFO: Population, Prognostic Factors (or
models of interest), Outcome; PICO: Population, Intervention, Comparator,
Outcome; PICo: Population, Phenomena of Interest, Context; PICOC: Population,
Intervention, Comparator/s, Outcomes, Context; PIRD: Population, Index Test,
Reference Test, Diagnosis of Interest; QUIPS: Quality in Prognosis Studies;
RCT: Randomised controlled trial; SDMO: Studies, Data, Methods, Outcomes
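To illustrate how these question-format mnemonics structure a review question, the sketch below decomposes a hypothetical PICO effectiveness question. The component values are our own illustration (echoing the exercise-for-depression exemplar cited at reference 69), not content from the paper:

```python
# Hypothetical example: assembling a structured effectiveness question
# from its PICO components. All component values are illustrative only.
pico = {
    "Population":   "adults with depression",
    "Intervention": "a structured exercise program",
    "Comparator":   "usual care",
    "Outcome":      "depressive symptoms",
}

question = ("In {Population}, is {Intervention} more effective than "
            "{Comparator} in reducing {Outcome}?").format(**pico)

print(question)
# In adults with depression, is a structured exercise program more
# effective than usual care in reducing depressive symptoms?
```

The same decomposition applies, with different components, to the other mnemonics listed above (e.g. PICo for qualitative questions, PEO for etiology/risk questions).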
Acknowledgements
None
Funding
No funding was provided for this paper.
Availability of data and materials
Not applicable
Authors’ contributions
ZM: Led the development of this paper and conceptualised the idea for a
systematic review typology. Provided final approval for submission. CS:
Contributed conceptually to the paper and wrote sections of the paper.
Provided final approval for submission. EA: Contributed conceptually to the
paper and reviewed and provided feedback on all drafts. Provided final
approval for submission. CL: Contributed conceptually to the paper and
reviewed and provided feedback on all drafts. Provided final approval for
submission. ZJ: Contributed conceptually to the paper and reviewed and
provided feedback on all drafts. Provided approval and encouragement for
the work to proceed. Provided final approval for submission.
Ethics approval and consent to participate
Not applicable
Consent for publication
Not applicable
Competing interests
All the authors are members of the Joanna Briggs Institute, an evidence-based
healthcare research institute which provides formal guidance regarding evidence
synthesis, transfer and implementation.
The authors have no other competing interests to declare.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published
maps and institutional affiliations.
Received: 29 May 2017 Accepted: 28 December 2017
References
1. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting
systematic reviews and meta-analyses of studies that evaluate healthcare
interventions: explanation and elaboration. BMJ (Clinical research ed). 2009;
339:b2700.
2. Aromataris E, Pearson A. The systematic review: an overview. AJN. Am J
Nurs. 2014;114(3):53–8.
3. Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing
confidence in the output of qualitative research synthesis: the ConQual
approach. BMC Med Res Methodol. 2014;14:108.
4. Pearson A. Balancing the evidence: incorporating the synthesis of
qualitative data into systematic reviews. JBI Reports. 2004;2:45–64.
5. Pearson A, Jordan Z, Munn Z. Translational science and evidence-based
healthcare: a clarification and reconceptualization of how knowledge is
generated and used in healthcare. Nursing research and practice. 2012;2012:
792519.
6. Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical
practice guidelines we can trust. National Academies Press; 2011.
7. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic
reviews a day: how will we ever keep up? PLoS Med. 2010;7(9):e1000326.
8. Chalmers I, Hedges LV, Cooper H. A brief history of research synthesis. Eval
Health Prof. 2002;25(1):12–37.
9. Gough D, Thomas J, Oliver S. Clarifying differences between review designs
and methods. Systematic Reviews. 2012;1:28.
10. Munn Z, Tufanaru C, Aromataris E. JBI’s systematic reviews: data extraction
and synthesis. Am J Nurs. 2014;114(7):49–54.
11. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-
based healthcare. International Journal of Evidence-Based Healthcare. 2005;
3(8):207–15.
12. Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects
meta-analysis? Common methodological issues in systematic reviews of
effectiveness. Int J Evid Based Healthc. 2015;13(3):196–207.
13. Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological
guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based
Healthc. 2015;13(3):179–87.
14. Gomersall JS, Jadotte YT, Xue Y, Lockwood S, Riddle D, Preda A. Conducting
systematic reviews of economic evaluations. Int J Evid Based Healthc. 2015;
13(3):170–8.
15. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for
systematic reviews of observational epidemiological studies reporting
prevalence and cumulative incidence data. Int J Evid Based Healthc. 2015;
13(3):147–53.
16. Campbell JM, Klugar M, Ding S, et al. Diagnostic test accuracy: methods for
systematic review and meta-analysis. Int J Evid Based Healthc. 2015;13(3):
154–62.
17. Moola S, Munn Z, Sears K, et al. Conducting systematic reviews of association
(etiology): the Joanna Briggs Institute’s approach. Int J Evid Based Healthc.
2015;13(3):163–9.
18. McArthur A, Klugarova J, Yan H, Florescu S. Innovations in the systematic
review of text and opinion. Int J Evid Based Healthc. 2015;13(3):188–95.
19. Mokkink LB, Terwee CB, Patrick DL, et al. The COSMIN checklist for assessing
the methodological quality of studies on measurement properties of health
status measurement instruments: an international Delphi study. Qual Life
Res. 2010;19(4):539–49.
20. Dretzke J, Ensor J, Bayliss S, et al. Methodological issues and recommendations
for systematic reviews of prognostic studies: an example from cardiovascular
disease. Systematic reviews. 2014;3(1):1.
21. Campbell JM, Kavanagh S, Kurmis R, Munn Z. Systematic reviews in burns
care: poor quality and getting worse. J Burn Care Res. Publish ahead of print.
22. France EF, Ring N, Thomas R, Noyes J, Maxwell M, Jepson R. A methodological
systematic review of what’s wrong with meta-ethnography reporting. BMC
Med Res Methodol. 2014;14(1):1.
23. Stern C, Jordan Z, McArthur A. Developing the review question and
inclusion criteria. Am J Nurs. 2014;114(4):53–6.
24. Higgins J, Green S, eds. Cochrane Handbook for Systematic Reviews of
Interventions. Version 5.1.0 [updated March 2011]. ed: The Cochrane
Collaboration 2011.
25. Hannes K, Lockwood C, Pearson A. A comparative analysis of three online
appraisal instruments’ ability to assess validity in qualitative research. Qual
Health Res. 2010;20(12):1736–43.
26. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in
reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol.
2012;12:181.
27. France EF, Ring N, Noyes J, et al. Protocol-developing meta-ethnography
reporting guidelines (eMERGe). BMC Med Res Methodol. 2015;15:103.
28. Shemilt I, Mugford M, Byford S, et al. Chapter 15: Incorporating economics
evidence. In: Higgins JPT, Green S, editors. Cochrane Handbook for Systematic
Reviews of Interventions. The Cochrane Collaboration; 2011.
29. Munn Z, Moola S, Riitano D, Lisy K. The development of a critical appraisal
tool for use in systematic reviews addressing questions of prevalence. Int J
Health Policy Manag. 2014;3(3):123–8.
30. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies
in epidemiology: a proposal for reporting. Meta-analysis of observational
studies in epidemiology (MOOSE) group. JAMA. 2000;283(15):2008–12.
31. COSMIN: COnsensus-based Standards for the selection of health
Measurement INstruments. Systematic reviews of measurement
properties. [cited 8th December 2016]; Available from: http://www.
cosmin.nl/Systematic%20reviews%20of%20measurement%20properties.html
32. Terwee CB, de Vet HCW, Prinsen CAC, Mokkink LB. Protocol for systematic
reviews of measurement properties. COSMIN: Knowledgecenter Measurement
Instruments; 2011.
33. Mokkink LB, Terwee CB, Stratford PW, et al. Evaluation of the methodological
quality of systematic reviews of health status measurement instruments. Qual
Life Res. 2009;18(3):313–33.
34. Hayden JA, van der Windt DA, Cartwright JL, Côté P, Bombardier C. Assessing
bias in studies of prognostic factors. Ann Intern Med. 2013;158(4):280–6.
35. The Cochrane Collaboration. Cochrane Methods Prognosis. 2016 [cited 7th
December 2016]; Available from: http://methods.cochrane.org/prognosis/
scope-our-work.
36. Rector TS, Taylor BC, Wilt TJ. Chapter 12: systematic review of prognostic
tests. J Gen Intern Med. 2012;27(Suppl 1):S94–101.
37. Peters S, Johnston V, Hines S, Ross M, Coppieters M. Prognostic factors for
return-to-work following surgery for carpal tunnel syndrome: a systematic
review. JBI Database of Systematic Reviews and Implementation Reports.
2016;14(9):135–216.
38. Moons KG, de Groot JA, Bouwmeester W, et al. Critical appraisal and data
extraction for systematic reviews of prediction modelling studies: the
CHARMS checklist. PLoS Med. 2014;11(10):e1001744.
39. Clarke M, Oxman AD, Paulsen E, Higgins JP, Green S. Appendix A: Guide to
the contents of a Cochrane Methodology protocol and review. In: Higgins JP,
Green S, editors. Cochrane Handbook for Systematic Reviews of Interventions.
Version 5.1.0. The Cochrane Collaboration; 2011.
40. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for
improving the quality of reports of biomedical studies. Cochrane Database
Syst Rev. 2007;2:MR000016.
41. Djulbegovic B, Kumar A, Glasziou PP, et al. New treatments compared to
established treatments in randomized trials. Cochrane Database Syst Rev.
2012;10:MR000024.
42. Thoma A, Eaves FF 3rd. What is wrong with systematic reviews and meta-
analyses: if you want the right answer, ask the right question! Aesthet Surg
J. 2016;36(10):1198–201.
43. Deeks JJ, Wisniewski S, Davenport C. Chapter 4: Guide to the contents of a
Cochrane diagnostic test accuracy protocol. In: Deeks JJ, Bossuyt PM,
Gatsonis C, editors. Cochrane Handbook for Systematic Reviews of Diagnostic
Test Accuracy. The Cochrane Collaboration; 2013.
44. Bae J-M. An overview of systematic reviews of diagnostic tests accuracy.
Epidemiology and Health. 2014;36:e2014016.
45. White S, Schultz T, Enuameh YAK. Synthesizing evidence of diagnostic
accuracy. Lippincott Williams & Wilkins; 2011.
46. Methley AM, Campbell S, Chew-Graham C, McNally R, Cheraghi-Sohi S. PICO,
PICOS and SPIDER: a comparison study of specificity and sensitivity in three
search tools for qualitative systematic reviews. BMC Health Serv Res. 2014;14:579.
47. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB.
Guidance for conducting systematic scoping reviews. International journal
of evidence-based healthcare. 2015;13(3):141–6.
48. Hetrick SE, Parker AG, Callahan P, Purcell R. Evidence mapping: illustrating
an emerging methodology to improve evidence-based practice in youth
mental health. J Eval Clin Pract. 2010;16(6):1025–30.
49. Wong G, Greenhalgh T, Westhorp G, Pawson R. Development of
methodological guidance, publication standards and training materials for
realist and meta-narrative reviews: the RAMESES (Realist And Meta-narrative
Evidence Syntheses – Evolving Standards) project. Southampton, UK: Queen’s
Printer and Controller of HMSO; 2014.
50. Munn Z, Lockwood C, Moola S. The development and use of evidence
summaries for point of care information systems: a streamlined rapid review
approach. Worldviews Evid-Based Nurs. 2015;12(3):131–8.
51. Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P.
Summarizing systematic reviews: methodological development, conduct
and reporting of an umbrella review approach. Int J Evid Based Healthc.
2015;13(3):132–40.
52. Pearson A, White H, Bath-Hextall F, Salmond S, Apostolo J, Kirkpatrick P. A
mixed-methods approach to systematic reviews. Int J Evid Based Healthc.
2015;13(3):121–31.
53. Draper P. A critique of concept analysis. J Adv Nurs. 2014;70(6):1207–8.
54. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types
and associated methodologies. Health Inf Libr J. 2009;26(2):91–108.
55. Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J
Clin Epidemiol. 2011;64(1):11–20.
56. Bender R. A practical taxonomy proposal for systematic reviews of
therapeutic interventions. 21st Cochrane Colloquium Quebec, Canada 2013.
57. Kastner M, Tricco AC, Soobiah C, et al. What is the most appropriate
knowledge synthesis method to conduct a review? Protocol for a scoping
review. BMC Med Res Methodol. 2012;12:114.
58. Leenaars M, Hooijmans CR, van Veggel N, et al. A step-by-step guide to
systematically identify all relevant animal studies. Lab Anim. 2012;46(1):24–31.
59. de Vries RB, Wever KE, Avey MT, Stephens ML, Sena ES, Leenaars M. The
usefulness of systematic reviews of animal experiments for the design of
preclinical and clinical studies. ILAR J. 2014;55(3):427–37.
60. Hooijmans CR, Ritskes-Hoitinga M. Progress in using systematic reviews of
animal studies to improve translational research. PLoS Med. 2013;10(7):
e1001482.
61. Mignini LE, Khan KS. Methodological quality of systematic reviews of animal
studies: a survey of reviews of basic research. BMC Med Res Methodol. 2006;
6:10.
62. van Luijk J, Bakker B, Rovers MM, Ritskes-Hoitinga M, de Vries RB, Leenaars
M. Systematic reviews of animal studies; missing link in translational research?
PLoS One. 2014;9(3):e89981.
63. Vesterinen HM, Sena ES, Egan KJ, et al. Meta-analysis of data from animal
studies: a practical guide. J Neurosci Methods. 2014;221:92–102.
64. CAMARADES. Collaborative Approach to Meta-Analysis and Review of
Animal Data from Experimental Studies. 2014 [cited 8th December 2016];
Available from: http://www.dcn.ed.ac.uk/camarades/default.htm#about
65. Moher D, Glasziou P, Chalmers I, et al. Increasing value and reducing waste
in biomedical research: who’s listening? Lancet. 2016;387(10027):1573–86.
66. Ioannidis J. The mass production of redundant, misleading, and conflicted
systematic reviews and meta-analyses. The Milbank Quarterly. 2016;94(3):
485–514.
67. Rousseau DM, Gunia BC. Evidence-based practice: the psychology of EBP
implementation. Annu Rev Psychol. 2016;67:667–92.
68. Jordan Z, Lockwood C, Aromataris E, Munn Z. The updated JBI model for
evidence-based healthcare. The Joanna Briggs Institute; 2016.
69. Cooney GM, Dwan K, Greig CA, et al. Exercise for depression. Cochrane
Database Syst Rev. 2013;9:CD004366.
70. Munn Z, Jordan Z. The patient experience of high technology medical
imaging: a systematic review of the qualitative evidence. JBI Libr. Syst Rev.
2011;9(19):631–78.
71. de Verteuil R, Tan WS. Self-monitoring of blood glucose in type 2 diabetes
mellitus: systematic review of economic evidence. JBI Libr. Syst Rev. 2010;
8(7):302–42.
72. Munn Z, Moola S, Lisy K, Riitano D, Murphy F. Claustrophobia in magnetic
resonance imaging: a systematic review and meta-analysis. Radiography.
2015;21(2):e59–63.
73. Hakonsen SJ, Pedersen PU, Bath-Hextall F, Kirkpatrick P. Diagnostic test
accuracy of nutritional tools used to identify undernutrition in patients with
colorectal cancer: a systematic review. JBI Database System Rev Implement
Rep. 2015;13(4):141–87.
74. Cancer Australia. Risk factors for lung cancer: a systematic review. Surry
Hills, NSW; 2014.
75. McArthur A, Lockwood C. Maternal mortality in Cambodia, Thailand,
Malaysia and Sri Lanka: a systematic review of local and national policy and
practice initiatives. JBI Libr Syst Rev. 2010;8(16 Suppl):1–10.
76. Peek K. Muscle strength in adults with spinal cord injury: a systematic
review of manual muscle testing, isokinetic and hand held dynamometry
clinimetrics. JBI Database of Systematic Reviews and Implementation
Reports. 2014;12(5):349–429.
77. Hayden JA, Tougas ME, Riley R, Iles R, Pincus T. Individual recovery
expectations and prognosis of outcomes in non-specific low back pain:
prognostic factor exemplar review. Cochrane Libr. 2014. http://onlinelibrary.
wiley.com/doi/10.1002/14651858.CD011284/full.