Instructions
Continue with the CITI training, which must be completed by Week 8.
For the Week 6 assignment, perform qualitative thematic data analysis using the three video transcripts (located in the Books and Resources for this Week section). The topic of the transcripts is Business Startups.
Describe your chosen approach to analyze the data from these three transcripts; support your chosen approach using references. Be sure to consider the type of data collected to create procedures for a comprehensive analysis. In your description, clearly define your approach to:
(a) organizing data
(b) coding and thematic development
(c) triangulation
(d) using software applications
Then identify at least three (3) themes you extracted from the three video transcripts. The number of themes will be based on your selected framework. For each theme:
(a) Label the theme
(b) Define the theme’s concept based on the interpretation of the data.
(c) Include 2-3 direct quotes from the data that best represent how the theme is conceptualized.
As with all other assignments, your approach must be systematic, logical, fully explained, and supported with scholarly sources. Use the resources provided, and at least three other peer-reviewed articles to defend your research plan.
Length: 2-4 pages
Your assignment should demonstrate thoughtful consideration of the ideas and concepts presented in the course and provide new thoughts and insights relating directly to this topic. Your response should reflect scholarly writing and current APA standards. Be sure to adhere to Northcentral University’s Academic Integrity Policy.
Amankwaa, L. (2016). Creating protocols for trustworthiness in qualitative research. Journal of Cultural Diversity, 23(3), 121-127.
Belotto, M. J. (2018). Data analysis methods for qualitative research: Managing the challenges of coding, interrater reliability, and thematic analysis. The Qualitative Report, 23(11), 2622-2633.
Castleberry, A., & Nolen, A. (2018). Methodology matters: Thematic analysis of qualitative research data: Is it as easy as it sounds? Currents in Pharmacy Teaching and Learning, 10(6), 807-815.
Connelly, L. M. (2016). Understanding research: Trustworthiness in qualitative research. MEDSURG Nursing, 25(6), 435-436.
Jensen, E., & Laurie, C. (Academic). (2017). An introduction to qualitative data analysis [Video file]. London, England: SAGE Publications Ltd. https://dx.doi.org/10.4135/9781473992290
Kristensen, G. K., & Ravn, M. N. (2015). The voices heard and the voices silenced: Recruitment processes in qualitative interview studies. Qualitative Research, 15(6), 722-737.
__________________________________________________
This assignment will be submitted to Turnitin™.
Instructions
A template is provided below for this Signature Assignment. Using the template provided and your relevant discussions from previous assignments in this course, with refinements from your instructors’ feedback, as appropriate: construct a proposed qualitative research plan. Your plan should reflect the features of qualitative research and the rationale for selecting a specific research design. Remember to support your work with citations.
Problem Statement (with recommended revisions)
Provide a clear justification with evidence on why this study is relevant to your field and worthy of doctoral-level study. Support your efforts using 3 scholarly sources published within the past 5 years to ensure relevancy. Remember, the problem statement should reflect your degree type (applied or theory-based).
Purpose Statement (with recommended revisions)
Apply the script introduced in this course and your instructor's feedback to produce an accurate and aligned purpose statement.
Research Questions (at least two questions)
Qualitative research questions must be framed to probe and investigate a problem in depth. How, why, and what strategies are the terms best suited to a qualitative research question.
Methodology and Design (with the rationale)
Defend your choice to use the qualitative methodology to research your identified problem. Synthesize 2 or 3 sources to support your arguments.
Defend your choice to use a specific qualitative research design. Synthesize 2 or 3 sources to support your arguments.
Data Collection (outline and defend)
Explain how and why you will select participants from a specific population. Include citations for the identified population and the sampling method.
Describe data collection steps.
Ethical protection of human subjects
Data Analysis (include steps)
Logically define the steps in data analysis
Describe how the four elements of trustworthiness could be addressed
References Page in APA Format
Length: 6-10 pages
References: 15-20 peer-reviewed resources.
Your assignment should demonstrate thoughtful consideration of the ideas and concepts presented in the course and provide new thoughts and insights relating directly to this topic. Your response should reflect scholarly writing and current APA standards. Be sure to adhere to Northcentral University’s Academic Integrity Policy.
References
Amankwaa, L. (2016). Creating protocols for trustworthiness in qualitative research. Journal of Cultural Diversity, 23(3), 121-127.
Belotto, M. J. (2018). Data analysis methods for qualitative research: Managing the challenges of coding, interrater reliability, and thematic analysis. The Qualitative Report, 23(11), 2622-2633.
Castleberry, A., & Nolen, A. (2018). Methodology matters: Thematic analysis of qualitative research data: Is it as easy as it sounds? Currents in Pharmacy Teaching and Learning, 10(6), 807-815.
Connelly, L. M. (2016). Understanding research: Trustworthiness in qualitative research. MEDSURG Nursing, 25(6), 435-436.
Jensen, E., & Laurie, C. (Academic). (2017). An introduction to qualitative data analysis [Video file]. London, England: SAGE Publications Ltd. https://dx.doi.org/10.4135/9781473992290
Kristensen, G. K., & Ravn, M. N. (2015). The voices heard and the voices silenced: Recruitment processes in qualitative interview studies. Qualitative Research, 15(6), 722-737.
Richards, K. A. R., & Hemphill, M. A. (2018). A practical guide to collaborative qualitative data analysis. Journal of Teaching in Physical Education, 37(2), 225-231.
Yakut Çayir, M., & Saritaş, M. T. (2017). Computer assisted qualitative data analysis: A descriptive content analysis (2011-2016).
Pub. Date: 2016
Product: SAGE Research Methods Video
DOI: https://dx.doi.org/10.4135/9781473992290
Methods: Qualitative data analysis, Coding
Keywords: practices, strategies, and tools
Disciplines: Anthropology, Business and Management, Criminology and Criminal Justice, Communication and Media Studies, Counseling and Psychotherapy, Economics, Education, Geography, Health, Marketing, Nursing, Political Science and International Relations, Psychology, Social Work, Sociology, Technology, Computer Science
Publisher: SAGE Publications Ltd, London
Online ISBN: 9781473992290
© 2016 SAGE Publications Ltd. All Rights Reserved.
[An Introduction to Qualitative Data Analysis]
PROFESSOR ERIC JENSEN: My name is Eric Jensen, I'm a Sociology Professor at the University
of Warwick. [Dr. Eric Jensen, Sociology Professor]
DR. CHARLES LAURIE: And I’m Charles Laurie, Director of Research at Verisk Maplecroft. [Dr.
Charles Laurie, Director of Research]
PROFESSOR ERIC JENSEN: This video is about how to analyze qualitative data. So qualitative
data analysis is the process of identifying patterns in written information, audio recordings, video, or
images. There are no universally accepted rules for this process that define exactly, step by step,
what you must do. But you should be thorough and detailed in your approach.
PROFESSOR ERIC JENSEN [continued]: There are different fully valid pathways to arrive at a good
understanding of your data through qualitative analysis. [What is qualitative analysis?]
DR. CHARLES LAURIE: Qualitative research is open ended by nature and relies on your judgement
to find patterns through the haze of words in your audio recordings or transcripts. While such judge-
ments can be personal and subjective, techniques specified in this segment can help you ensure that
your analysis is systematic.
PROFESSOR ERIC JENSEN: Qualitative analysis is not about writing an opinion on a research topic
or selecting a couple of quotes that support an argument you already decided you wanted to make.
You must develop a clear analytical route from your data to specific patterns. And ultimately, to a
written report containing representative examples from your data that show these patterns.
PROFESSOR ERIC JENSEN [continued]: [Finding Qualitative Data]
DR. CHARLES LAURIE: Your first step in qualitative analysis is to take stock of the data and contex-
tual information available to you. Written qualitative data can be anything from interview transcripts
to field notes or a broad range of other written materials, including diaries and even meeting notes.
Whatever form your data take, you need to organize them
DR. CHARLES LAURIE [continued]: to make sense of what you have.
PROFESSOR ERIC JENSEN: When you think of qualitative data, your first thought might be interview recordings and transcripts. However, you might be surprised at how much additional information
you can collect along the way as your research project develops. You can find yourself with informa-
tion such as diaries, photographs, and a range of personal, business, or government documents.
PROFESSOR ERIC JENSEN [continued]: Such unplanned data is part of the open ended nature
of qualitative research. Just be sure to document how you gathered any new data sources that you
might draw upon in your analysis.
DR. CHARLES LAURIE: At this point, as you begin your qualitative data analysis by taking stock of your information, it's worth distinguishing between background information you use to provide context and data that you systematically scrutinize through the data analysis process. Data are comprised of pieces of information you have defined in your Methods section
DR. CHARLES LAURIE [continued]: as the focus of your research. Your results will be based on
these data.
PROFESSOR ERIC JENSEN: By contrast, background information can help you understand the da-
ta and provide context for that data. For example, miscellaneous historical information or notes you
come across won’t necessarily be analyzed systematically. However, they still can play a role by pro-
viding insights into the broader picture of how your participants’ lives are constructed.
PROFESSOR ERIC JENSEN [continued]: [Context and Data]
DR. CHARLES LAURIE: Once you have separated out your data from background information,
you’re ready to begin an analysis of these data.
PROFESSOR ERIC JENSEN: With a qualitative project, your analysis begins in the Methods section. Here, you explain who you collected data from and why, under what circumstances, and over what
period of time. This context orients your analysis and establishes the boundaries of the kinds of
knowledge claims that you can make.
DR. CHARLES LAURIE: Your data only holds meaning when it can be situated within the context of
its collection. For example, if you conducted an interview with an elderly person in a care home, in
order to understand perceptions of aging, you would need to take account of the location in which
the interview was conducted. That is, within the walls of the home where the participant is cared for
by the staff.
DR. CHARLES LAURIE [continued]: If the participant reported that she felt well cared for by the staff, was this answer influenced by the environment in which it was collected, such as the possible presence of staff? Participants may have felt pressure to give certain answers or felt guilty about giving
negative feedback while in the setting. Or may have felt none of these things and given completely
frank interviews.
PROFESSOR ERIC JENSEN: Location is just the tip of the iceberg when it comes to the role of con-
text in qualitative research. Because gathering data for qualitative research relies so heavily on the
researcher's subjectivity, the analysis often needs to address the way in which the researcher may
have influenced the results. Indeed, there is an extensive academic literature
PROFESSOR ERIC JENSEN [continued]: on how the qualities of the researcher, herself or himself,
can influence the results that emerge. [Beginning Data Analysis]
DR. CHARLES LAURIE: Now that you have accounted for contextual influences affecting your data,
you can begin your focused analysis of the content of your data. There are many possible ways to
do this. We advocate a widely used approach we're calling pattern analysis. In this process, you take
the words and images and any other features you think are salient comprising
DR. CHARLES LAURIE [continued]: your data and categorize them using codes, which are specific
categories for the grouping of your data that apply across a number of individual quotations. The
word code can be a confusing piece of qualitative research jargon because code has many other
meanings. For example, in the context of computers.
DR. CHARLES LAURIE [continued]: In social research, coding simply means making and applying
categories to your data. You can use your code to develop comparisons and connect your data to
relevant theoretical concepts you’ve located in your literature review. Taken together, these codes,
comparisons, and concepts help you build explanations that address your research
DR. CHARLES LAURIE [continued]: questions. Therefore, they will help you build your analysis from
data to codes to comparisons to concepts and finally, to explanations. [Coding the Data]
PROFESSOR ERIC JENSEN: It’s worth going into more depth on the process of coding because
it’s fundamental to qualitative analysis. After refreshing your memory by re-reading your field notes,
interview, or focus group notes, you begin reading transcripts and other written data, listening to the
audio, or viewing the video data. Your aim is to establish a firm grounding in your data
PROFESSOR ERIC JENSEN [continued]: by understanding what your participants are saying and
why before you start constructing explanations about what’s going on.
DR. CHARLES LAURIE: You begin by setting up a series of initial code categories of the issues you
are expecting to see. This, again, highlights the value of beginning your analysis, or at least a careful reading of your data, early on so your thinking can be well developed by this stage.
PROFESSOR ERIC JENSEN: After coding your first transcript and adapting the codes as needed in
this first pass, you then move on to the next transcript and continue the process of applying codes until your transcripts have all been coded. Once you've finished a complete first pass, you would
then start again, doing one pass after another until you’re certain that you’ve captured as much depth
as
PROFESSOR ERIC JENSEN [continued]: possible for all your transcripts or until you run out of time.
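
To make the coding-pass process described here concrete, the following is a minimal Python sketch (not part of the video; the transcripts, code labels, and keyword matching are invented for illustration, and real coding relies on researcher judgement rather than keyword rules). It applies an evolving set of codes to transcript segments in repeated passes and records analytical memos alongside the data that prompted them.

    # Illustrative sketch of iterative coding passes; all data and labels are hypothetical.
    codes = {"startup motivation", "funding challenges"}        # initial code categories you expect to see
    transcripts = {
        "transcript_1": ["We started the company because we saw a gap in the market.",
                         "Raising capital was the hardest part of the first year."],
        "transcript_2": ["I always wanted to be my own boss."],
    }
    coded_segments = []    # (pass number, transcript, segment, codes applied)
    memos = []             # fleeting analytical thoughts, each linked to the segment that sparked it

    def apply_codes(segment):
        """Crude stand-in for the researcher's judgement when assigning codes to a segment."""
        assigned = set()
        if "capital" in segment.lower():
            assigned.add("funding challenges")
        if "because" in segment.lower() or "wanted" in segment.lower():
            assigned.add("startup motivation")
        return assigned

    for pass_number in (1, 2):                                   # one pass after another
        for name, segments in transcripts.items():
            for segment in segments:
                assigned = apply_codes(segment)
                codes.update(assigned)                           # adapt the code list as needed during the pass
                coded_segments.append((pass_number, name, segment, assigned))
                if "hardest" in segment:
                    memos.append((name, segment, "possible link to access-to-finance literature"))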
DR. CHARLES LAURIE: Keep noting down the analytical thoughts that occur to you during the cod-
ing process, however dull or incomplete the thoughts may seem. Are you finding interesting connections? Have the accounts inspired you to do some additional reading? Are you seeing connections between the data and theory you've been reading about?
PROFESSOR ERIC JENSEN: It’s essential to record these thoughts as you go because they’re often
fleeting and can be easily forgotten. Also, by recording these memos, as they're called, within your qualitative data analysis software, they will all be in one place and you can electronically connect them
to the piece of data that sparked the thought within the software. You’ll be grateful for the easy ac-
cess to your memos
PROFESSOR ERIC JENSEN [continued]: when you get to your writing up phase later on. [Theory,
Concepts, and Data Analysis]
DR. CHARLES LAURIE: During the coding process, think about how you can connect your findings
to theoretical concepts. Go back to concepts in your research question and literature review and look
for other related theoretical concepts that you could apply to see if they fit with your data. If existing
concepts don’t help explain your data, then you may need to develop new or adjusted concepts
DR. CHARLES LAURIE [continued]: to explain your findings.
PROFESSOR ERIC JENSEN: For instance, imagine a qualitative researcher addressing a topic relating to social class. She might start with the idea or concept that class identities are passed on
from one generation to the next because the upper class oppresses the poorer class. This is the con-
cept of oppression. If her data shows that the poorer class takes pride
PROFESSOR ERIC JENSEN [continued]: in its identity and values, then she can conclude the con-
cept of oppression is insufficient to explain the data that she has collected. In this case, a new expla-
nation might require using a different analytical concept, such as the idea of a working class subcul-
ture. In this way, qualitative analysis must draw upon, modify, or create theoretical concepts
PROFESSOR ERIC JENSEN [continued]: that are useful in developing explanations that may be ap-
plicable beyond the immediate context of the project.
DR. CHARLES LAURIE: One of the many advantages of coding is that the process allows you to
closely engage with the actual words and ideas of your participants. This means that as you code,
you’ll be able to use data extracts to develop your emerging analysis.
PROFESSOR ERIC JENSEN: Never cherry pick your data extracts based on what fits your preexist-
ing assumptions about a topic. That's all too easy to do. But good qualitative research faces up to the
uncertainties, contradictions, or unexpected patterns of the data, rather than pretending that results
are just simple and clear.
DR. CHARLES LAURIE: For instance, imagine a comparative analysis of men’s and women’s atti-
tudes about marriage based on semi-structured interviews. Perhaps the researcher expects that men
would display more commitment phobia while women will be more eager to tie the knot. But the re-
sults of the analysis indicate that women and men who were interviewed were both equally commit-
ment phobic.
DR. CHARLES LAURIE [continued]: Rather than trying to make the data fit the theory by excluding interviews with commitment-phobic women from the analysis, the best strategy here would be to seek explanations for the unexpected findings. Also, the researcher might seek to clarify the analysis by
conducting follow up interviews or by reading up on the literature about gender roles to find existing
explanations that the study might support
DR. CHARLES LAURIE [continued]: or indeed, challenge. [Computer Assisted Qualitative Data Analysis Software (CAQDAS)]
PROFESSOR ERIC JENSEN: There are several qualitative analysis software packages sometimes
referred to as Computer Assisted Qualitative Data Analysis Software, or CAQDAS for short. These
products can help you get the most out of your raw qualitative data. They kind of operate like Mi-
crosoft Word does for writing essays.
DR. CHARLES LAURIE: Some qualitative researchers have criticized such software for alienating
researchers from their data and sometimes for causing an over emphasis on coding to the exclusion
of other aspects of qualitative analysis. However, the predominant opinion in methodological litera-
ture indicates that minor limitations stemming
DR. CHARLES LAURIE [continued]: from qualitative analysis software are more than offset by increases in productivity, reliability, consistency, and transparency.
PROFESSOR ERIC JENSEN: Once you've completed your coding passes and ended up with a set of patterns you've identified within your data, you will have combined or adjusted codes along the
way and created notes that document any initial ideas you had.
DR. CHARLES LAURIE: You may also have made comparisons between perspectives within each
sample or indeed, between samples, to get a clearer sense of the range of views emerging from your
data. In addition, you should have started making connections to key ideas from your literature re-
view, especially theoretical concepts that can help you account for your data.
DR. CHARLES LAURIE [continued]: These are the first crucial steps in developing a systematic
analysis of your data. However, there are still several more steps to take in your qualitative analysis
to ensure that it’s as robust and insightful as possible. [Writing Up the Analysis]
PROFESSOR ERIC JENSEN: While qualitative data analysis software is an excellent tool to help
you manage and make sense of your data, your analysis extends into the writing up process. Moving your analysis code by code into your research report document is an essential step, which can also
result in new insights. Laying out your ideas on the page, thinking them through as you write each
paragraph,
PROFESSOR ERIC JENSEN [continued]: and then repeatedly reviewing and rethinking what you’ve
written to deepen your analysis and sharpen your claims is the key to developing a high quality qual-
itative data analysis report. [Conclusion: Ensuring Quality in Qualitative Data Analysis]
DR. CHARLES LAURIE: Let’s now think about quality in your qualitative analysis. Many factors can
intervene to undermine the quality of your analysis. First, let’s consider your role as the decision mak-
er about how your data will be collected and analyzed. You’re likely to have some ideas about what
you expect to find from your research before you start your project.
DR. CHARLES LAURIE [continued]: You need to practice letting go of those ideas and be completely
open to where your data will take you. While you won't be able to completely achieve this goal, striving to keep an open mind is valuable in itself.
PROFESSOR ERIC JENSEN: You can help ensure the quality of your analysis by employing a few
strategies. First, make sure that you transcribe and read your qualitative data during the data collection process to put you in a strong position to remember relevant contextual details that you can add to your
field notes. Doing this can also help you by creating a feedback loop so
PROFESSOR ERIC JENSEN [continued]: that your ongoing analysis feeds back into your qualitative
data collection in the form of revised or new interview or focus group questions. Also, make sure you
read up on methodology in your particular sub-field. For example, if you’re using blogs as your data,
delve into articles or books on methodology and web-based research to ensure you’re fulfilling the
quality
PROFESSOR ERIC JENSEN [continued]: expectations in this domain. Don’t try to tie up every loose
end or smooth over every rough patch in your qualitative analysis. With qualitative analysis, you’re
allowing for diversity in people’s perspectives and experiences. Also, you don’t have to account for
every scrap of data that you’ve collected. At some point, you’ll have to make
PROFESSOR ERIC JENSEN [continued]: a judgment about which aspects of your findings are most
relevant to addressing your research question. And also, don’t try to do everything. Don’t be afraid to
make the judgement that something is beyond the scope of your analysis. Just as you must narrow
the scope of your research project early in your project in order to keep it nice and focused, you also
need to keep the scope tightly focused
PROFESSOR ERIC JENSEN [continued]: within your qualitative analysis.
DR. CHARLES LAURIE: There is a growing body of methodological literature advocating quality assurance techniques to help ensure quality in your qualitative analysis. Each qualitative research project is different and every instance of generating qualitative data will develop in different ways due to the
dynamics between the researcher, the participants, the research
DR. CHARLES LAURIE [continued]: questions, and the situation in which the data is gathered.
Therefore, instead of validity and reliability, techniques such as thick description, transparency and procedural clarity, deviant case analysis, and reflexivity will raise the quality of your analysis.
Trustworthiness in Qualitative Research
MEDSURG Nursing, November-December 2016, Vol. 25, No. 6, pp. 435-436
Lynne M. Connelly, PhD, RN, is Associate Professor and Director of Nursing, Robert J. Dehaemers Endowed Chair, Benedictine College, Atchison, KS. She is Research Editor for MEDSURG Nursing.
In their qualitative study on nurses' confusion and
uncertainty with cardiac monitoring, Nickasch,
Marnocha, Grebe, Scheelk, and Kuehl (2016)
addressed trustworthiness in a number of ways.
Trustworthiness or truth value of qualitative research
and transparency of the conduct of the study are crucial
to the usefulness and integrity of the findings (Cope,
2014). In this column, I will discuss the components of
trustworthiness in qualitative research.
What Is Trustworthiness?
Trustworthiness or rigor of a study refers to the degree
of confidence in data, interpretation, and methods used
to ensure the quality of a study (Polit & Beck, 2014). In
each study, researchers should establish the protocols
and procedures necessary for a study to be considered
worthy of consideration by readers (Amankwaa, 2016).
Although most experts agree trustworthiness is neces-
sary, debates have been waged in the literature as to
what constitutes trustworthiness (Leung, 2015).
Criteria outlined by Lincoln and Guba (1985) are
accepted by many qualitative researchers and will be the
focus of this column. These criteria include credibility,
dependability, confirmability, and transferability; they later
added authenticity (Guba & Lincoln, 1994). Each of these
criteria and the typically used procedures will be out-
lined. Not all procedures are used in each study.
Credibility
Credibility of the study, or the confidence in the
truth of the study and therefore the findings, is the most
important criterion (Polit & Beck, 2014). This concept is
analogous to internal validity in quantitative research.
The question a reader might ask is, “Was the study con-
ducted using standard procedures typically used in the
indicated qualitative approach, or was an adequate jus-
tification provided for variations?” Thus a grounded
theory study should be conducted similar to other
grounded theory studies. Techniques used to establish
credibility include prolonged engagement with partici-
pants, persistent observation if appropriate to the study,
peer-debriefing, member-checking, and reflective jour-
naling. Evidence also should be presented of iterative
questioning of the data, returning to examine it several
times. Negative case analysis or alternate explanations
should be explored as well.
Dependability
Dependability refers to the stability of the data over
time and over the conditions of the study (Polit & Beck,
2014). It is similar to reliability in quantitative research,
but with the understanding that stability of conditions
depends on the nature of the study. A study of a phe-
nomenon experienced by a patient may be very similar
from time to time. In a study of a program instituted at
a hospital, however, conditions will change. Procedures
for dependability include maintenance of an audit trail
of process logs and peer-debriefings with a colleague.
Process logs are researcher notes of all activities that
happen during the study and decisions about aspects of
the study, such as whom to interview and what to
observe.
Confirmability
Confirmability is the neutrality or the degree to which findings are consistent and could be repeated. This is analo-
gous to objectivity in quantitative research (Polit &
Beck, 2014). Methods include maintenance of an audit
trail of analysis and methodological memos or logs.
Qualitative researchers keep detailed notes of all their
decisions and their analysis as it progresses. In some
studies, these notes are reviewed by a colleague; in other
studies, they may be discussed in peer-debriefing ses-
sions with a respected qualitative researcher. These dis-
cussions prevent biases from only one person’s perspec-
tive on the research. In addition, depending on the
study, the researcher may conduct member-checking
with study participants or similar individuals. For exam-
ple, Nickasch and colleagues (2016) presented their
findings at a national research conference and received
feedback indicating the presented issues were similar for
other nurses.
Transferability
The nature of transferability, the extent to which
findings are useful to persons in other settings, is differ-
ent from other aspects of research in that readers actual-
ly determine how applicable the findings are to their sit-
uations (Polit & Beck, 2014). Although this is considered
analogous to generalization in quantitative research, it
is different from statistical generalization. Qualitative
researchers focus on the informants and their story
without saying this is everyone’s story. Researchers sup-
port the study’s transferability with a rich, detailed
description of the context, location, and people studied,
and by being transparent about analysis and trustwor-
thiness. Researchers need to provide a vivid picture that
will inform and resonate with readers (Amankwaa,
2016).
Authenticity
Authenticity is the extent to which researchers fairly
and completely show a range of different realities and
realistically convey participants’ lives (Polit & Beck,
2014). Selection of appropriate people for the study
sample and provision of a rich, detailed description are
ways the researchers address this criterion (Schou,
Høstrup, Lyngsø, Larsen, & Poulsen, 2011). No analogy
to authenticity exists in quantitative research; this area
represents the advantage of qualitative research to por-
tray fully the deep meaning of a phenomenon to
increase readers' understanding.
Other Issues
The above criteria are mainstays of qualitative trust-
worthiness, but additional considerations exist as well.
The ethical implications of a study also affect its integrity and usefulness. Recruiting procedures are important in
obtaining a group of people who can articulate their
experiences. Conduct of data analysis is another impor-
tant issue that can affect trustworthiness. These items
may be described in different sections of the research
report, but they are important to review when reading
and critiquing an article. In addition, the procedures
used for trustworthiness must fit the research design.
Trustworthiness procedures and protocols used in a phe-
nomenological study may be similar but not identical to
grounded theory, ethnography, or qualitative descrip-
tive studies (Cope, 2014).
In this brief overview of trustworthiness, all proce-
dures could not be discussed in detail. Readers are
referred to the references or a qualitative research text if
further information is needed. Trustworthiness or rigor
is crucial to the confidence readers have in the findings
of any study, so this is an area readers should examine
when reading a research report.
REFERENCES
Amankwaa, L. (2016). Creating protocols for trustworthiness in qualitative research. Journal of Cultural Diversity, 23(3), 121-127.
Cope, D.G. (2014). Methods and meanings: Credibility and trustworthi-
ness of qualitative research. Oncology Nursing Forum, 41(1), 89-
91.
Guba, E.G., & Lincoln, Y. (1994). Competing paradigms in qualitative
research. In N. Denzin & Y. Lincoln (Eds.), Handbook of qualitative
research (pp. 105-117). Thousand Oaks, CA: Sage.
Leung, L. (2015). Validity, reliability and generalizability in qualitative research. Journal of Family Medicine and Primary Care, 4(3), 324-327.
Lincoln, Y.S., & Guba, E.G. (1985). Naturalistic inquiry. Newbury Park,
CA: Sage.
Nickasch, B.L., Marnocha, S., Grebe, L., Scheelk, H., & Kuehl, C.
(2016). ‘What do I do next?’ Nurses’ confusion and uncertainty with
ECG monitoring. MEDSURG Nursing, 25(6), 418-422.
Polit, D.F., & Beck, C.T. (2014). Essentials of nursing research:
Appraising evidence for nursing practice (8th ed.). Philadelphia,
PA: Wolters Kluwer/Lippincott Williams & Wilkins.
Schou, L., Høstrup, H., Lyngsø, E.E., Larsen, S., & Poulsen, I. (2011).
Validation of a new assessment tool for qualitative articles. Journal
of Nursing Scholarship, 68(9), 2086-2094.
The Qualitative Report 2018 Volume 23, Number 11, How To Article 1, 2622-2633
Data Analysis Methods for Qualitative Research:
Managing the Challenges of Coding, Interrater Reliability, and
Thematic Analysis
Michael J. Belotto
Biomedical Research Alliance of New York, Hyde Park, New York, USA
The purpose of this article is to provide an overview of some of the principles
of data analysis used in qualitative research such as coding, interrater
reliability, and thematic analysis. I focused on the challenges that I experienced
as a first-time qualitative researcher during the course of my dissertation, in
the hope that how I addressed those difficulties will better prepare other
investigators planning endeavors into this area of research. One of the first
challenges I encountered was the dearth of information regarding the details of
qualitative data analysis. While my text books explained the general
philosophies of the interpretive tradition and its theoretical groundings, I found
few published studies where authors actually explained the details pertaining
to exactly how they arrived at their findings. Some authors even confirmed my
own experience that few published studies described processes such as coding
and methods to evaluate interrater reliability. Herein, I share the sources of
information that I did find and the methods that I used to address these
challenges. I also discuss issues of trustworthiness and how matters of
objectivity and reliability can be addressed within the naturalistic paradigm.
Keywords: Qualitative Research Data Analysis, Coding, Interrater Reliability,
Thematic Analysis
Introduction
The purpose of this commentary is to help students and new researchers navigate the
course of qualitative data analysis, in particular, areas that are not often explained in
publications of qualitative research studies, such as coding, interrater reliability, and thematic
analysis (Campbell, Quincy, Osserman, & Pederson, 2013). In order to convey some of the
challenges I faced as a first-time qualitative researcher, it is necessary to explain some of the
details of my research. In trying to decide on the topic of my dissertation to complete a degree
in public health, I thought about the difficulties I faced in my previous profession: a paramedic
in the New York City 9-1-1 system. During my 12-year career in Emergency Medical Services
(EMS), it seemed that many of the people I met were planning to pursue careers in other health
professions, focusing on medical school, physician’s assistant, or nursing, or other public safety
careers such as police officer or firefighter. This impressed upon me the notion that EMS was
viewed as a transient career. Therefore, I chose to focus my dissertation on the issue of career
longevity in the EMS profession. In particular, I examined if individuals came to this vocation
with preconceived notions, and if so, whether preemployment expectations were aligned or
misaligned with postemployment experiences. I further examined how alignment or
misalignment of expectations and experiences contributed to job satisfaction and the intention
to stay or leave the field.
Rapid turnover of emergency medical technicians (EMTs) and paramedics was a
phenomenon I also personally experienced during my tenure as a director of a hospital-based
EMS department. The importance of this problem was supported in the literature by projections
for an aging U.S. population (U.S. Census Bureau, 2014), expected increases in age-related
medical emergencies, driving greater demand for EMS professionals (U.S. Department of
Labor, Bureau of Labor Statistics, 2014), and EMS agencies’ reported problems with both
recruitment and retention of staff (Freeman, Slifkin, & Patterson, 2009). While my literature
review yielded various studies of burnout and turnover in professionals such as emergency
room physicians, nurses, and social workers, job satisfaction and employment longevity in the
field of EMS was not well studied (Alexander, Weiss, Braude, Ernst, & Fullerton-Gleason,
2009; Perkins, DeTienne, Fitzgerald, Hill, & Harwell, 2009; U.S. Department of
Transportation, National Highway Traffic Safety Administration, 2008, 2011). In fact, the
review of the literature revealed a dearth of studies pertaining to EMS in general (Huot, 2013).
Therefore, I chose a qualitative, phenomenological design for the method of inquiry for my
study.
I chose the qualitative approach because it is appropriate in the early stages of research,
when the important variables relevant to a particular subject of inquiry may not yet be known
(Creswell, 2009). An advantage of the interpretive paradigm is it allows the researcher to
understand a phenomenon through a process of exploration of initial suspicions and
development of preliminary theories (Trochim & Donnelly, 2008). In my own
phenomenological research (Belotto, 2017), I found that the semi-structured interview allowed
me to ensure that I elicited the same core information from each participant, while also
providing me with the flexibility to probe more deeply into the rich descriptions of experiences
that participants shared. This enabled me as the researcher, rather than leading, to follow the
participants, as they guided me to the relevant factors associated with career longevity in the
EMS profession.
Given this context of the purpose and methodology of my research, herein are the
specific challenges I faced and how I addressed them. I begin with a discussion of the
development of my interview questions for an area of research that was young and for which
no questionnaires existed. I further explain how I addressed the validity of my interview
questions. I proceed to describe how I developed a system to code the interview transcripts.
My process of assessing interrater reliability is also explained. Finally, I discuss how I
synthesized the data into an organization of themes to interpret the findings of the research.
Content Validity
The lack of a current, validated questionnaire for a study such as mine presented a
number of challenges. First, this required that I create interview questions that would ensure
that I obtained the information necessary to address my research questions. Using the methods
explained by Lawshe (1975), I became familiar with the procedures to establish the content
validity of my core interview.
I assembled a panel with expertise in areas relevant to my research, including human
resources, research ethics, qualitative research, and the EMS profession. Panel members
assessed the effectiveness of the interview questions to address the research questions (see
Appendix A). A 4-point Likert scale (no relevance, low relevance, moderate relevance, and
strong relevance) was used rather than an odd number of choices, so that neutral comments
were avoided (Lynn, 1986). Taking into account the number of panel members, the formula
provided by Lawshe yielded a content validity ratio threshold, at which the degree of
concurrence of the panel would not be considered to have occurred by chance, at an alpha level
of .05. Therefore, questions yielding scores below the threshold were eliminated, thus
increasing the overall content validity index of the core interview instrument. Finding 11
individuals with varied and relevant expertise to my study who were both qualified and willing
to take the time to participate on the panel, developing and collecting the surveys, aggregating
and analyzing the data, creating the tables to display the data, interpreting and writing the
results, and finally, making the necessary revisions to the interview questions added
approximately two months to the project.
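
Belotto cites Lawshe (1975) without reproducing the arithmetic; for readers who want it, Lawshe's content validity ratio for a single item is CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating the item as relevant and N is the panel size. A minimal Python sketch follows (the panel counts and CVR values below are made-up examples, not figures from the study; the .59 critical value is the commonly cited entry from Lawshe's table for an 11-member panel):

    # Lawshe (1975) content validity ratio; example numbers are hypothetical.
    def content_validity_ratio(n_relevant, n_panelists):
        half = n_panelists / 2
        return (n_relevant - half) / half

    cvr = content_validity_ratio(9, 11)       # 9 of 11 panelists rate a question as relevant -> about 0.64

    # Items whose CVR falls below the critical value for the panel size (about .59 for N = 11
    # at alpha = .05) would be eliminated; the content validity index (CVI) of the retained
    # instrument is the mean CVR of the surviving items.
    retained_cvrs = [0.64, 0.82, 1.0]
    cvi = sum(retained_cvrs) / len(retained_cvrs)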
Quality Assurance of the Data
After IRB approval of the study, the first participant’s interview was conducted,
recorded, and transcribed. In contrast to quantitative studies, the data analysis process began
immediately upon the enrollment of the first participant and was continuous. This process of
simultaneously recruiting participants, conducting interviews, and analyzing the data was
challenging. For example, excessive time spent on data analysis resulted in recruitment and
enrollment lags, which in turn resulted in a disruption to the scheduling of interviews and the
stream of recordings sent for transcription, thus ultimately limiting transcripts available for
analysis. Constant attentiveness was required to keep all aspects of the study flowing steadily.
I started the analysis process in keeping with what Ulin, Robinson, and Tolley (2005)
described as, immersing one’s self in the data. This meant continually reading the transcripts
to familiarize myself with the content. I assessed the quality of the data, whether responses
were ambiguous or contradictory and whether I was getting the information I needed to answer
my research questions. I also scrutinized my interview technique for bias, whether questions
were asked in a neutral manner, unexpected findings had emerged, or opportunities to probe
more deeply into responses were missed. For example, one participant provided a rich
description of how he progressed to various positions in the EMS profession, such as becoming
an educator and flight medic. This had resulted in a lengthy career of over 20 years. In
reviewing the transcript, however, I realized that I had not obtained all the information I
needed. Upon following up with this individual and probing more deeply into his thoughts
about the profession in general, he revealed that he did not think that his career was typical and
stated that he felt that for most of the workforce, EMS was a transient profession. Regularly
reviewing the quality of my data, becoming more familiar with the content, and scrutinizing
my interview technique all led to revisions and continual refinements of the interview process.
Data Analysis
Coding
In the review of each participant’s transcript, the “meaning units,” the words and
sentences that conveyed similar meanings, were identified and labeled with codes (Graneheim
& Lundman, 2004). The coding process allowed for the interpretation of large segments of text
and portions of information in new ways. Assessing how these meaning units were linked led
to the identification of themes. As I reviewed my data, I struggled to attach codes to various
sections of text that represented those meaning units, as there seemed to be endless choices of
words to characterize the experiences that participants described.
As the endless choices of characterizations were resulting in the creation of a very large
number of codes, I returned to the literature to find more information about coding qualitative
data. After finding and familiarizing myself with the Saldaña (2009) code book resource, I
decided to use a method of “structural coding,” whereby, I labeled passages with terms that
were related to the research questions. For example, since my study explored the relationship
between the alignment of preemployment career expectations and postemployment
experiences, job satisfaction, and the intention to leave the EMS profession, I used labels for
codes such as expectations, aligned experience, misaligned experience, satisfaction, and intent
to leave. This method greatly reduced the number of codes and provided a context to create
categories of codes or code families that were related to my research questions.
For some experiences, I used a secondary label that referred to a family of challenges
of the profession, such as the physical challenges of the job, working with the pain of spinal
injuries, or the psychological challenges of coping with illness and deaths of patients. Finally,
I utilized a “descriptive method” of coding to create a label that conveyed the essence of what
I was hearing. For example, when a paramedic stated that he had not anticipated the burden of
physical injuries that ultimately led to a decision to leave the field, a primary theme of the
study; this was coded as “intent to leave -physical injury concern.” This approach addressed
some of the challenges resulting from the open-ended questioning that is used in qualitative
research, where participants may provide lengthy and complex responses, digress, or discuss
multiple themes, all which can greatly add to the difficulty of coding, and potentially reduce
interrater reliability (Campbell et al., 2013).
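
As a minimal illustration of the structural and descriptive coding described above (the passage below is invented, not drawn from the study's transcripts; only the code labels echo those named in the article), a coded segment might be represented like this:

    # Hypothetical example: a passage labeled with a structural code tied to the research
    # questions, plus a descriptive code capturing the essence of what was said.
    structural_codes = ["expectations", "aligned experience", "misaligned experience",
                        "satisfaction", "intent to leave"]

    coded_passage = {
        "participant": "P07",                                    # hypothetical participant ID
        "text": "I never expected the back injuries; that's why I'm looking at nursing school.",
        "structural_code": "intent to leave",
        "descriptive_code": "intent to leave - physical injury concern",
    }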
While computer assisted qualitative data analysis software (CAQDAS) programs have
become popular for processing large amounts of qualitative data, trying to learn the principles
of coding and qualitative data analysis, while also becoming competent at navigating the
functions of CAQDAS proved to be extremely difficult. Therefore, since CAQDAS was not an
option for me, I decided to manually code my data. Rather than utilizing the old tried and true
method of using numerous different colored pencils to outline sections of text, I developed a
somewhat more technological variation of that approach. Using Microsoft WORD
functionality, I highlighted sections of text and using the tracked changes and new comment
functions, I added my codes in the margins of the transcripts.
As I analyzed each study participant’s interview, I also developed a codebook (see
Appendix B). The codebook listed all the codes used for each participant’s transcript; thereby,
documenting exactly how every single passage of text was evaluated. The codebook
continually grew as subsequent participants discussed new topics, requiring additional codes.
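
The codebook described here is, in effect, a running registry of every code applied in every transcript. A small sketch of that bookkeeping (the entries and comment numbers are invented) might map each code to the transcripts and margin-comment numbers in which it was used, so any passage can be relocated:

    # Hypothetical codebook: code -> list of (participant, comment number) locations.
    codebook = {}

    def register_code(code, participant, comment_number):
        codebook.setdefault(code, []).append((participant, comment_number))

    register_code("expectations", "P01", 3)
    register_code("intent to leave - physical injury concern", "P01", 17)
    register_code("expectations", "P02", 2)    # the codebook grows as new participants raise new topics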
Interrater Reliability
To assess my analysis of the data, I utilized the tool of interrater reliability. To establish
trust and confidence in the findings of the research, rigor was necessary to confirm the
consistency of the study methods (Thomas & Magilvy, 2011). I employed the method of
triangulation, whereby, I sought peer debriefing on my interpretations (Denzin, as cited in
Guba, 1981). The enrollment of 10 participants in my study resulted in approximately 200
pages of transcribed interviews. Due to this substantial amount of data, in keeping with the
recommendations in the literature, I evaluated interrater reliability by analyzing a sample of
texts (Barbour, 2001; Campbell et al., 2013; Hallgren, 2012; Kurasaki, 2000; Marques &
McCall, 2005).
Influenced by the methods of Hruschka et al. (2004), I conducted three rounds of
reliability checks. After review of the first two participants’ transcripts, I had generated 126
codes. I then shared the transcripts with two independent researchers. A meeting was held
whereupon the feedback indicated that the coding scheme would have to be modified, as it was
just not practical due to the large number of codes. It was at this point that I discovered the
Saldaña (2009) coding methods and implemented the coding process previously described.
Finding independent researchers who were both knowledgeable and willing to dedicate
themselves to coding lengthy transcripts was extremely difficult. Due to the time constraints
of the two independent researchers, a single independent coder was used for the second round
of reliability checks. The use of one or more independent coders is supported in the literature
by multiple authors (Barbour, 2001; Campbell et al., 2013; Creswell, 2009). After the
independent researcher coded the transcript of the first participant using the improved coding
method, we compared how we interpreted each segment of text and calculated our level of
concordance. Using the methods of Campbell et al. (2013), we calculated an initial discriminant
capability of 72%. The discriminant capability of the coding scheme is a measure of the level
of intercoder reliability. We then used the method of negotiated agreement to reconcile the
remaining differences and recorded how many were reconciled and how many disagreements
prevailed. We also maintained a record of how reconciliation was achieved (i.e., if the
independent coder deferred to me or I to her). We repeated this process for a third round of
reliability checks which yielded similar results.
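
In essence, the discriminant capability reported above is the proportion of coded segments on which two coders agree before reconciliation. A rough Python sketch of that comparison follows (segment IDs and code assignments are hypothetical, and this simplifies the Campbell et al. procedure, which also addresses unitization):

    # Hypothetical intercoder comparison: share of segments coded identically by two coders.
    coder_a = {"seg01": "expectations", "seg02": "intent to leave",
               "seg03": "satisfaction", "seg04": "misaligned experience"}
    coder_b = {"seg01": "expectations", "seg02": "satisfaction",
               "seg03": "satisfaction", "seg04": "misaligned experience"}

    agreements = sum(1 for seg in coder_a if coder_a[seg] == coder_b.get(seg))
    discriminant_capability = agreements / len(coder_a)     # 3/4 = 0.75 in this toy example

    # Remaining disagreements are reconciled by negotiated agreement, recording who deferred to whom.
    disagreements = [seg for seg in coder_a if coder_a[seg] != coder_b.get(seg)]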
Development of Themes
Upon completion of the coding of all 10 transcripts, I proceeded to the next step in my
approach to the analysis of the data. This required that I develop a method to analyze the overall
content of the data. At first, trying to fathom the meaning of 200 pages of words was
overwhelming, and I could not find in my text books nor in any of the published studies how
to manage this task. I constructed a content analysis table to identify which codes were used
for each participant. The content analysis table was essentially a template of the codebook;
however, at this stage, it was used to analyze aggregate data.
When the template was used for individual participants during the coding process, the
comment number generated in the transcript by the tracked changes function in Microsoft Word
was placed in the box with the corresponding code. In this way, by viewing the code book, I
could go to the exact passage in the transcript to verify where that code was used. When the
template was used to analyze overall content, the subject identifying number was placed in the
box with the corresponding code. For example, if subject number one said they went into the
profession thinking it was a stepping stone to other careers, then the number one was placed in
the “yes” box for preconceived notions, for the code “transient career perception.” I then
repeated this process for each subject’s responses.
Utilizing the content analysis table in this way, I was able to cluster items of data that
were related. Since I coded the data with labels that were related to my research questions, the
patterns that emerged led to the identification of categories. I then examined the patterns that
had been placed together for the emergence of overarching themes (Percy, Kostere, & Kostere,
2015). If an additional second level of a pattern emerged, I categorized it as a secondary theme.
For example, the psychological challenges of coping with illness and deaths of patients
emerged as a primary theme, while participants also indicated that this was an expected
challenge of the profession. However, some participants added that coping with the grief of
family members who were suffering the losses of their loved ones was also particularly
challenging. This was considered to be a secondary theme pertaining to the psychological
challenges of the occupation. Since these decisions were subjective, I also used direct
participant quotes to support the rationale for each theme. The aggregate analysis table enabled
me to identify and distinguish the trends of various participant experiences. Handling and
sorting the data in this way greatly facilitated the identification of emerging primary and
secondary themes as illustrated in Table 1.
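
A compact way to picture the aggregate content analysis table and the clustering of codes into categories and themes is sketched below (the participant numbers and supporting codes are illustrative only; the category and theme names echo those reported in the article):

    # Hypothetical participant-by-code matrix mirroring the aggregate content analysis table:
    # each code maps to the set of subject numbers placed in its "yes" box.
    code_by_participant = {
        "transient career perception": {1, 2, 4, 7},
        "intent to leave - physical injury concern": {3, 5},
        "coping with illness and death of patients": {1, 2, 3, 5, 6, 8},
        "grief of family members": {2, 5, 8},
    }

    # Related codes cluster into categories; recurring patterns become primary themes, and
    # second-level patterns nested under them become secondary themes.
    themes = {
        "Challenges of the Profession": {
            "primary_theme": "Psychological Challenges - Illness and Death of Patients",
            "secondary_themes": ["Grief of Family Members"],
            "supporting_codes": ["coping with illness and death of patients", "grief of family members"],
        },
    }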
I then interpreted the data with regard to how those emerging themes addressed my
research questions and whether initial suspicions were supported. I also questioned whether
individual experiences that appeared to be disconfirming cases actually contested my initial
beliefs. For example, while two participants did express apprehension about physical injuries,
they did not indicate that this was a source of job dissatisfaction or that this was associated with
the intention to leave the profession. Upon further examination, it became clear that these
participants were paramedic students who were the youngest individuals in the study and did
not yet have the years of experiences similar to the seasoned paramedics. When the essence of
their sentiments was actually compared to the notions of the veteran paramedics at similar time
points in their careers, it became apparent that these descriptions were not disconfirming cases.
Rather, when comparable career time points were assessed, the notions of the students were
actually consistent with how the veterans had felt when they were first entering the profession.
The exploration of experiences that appeared to be contradictory to the emerging themes served
to further enhance the credibility of the findings of the research (Booth, Carrol, Llott, Low, &
Cooper, 2013).
Table 1
Emerging Categories and Themes

Categories                    | Primary Themes                                                     | Secondary Themes
Vocational Influence          | Altruism                                                           |
Career Longevity Perception   | Transient Profession                                               |
EMT to Paramedic              | Professional Growth; Self-Efficacy & Excitement                    |
Challenges of the Profession  | Physical Challenges                                                | Physical Injury; Increased Physical Challenges with Advancing Age; Alternative Occupational Opportunities
                              | Psychological Challenges – Illness and Death of Patients           | Grief of Family Members
Importance of Relationships   | Negative Relationships with Partners / Colleagues – Dissatisfaction | Acceptance of Negative Relationships
                              | Camaraderie                                                        |
Conclusion
During the course of completing my dissertation, many issues emerged as a result of
conducting a qualitative research study. I observed that few authors of published qualitative
studies provided a “how to” manual describing the details of their analyses. In keeping with
the goal of this commentary, to help new researchers to navigate the path of qualitative data
analysis, herein is my summary of the major obstacles I faced and the most helpful resources
that I found to overcome these challenges.
A number of textbooks were helpful in describing the philosophical groundings of the
interpretive tradition (Creswell, 2009; Trochim & Donnelly, 2008; Ulin, Robinson, & Tolley
2005). As a new qualitative researcher, this information was extremely meaningful to me as it
edified me with regard to core values of the naturalistic paradigm. As I lived through the
dissertation experience for approximately two years, I developed a greater appreciation for
qualitative research principles and the importance of qualitative exploratory research.
For example, I had read about concepts such as spending time in the field to develop a
rapport with participants to establish trust (Dooley, 2007). While I appreciated the concept, as
I continued my journey as a qualitative researcher, I began to actually experience this
phenomenon, in particular with paramedic students. As I continued to listen to participants’
poignant descriptions of their experiences and feelings about topics such as how they dealt with
death, the suffering of family members, and working with the agonizing pain of what were
often career ending spinal injuries, I came to understand their extraordinary commitment to
“my” research. I believe my presence at classes and descriptions of my own career created a
rapport for participants to volunteer their time and to commit to sharing their personal feelings.
As these stakeholders embraced the research topic’s relevance to their own careers, I believe
at that point it became “our” research.
Guidance on specific issues, such as establishing the content validity of my interview
questions was found in an article published by Lawshe (1975). This author provided all the
necessary information, including how to calculate the content validity index of my interview
and how to interpret the result. I found the most comprehensive resource on coding to be the
coding manual by Saldaña (2009). This work of over 200 pages is filled with explanations of
what codes are and their functions, different types of coding methods, and sample texts with
examples of how they were coded.
I found two studies that actually discussed coding schemes and how interrater reliability
was assessed in detail. The fact that researchers rarely discuss coding reliability was confirmed
by Campbell, Quincy, Osserman, and Pederson (2013) in their overview of coding schemes
and interrater reliability. During the conduct of HIV research at the Centers for Disease Control
and Prevention (CDC), Hruschka et al. (2004) discuss how they used different coding processes
for different types of studies. These authors provided details of how they created codebooks,
and how they dealt with factors impacting interrater reliability, such as long interviews with a
greater number of codes. The assessment of the trustworthiness of research findings is
discussed in articles by Guba (1981), Thomas and Magilvy (2011), and Whittemore, Chase,
and Mandle (2001). These authors provide explanations of topics such as credibility,
dependability, and confirmability of research results.
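None of these sources prescribes a single way to compute intercoder agreement, and the article itself contains no code. Purely as an illustration of what such a check involves, the minimal Python sketch below computes Cohen's kappa for two coders who labeled the same excerpts; the coder lists and theme labels are invented for the example.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' theme labels on the same excerpts."""
    assert coder_a and len(coder_a) == len(coder_b), "need paired, non-empty codings"
    n = len(coder_a)
    # Observed agreement: proportion of excerpts the coders labeled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in set(coder_a) | set(coder_b))
    return 1.0 if p_e == 1 else (p_o - p_e) / (1 - p_e)

# Hypothetical example: two coders label ten interview excerpts.
coder_1 = ["altruism", "challenge", "challenge", "growth", "altruism",
           "challenge", "growth", "growth", "altruism", "challenge"]
coder_2 = ["altruism", "challenge", "growth", "growth", "altruism",
           "challenge", "growth", "challenge", "altruism", "challenge"]
print(round(cohens_kappa(coder_1, coder_2), 2))  # 0.7 here, from 80% raw agreement
```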
One of the recurring themes I found in the literature of qualitative research was that
there is not just one best way to do it (Campbell et al., 2013). The most efficient method
will be determined by the type of study, the researcher, and the available resources. If the
resources available to conduct the study are limited, or if there are time restrictions on
completing it, I would advise new researchers to consider curtailing the number of
research questions. In my experience, having to analyze the data with regard to four research
questions added significantly to the amount of time and work required to complete the study.
While the literature does support having more than one research question (Creswell, 2009;
Miles & Huberman, 1994; Simon, 2011), I would advise new researchers to be prepared, as
this does add length to the interview and increases the amount of time required for additional
coding and analysis.
To complete my first attempt at qualitative research, I used aspects of various
documented processes and developed my own methods that were relevant to my research and
my situation. My challenges were shaped by my own context, my findings in the literature, my
time deadlines to complete a dissertation, and my limitations as a first-time qualitative
researcher. I hope that this analysis and the sources I have provided will assist future
researchers and students.
References
Alexander, M., Weiss, S., Braude, D., Ernst, A. A., & Fullerton-Gleason, L. (2009). The
relationship between paramedics’ level of education and degree of commitment.
American Journal of Emergency Medicine, 27(7), 830-837. doi:
10.1016/j.ajem.2008.06.039
Barbour, R. S. (2001). Checklists for improving rigour in qualitative research: A case of the
tail wagging the dog? BMJ, 322(7294), 1115-1117. doi: 10.1136/bmj.322.7294.1115
Belotto, M. J. (2017). Emergency medical service career longevity: Impact of alignment
between preemployment expectations and postemployment perceptions. (Doctoral
dissertation). Retrieved from ScholarWorks,
http://scholarworks.waldenu.edu/cgi/viewcontent.cgi?article=4384&context=dissertati
ons
Booth, A., Carrol, C., Llott, I., Low, L. L., & Cooper, K. (2013). Desperately seeking
dissonance: Identifying the disconfirming case in qualitative evidence synthesis.
Qualitative Health Research, 23(1), 126-141. doi: 10.1177/1049732312466295
Campbell, J. L., Quincy, C., Osserman, J., & Pedersen, O. K. (2013). Coding in-depth
semistructured interviews: Problems of unitization and intercoder reliability and
agreement. Sociological Methods & Research, 42(3), 294-320. doi:
10.1177/0049124113500475
Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods
approaches (3rd ed.). Thousand Oaks, CA: Sage Publications, Inc.
Dooley, K. E. (2007). Viewing agricultural education research through a qualitative lens.
Journal of Agricultural Education, 48(4), 32-42. doi: 10.5032/jae.2007.04032
Freeman, V. A., Slifkin, R. T., & Patterson, P. D. (2009). Recruitment and retention in rural
and urban EMS: Results from a national survey of local EMS directors. Journal of
Public Health Management Practice, 15(3), 246-252. doi:
10.1097/PHH.0b013e3181a117fc
Graneheim, U. H., & Lundman, B. (2004). Qualitative content analysis in nursing research:
Concepts, procedures and measures to achieve trustworthiness. Nurse Education
Today, 24(2), 105-112. doi: 10.1016/j.nedt.2003.10.001
Guba, E. G. (1981). Criteria for assessing the trustworthiness of naturalistic inquiries.
Educational Technology and Research Development, 29(2), 75-91. doi:
10.1007/BF02766777
Hallgren, K. A. (2012). Computing inter-rater reliability for observational data: An overview
and tutorial. Tutorials in Quantitative Methods for Psychology, 8(1), 23-34.
Hruschka, D. J., Schwartz, D., Cobb St. John, D., Picone-Decaro, E., Jenkins, R. A., & Carey,
J. W. (2004). Reliability in coding open-ended data: Lessons learned from HIV
behavioral research. Field Methods, 16(3), 307-331. doi: 10.1177/1525822X04266540
Huot, K. (2013). Transition support for new graduate paramedics (Master’s thesis). Retrieved
from Handle (2013-10-25T17:32:59Z), https://viurrspace.ca/handle/10170/651
Kurasaki, K. S. (2000). Intercoder reliability for validating conclusions drawn from open-ended
interview data. Field Methods, 12(3), 179-194. doi: 10.1177/1525822X0001200301
Lawshe, C. H. (1975). A quantitative approach to content validity. Personnel Psychology,
28(4), 563-575. doi: 10.1111/j.1744-6570.1975.tb01393.x
Lynn, M. R. (1986). Determination and quantification of content validity. Nursing Research,
35(6), 382-385. doi: 10.1097/00006199-198611000-00017
Marques, J. F., & McCall, C. (2005). The application of interrater reliability as a solidification
instrument in a phenomenological study. The Qualitative Report, 10(3), 439-462.
Retrieved from https://nsuworks.nova.edu/tqr/vol10/iss3/3
Miles, M. B., & Huberman, A.M. (1994). Qualitative data analysis (2nd ed.). Thousand Oaks,
CA: Sage Publications, Inc.
Percy, W. H., Kostere, K., & Kostere, S. (2015). Generic qualitative research in psychology.
The Qualitative Report, 20(2), 76-85. Retrieved from
http://www.nova.edu/ssss/QR/QR20/2/percy5
Perkins, B. J., DeTienne, J., Fitzgerald, K., Hill, M., & Harwell, T. S. (2009). Factors associated
with workforce retention among emergency medical technicians in Montana.
Prehospital Emergency Care, 13(4), 456-461. doi: 10.1080/10903120902935330
Saldaña, J. (2009). The coding manual for qualitative researchers. Thousand Oaks, CA: Sage
Publications, Inc. Retrieved from
http://stevescollection.weebly.com/uploads/1/3/8/6/13866629/saldana_2009_the-
coding-manual-for-qualitative-researchers
Simon, M. K. (2011). Dissertation and scholarly research: Recipes for success. Seattle, WA:
Dissertation Success, LLC.
Thomas, E., & Magilvy, J. K. (2011). Qualitative rigor or research validity in qualitative
research. Journal for Specialists in Pediatric Nursing, 16(2), 151-155. doi:
10.1111/j.1744-6155.2011.00283.x
Trochim, W., & Donnelly, J. (2008). The research methods knowledge base (3rd ed.). Mason,
OH: Cengage Learning.
Ulin, P. R., Robinson, E. T., & Tolley, E. E. (2005). Qualitative methods in public health (1st
ed.). San Francisco, CA: Jossey-Bass.
U.S. Census Bureau. (2014, May 6). Fueled by aging baby boomers, nation’s older population
to nearly double, census bureau reports. Retrieved from
https://www.census.gov/newsroom/press-releases/2014/cb14-84.html
U.S. Department of Labor, Bureau of Labor Statistics. (2014, January 8). Occupational outlook
handbook. Retrieved from http://www.bls.gov/ooh/healthcare/emts-and-
paramedics.htm#tab-6
U.S. Department of Transportation, National Highway Traffic Safety Administration. (2008,
May). EMS workforce for the 21st century: A national assessment. Retrieved from
https://nasemso.org/wp-content/uploads/EMS-Workforce-21stC-Tidwell
U.S. Department of Transportation, National Highway Traffic Safety Administration. (2011,
May). The emergency medical services workforce agenda for the future. Retrieved from
http://www.ems.gov/pdf/2011/EMS_Workforce_Agenda_052011
Whittemore, R., Chase, S. K., & Mandle, C. L. (2001). Validity in qualitative research.
Qualitative Health Research, 11(4), 522-537. doi: 10.1177/104973201129119299
Appendix A: Core Interview Questions
Research Questions | Interview Questions
Research Question 1
What are the preconceived notions of EMTs
and paramedics prior to entering the
vocation and their notions of the vocation
after facing the realities of the job?
1. Let’s start off by talking about what you expected from the
EMS vocation.
2. What was the most important influence for entering this
profession?
3. How did your early experiences compare to what you
expected?
4. What about your experiences now?
Is your experience of the job now how you thought it would
be after working in the field for (# of years participant has
been working)?
5. If alignment between expectations and experiences is
different now than how you felt early in your career, what
changed?
Research Question 2
How does alignment or misalignment
between preemployment and
postemployment perceptions of the vocation
affect EMTs and paramedics?
6. Did the alignment or misalignment between your
expectations and experiences affect you?
7. When you first started working in the field, how did you
feel about your career choice?
8. What about now?
How do you feel about your career choice now?
Research Question 3
How does alignment or misalignment
between the notions of the vocation prior to
and following entry into the profession
contribute to job satisfaction or
dissatisfaction?
9. What was the most important thing that made you feel
satisfied about your job?
10. What was the most important thing that made you feel
dissatisfied with your job?
11. How did the relationship between your expectations and
experiences affect how you felt about EMS work?
12. Has how you felt about the job changed over time?
13. If your job satisfaction has changed over time, what was
the most important issue affecting your change in satisfaction?
Research Question 4
How does job satisfaction or dissatisfaction
contribute to the intent to stay in or leave
the profession?
14. Do you plan to work as an EMT/paramedic in the field
until retirement?
15. Are you planning to leave the EMS profession?
16. For those planning to leave the profession:
What is the most important factor affecting your decision to
leave the profession?
17. For those planning to stay in the profession:
What is the most important issue affecting your decision to
stay in the profession?
Appendix B: Code Book
Subject # ___
Column headings (per subject):
- Phenomenon of Interest (Expectations) – Preconceived Notions: formed prior to entering the profession (YES / NO)
- Notions: formed after entering the profession, when no preconceived notion existed (YES / NO)
- Experience / Alignment – Experiences (YES / NO)
- Satisfaction / Dissatisfaction: please add “S” or “D” to the phrase # (YES / NO)
- Intent to Leave: please add “N/E” for no effect
Career Perception – Transient Career
Career Perception – Path to Other Profession (Health, Public Safety)
Career Perception – Fieldwork- Longevity / Retirement in EMS
(this means working on the ambulance)
Career Perception – Longevity / Retirement in EMS
(this means EMS position other than fieldwork on ambulance, such as:
supervisor, dispatcher, educator)
Challenges – Physical Challenges
Challenges – Psychological Challenges
Challenges – Psychological Challenges – Patient Deaths
Challenges – Psychological Challenges – Family Member Grief
Vocational Influence – Altruism
Vocational Influence – family member in health field
Personality – Diversity of Practice – enjoy different types of calls
Personality – Diversity of Partners – enjoy working with different
partners
Personality – enjoy adrenaline rush
Professional Growth
Professional Growth – EMT to Paramedic – increased capability –
autonomy
Relationships – Partners (positive, negative) – camaraderie with
colleagues
Relationships – with Other Health Professionals
Author Note
I currently serve as a member of the Biomedical Research Alliance of New York
(BRANY) IRB, responsible for the review of research protocols. I also serve as a research
compliance auditor, responsible for reviews of investigators’ sites, to ensure the conduct of
research is in compliance with federal regulations and ethical guidelines. Correspondence
regarding this article can be addressed directly to: mbelotto@brany.com.
Copyright 2018: Michael J. Belotto and Nova Southeastern University.
Article Citation
Belotto, M. J. (2018). Data analysis methods for qualitative research: Managing the challenges
of coding, interrater reliability, and thematic analysis. The Qualitative Report, 23(11),
2622-2633. Retrieved from https://nsuworks.nova.edu/tqr/vol23/iss11/2
A Practical Guide to Collaborative Qualitative Data Analysis
K. Andrew R. Richards
University of Alabama
Michael A. Hemphill
University of North Carolina at Greensboro
The purpose of this article is to provide an overview of a structured, rigorous approach to collaborative qualitative analysis while
attending to challenges associated with working in team environments. The method is rooted in qualitative data analysis literature
related to thematic analysis, as well as the constant comparative method. It seeks to capitalize on the benefits of coordinating
qualitative data analysis in groups, while controlling for some of the challenges introduced when working with multiple analysts.
The method includes the following six phases: (a) preliminary organization and planning, (b) open and axial coding,
(c) development of a preliminary codebook, (d) pilot testing the codebook, (e) the final coding process, and (f) reviewing
the codebook and finalizing themes. These phases are supported by strategies to enhance trustworthiness, such as (a) peer
debriefing, (b) researcher and data triangulation, (c) an audit trail and researcher journal, and (d) a search for negative cases.
Keywords: multiple analysts, qualitative methods, researcher training, trustworthiness
While qualitative research has been traditionally discussed
as an individual undertaking (Richards, 1999), research reports
have in general become increasingly multi-authored (Cornish,
Gillespie, & Zittoun, 2014; Hall, Long, Bermback, Jordan, &
Patterson, 2005), and the field of physical education is no exception
(Hemphill, Richards, Templin, & Blankenship, 2012; Rhoades,
Woods, Daum, Ellison, & Trendowski, 2016). Proponents of
collaborative data analysis note benefits related to integrating
the perspectives provided by multiple researchers, which is often
viewed as one way to enhance trustworthiness (Patton, 2015).
Collaborative data analysis also allows for researchers to effec-
tively manage large datasets while drawing upon diverse perspec-
tives and counteracting individual biases (Olson, McAllister,
Grinnell, Walters, & Appunn, 2016). Further, collaborative ap-
proaches have been presented as one way to effectively mentor new
and developing qualitative researchers (Cornish et al., 2014).
Despite the potential benefits associated with collaborative
qualitative data analysis, coordination among analysts can be
challenging and time consuming (Miles & Huberman, 1994).
Issues related to the need to plan, negotiate, and manage the
complexity of integrating multiple interpretations while balancing
diverse goals for involvement in research also represent challenges
that need to be managed when working in group environments
(Hall et al., 2005; Richards, 1999). Concerns have also been voiced
about the extent to which qualitative data analysis involving
multiple analysts is truly integrative and collaborative, rather than
reflective of multiple researchers working in relative isolation to
produce different accounts or understandings of the data (Moran-
Ellis et al., 2006).
Challenges associated with collaboration become com-
pounded when also considering the need for transparency in
qualitative data analysis. Analysts need to develop, implement,
and report robust, systematic, and defensible plans for analyzing
qualitative data so to build trustworthiness in both the process and
findings of research (Sin, 2007). Authors, however, often prioritize
results in research manuscripts, which limits space for discussing
methods. This leads to short descriptions of data analysis proce-
dures in which broad methods without an explanation of how they
were implemented (Moravcsik, 2014), and can limit the availability
of exemplar data analysis methods in the published literature.
This has given rise to calls for increased transparency in the
data collection, analysis, and presentation aspects of qualitative
research (e.g., Kapiszewski & Kirilova, 2014). The American
Political Science Association (APSA, 2012), for example, recently
published formal recommendations for higher transparency stan-
dards in qualitative research that call for detailed descriptions of
data analysis procedures and require that authors support all assertions
with examples from the dataset.
To help address the aforementioned challenges, scholars
across a variety of disciplines have published reports on best
practices related to qualitative data analysis (e.g., Braun &
Clarke, 2006; Cornish et al., 2014; Hall et al., 2005). Many of these
approaches are rooted in theories and epistemologies of qualitative
research that guide practice (e.g., Boyatzis, 1998; Glaser & Strauss,
1967; Lincoln & Guba, 1985; Strauss & Corbin, 2015). Braun and
Clarke’s (2006) highly referenced article provides a step-by-step
approach to completing thematic analysis that helps to demystify
the process with practical examples. In a similar vein, Hall
and colleagues (2005) tackle challenges related to collaborative
data analysis and discuss processes related to (a) building an
analysis team, (b) developing reflexivity and theoretical sensitivity,
(c) addressing analytic procedures, and (d) preparing to publish
findings. Cornish and colleagues (2014) further this discussion by
noting several dimensions of collaboration that are beneficial in
Richards is with the Department of Kinesiology, University of Alabama,
Tuscaloosa, AL. Hemphill is with the Department of Kinesiology, University of
North Carolina at Greensboro, Greensboro, NC. Address author correspondence to
K. Andrew R. Richards at karichards2@ua.edu.
Journal of Teaching in Physical Education, 2018, 37, 225-231
https://doi.org/10.1123/jtpe.2017-0084
© 2018 Human Kinetics, Inc. RESEARCH NOTE
qualitative data analysis. The rigor and quality of the methodology
may benefit, for example, when research teams include insider and
outsider perspectives, multiple disciplines, academics and practi-
tioners, international perspectives, or senior and junior faculty
members.
In this paper, we contribute to the growing literature that
seeks to provide practical approaches to qualitative data analysis by
overviewing a six-step approach to conducting collaborative qual-
itative analysis (CQA), which is grounded in qualitative methods
and data analysis literature (e.g., Glaser & Strauss, 1967; Lincoln &
Guba, 1985; Patton, 2015). While some practical guides in the
literature provide an overview of data analysis procedures, such as
thematic analysis (Braun & Clarke, 2006), and others discuss issues
related to collaboration (Hall et al., 2005), we seek to address both
by overviewing a structured, rigorous approach to CQA while
attending to challenges that stem from working in team environ-
ments. We close by making the case that the CQA process can be
employed when working with students, novice researchers, and
scholars new to qualitative inquiry.
Collaborative Qualitative Analysis:
Building Upon the Literature
In our collaborative work, we began employing a CQA process in
response to a need to balance rigor, transparency, and trustworthi-
ness in data analysis while managing the challenges associated
with analyzing qualitative data in research teams. Our goal was to
integrate the existing literature related to qualitative theory, meth-
ods, and data analysis (Glaser & Strauss, 1967; Patton, 2015;
Strauss & Corbin, 2015) to utilize procedures that allowed us to
develop consistency and agreement in the coding process without
quantifying intercoder reliability (Patton, 2015). Drawing from
recommendations presented in other guides for conducting quali-
tative data analysis (Braun & Clarke, 2006; Hall et al., 2005),
researchers adopting CQA work in teams to collaboratively
develop a codebook (Gibbert, Ruigrok, & Wicki, 2008) through
open and axial coding, and subsequently test that codebook against
previously uncoded data before applying it to the entire dataset.
There are steps embedded to capitalize on perspectives offered by
members of the research team (i.e., researcher triangulation;
Lincoln & Guba, 1985), and the process culminates in a set of
themes and subthemes that form the basis for study results. The
CQA process also embraces the tradition of constant comparison
(Glaser & Strauss, 1967) as newly coded data are compared with
existing coding structures and modifications are made to those
structures through the completion of the coding process. This
provides flexibility to modify generative themes1 in light of
challenging or contradictory data.
The CQA process is grounded in thematic analysis, which is
a process for identifying, analyzing, and reporting patterns in
qualitative data (Boyatzis, 1998). Typically, thematic analysis
culminates with a set of themes that describe the most prominent
patterns in the data. These themes can be identified using inductive
approaches, whereby the researcher seeks patterns in the data
themselves and without any preexisting frame of reference, or
through deductive approaches in which a theoretical or conceptual
framework provides a guiding structure (Braun & Clarke, 2006;
Taylor, Bogdan, & DeVault, 2015). Alternatively, thematic analy-
sis can include a combination of inductive and deductive analysis.
In such an approach, the research topic, questions, and methods
may be informed by a particular theory, and that theory may also
guide the initial analysis of data. Researchers are then intentional
in seeking new ideas that challenge or extend the theoretical
perspectives adopted, which makes the process simultaneously
inductive (Patton, 2015). The particular approach adopted by a
research team will relate to the goals of the project, and particularly
the extent to which the research questions and methods are
informed by previous research and theory.
Trustworthiness is at the center of CQA, and methodological
decisions are made during the research design phase to address
Guba’s (1981) four criteria of credibility, confirmability, depend-
ability, and transferability. In particular, we find that triangulation,
peer debriefing, an audit trail, negative case analysis, and thick
description fold into CQA quite naturally. In addition to the afore-
mentioned researcher triangulation, data triangulation is often a
central feature of design decisions as researchers seek to draw from
multiple data sources to enhance dependability (Brewer & Hunter,
1989), and an outside peer debriefer (Shenton, 2004) can be invited
to comment upon ongoing analysis so to add credibility. An audit
trail can be maintained in a collaborative researcher journal to
enhance confirmability (Miles & Huberman, 1994), and a negative
case analysis can highlight data that contradict the main findings
so to enhance credibility (Lincoln & Guba, 1985). Transferability
is addressed by providing a detailed account of the study context
and through rich description in the presentation of results
(Shenton, 2004).
Overview of the Collaborative Constant
Comparative Qualitative Analysis Process
The CQA process includes a series of six progressive steps that
begin following the collection and transcription of qualitative data,
and culminate with the development of themes and subthemes
that summarize the data (see Figure 1). These steps include
(a) preliminary organization and planning, (b) open and axial
coding, (c) the development of a preliminary codebook, (d) pilot
testing the codebook, (e) the final coding process, and (f) review of
the codebook and finalizing the themes. While the process can be
employed with teams of various sizes, we have found teams of two
to four analysts to be most effective because they capitalize on the
integration of multiple perspectives, while also limiting variability
due to inconsistencies in coding (Olson et al., 2016). In larger
teams, some members may serve as peer debriefers.
When considering the initiation of teamwork, we concur with
the recommendations of Hall and colleagues (2005) related to the
development of rapport among team members prior to beginning
analysis. A lack of comfort may lead team members to hold back
critique and dissenting viewpoints that could be important to data
analysis. This is particularly true of faculty members working with
graduate students where the implied power relationship can
discourage students from being completely forthright. As a result,
we recommend that groups engage in initial conversations un-
related to the data analysis so to get to know one another and their
relational preferences. This could include a discussion of com-
munication styles, previous qualitative research experience, and
epistemological views related to qualitative inquiry (Hall et al.,
2005). The team leader may also provide an overview of the CQA
process, particularly when working with team members who have
not used it previously. As part of this process it should be made
clear that all perspectives and voices are valued, and that all team
members have an important contribution to make in the data
analysis process.
Phase One: Preliminary Organization and Planning
Following the collection and transcription of data, the CQA process
begins with an initial team meeting to discuss project logistics and
create an overarching plan for analysis. This includes writing a
brief description of the project, listing all qualitative data sources to
be included, acknowledging any theoretical or conceptual frame-
works utilized, and considering research questions to be addressed.
Members of the data analysis team should also have an initial
discussion of and negotiate through topics, such as the target
journal, anticipated authorship, and a flexible week-by-week plan for analysis. The weekly plan includes a reference to the data analysis phase, coding assignments for each team member, and space for additional notes and clarification (see Figure 2). Decisions related to the target journal and authorship, as well as the weekly plan for analysis, will likely evolve over time, but we find it helpful to begin such conversations early to ensure that all team members are on the same page.
Figure 1 — Overview of the six steps involved in collaborative qualitative analysis. Strategies for enhancing trustworthiness underpin the analysis process.
Phase Two: Open and Axial Coding
To begin the data analysis process we use open coding to identify
discrete concepts and patterns in the data, and axial coding to make
connections between those patterns (Corbin & Strauss, 1990).
While open and axial coding are distinct analytical procedures,
we embrace Strauss and Corbin’s (2015) recommendation that
they can occur simultaneously as researchers identify patterns and
then begin to note how those patterns fit together. Specifically,
each member of the research team reads two to three different data
transcripts (e.g., field notes, interviews, reflection journal entries)
and codes them into generative categories using their preferred
method (e.g., qualitative data analysis software, manual coding).
The goal is to identify patterns common across transcripts, or to
note deviant cases that appear.
Depending on the approach to thematic analysis adopted, a
theoretical framework and research questions could frame this
process. We find it helpful, however, to retain at least some
inductive elements so to remain open to generative themes that
may not fit with theory. Following each round of coding, team
members write memos in a researcher journal, preferably through a
Project Overview and Data Analysis Timeline
Project Overview: To understand how physical education teachers navigate the sociopolitical realities of the contexts in which they work and derive meaning through interactions with administrators, colleagues, parents, and students. This work is a qualitative follow-up to a large-scale survey that was completed by over 400 physical education teachers from the US Midwest.
1. Theoretical Framework: Occupational socialization theory
2. Target Journal: Physical education pedagogy specific journal, such as the Journal of Teaching in Physical Education or Research Quarterly for Exercise and Sport
3. Anticipated Authorship: Researcher 1, Researcher 2, Researcher 3
4. Data Sources: 30 individual interviews, 5 focus group interviews, field notes from observations of teachers
5. Research Questions:
   a. How do physical education teachers perceive that they matter given the marginalized nature of their subject?
   b. How do interactions with administrators, colleagues, parents, and students influence physical educators’ perceptions of mattering and marginalization?
   c. How do physical education teachers’ perceptions of mattering and marginalization influence feelings of role stress and burnout?

Weekly Plan for Data Analysis:
Week: July 11, 2016 | Coding Phase: Initial Meeting | Coding Assignment: None | Notes: Discuss the plan for analysis and review the data analysis timeline. Make changes and adjustments to the plan as necessary. Discuss the various phases of analysis and prepare to begin open coding.
Week: August 1, 2016 | Coding Phase: Open Coding 1 | Coding Assignment: Researcher 1: 1001, 1002; Researcher 2: 1003, 1004; Researcher 3: 1005, 1006 | Notes: Open coding of each transcript into categories. Following coding, identify 3-4 generative themes and write a 1 page memo.
Week: August 8, 2016 | Coding Phase: Open Coding 2 | Coding Assignment: Researcher 1: 1022, 1023; Researcher 2: 1024, 1025; Researcher 3: 1007, 1027 | Notes: Open coding of each transcript into categories. Following coding, identify 3-4 generative themes and write a 1 page memo.

Figure 2 — Example of a project overview; code numbers (e.g., 1001) refer to interview transcripts.
shared online platform (e.g., Google Docs), in which they overview
the coding and describe two or three generative themes supported
by data excerpts. During research meetings, team members over-
view their coding in reference to the memos they wrote, and the
team discusses the coding process more generally. Phase two
continues for three to four iterations, or until the research team
feels they have seen and agree upon a variety of generative themes
related to the research questions. The exact number of transcripts
coded depends on the size of the dataset and the level of initial
agreement established amongst the researchers. The team can move
on when all coders feel comfortable with advancing to the devel-
opment of a codebook. In our experience, this usually involves
coding approximately 30% of all transcripts, but could be less when
working with large datasets.
Phase Three: Development of a Preliminary
Codebook
After the completion of phase two, one team member reviews the
memos and develops a preliminary codebook (Richards,
Gaudreault, Starck, & Woods, in press). An example codebook
is included in Figure 3, and typically includes first- and second-order
themes, definitions for all themes, and space to code quotations from
the transcripts. Theme definitions provide the criteria against which
quotations are judged for inclusion in the codebook, and thus should
be clear and specific. We code by copy/pasting excerpts from the
transcript files into the codebook and flagging each with the
participant’s code number, the line numbers in the transcript file,
and a reference to the data source (e.g., Interview 1001, 102–105).
This allows for reference back to the data source to gain additional
context for quotations as needed. We always include a “General
(Uncoded)” category where researchers can place quotations that are
relevant, but do not fit anywhere in the existing coding structure.
These quotations can then be discussed during team meetings. Once
compiled, the draft codebook is circulated to the research team for
review and discussed during a subsequent team meeting. Changes
are made based on the team discussion, and a preliminary codebook
is finalized. At this stage we enlist the assistance of a researcher who
is familiar with the project, but not involved in the data analysis, to
serve as a peer debriefer (Lincoln & Guba, 1985). This individual
reviews and comments on the initial codebook, and appropriate
adjustments are made before proceeding.
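To make the codebook structure concrete, the sketch below shows one hypothetical way a team could represent the elements described above (themes, subthemes, theme definitions, coded excerpts tagged with participant number, transcript line numbers, and data source, plus a "General (Uncoded)" category) if it chose to manage the codebook programmatically. The authors themselves describe copy/pasting excerpts into a shared document, so this is an assumption-laden illustration, not their method.

```python
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class Excerpt:
    participant: str   # e.g., "1018"
    lines: str         # line numbers in the transcript file, e.g., "348-350"
    source: str        # e.g., "individual interview", "focus group"
    text: str          # the quotation itself

@dataclass
class Subtheme:
    name: str
    definition: str                                       # criteria for coding a quotation here
    excerpts: list[Excerpt] = field(default_factory=list)

@dataclass
class Codebook:
    themes: dict[str, list[Subtheme]] = field(default_factory=dict)
    uncoded: list[Excerpt] = field(default_factory=list)  # "General (Uncoded)" category

# Hypothetical entry mirroring the structure of the example codebook (Figure 3)
codebook = Codebook(themes={
    "Subject Marginalization": [
        Subtheme(name="Lack of support",
                 definition="Teacher does not feel support for ideas or initiatives.",
                 excerpts=[Excerpt("1018", "348-350", "individual interview",
                                   "I think the colleagues, it wouldn't matter either way ...")]),
    ]
})
```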
Phase Four: Pilot Testing the Codebook
After the initial codebook has been developed, it is tested against
previously uncoded data. During this step, the researchers all code
the same two to three transcripts, and make notes in the researcher
journal related to interesting trends or problems with the codebook.
Weekly research team meetings provide a platform for researchers
to overview and compare their coding and discrepancies are
discussed until consensus is reached. Entries in the researcher
journal are also discussed. These discussions lead to the develop-
ment of coding conventions, which function as rules that guide
subsequent coding decisions. Conventions may be created for
double coding excerpts into two generative themes in rare instances
when both capture the content of a single quotation, and that
quotation cannot be divided in a meaningful way.
Perceived Mattering Codebook
Themes | Subthemes | Definitions | Examples from Transcripts

Theme: Subject Marginalization
  Subtheme: Lack of communication
    Definition: Teacher believes physical education does not matter due to lack of communication about issues that affect the physical education environment.
    Example: “My stressful day, um probably when things pop up that are not…A lot of my stresses get raised from being an activities director. If the school calls me and says now they have to—they have kids who are not coming, they change times, or I have a different schedule. My stuff is very organized and if it’s not where I think it’s supposed to be and I need it, that’s very stressful for me” (1019, 210–217, individual interview)
  Subtheme: Lack of time and resources
    Definition: Teacher believes physical education does not matter due to lack of teaching contact time and resources such as materials, equipment for PE, or teaching facilities.
    Examples: “It’s kind of rough because I don’t have my own classroom. I don’t have my own computer up there. I don’t have a room that I can make into a welcoming environment so that’s kind of rough” (1018, 110–112, individual interview); “Right now that class is more just like babysitting. It’s just a study hall, kind of boring. I don’t have a classroom I’m in the gym balcony where the bleachers are at. I don’t have space the kids complain” (1018, 120–122, focus group)
  Subtheme: Lack of support
    Definition: Teacher believes physical education does not matter due to situations in which the physical educator does not feel support for ideas or initiatives.
    Examples: “I think the colleagues, it wouldn’t matter either way outside of the P.E. teachers, and I think the administration wouldn’t care either way.” (1018, 348–350, individual interview); “At the elementary level that would be a big issue. As they get a little older, you know middle school, high school it’s not as much probably fun. They don’t see it in their eyes as much fun. The students themselves probably wouldn’t care, there’d be a handful.” (1019, 307–309, focus group)

Figure 3 — Example codebook including themes, subthemes, definitions of subthemes, and quotations from the dataset.
Conventions can also specify priority in the use of generative
themes. In Figure 3, for example, there are generative themes for
both “lack of support” and “lack of communication” related to
subject marginalization. Lack of communication could be consid-
ered a way in which support is limited, but because there is a
specific category for lack of communication, it would receive
priority when coding. Modifications are made to the codebook
as needed during these meetings, and an updated codebook is
produced to guide subsequent analysis. The pilot testing continues
for three to four rounds of coding, or until the research team feels
confident in the codebook. Once the team feels ready to move on,
they have a final discussion of the codebook in light of the pilot
testing and make adjustments. The peer debriefer (Lincoln & Guba,
1985) then reviews the evolving codebook and recommends
changes prior to the final coding process.
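The priority convention described above is, in effect, a precedence rule. As an illustration only (the theme names repeat Figure 3, and the function is hypothetical rather than part of the authors' procedure), it could be expressed as:

```python
# Hypothetical convention: when an excerpt fits both a specific and a broader
# generative theme, the more specific theme takes priority.
PRIORITY = ["Lack of communication", "Lack of time and resources", "Lack of support"]

def apply_priority(candidate_themes):
    """Return the highest-priority theme among those a coder proposed for an excerpt."""
    for theme in PRIORITY:
        if theme in candidate_themes:
            return theme
    return "General (Uncoded)"

print(apply_priority({"Lack of support", "Lack of communication"}))  # Lack of communication
```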
Phase Five: Final Coding Process
In the final phase of coding the adjusted codebook is applied to
all project data, including that which had been previously coded
during the formative phases of codebook development. While the
researcher triangulation involved when using multiple coders can
increase “validity2” in qualitative research, some have argued that
it has the potential to reduce “reliability” because of inconsisten-
cies in coding across analysts (Olson et al., 2016). As a result, some
qualitative researchers have introduced measures of inter-coder
reliability in an attempt to quantify agreement between coders
(Neuendorf, 2017). While acknowledging these perspectives, we
struggle with efforts to apply the quantitative principles of reliabil-
ity and validity to qualitative data analysis (Patton, 2015). We
prefer to approach the issue of coder agreement, and the broader
notions of trustworthiness and credibility, by establishing a clear
protocol and codebook (Gibbert et al., 2008) through previous steps
of CQA, and then dialogue through and reach consensus
on coded data. This is done either through consensus coding or
split coding. Regardless of the strategy chosen, coding conventions
developed during previous phases are applied to the coding process.
Analysts continue to make notes in the researcher journal related to
problems with the generative themes, or interesting patterns in the
data, and issues are discussed during weekly research meetings.
We continue to apply the constant comparative method (Strauss &
Corbin, 2015) at this stage as modifications are made to the code-
book to reflect ongoing insights developed in the coding process.
Consensus coding is the more rigorous, but more time-
consuming form of final coding. It is likely the more effective
approach when working in larger groups where coding consistency
concerns are more abundant (Olson et al., 2016). During each
iteration of coding, team members code the same two to three
transcripts into the codebook. Then, during research team meet-
ings, each coded statement is compared across members of the
research team. Disagreements are discussed until the group reaches
consensus. Split coding relies more heavily on the establishment of
clarity through the preliminary coding phases and the coding
conventions that have been developed (Gibbert et al., 2008). While
less rigorous than consensus coding, split coding is also less time
consuming and manageable within smaller teams. During each
iteration of coding, team members code two to three different
transcripts. As a result, only one member of the team will code each
transcript. Then, during research meetings, questions or concerns
related to particular excerpts are discussed. Split coding culminates
with each team member reviewing all coded excerpts in the
codebook, and disagreements are discussed to consensus.
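Whichever strategy is chosen, the mechanical step of locating disagreements can be separated from the interpretive step of discussing them to consensus. The short Python sketch below illustrates that first step under assumed data structures; the excerpt identifiers and theme labels are invented, and the authors describe this comparison as a team-meeting discussion rather than a scripted procedure.

```python
def flag_disagreements(coding_a, coding_b):
    """Return excerpt IDs whose theme assignments differ between two coders.

    coding_a / coding_b map an excerpt ID (e.g., "1001:102-105") to the theme
    it was coded into; flagged excerpts go to the weekly meeting for consensus.
    """
    shared = coding_a.keys() & coding_b.keys()
    return sorted(e for e in shared if coding_a[e] != coding_b[e])

# Hypothetical consensus-coding round in which both coders coded transcript 1001.
coder_1 = {"1001:102-105": "Lack of support", "1001:210-217": "Lack of communication"}
coder_2 = {"1001:102-105": "Lack of support", "1001:210-217": "Lack of time and resources"}
print(flag_disagreements(coder_1, coder_2))  # ['1001:210-217']
```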
Phase Six: Review the Codebook and Finalize the
Themes
After all of the transcripts have been coded using consensus
coding or split coding, the research team meets one final time to
review the codebook. During the meeting, the codebook is
developed into a thematic structure comprised of themes and
associated subthemes that describe participants’ perspectives.
The thematic structure is reviewed and approved by all members
of the research team, and the final agreed-upon structure forms the
basis for the results that will be presented as part of the manuscript.
Importantly, through the earlier stages of CQA, all members of
the research team have had a hand in shaping, and have agreed upon, the
themes that are presented. This process, therefore, capitalizes on
the enhanced trustworthiness provided by multiple analysts,
while minimizing issues related to coder variability, without
attempting to quantify the qualitative data analysis process
(Patton, 2015).
Conclusions and Final Thoughts
The purpose of this article is to provide an overview of a structured,
rigorous approach to CQA while attending to challenges that stem
from working in team environments. While this article has focused
primarily on the data analysis process, effective analysis begins at
the design phase when researchers pose research questions, decide
on methods, and identify participants (Patton, 2015). After data
have been collected, the six-phase CQA process is adopted to make
meaning through the formation of generative themes. This process
integrates existing approaches to qualitative research (Glaser &
Strauss, 1967; Miles & Huberman, 1994; Patton, 2015), and
contributes to the emerging literature that seeks to provide practical
examples of qualitative data analysis (e.g., Braun & Clarke, 2006;
Cornish et al., 2014; Hall et al., 2005). It provides a structured and
rigorous approach that enhances transparency through the data
analysis process (e.g., Kapiszewski & Kirilova, 2014; Moravcsik,
2014), while capitalizing on the development of a codebook and
multiple researchers’ perspectives (Gibbert et al., 2008).
In considering qualitative data analysis, Woods and Graber
(2016) explain, “ultimately, it is the responsibility of the investi-
gator to select those procedures that best meet the philosophic
orientation of the study, the purpose of the investigation, and the
methods that were used to collect the data” (p. 30). Regardless of
the particular approach taken, all qualitative researchers are chal-
lenged to ensure methodological rigor and transparency, and CQA
provides one way to demonstrate inclusive collaboration among
researchers. The coding, memoing, and pilot testing of the code-
book provide multiple layers where all researchers have opportu-
nities to share their perspectives. The audit trail maintained through
ongoing discussions and the researcher journal also enhances
transparency and allows for the process to be documented and
adapted for use across multiple research projects.
We find that CQA can aid in the management of large,
qualitative datasets by providing a structured and phasic approach
to analysis. This can be particularly helpful for graduate students,
early career researchers, and diverse research teams who may be
struggling to identify rigorous data analysis procedures that meet
the needs of all researchers (Cornish et al., 2014). The step-by-step
nature of the approach also has applicability for those coordinating
groups of researchers, or analysts who want to adopt a rigorous,
systematic, and defensible process that can be implemented with
fidelity on a consistent basis. The process can further be adapted for
those who prefer to analyze data manually, or through qualitative
data analysis software.
In order to enhance transparency, researchers should be spe-
cific about the methods used when analyzing data (Moravcsik,
2014). This can be done, in part, by identifying and implementing
with fidelity a practical guide to analysis, such as the one advocated
in this paper, or other examples in the literature (e.g., Braun &
Clarke, 2006; Cornish et al., 2014; Hall et al., 2005). The process
can then be specifically identified and cited in the methods, along
with an explanation of any adaptations or deviations from its original
articulation. To further transparency, researchers may also com-
municate why they use collaboration in qualitative research, and
how they believe it enhances study results. In future qualitative
methodology discussions, researchers should continue to consider
more nuanced understandings of how collaboration enhances
qualitative research. These conversations have the potential to
capitalize on the benefits associated with multiple analysts, and
thus could aid the design of future research.
Notes
1. While many researchers use terms such as “emergent” or “emerging”
when discussing themes and the processes through which they are devel-
oped (Taylor & Ussher, 2001), this language implies that the researcher
plays a generally passive role in the creation of themes, or “if we just look
hard enough they will ‘emerge’ like Venus on the half shell” (Ely, Vinz,
Downing, & Anzul, 1997, p. 205). We, therefore, refer to themes as being
generative so to emphasize the active role researchers play in generating
them through qualitative data analysis.
2. While we agree with the perspective of Patton (2015), who is reluctant
to apply the quantitatively oriented terms of “reliability” and “validity” to
discussions of qualitative data analysis, we use them here because they
are adopted by Olson and colleagues (2016). Our intent is to differentiate
our desire to enhance trustworthiness and credibility from inter-coder
agreement, which is more quantitatively driven.
References
American Political Science Association. (2012). Guide to professional
ethics in political science (2nd ed.). Washington, DC: Author.
Boyatzis, R.E. (1998). Transforming qualitative information: Thematic
analysis and code development. Thousand Oaks, CA: Sage.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology.
Qualitative Research in Psychology, 3, 77–101. doi:10.1191/
1478088706qp063oa
Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of
styles. Thousand Oaks, CA: Sage.
Corbin, J., & Strauss, A. (1990). Grounded theory research: Procedures,
canons, and evaluative criteria. Qualitative Sociology, 13, 3–21.
doi:10.1007/BF00988593
Cornish, F., Gillespie, A., & Zittoun, T. (2014). Collaborative analysis of
qualitative data. In U. Flick (Ed.), The Sage handbook of qualitative
data analysis (pp. 79–93). Thousand Oaks, CA: Sage.
Ely, M., Vinz, R., Downing, M., & Anzul, M. (1997). On writing
qualitative research: Living by words. London, UK: Routledge.
Gibbert, M., Ruigrok, W., & Wicki, B. (2008). What passes as a rigorous
case study? Strategic Management Journal, 29, 1465–1474. doi:
10.1002/smj.722
Glaser, B.G., & Strauss, A. (1967). The discovery of grounded theory:
Strategies for qualitative research. Chicago, IL: Aldine.
Guba, E. (1981). Criteria for assessing the trustworthiness of naturalistic
inquiry. Educational Technology Research and Development, 29(2),
75–91.
Hall, W.A., Long, B., Bermback, N., Jordan, S., & Patterson, K. (2005).
Qualitative teamwork issues and strategies: Coordination through
mutual adjustment. Qualitative Health Research, 15, 394–410.
PubMed doi:10.1177/1049732304272015
Hemphill, M.A., Richards, K.A.R., Templin, T.J., & Blankenship, B.T.
(2012). A content analysis of qualitative research in the Journal of
Teaching in Physical Education from 1998 to 2008. Journal of Tea-
ching in Physical Education, 31, 279–287. doi:10.1123/jtpe.31.3.279
Kapiszewski, D., & Kirilova, D. (2014). Transparency in qualitative
security studies research: Standards, beliefs, and challenges. Security
Studies, 23, 699–707. doi:10.1080/09636412.2014.970408
Lincoln, Y.S., & Guba, E. (1985). Naturalistic inquiry. New York, NY: Sage.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An
expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Moran-Ellis, J., Alexander, V.D., Cronin, A., Dickinson, M., Fielding, J.,
Sleney, J., & Thomas, H. (2006). Triangulation and integration:
Processes, claims and implications. Qualitative Research, 6, 45–59.
doi:10.1177/1468794106058870
Moravcsik, A. (2014). Transparency: The revolution in qualitative
research. Political Science and Politics, 47, 48–53. doi:10.1017/
S1049096513001789
Neuendorf, K. (2017). The content analysis guidebook (2nd ed.).
Thousand Oaks, CA: Sage.
Olson, J.D., McAllister, C., Grinnell, L.D., Walters, K.G., & Appunn, F.
(2016). Applying constant comparative method with multiple investi-
gators and inter-coder reliability. The Qualitative Report, 21(1), 26–42.
Patton, M.Q. (2015). Qualitative research and evaluation methods (4th
ed.). Thousand Oaks, CA: Sage.
Rhoades, J.L., Woods, A.M., Daum, D.N., Ellison, D., & Trendowski,
T.N. (2016). JTPE: A 30-year retrospective of published research.
Journal of Teaching in Physical Education, 35, 4–15. doi:10.1123/
jtpe.2014-0112
Richards, K.A.R., Gaudreault, K.L., Starck, J.R., & Woods, A.M. (in
press). Physical education teachers’ perceptions of perceived matter-
ing and marginalization. Physical Education and Sport Pedagogy.
Richards, L. (1999). Qualitative teamwork: Making it work. Qualitative
Health Research, 9, 7–10. doi:10.1177/104973299129121659
Shenton, A.K. (2004). Strategies for ensuring trustworthiness in qualitative
research projects. Education for Information, 22, 63–75. doi:10.3233/
EFI-2004-22201
Sin, C.H. (2007). Using software to open up the “black box” of qualitative
data analysis in evaluations. Evaluation, 13, 110–120. doi:10.1177/
1356389007073684
Strauss, A., & Corbin, J. (2015). Basics of qualitative research: Techni-
ques and procedures for developing grounded theory (4th ed.).
New York, NY: Sage.
Taylor, G.W., & Ussher, J.M. (2001). Making sense of S&M: A discourse
analytic account. Sexualities, 4, 293–314. doi:10.1177/136346001
004003002
Taylor, S., Bogdan, R., & DeVault, M.L. (2015). Introduction to qualita-
tive research methods: A guidebook and resource (4th ed.).
New York, NY: Wiley.
Woods, A.M., & Graber, K. (2016). Interpretive and critical research:
A view through a qualitative lens. In C.D. Ennis (Ed.), Routledge
handbook of physical education pedagogies (pp. 21–33). New York,
NY: Routledge.
Journal of Clinical Epidemiology 129 (2021) 74–85
REVIEW
Defining Rapid Reviews: a systematic scoping review and thematic
analysis of definitions and defining characteristics of rapid reviews
Candyce Hamel a,b,*, Alan Michaud a, Micere Thuku a, Becky Skidmore a, Adrienne Stevens a, Barbara Nussbaumer-Streit c, Chantelle Garritty a,b
a Ottawa Hospital Research Institute, Knowledge Synthesis Group, Ottawa, ON K1H 8L6, Canada
b University of Split, School of Medicine, Split, Croatia 21000
c Cochrane Austria, Department for Evidence-based Medicine and Evaluation, Danube University Krems, Krems, Austria
Accepted 29 September 2020; Published online 8 October 2020
Abstract
Background and Objective: Rapid reviews were first mentioned in the literature in 1997, when Best et al. described the rapid health
technology assessment program in the south and west regions of England but did not provide a formal definition. More recently, the only
consensus around a rapid review definition is that a formal definition does not exist. The primary aim of this work is to create a repository of
existing definitions and to identify key themes, which may help the knowledge synthesis community in defining rapid review products.
Methods: A systematic scoping review was performed to identify definitions used in journal-published rapid reviews written in English
between 2017 and January 2019. We searched Medline, Embase Classic + Embase, PsycINFO, ERIC, Cochrane Library, CINAHL, and Web of Science on December 21, 2018. Two reviewers performed study selection and data extraction using a priori–defined methods published in a protocol. Definitions from rapid review methods articles (published from 1997 onward) identified in another scoping review were added to the results, and all definitions were thematically analyzed using NVivo. A quantitative analysis was also performed around studies cited.
Results: Definitions from 216 rapid reviews and 90 rapid review methods articles were included in the thematic analysis. Eight key
themes were identified: accelerated/rapid process or approach, variation in methods shortcuts, focus/depth/breadth of scope, compare
and contrast to a full traditional systematic review, stakeholder rationale, resource efficiency rationale, systematic approach, bias/limita-
tions. Secondary referencing was a common occurrence.
Conclusion: Thematic analysis performed in this systematic scoping review has allowed for the creation of a suggested definition for
rapid reviews that can be used to inform the systematic review community. � 2020 Elsevier Inc. All rights reserved.
Keywords:: Scoping review; Rapid reviews; Definition; Thematic analysis
1. Introduction
A rapid review (RR) was originally mentioned in the
literature in 1997, when Best et al. described the rapid
health technology assessment program in the south and
west regions of England [1]. Although they did not provide
a definition of an RR, they described a service which pro-
duces reports within two person months. The key features
of the service were to produce reports that were accurate,
timely, and accessible to decision makers. More recently,
the only consensus around an RR definition is that a formal
definition does not exist [2–4]. Several definitions have
been used in publications about RR methods, RR programs,
and RRs themselves. In 2016, Kelly et al. performed a
modified Delphi consensus approach and came up with a
set of statements defining the characteristics of an RR
[4], but did not provide a formal definition or a systematic
evaluation of existing definitions.
What is new?

Key findings
- A repository of definitions from 216 rapid reviews and 90 rapid review methods articles was created (158 rapid reviews and 73 rapid review methods articles provided a definition).
- Among the rapid reviews, 59 unique references were cited 275 times. The top four cited authors were referenced 135 times. Among rapid review methods articles, 50 unique references were cited 179 times.
- A thematic analysis identified eight key themes in defining rapid reviews.
- Secondary referencing was common among cited articles.

What this adds to what was known?
- There is currently no consensus on what defines a rapid review.
- The four most commonly reported themes (used in ~50% of definitions) were used to create a preliminary definition of a rapid review. Suggestions are included on how users might tailor this definition to best meet their individual remit and mandates for producing rapid reviews.

What is the implication and what should change now?
- The preliminary definition, with caveats, presented can help the systematic review community define their review with consistency, regardless of the label used to describe it.

The popularity of RRs has been increasing over the past 20 years, with various organizations developing RRs, including the World Health Organization (WHO) [5], the Samueli Institute's Rapid Evidence Assessment of the Literature (REAL©) program [6], and the Canadian Agency for Drugs and Technologies in Health Rapid Response Service [7]. The number of RRs published in the last 5 years has steadily increased. In 2013, 15 journal-published RRs were identified, growing to 52 by 2016, and 108 in 2018. Although these numbers are small, most RRs are not published in journals. For example, in 2016, 52 published RRs and over 250 unpublished RRs were identified from various health care organizations. In 2017, the Knowledge Synthesis Group at the Ottawa Hospital Research Institute identified 148 organizations globally who produced RRs [RR workshop presentation, November 27, 2019, Ottawa, Canada, with data derived from internal projects].
Some of the problems with lacking a common definition for RRs are that it makes it difficult
(i) for researchers (e.g., building search strategies that accurately identify RRs) and readers/users of results to identify RRs correctly. This is important as the line may be blurred (both in the conduct and the resulting conclusions) between systematic reviews (SRs) that do not meet a high-quality methodological conduct (e.g., low or critical risk using AMSTAR 2) and RRs that use transparent, measured abbreviated methods;
(ii) to create and set methodological standards and apply consistent constructs (e.g., Preferred Reporting Items for Systematic Reviews and Meta-Analyses [PRISMA] for RRs, AMSTAR for RRs); and
(iii) as it results in a heterogeneous set of products under the same name or, conversely, a homogeneous set of products under different names.
The term ‘rapid’ points toward the speed at which the review is performed and not the abbreviation or omission of steps taken to conduct the review. For this reason, researchers have suggested other terms be used, for example, restricted reviews [8,9]. To date, ‘rapid review’ is the term that has been colloquially adopted by the research community and endorsed by various organizations, including Cochrane and the WHO. However, other organizations have chosen other terms, such as rapid evidence assessment by the UK government.
2. Objective
The objective of this systematic scoping review was to
identify published RR literature to answer the question:
How are RRs defined in the literature? This work will pro-
vide a summary of existing definitions identified in the
literature on RRs and examine existing definitions to iden-
tify common themes across the body of literature. Creating
a repository of existing definitions and developing a prelim-
inary definition, while allowing for caveats and flexibility
depending on the organizational preferences or mandate,
is an important step in helping the knowledge synthesis
community conduct and identify RRs. In addition, as Cochrane considers RRs an important piece of its content strategy, these results will inform discussions within Cochrane on the utility of RRs as a product.
3. Methods
This systematic scoping review was guided by estab-
lished scoping review methodology [10,11] and has been
prepared in accordance with the PRISMA extension for
Scoping Reviews (PRISMA-ScR) [12]. A protocol for this
work was registered on the Open Science Framework (OSF: https://osf.io/y5f2m/). Methods are briefly described
in Table 1, with additional details and deviations from the
protocol in Appendix A.
4. Results
The search strategies to identify RRs resulted in 2,657 unique records, of which 422 were evaluated at full text, with 216 RRs included (Figure 1). Several records were excluded at title/abstract as they did not explicitly state the term rapid or a derivative. For feasibility, only those with rapid, expedited, or abbreviated were considered for inclusion, while excluding those that described the review as focused (n = 347), targeted (n = 54), or pragmatic (n = 28). In addition, several other terms which may be considered ‘rapid’ derivatives were identified; however, because of the number of these records and our focus on those who self-declared as ‘rapid’, they were also excluded (n = 127) (Appendix C).
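The screening arithmetic implied by these counts and the PRISMA flow diagram (Fig. 1) can be traced with a minimal sketch; the variable names below are ours, purely for illustration, and the counts are those reported in this review.

```python
# Minimal arithmetic check of the reported study flow (counts from the text and Fig. 1).
records_screened = 2657  # unique records after duplicate removal

# Exclusions at title/abstract: no rapid term, focused, targeted, pragmatic, other derivatives
excluded_title_abstract = 1679 + 347 + 54 + 28 + 127

full_text_assessed = records_screened - excluded_title_abstract
full_text_excluded = 2 + 11 + 6 + 4 + 183  # reasons listed in Fig. 1
included_rrs = full_text_assessed - full_text_excluded

assert full_text_assessed == 422 and included_rrs == 216
print(full_text_assessed, included_rrs)  # 422 216
```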
Among the 216 RRs, 101 were published in 2017, 106 were published in 2018, and nine were published in January of 2019 (Table 2). Most of the RRs (82.5%) were from corresponding authors from the United Kingdom (n = 82), Australia (n = 41), the United States of America (n = 31), and Canada (n = 24). Almost two-thirds (63.0%) used the term RR, with others using the terms rapid evidence assessment (10.1%) and rapid systematic review (8.8%). Nearly two-thirds (141 of 216; 65.3%) first used the term in the title, with the remaining first using the term in the abstract (Appendix D).
Table 1. Methods in brief

Eligibility criteria:
– Published rapid reviews using ‘rapid’ or a derivative (e.g., abbreviated) in the title or abstract
– Published between January 2017 and January 2019
– Written in English (for feasibility)

Searching for studies:
– Developed by an experienced information specialist with input on search terms by members of the research team
– Peer-reviewed using the PRESS checklist [13]
– Search (Dec 2018): MEDLINE ALL, Embase Classic + Embase, PsycINFO, ERIC, Cochrane Library, CINAHL, Web of Science (Appendix B)
– Search strategies not restricted by language
– Supplemented with definitions from rapid review methods scoping review [14]

Study selection:
– Performed in DistillerSR [15]
– Piloted title/abstract (n = 100) and full-text screening (n = 25), conflicts resolved through discussion
– Liberal accelerated screening for titles and abstracts
– Dual-independent screening based on full text, with conflicts resolved through discussion

Data charting:
– Performed in DistillerSR [15]
– Piloted extractions (n = 5), conflicts resolved through discussion
– One reviewer extracted studies, a second reviewer verified all extracted data, conflicts resolved through discussion

Data synthesis:
– Rapid review characteristics and studies' references exported to MS Excel 2016
– Definitions imported into NVivo (version 12) for coding into themes
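The data-synthesis step in Table 1 (definitions coded into themes in NVivo, with characteristics exported to Excel) can be pictured with a small sketch; the record identifiers and theme codes below are invented placeholders, not the study data, and the tally shown is only an analogy for what a coding query in NVivo would produce.

```python
# Illustrative stand-in for the coding/tally step; data are invented examples.
from collections import Counter

# Each record maps to the theme codes applied to its definition.
coded_definitions = {
    "RR-001": ["compare_contrast_sr", "methods_shortcuts", "resource_efficiency"],
    "RR-002": ["accelerated_process", "stakeholder_rationale"],
    "METHODS-001": ["accelerated_process", "methods_shortcuts", "systematic_approach"],
}

theme_counts = Counter(code for codes in coded_definitions.values() for code in codes)
n = len(coded_definitions)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count}/{n} definitions ({100 * count / n:.1f}%)")
```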
4.1. Definitions from published RRs
In total, 158 (73%) RRs provided a definition. Fifteen provided their own (i.e., 11 providing only their own definition and four referencing their own in addition to other authors), and one provided a definition, but the references in the publication did not line up and therefore no references were recorded [16]. Some RR authors did not provide an explicit definition, but made reference to another author or method (e.g., ‘‘We conducted a review of the literature using the rapid evidence assessment (REA) method [17,18].’’) [19]. Among the 146 RRs that provided a definition citing another author, 59 unique references were cited a total of 275 times (Appendix E.1). Among all RRs, a median of two references (range 0 to 7) were cited. Furthermore, 29 articles were cited once. The top four articles cited were Khangura 2012 (n = 54) [2], Ganann 2010 (n = 42) [20], Tricco 2015 (n = 21) [3], and Grant 2009 (n = 18) [18] (Table 3).
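The citation tally described here (unique references, total citations, median per RR, and top-cited sources) can be reproduced mechanically once per-review reference lists are extracted; the sketch below uses invented reference lists to show the shape of that calculation.

```python
# Illustrative citation tally; the per-RR reference lists are invented examples.
from collections import Counter
from statistics import median

citations_per_rr = {
    "RR-001": ["Khangura 2012", "Ganann 2010"],
    "RR-002": ["Khangura 2012"],
    "RR-003": [],  # provided a definition without citing another author
    "RR-004": ["Tricco 2015", "Grant 2009", "Khangura 2012"],
}

counts = Counter(ref for refs in citations_per_rr.values() for ref in refs)
print("unique references:", len(counts))
print("total citations:", sum(counts.values()))
print("top cited:", counts.most_common(2))
print("median references per RR:", median(len(refs) for refs in citations_per_rr.values()))
```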
4.2. Definitions from RR methods articles
In total, 81% (73 of 90) of the RR methods articles provided a definition. These definitions were included in the thematic analysis to supplement the definitions identified in the RRs. Briefly, methods articles were published between 1997 and 2019, with the majority of the articles published since 2014 (68 of 90; 75.6%). A total of 200 definitions were cited, with 21 articles providing their own definition (with or without a reference to other articles) and 50 unique articles. Among the 21 articles that provided their own definition, 10 are those that are often referenced in the RRs [18,20–28]. Methods articles referenced an average of 2.22 references (median: 1, range: 0 to 10) (Appendix E.2). The top four articles referenced were Ganann 2010 [20] (n = 27), Khangura 2012 [2] (n = 27), Khangura 2014 [29] (n = 14), and Polisena 2015 [30] (n = 11). The other top articles in the RRs not in the top four here, Tricco 2015 [3] and Grant 2009 [18], were referenced 10 and four times, respectively.
There was overlap between the articles cited in the RRs and the methods articles. Across both data sources, there were 79 unique citations, with 30 citations included in both scoping reviews, 29 unique to the RRs, and 20 unique to the methods articles. Among the citations found in only one of the two data sources, most were only referenced one or two times (highlighted in Appendix F).
Fig. 1. PRISMA flow diagram: 3,672 records were identified through database searching and 90 through the methods scoping review; 2,657 records remained after duplicates were removed and were screened at title/abstract; 2,235 were excluded (1,679 did not discuss a rapid review or its variants, 347 focused reviews, 54 targeted reviews, 28 pragmatic reviews, 127 other); 422 full-text articles were assessed for eligibility, of which 206 were excluded with reasons (183 did not use the term rapid, accelerated, expedited, or a variant in the title, abstract, or methods section; 11 not published in English; 6 other; 4 not published in 2017, 2018, or January 2019; 2 full text not available); 216 RRs and 90 methods papers were included in the synthesis.
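The overlap figures above follow simple set arithmetic (inclusion–exclusion); a quick check, using only the counts reported in the text and variable names of our own choosing, is sketched below.

```python
# Quick set-arithmetic check of the citation overlap reported above.
cited_in_both = 30
only_in_rrs = 29
only_in_methods = 20

unique_in_rrs = cited_in_both + only_in_rrs          # 59 unique references in RRs
unique_in_methods = cited_in_both + only_in_methods  # 50 unique references in methods articles
unique_overall = unique_in_rrs + unique_in_methods - cited_in_both

assert (unique_in_rrs, unique_in_methods, unique_overall) == (59, 50, 79)
print(unique_overall)  # 79
```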
4.3. Thematic analysis
All definitions from the RRs and the methods articles were thematically analyzed in NVivo. We identified eight major themes (Figure 2). Among the 204 articles that reported definitions (75 did not provide a definition and 27 RRs cited other studies with no identifiable themes), the most common themes were theme 4: Compare and contrast to SRs (68.1%; 139 of 204) and theme 2: Variation in shortcut methods (54.9%; 112 of 204), with theme 1: Accelerated/rapid process and theme 6: Resource efficiency rationale tied (48.5%; 99 of 204 each) (Figure 3). Definitions often covered more than one of these themes, with a range of 1 to 8 (median: 3; mean: 3).

Table 2. Rapid review characteristics (N = 216)
Year published (a): 2017, 101 (46.7%); 2018, 106 (49.1%); 2019, 9 (4.2%)
Countries of the corresponding author: UK, 82 (38.0%); Australia, 41 (19.0%); USA, 31 (14.4%); Canada, 24 (11.1%); Ireland, 6 (2.8%); Italy, 5 (2.3%); Germany, 4 (1.9%); Denmark, South Africa, Switzerland, 3 (1.4%) each; Finland, India, Spain, 2 (0.9%) each; Japan, Korea, Nepal, Norway, Poland, Sweden, Taiwan, Thailand & UK, 1 (0.5%) each
Terminology used (first mentioned in RR): rapid review, 136 (63.0%); rapid evidence assessment, 22 (10.1%); rapid systematic review, 19 (8.8%); rapid evidence review and rapid literature review, 12 (5.6%) each; systematic rapid evidence assessment and systematic rapid review, 2 (0.9%) each; abbreviated review, rapid appraisal, rapid best-fit framework synthesis, rapid evidence-based review, rapid evidence summary, rapid evidence synthesis, rapid meta-review, rapid qualitative review, rapid response review, rapid structured evidence review, and rapid synthesis, 1 (0.5%) each
Terminology first mentioned in: title, 141 (65.3%); abstract, 75 (34.7%)
References: total, 290; unique references/citations, 59; median (range), 2 (0 to 7); mean, 1.34
Top references: Khangura 2012, 54; Ganann 2010, 42; Tricco 2015, 21; Grant 2009, 18
Number of references per RR: 0, 59 (27.3%); 1, 84 (38.9%); 2, 35 (16.2%); 3 or more, 38 (17.6%)
(a) Articles may be an Epub ahead of print with the print date after January 2019. Years published are taken as of the search date (December 20, 2018).
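The percentages quoted for the most common themes follow directly from the counts in Fig. 3 and the 204 articles with codable definitions; a short worked check (our own, for illustration) is below.

```python
# Worked check of the theme frequencies: counts from Fig. 3, denominator from the text.
articles_with_definitions = 204
theme_counts = {
    "compare and contrast to SR": 139,
    "variation in methods shortcuts": 112,
    "accelerated/rapid process": 99,
    "resource efficiency rationale": 99,
}
for theme, count in theme_counts.items():
    print(f"{theme}: {count}/{articles_with_definitions} = "
          f"{100 * count / articles_with_definitions:.1f}%")
# 68.1%, 54.9%, 48.5%, 48.5% -- matching the values reported in the text
```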
4.3.1. Theme 1: Accelerated or rapid process/approach
The terms accelerated, streamlined, quickly or rapid
were used in terms of the speed or timing for the overall
approach to completing the RR. For example, ‘‘rapid re-
views have been described as a streamlined alternative to
standard systematic reviews [31].’’ [32].
Table 3. Top four studies cited and their definitions
Khangura 2012 (a): ‘‘Given this lack of definition and evolving landscape, we have abstained from applying the label ‘rapid review’ to our KTA syntheses, and have alternatively called them ‘evidence summaries’. Despite this, we consider our evidence summaries to be part of the continuum of rapid reviews, as conceptualized by Ganann and colleagues.’’ Cites: Ganann 2010 (b).
Ganann 2010 (b): ‘‘Rapid reviews are literature reviews that use methods to accelerate or streamline traditional systematic review processes.’’ Cites: none.
Tricco 2015 (c): ‘‘...we used the following working definition, ‘a rapid review is a type of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a short period of time.’’’ Cites: Khangura 2012 (a).
Grant 2009 (d): ‘‘They aim to be rigorous and explicit in method and thus systematic but make concessions to the breadth or depth of the process by limiting particular aspects of the systematic review process.’’ Cites: Butler 2005 (e).
(a) Khangura et al. Syst Rev 2012;1:10.
(b) Ganann et al. Implement Sci 2010;5:56.
(c) Tricco et al. BMC Med 2015;13:224.
(d) Grant & Booth. Health Info Libr J 2009;26(2):91–108.
(e) Link to Butler 2005 no longer active; update: Burton 2007.
4.3.2. Theme 2: Variation in methods shortcuts
There were a variety of words used to describe the short-
cuts used in the methods, including streamlined, restricted,
pragmatic, abbreviated, modifications, concessions, expe-
dited, simplifying, constraints, truncated, modified or
omitted steps, and limiting. The variety of words relates to the lack of a standardized approach regarding which steps these shortcuts were applied to, with some definitions providing examples
on which steps of the review process these shortcuts would
be applied. For example, ‘‘Major sources of streamlining
can include narrowing the scope of the review questions;
limiting literature search databases; the use of single (vs.
dual) abstract and full-text screening; reducing the extent
of data abstraction; omitting risk of bias/quality appraisal;
and restricting the extent of the synthesis [33].’’ [34].
Fig. 2. Eight key themes in defining RRs.
4.3.3. Theme 3: Focus/depth/breadth of scope
Similar to theme 2, this theme was more specific to the topic, scope, or question being addressed in the RR rather than the methodology. For example, ‘‘Rapid review is an evidence synthesis methodology that applies a systematic approach to evidence identification and syntheses, but with a more limited scope than a systematic review.’’ [35].
4.3.4. Theme 4: Compare and contrast to a full traditional systematic review
Definitions often included a comparison or related RRs to full SRs but provided an explanation in the text as to the difference in general between an SR and the RR. For example, ‘‘A rapid structured review differs from a systematic review in relation to the extensiveness of the search and methods used to undertake the analysis [36].’’ [37].
Fig. 3. Frequency of reporting of key themes: accelerated/rapid process, 99; variation in methods shortcuts, 112; focus/breadth/depth of scope, 35; compare and contrast to SR, 139; stakeholder rationale, 72; resource efficiency rationale, 99; systematic approach, 50; bias/limitations, 19.
4.3.5. Theme 5: Stakeholder rationale
Many definitions referenced performing an RR to inform
policy practice or to meet the needs of stakeholders,
including decision makers (e.g., health professionals) and
consumers. For example, ‘‘Rapid reviews are an emerging
type of knowledge synthesis which aims ‘to inform
health-related policy decisions and discussions, especially
when information needs are immediate’ [38].’’ [39].
4.3.6. Theme 6: Resource efficiency rationale
Definitions often referred to RRs being performed
because of resource constraints, including cost, human re-
sources, time, and expertise. The difference between completing a review in a timely way (theme 1) and completing a review within a limited time frame lies in the constraint under which the review must be completed, rather than in the speed itself (e.g., rapidly, timely). For example, ‘‘Rapid reviews
use systematic review methods to search and critically
appraise existing research within limited resource and time
constraints [40].’’ [41].
4.3.7. Theme 7: Systematic approach
Although RRs take shortcuts, several definitions stated
that they remain systematic, transparent, rigorous, replicable, explicit, and robust, using scientific methods. For
example, ‘‘‘Rapid reviews’ are knowledge synthesis in
which components of the systematic review process are
simplified or omitted, to produce information in a timely
manner, while retaining rigor in the selection and appraisal
of studies [2,20,22].’’ [42].
4.3.8. Theme 8: Bias/limitations
There was some discussion around the bias that may be
introduced due to shortcuts. Although there are few studies
that formally evaluate RRs compared with full SRs, there is
a potential for bias and limitations when using shortcuts.
For example, ‘‘although potential biases related to stream-
lining procedures must be acknowledged [2].’’ [43].
4.4. Suggested definition
As there is not one common set of methods shortcuts
that can be taken when conducting an RR, there may not
be one common definition for an RR. As such, we suggest
the following broad definition, which meets a minimum set
of requirements identified in the thematic analysis, which
will also be used to seek further consensus from the system-
atic review community.
‘‘A rapid review is a form of knowledge synthesis that
accelerates the process of conducting a traditional system-
atic review through streamlining or omitting a variety of
methods to produce evidence in a resource-efficient
manner.’’
This definition covers the most common themes (i.e., 1,
2, 4, and 6) that were identified in approximately 50% or
more of the RRs and methods articles. By using broad
words like resources, this definition captures the time
element, as well as cost and human elements. Users could
then tailor this definition accordingly to best meet their in-
dividual remit and mandates for producing RRs by adding
additional details covered in other themes. For example, if
an organization produces RRs only when stakeholders
make a request (theme 5), it can be modified to include this
requirement.
‘‘A rapid review is a form of knowledge synthesis that
accelerates the process of conducting a traditional system-
atic review through streamlining or omitting a variety of
methods to produce evidence for stakeholders in a
resource-efficient manner.’’
Likewise, if the systematic aspect (theme 7) of RRs is
important, the definition can be further modified.
‘‘A rapid review is a rigorous and transparent form of
knowledge synthesis that accelerates the process of con-
ducting a traditional systematic review through streamlin-
ing or omitting a variety of methods to produce evidence
for stakeholders in a resource-efficient manner.’’
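The tailoring described in this subsection amounts to composing optional clauses onto a fixed base definition; the small sketch below (our own illustration, with a hypothetical helper name) rebuilds the three variants quoted above.

```python
# Hypothetical helper that assembles the suggested RR definition with optional
# stakeholder (theme 5) and systematic-approach (theme 7) elements.
def tailor_definition(for_stakeholders=False, emphasize_systematic=False):
    qualifier = ("rigorous and transparent form of knowledge synthesis"
                 if emphasize_systematic else "form of knowledge synthesis")
    audience = " for stakeholders" if for_stakeholders else ""
    return ("A rapid review is a " + qualifier +
            " that accelerates the process of conducting a traditional systematic review"
            " through streamlining or omitting a variety of methods to produce evidence" +
            audience + " in a resource-efficient manner.")

print(tailor_definition())                                               # base definition
print(tailor_definition(for_stakeholders=True))                          # adds theme 5
print(tailor_definition(for_stakeholders=True, emphasize_systematic=True))  # adds themes 5 and 7
```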
4.5. Collaboration among RR definition references
It was common for RR definitions to use secondary
referencing (i.e., quoting or paraphrasing from a source
which is mentioned in another text) [44]. As this was not
the primary objective of this scoping review, this is further
discussed in Appendix G for the interested reader.
5. Discussion
To the best of our knowledge, this is the first systemat-
ically developed repository of RR definitions and an anal-
ysis of their major common themes. Eight key themes were identified; the four most common were comparing and contrasting RRs to SRs, variation in shortcut methods, and, tied for third, accelerated/rapid process or approach and resource efficiency rationale.
As a criterion for inclusion was the use of the term rapid
or derivative and the goal is to conduct the review rapidly
regardless of which stages of conduct are abbreviated/
omitted, it is not surprising that one of the key themes
was around the accelerated or rapid approach.
As previously mentioned, some of the problems of lack-
ing a common definition are around the difficulty in identi-
fying RRs correctly and in having a homogeneous set of products under different names. Among the RRs included in this scoping review, 18 different terms were used (Table 2), with an additional 23 terms, that may be consid-
ered derivatives, excluded (for feasibility) when screening
titles/abstracts (Appendix C). Although the term ‘rapid re-
view’ seems to be the generally adopted term, ‘rapid’ points
to the speed of the process and not necessarily the methods
in which this is achieved. Recently, the term ‘restricted re-
view’ has been suggested to better capture the restrictions
in the methods [8,9]; however, this does not relate to the
speed of production. A common term for labeling these
products may not be feasible, as many organizations have
already adopted different terms for the same types of prod-
ucts. However, a definition with central tenets may help
producers of these reviews to identify their research for
easy identification, regardless of the term used to describe
the review. The importance of defining (vs. labeling) is
further supported in the Cochrane Handbook and MECIR,
which state that study design labels may be ambiguous,
and a focus on the defining features of the study is more
important than the label [45,46].
It is better to rely on the original source of the informa-
tion than to rely on the wording of another author who may
impose their own interpretation or meaning [44]. Although
48 unique references were cited in the RRs, there is a high
level of secondary referencing, as displayed in the collabo-
rative map (Appendix G. Figure 1), many pointing to the
same smaller set of studies. Therefore, in the context of
developing a definition for RRs (and/or a minimum set or
criteria/central tenets), the number of definitions used and
cited may not be as extensive as what the results from this
scoping review demonstrate. Using the suggested definition
from this scoping review, and the key citations for addi-
tional support, may help lessen the ‘noise’ of what has been
used and help guide future research in this area.
When comparing the key themes identified in this
scoping review to related research, we see there are some
similarities. Kelly et al. (2016) identified seven defining
characteristics of RRs through a Delphi process [4]. How-
ever, there were some limitations to this process as only 1
reviewer selected the included studies and it is unclear
how the initial survey was developed. In addition, the
search was run in December 2014, and research evaluating RR methods, methodological development and guidance, and the number of published RRs have all grown since that time [14]. This initial work provides a solid foundation on which this methodologically robust scoping review builds, using a more contemporary sample.
these seven defining characteristics to the themes we iden-
tified (Table 4). The only key theme not covered is theme 3
related to the focus/breadth or depth of the scope. The only
defining characteristic of an RR, identified by Kelly et al.,
that could not be related to one of the key themes identified
in this scoping review was that ‘‘rapid reviews have a pro-
tocol describing objectives, scope, PICO, and approach’’,
although this is more around the process of developing an
RR and less around defining it. Furthermore, Hartling
et al. [47] identified 36 rapid products from 20 organiza-
tions and concluded that there is extensive variability in
products labeled as RRs, but that the range of methods used
in developing these products is driven by and supported by
close and ongoing communication between the producers
of the review and the end user, a concept captured by key
themes 2 and 5.
Table 4. Kelly defining characteristics compared with themes identified
– Rapid reviews are conducted in less time than a systematic review. Key themes: Theme 1 (accelerated/rapid process or approach); Theme 4 (compare and contrast to SR).
– Rapid reviews use a spectrum of approaches to complete an evidence synthesis related to a defined research question(s) using the most systematic or rigorous methods as a limited time frame allows. Key themes: Theme 2 (variation in methods shortcuts); Theme 6 (resource efficiency rationale); Theme 7 (systematic approach).
– Rapid reviews should have a protocol describing objectives, scope, PICO, and approach. Key themes: none.
– Rapid reviews should tailor the explicit, reproducible methods conventionally used in a systematic review in some manner to expedite the review process. Key themes: Theme 1 (accelerated/rapid process or approach); Theme 2 (variation in methods shortcuts); Theme 4 (compare and contrast to SR); Theme 7 (systematic approach).
– Rapid reviews should transparently report methods and findings with a level of detail needed to adequately answer the research question, meet the requirements of the decision maker commissioning the review, and inform the audience for which the review is intended, while meeting a delivery time line agreed on in advance. Key themes: Theme 5 (stakeholder rationale); Theme 6 (resource efficiency rationale); Theme 7 (systematic approach).
– Rapid reviews should be considered in the context of the decision at hand when emergent or urgent decisions are required. Key theme: Theme 6 (resource efficiency rationale).
– Choices to adapt workflow should be balanced against the yet undetermined impact to conclusions or validity of findings, and this risk should be communicated to the end user. Key theme: Theme 8 (bias/limitations).

To date, only one definition has emerged at the center [20]: ‘‘Literature reviews that use methods to accelerate or streamline traditional systematic review processes’’.
However, this definition does not specifically address vari-
ances in types of RRs produced across different contexts,
which are likely driven by the mandate or scope of the or-
ganization or entity producing them. In addition, when
comparing this definition to the eight themes identified in
the thematic analysis, it covers three of the eight key
themes: accelerated/rapid process or approach (theme 1),
variation in methods shortcuts (theme 2), and compare
and contrast to traditional systematic reviews (theme 4).
As this definition is from 2010, and RRs have been
evolving over time, one might expect that it would not
cover all key themes.
5.1. Implications for future research
Despite the increased use of RRs in policymaking
[48,49], to date, there is no agreed-on definition on what
constitutes a ‘rapid review’. Yet, other areas of knowledge
synthesis have developed definitions (e.g., what represents
a systematic review update, scoping reviews) [50,51] that
have been agreed on by the broader knowledge synthesis
community. Several other groups and programs have devel-
oped their own definitions for RRs. For example, Crawford
et al. 2015 describe the REAL© method, which ‘‘utilizes
specific tools (e.g., automated online software) and stan-
dard procedures (e.g., rulebooks) to rigorously deliver more
reliable, transparent and objective SRs in a streamlined
fashion, without compromising quality and at a lower cost
than other SR methods’’ [6]. The Department for Interna-
tional Development within the UK government has its own program and states on its website that ‘‘Rapid evi-
dence assessments provide a more structured and rigorous
search and quality assessment of the evidence than a liter-
ature review but are not as exhaustive as a systematic re-
view’’. They can be used to ‘‘gain an overview of the
density and quality of evidence on a particular issue, sup-
port programming decisions by providing evidence on
key topics, and support the commissioning of further
research by identifying evidence gaps’’ [17]. Based on
the themes identified in this review, these definitions do
not fully define RRs.
As a field of research, RRs need to at least develop a
minimum set of criteria. If the concept of ‘rapid review’
is better defined, it will enable future studies of this meth-
odology to be a clearly distinguishable approach, measur-
able to the extent possible, and understandable in terms
of empirical observations. In a wider sense, researchers
need to be able to describe what is and what is not a ‘rapid
review’. Until a general working definition is established, the lack of one may hinder efforts to promote the utility of RRs to end users who may benefit from more timely evidence to inform their decision-making. Lack of an
agreed-on definition may also unfairly hamper acceptance
of ‘rapid reviews’ by journal editors as a legitimate publi-
cation type and limit acknowledgment as a credible
academic output in terms of promotion and tenure of re-
searchers who undertake them. It also results in authors
producing a variety of products which are labeled under a
wide array of names, contributing to the lack of cohesion
and unity around the method. Furthermore, having a widely
accepted definition may facilitate the future funding of
‘rapid reviews’ by granting agencies. More generally, a
definition would facilitate discussion about RRs and would
improve understanding by end users. Collectively, this
highlights the need for an evidence-informed definition of
RR which can be adopted by researchers.
5.2. Strengths and limitations
This study provides a repository of existing definitions
identified in the current literature, identifies general themes,
and provides a flexible working definition of RRs to be used
by the wider knowledge synthesis community. In addition,
through our collaborative mapping, this study has allowed
us a first glance at the network of RR researchers who,
through their RR and methods work, have provided and
cited defining features of RRs.
However, there were some limitations. First, for feasi-
bility, only English journal-published RRs identified in
the databases that were searched were captured. The pur-
pose of this scoping review was not to identify all RRs writ-
ten in the included time period, but rather to get a sense of
what definitions are currently being used. We included def-
initions from 216 RRs and supplemented these with the
definitions from 90 RR methods articles. It is likely that
RRs not captured would use definitions that would fall un-
der the eight key themes identified. Second, as the main
purpose of this review was to extract definitions verbatim
from RRs, some information was not extracted (e.g., fund-
ing source of RR), as suggested by PRISMA-ScR, or was
only extracted by 1 reviewer (e.g., the country of the corre-
sponding author). In addition, in some cases, citations may
not have specified a definition, but rather alluded to a
component of that definition. For example, ‘‘Rapid review
is a fairly new approach which has inherent strengths and
limitations [2,20,28,52–54].’’ [55]. We did not delve into
each reference to see which provided a definition and which
were studies that evaluated the inherent strengths and lim-
itations of RRs, but rather captured it in its entirety. In other
cases, the reference provided was not specific to RRs but
pointed to a methodology that was followed: ‘‘We conduct-
ed a rapid systematic literature review after a priori devel-
oped protocol [56].’’ [57]. It is therefore possible that some
of the references may not actually provide a definition for
RRs but instead may contain the methods of RRs or ratio-
nale as to why one might conduct an RR. Third, several
terms were identified during title and abstract screening,
some of which may have been RRs but were not identified
as such (Appendix C). Because of the number of records
with these terms, they were excluded, for feasibility.
Therefore, it is possible that some reviews may have been
missed that would qualify as an RR.
6. Conclusion
Eight key themes were identified, which have been
considered in developing a preliminary, broad definition
of an RR. This suggested definition, with additional caveats
and opportunity for flexibility, will help the systematic re-
view community define their review with consistency,
regardless of the label used to describe it. Failure to use a
consistent definition, or at least a minimum set of criteria,
will be a barrier to moving the science forward in this field.
CRediT authorship contribution statement
Candyce Hamel: Conceptualization, Funding acquisi-
tion, Data curation. Alan Michaud: Investigation, Valida-
tion, Writing – review & editing. Micere Thuku:
Investigation, Validation, Writing – review & editing.
Becky Skidmore: Data curation. Adrienne Stevens:
Conceptualization, Funding acquisition, Writing – review
& editing. Barbara Nussbaumer-Streit: Conceptualiza-
tion, Funding acquisition, Writing – review & editing.
Chantelle Garritty: Conceptualization, Data curation.
Supplementary data
Supplementary data to this article can be found online at
https://doi.org/10.1016/j.jclinepi.2020.09.041.
References
[1] Best L, Stevens A, Colin-Jones D. Rapid and responsive health tech-
nology assessment: the development and evaluation process in the
South and West region of England. J Clin Effectiveness 1997;2(2):
51–6.
[2] Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evi-
dence summaries: the evolution of a rapid review approach. Syst
Rev 2012;1:10.
[3] Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al.
A scoping review of rapid review methods. BMC Med 2015;13:224.
[4] Kelly SE, Moher D, Clifford TJ. Defining rapid reviews: a modified
DELPHI consensus approach. Int J Technol Assess Health Care
2016;32(4):265–75.
[5] Tricco AC, Langlois EV, Straus SE. (editors). Rapid Reviews to
Strengthen Health Policy and Systems: A Practical Guide. Geneva:
World Health Organization; 2017. Available at https://www.who.int/alliance-hpsr/resources/publications/rapid-review-guide/en/. Ac-
cessed November 15, 2019.
[6] Crawford C, Boyd C, Jain S, Khorsan R, Jonas W. Rapid Evidence
Assessment of the Literature (REAL©): streamlining the systematic
review process and creating utility for evidence-based health care.
BMC Res Notes 2015;8:631.
[7] CADTH Rapid Response Service: about the rapid response service.
Available at https://www.cadth.ca/about-cadth/what-we-do/products-
services/rapid-response-service. Accessed November 12, 2019.
[8] Aronson JK, Heneghan C, Mahtani KR, Pluddemann A. A word
about evidence: ‘‘rapid reviews’’ or ‘‘restricted reviews’’? BMJ
Evidence-Based Med 2018;23(6):204–5.
[9] Pluddemann A, Aronson JK, Onakpoya I, Heneghan C, Mahtani KR.
Redefining rapid reviews: a flexible framework for restricted system-
atic reviews. BMJ Evidence-Based Med 2018;23(6):201–3.
[10] Arksey H, O’Malley L. Scoping studies: towards a methodological
framework. Int J Soc Res Methodol 2005;8(1):19–32.
[11] Peters MDJ, Godfrey C, McInerney P, Baldini Soares C, Khalil H,
Parker D. Chapter 11: scoping reviews. In: Aromataris E, Munn Z,
editors. Joanna Briggs Institute Reviewer’s Manual. https://
reviewersmanual.joannabriggs.org/. Accessed October 5, 2018.
[12] Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping
reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med
Published online September 4, 2018.
[13] McGowan J, Sampson M, Salzwedel DM, et al. PRESS Peer Review of
Electronic Search Strategies: 2015 Guideline Statement. J Clin Epide-
miol 2016;75:40–6. https://doi.org/10.1016/j.jclinepi.2016.01.021.
[14] Hamel C, Michaud A, Thuku M, et al. Few evaluative studies exist
examining rapid review methodology across stages of conduct: a sys-
tematic scoping review. J Clin Epidemiol 2020;126:131–40. https://doi.org/10.1016/j.jclinepi.2020.06.027.
[15] DistillerSR [Computer Program]. https://v2dis-prod.evidencepartners.com.
[16] Harrison R, Manias E, Mears S, Heslop D, Hinchcliff R, Hay L. Ad-
dressing unwarranted clinical variation: a rapid review of current ev-
idence. J Eval Clin Pract 2019;25:53–65.
[17] Department for International Development. Rapid Evidence Assess-
ments. GOV.UK. 2017. Available at https://www.gov.uk/government/
collections/rapid-evidence-assessments. Accessed November 12, 2019.
[18] Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26:91–108.
[19] Bagnasco A, Cadorin L, Barisone M, Bressan V, Iemmi M, Prandi M,
et al. Ethical dimensions of paediatric nursing: a rapid evidence
assessment. Nurs Ethics 2018;25(1):111–22.
[20] Ganann R, Ciliska D, Thomas H. Expediting systematic reviews:
methods and implications of rapid reviews. Implement Sci 2010;5:56.
[21] Abou-Setta AM, Jeyaraman M, Attia A, Al-Inany HG, Ferri M,
Ansari MT, et al. Methods for developing evidence reviews in short
periods of time: a scoping review. PLOS ONE 2016;11:e0165903.
[22] Harker J, Kleijnen J. What is a rapid review? A methodological
exploration of rapid reviews in Health Technology Assessments. Int
J Evid Based Healthc 2012;10(4):397–410.
[23] Kelly SE, Moher D, Clifford TJ. Quality of conduct and reporting in
rapid reviews: an exploration of compliance with PRISMA and AM-
STAR guidelines. Syst Rev 2016;5:79.
[24] Schunemann HJ, Moja L. Reviews: Rapid! Rapid! Rapid! … and sys-
tematic. Syst Rev 2015;4:4.
[25] Thomas J, Newman M, Oliver S. Rapid evidence assessments of
research to inform social policy: taking stock and moving forward.
Evid Policy 2013;9(1):5–27.
[26] Varker T, Forbes D, Dell L, Weston A, Merlin T, Hodson S, et al.
Rapid evidence assessment: increasing the transparency of an
emerging methodology. J Eval Clin Pract 2015;21:1199–204.
[27] Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S,
et al. Rapid versus full systematic reviews: validity in clinical prac-
tice? 2008;78(11).
[28] Watt A, Cameron A, Sturm L, Lathlean T, Babidge W, Blamey S,
et al. Rapid reviews versus full systematic reviews: an inventory of
current methods and practice in health technology assessment. Int J
Technol Assess Health Care 2008;(2):133.
[29] Khangura S, Polisena J, Clifford TJ, Farrah K, Kamel C. Rapid re-
view: an emerging approach to evidence synthesis in health technol-
ogy assessment. Int J Technol Assess Health Care 2014;30(1):20–7.
[30] Polisena J, Garritty CM, Kamel C, Stevens A, Abou-Setta AM. Rapid
review programs to support health care and policy decision making: a
descriptive analysis of processes and methods. Syst Rev 2015;4:26.
[31] Peterson K, Floyd N, Ferguson L, Christensen V, Helfand M. User
survey finds rapid evidence reviews increased uptake of evidence
by Veterans Health Administration leadership to inform fast-paced
health-system decision-making. Syst Rev 2016;5.
[32] Coster JE, Turner JK, Bradbury D, Cantrell A. Why do people choose
emergency and urgent care services? A rapid review utilizing a sys-
tematic literature search and narrative synthesis. Acad Emerg Med
2017;24(9):1137–49.
[33] Polisena J, Garritty C, Umscheid CA, et al. Rapid Review Summit:
an overview and initiation of a research agenda. Syst Rev 2015;4:111.
[34] Patnode CD, Eder ML, Walsh ES, Viswanathan M, Lin JS. The use of
rapid review methods for the U.S. Preventive services task force. Am
J Prev Med 2018;54(1S1):S19–25.
[35] Chambers S, Barton KL, Albani V, Anderson AS, Wrieden WL.
Identifying dietary differences between Scotland and England: a
rapid review of the literature. Public Health Nutr 2017;20(14):
2459–77.
[36] Centre for Reviews and Dissemination. Systematic Reviews: CRD's guidance for undertaking reviews in health care; 2009. Accessed September 25, 2018.
[37] Rodriguez A, Smith J, McDermid K. Dignity therapy interventions for young people in palliative care: a rapid structured evidence review. Int J Palliat Nurs. (7):339.
[38] Lal S, Adair C. E-mental health: a rapid review of the literature. Psy-
chiatr Serv 2014;65(1):24–32.
[39] Huband N, Furtado V, Schel S, et al. Characteristics and needs of
long-stay forensic psychiatric inpatients: a rapid review of the litera-
ture. In: Andreasson AA, Baldwin B, Beer B, Braun B, Butwell C,
et al, editors. The International Journal of Forensic Mental Health,
17 2018:45–60.
[40] Booth A, Sutton A, Papaioannou D. Systematic Approaches to a Suc-
cessful Literature Review. 2nd ed. London, UK: SAGE Publications
Ltd.; 2016.
[41] Clibbens N, Harrop D, Blackett S. Early discharge in acute mental
health: a rapid literature review. Int J Ment Health Nurs 2018;
27(5):1305–25.
[42] Naughton AM, Cowley LE, Tempest V, Maguire SA, Mann MK,
Kemp AM. Ask Me! self-reported features of adolescents experi-
encing neglect or emotional maltreatment: a rapid systematic review.
Child Care Health Dev 2017;43(3):348–60.
Männistö II, Pirttimaa RA. A review of interventions to support the educational attainments of children and adolescents in foster care. Adoption & Fostering 2018;42(3):266–81.
[44] University of Leicester – Student Learning Development. 1.13 sec-
ondary reference. University of Leicester. Available at https://
www2.le.ac.uk/offices/ld/resources/writing/harvard/content/1.13-
secondary-reference. Accessed November 3, 2019.
[45] McKenzie J, Brennan S, Ryan R, Thomson H, Johnston R, Thomas J.
Chapter 3: defining the criteria for including studies and how they
will be grouped for the synthesis. Cochrane Handbook for Systematic
Reviews of Interventions Version 6.0. 2019. Available at https://
training.cochrane.org/handbook/current/chapter-03. Accessed July
11, 2020.
[46] Higgins J, Lasserson T, Chandler J, et al. Methodological Expecta-
tions of Cochrane Intervention Reviews (MECIR): Standards for
the Conduct and Reporting of New Cochrane Intervention Reviews,
Reporting of Protocols and the Planning. London, UK: Cochrane;
2019. Available at https://community.cochrane.org/mecir-manual.
Accessed November 15, 2019.
[47] Hartling L, Guise J-M, Kato E, Anderson J, Belinson S, Berliner E,
et al. A taxonomy of rapid reviews links report types and methods to
specific decision-making contexts. J Clin Epidemiol 2015;68:
1451–1462.e3.
[48] O’Leary DF, Casey M, O’Connor L, et al. Using rapid reviews: an
example from a study conducted to inform policy-making. J Adv
Nurs 2017;73:742–52.
[49] Lawani MA, Valera B, Fortier-Brochu E, Légaré F, Carmichael PH, Côté L, et al. Five shared decision-making tools in 5 months: use
of rapid reviews to develop decision boxes for seniors living with de-
mentia and their caregivers. Syst Rev 2017;6(1):56.
[50] Moher D, Tsertsvadze A. Systematic reviews: when is an update an
update? Lancet 2006;367:881–3.
[51] Colquhoun HL, Levac D, O’Brien KK, Straus S, Tricco AC,
Perrier L, et al. Scoping reviews: time for clarity in definition,
methods, and reporting. J Clin Epidemiol 2014;67:1291–4.
[52] Abrami PC, Borokhovski E, Bernard RM, Wade CA, Tamim R,
Persson T, et al. Issues in conducting and disseminating brief reviews
of evidence. Evid Policy 2010;6(3):371–89.
[53] Australian Safety and Efficacy Register of New Interventional Proced-
ures -Surgical. Rapid versus full systematic reviews: an inventory of cur-
rent methods and practice in Health Technology Assessment. Report no
60. Stepney: Australian Safety and Efficacy Register of New Interven-
tional Procedures -Surgical (ASERNIP-S). Report number 60 2007.
Available at https://www.crd.york.ac.uk/CRDWeb/ShowRecord.asp?
ID=32007000667&ID=32007000667. Accessed November 18, 2019.
[54] Gough D, Thomas J, Oliver S. Clarifying differences between review
designs and methods. Syst Rev 2012;1:28.
[55] Bhardwaj K, Locke T, Biringer A, Booth A, Darling EK, Dougan S,
et al. Newborn Bilirubin screening for preventing severe Hyperbilir-
ubinemia and Bilirubin encephalopathy: a rapid review. Curr Pediatr
Rev 2017;13(1):67e90.
[56] Hartling L, Guise J-M, Kato E, et al. EPC Methods: An Exploration
of Methods and Context for the Production of Rapid Reviews.
Rockville (MD): Agency for Healthcare Research and Quality
(US); 2015.
[57] Aronow WS, Shamliyan TA. Comparative effectiveness and safety of
Rivaroxaban in adults with nonvalvular atrial fibrillation. Am J Ther
2018.
A Practical Guide to Collaborative Qualitative Data Analysis
K. Andrew R. Richards
University of Alabama
Michael A. Hemphill
University of North Carolina at Greensboro
The purpose of this article is to provide an overview of a structured, rigorous approach to collaborative qualitative analysis while
attending to challenges associated with working in team environments. The method is rooted in qualitative data analysis literature
related to thematic analysis, as well as the constant comparative method. It seeks to capitalize on the benefits of coordinating
qualitative data analysis in groups, while controlling for some of the challenges introduced when working with multiple analysts.
The method includes the following six phases: (a) preliminary organization and planning, (b) open and axial coding,
(c) development of a preliminary codebook, (d) pilot testing the codebook, (e) the final coding process, and (f) reviewing
the codebook and finalizing themes. These phases are supported by strategies to enhance trustworthiness, such as (a) peer
debriefing, (b) researcher and data triangulation, (c) an audit trail and researcher journal, and (d) a search for negative cases.
Keywords: multiple analysts, qualitative methods, researcher training, trustworthiness
While qualitative research has been traditionally discussed
as an individual undertaking (Richards, 1999), research reports
have in general become increasingly multi-authored (Cornish,
Gillespie, & Zittoun, 2014; Hall, Long, Bermback, Jordan, &
Patterson, 2005), and the field of physical education is no exception
(Hemphill, Richards, Templin, & Blankenship, 2012; Rhoades,
Woods, Daum, Ellison, & Trendowski, 2016). Proponents of
collaborative data analysis note benefits related to integrating
the perspectives provided by multiple researchers, which is often
viewed as one way to enhance trustworthiness (Patton, 2015).
Collaborative data analysis also allows for researchers to effec-
tively manage large datasets while drawing upon diverse perspec-
tives and counteracting individual biases (Olson, McAllister,
Grinnell, Walters, & Appunn, 2016). Further, collaborative ap-
proaches have been presented as one way to effectively mentor new
and developing qualitative researchers (Cornish et al., 2014).
Despite the potential benefits associated with collaborative
qualitative data analysis, coordination among analysts can be
challenging and time consuming (Miles & Huberman, 1994).
Issues related to the need to plan, negotiate, and manage the
complexity of integrating multiple interpretations while balancing
diverse goals for involvement in research also represent challenges
that need to be managed when working in group environments
(Hall et al., 2005; Richards, 1999). Concerns have also been voiced
about the extent to which qualitative data analysis involving
multiple analysts is truly integrative and collaborative, rather than
reflective of multiple researchers working in relative isolation to
produce different accounts or understandings of the data (Moran-
Ellis et al., 2006).
Challenges associated with collaboration become com-
pounded when also considering the need for transparency in
qualitative data analysis. Analysts need to develop, implement,
and report robust, systematic, and defensible plans for analyzing
qualitative data so to build trustworthiness in both the process and
findings of research (Sin, 2007). Authors, however, often prioritize
results in research manuscripts, which limits space for discussing
methods. This leads to short descriptions of data analysis proce-
dures in which broad methods without an explanation of how they
were implemented (Moravcsik, 2014), and can limit the availability
of exemplar data analysis methods in the published literature.
This has given rise to calls for increased transparency in the
data collection, analysis, and presentation aspects of qualitative
research (e.g., Kapiszewski & Kirilova, 2014). The American
Political Science Association (APSA, 2012), for example, recently
published formal recommendations for higher transparency stan-
dards in qualitative research that call for detailed descriptions of
data analysis procedures and require that authors support all assertions
with examples from the dataset.
To help address the aforementioned challenges, scholars
across a variety of disciplines have published reports on best
practices related to qualitative data analysis (e.g., Braun &
Clarke, 2006; Cornish et al., 2014; Hall et al., 2005). Many of these
approaches are rooted in theories and epistemologies of qualitative
research that guide practice (e.g., Boyatzis, 1998; Glaser & Strauss,
1967; Lincoln & Guba, 1985; Strauss & Corbin, 2015). Braun and
Clarke’s (2006) highly referenced article provides a step-by-step
approach to completing thematic analysis that helps to demystify
the process with practical examples. In another similar vein, Hall
and colleagues (2005) tackle challenges related to collaborative
data analysis and discuss processes related to (a) building an
analysis team, (b) developing reflexivity and theoretical sensitivity,
(c) addressing analytic procedures, and (d) preparing to publish
findings. Cornish and colleagues (2014) further this discussion by
noting several dimensions of collaboration that are beneficial in
Richards is with the Department of Kinesiology, University of Alabama,
Tuscaloosa, AL. Hemphill is with the Department of Kinesiology, University of
North Carolina at Greensboro, Greensboro, NC. Address author correspondence to
K. Andrew R. Richards at karichards2@ua.edu.
Journal of Teaching in Physical Education, 2018, 37, 225-231
https://doi.org/10.1123/jtpe.2017-0084
© 2018 Human Kinetics, Inc. RESEARCH NOTE
qualitative data analysis. The rigor and quality of the methodology
may benefit, for example, when research teams include insider and
outsider perspectives, multiple disciplines, academics and practi-
tioners, international perspectives, or senior and junior faculty
members.
In this paper, we contribute to the growing literature that
seeks to provide practical approaches to qualitative data analysis by
overviewing a six-step approach to conducting collaborative qual-
itative analysis (CQA), which is grounded in qualitative methods
and data analysis literature (e.g., Glaser & Strauss, 1967; Lincoln &
Guba, 1985; Patton, 2015). While some practical guides in the
literature provide an overview of data analysis procedures, such as
thematic analysis (Braun & Clarke, 2006), and others discuss issues
related to collaboration (Hall et al., 2005), we seek to address both
by overviewing a structured, rigorous approach to CQA while
attending to challenges that stem from working in team environ-
ments. We close by making the case that the CQA process can be
employed when working with students, novice researchers, and
scholars new to qualitative inquiry.
Collaborative Qualitative Analysis:
Building Upon the Literature
In our collaborative work, we began employing a CQA process in
response to a need to balance rigor, transparency, and trustworthi-
ness in data analysis while managing the challenges associated
with analyzing qualitative data in research teams. Our goal was to
integrate the existing literature related to qualitative theory, meth-
ods, and data analysis (Glaser & Strauss, 1967; Patton, 2015;
Strauss & Corbin, 2015) to utilize procedures that allowed us to
develop consistency and agreement in the coding process without
quantifying intercoder reliability (Patton, 2015). Drawing from
recommendations presented in other guides for conducting quali-
tative data analysis (Braun & Clarke, 2006; Hall et al., 2005),
researchers adopting CQA work in teams to collaboratively
develop a codebook (Gibbert, Ruigrok, & Wicki, 2008) through
open and axial coding, and subsequently test that codebook against
previously uncoded data before applying it to the entire dataset.
There are steps embedded to capitalize on perspectives offered by
members of the research team (i.e., researcher triangulation;
Lincoln & Guba, 1985), and the process culminates in a set of
themes and subthemes that form the basis for study results. The
CQA process also embraces the tradition of constant comparison
(Glaser & Strauss, 1967) as newly coded data are compared with
existing coding structures and modifications are made to those
structures through the completion of the coding process. This
provides flexibility to modify generative themes1 in light of
challenging or contradictory data.
The CQA process is grounded in thematic analysis, which is
a process for identifying, analyzing, and reporting patterns in
qualitative data (Boyatzis, 1998). Typically, thematic analysis
culminates with a set of themes that describe the most prominent
patterns in the data. These themes can be identified using inductive
approaches, whereby the researcher seeks patterns in the data
themselves and without any preexisting frame of reference, or
through deductive approaches in which a theoretical or conceptual
framework provides a guiding structure (Braun & Clarke, 2006;
Taylor, Bogdan, & DeVault, 2015). Alternatively, thematic analy-
sis can include a combination of inductive and deductive analysis.
In such an approach, the research topic, questions, and methods
may be informed by a particular theory, and that theory may also
guide the initial analysis of data. Researchers are then intentional
in seeking new ideas that challenge or extend the theoretical
perspectives adopted, which makes the process simultaneously
inductive (Patton, 2015). The particular approach adopted by a
research team will relate to the goals of the project, and particularly
the extent to which the research questions and methods are
informed by previous research and theory.
Trustworthiness is at the center of CQA, and methodological
decisions are made during the research design phase to address
Guba’s (1981) four criteria of credibility, confirmability, depend-
ability, and transferability. In particular, we find that triangulation,
peer debriefing, an audit trail, negative case analysis, and thick
description fold into CQA quite naturally. In addition to the afore-
mentioned researcher triangulation, data triangulation is often a
central feature of design decisions as researchers seek to draw from
multiple data sources to enhance dependability (Brewer & Hunter,
1989), and an outside peer debriefer (Shenton, 2004) can be invited
to comment upon ongoing analysis so to add credibility. An audit
trail can be maintained in a collaborative researcher journal to
enhance confirmability (Miles & Huberman, 1994), and a negative
case analysis can highlight data that contradict the main findings
so to enhance credibility (Lincoln & Guba, 1985). Transferability
is addressed by providing a detailed account of the study context
and through rich description in the presentation of results
(Shenton, 2004).
Overview of the Collaborative Constant
Comparative Qualitative Analysis Process
The CQA process includes a series of six progressive steps that
begin following the collection and transcription of qualitative data,
and culminate with the development of themes and subthemes
that summarize the data (see Figure 1). These steps include
(a) preliminary organization and planning, (b) open and axial
coding, (c) the development of a preliminary codebook, (d) pilot
testing the codebook, (e) the final coding process, and (f) review of
the codebook and finalizing the themes. While the process can be
employed with teams of various sizes, we have found teams of two
to four analysts to be most effective because they capitalize on the
integration of multiple perspectives, while also limiting variability
due to inconsistencies in coding (Olson et al., 2016). In larger
teams, some members may serve as peer debriefers.
When considering the initiation of teamwork, we concur with
the recommendations of Hall and colleagues (2005) related to the
development of rapport among team members prior to beginning
analysis. A lack of comfort may lead team members to hold back
critique and dissenting viewpoints that could be important to data
analysis. This is particularly true of faculty members working with
graduate students where the implied power relationship can
discourage students from being completely forthright. As a result,
we recommend that groups engage in initial conversations un-
related to the data analysis so to get to know one another and their
relational preferences. This could include a discussion of com-
munication styles, previous qualitative research experience, and
epistemological views related to qualitative inquiry (Hall et al.,
2005). The team leader may also provide an overview of the CQA
process, particularly when working with team members who have
not used it previously. As part of this process it should be made
clear that all perspectives and voices are valued, and that all team
members have an important contribution to make in the data
analysis process.
Phase One: Preliminary Organization and Planning
Following the collection and transcription of data, the CQA process
begins with an initial team meeting to discuss project logistics and
create an overarching plan for analysis. This includes writing a
brief description of the project, listing all qualitative data sources to
be included, acknowledging any theoretical or conceptual frame-
works utilized, and considering research questions to be addressed.
Members of the data analysis team should also have an initial
discussion of and negotiate through topics, such as the target
Figure 1 — Overview of the six steps involved in collaborative qualitative analysis. Strategies for enhancing trustworthiness underpin the analysis
process.
journal, anticipated authorship, and a flexible week-by-week plan
for analysis. The weekly plan includes a reference to the data
analysis phase, coding assignments for each team member, and
space for additional notes and clarification (see Figure 2). Deci-
sions related to the target journal and authorship, as well as the
weekly plan for analysis, will likely evolve over time, but we find it
helpful to begin such conversations early to ensure that all team
members are on the same page.
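As a purely illustrative sketch (the weeks, researcher labels, and transcript numbers below are placeholders in the spirit of Figure 2, not part of Richards and Hemphill's procedure), the week-by-week plan described above could be kept in a simple shared structure that records the coding phase, each member's transcript assignments, and notes for the meeting:

# A minimal, illustrative weekly plan for collaborative analysis (placeholder values).
weekly_plan = [
    {"week": "Week 1", "phase": "Initial meeting", "assignments": {},
     "notes": "Review the project overview and the data analysis timeline."},
    {"week": "Week 2", "phase": "Open coding, round 1",
     "assignments": {"Researcher 1": ["1001", "1002"], "Researcher 2": ["1003", "1004"]},
     "notes": "Identify 3-4 generative themes per transcript and write a one-page memo."},
]

for entry in weekly_plan:
    who = ", ".join(f"{r}: {', '.join(t)}" for r, t in entry["assignments"].items()) or "whole team"
    print(f'{entry["week"]} | {entry["phase"]} | {who}')

Whether the plan lives in a spreadsheet, a shared document, or a structure like this, the point is simply that assignments and expectations stay visible to the whole team.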
Phase Two: Open and Axial Coding
To begin the data analysis process we use open coding to identify
discrete concepts and patterns in the data, and axial coding to make
connections between those patterns (Corbin & Strauss, 1990).
While open and axial coding are distinct analytical procedures,
we embrace Strauss and Corbin’s (2015) recommendation that
they can occur simultaneously as researchers identify patterns and
then begin to note how those patterns fit together. Specifically,
each member of the research team reads two to three different data
transcripts (e.g., field notes, interviews, reflection journal entries)
and codes them into generative categories using their preferred
method (e.g., qualitative data analysis software, manual coding).
The goal is to identify patterns common across transcripts, or to
note deviant cases that appear.
Depending on the approach to thematic analysis adopted, a
theoretical framework and research questions could frame this
process. We find it helpful, however, to retain at least some
inductive elements so to remain open to generative themes that
may not fit with theory. Following each round of coding, team
members write memos in a researcher journal, preferably through a
Project Overview and Data Analysis Timeline
Project Overview: To understand how physical education teachers navigate the sociopolitical
realities of the contexts in which they work and derive meaning through interactions with
administrators, colleagues, parents, and students. This work is a qualitative follow-up to a large-
scale survey that was completed by over 400 physical education teachers from the US Midwest.
1. Theoretical Framework: Occupational socialization theory
2. Target Journal: Physical education pedagogy specific journal, such as the Journal of Teaching in Physical Education or Research Quarterly for Exercise and Sport
3. Anticipated Authorship: Researcher 1, Researcher 2, Researcher 3
4. Data Sources: 30 individual interviews, 5 focus group interviews, field notes from observations of teachers
5. Research Questions:
a. How do physical education teachers perceive that they matter given the
marginalized nature of their subject?
b. How do interactions with administrators, colleagues, parents, and students
influence physical educators’ perceptions of mattering and marginalization?
c. How do physical education teachers’ perceptions of mattering and
marginalization influence feelings of role stress and burnout?
Weekly Plan for Data Analysis:

Week: July 11, 2016 | Coding Phase: Initial Meeting | Coding Assignment: None
Notes: Discuss the plan for analysis and review the data analysis timeline. Make changes and adjustments to the plan as necessary. Discuss the various phases of analysis and prepare to begin open coding.

Week: August 1, 2016 | Coding Phase: Open Coding 1 | Coding Assignment: Researcher 1: 1001, 1002; Researcher 2: 1003, 1004; Researcher 3: 1005, 1006
Notes: Open coding of each transcript into categories. Following coding, identify 3-4 generative themes and write a 1-page memo.

Week: August 8, 2016 | Coding Phase: Open Coding 2 | Coding Assignment: Researcher 1: 1022, 1023; Researcher 2: 1024, 1025; Researcher 3: 1007, 1027
Notes: Open coding of each transcript into categories. Following coding, identify 3-4 generative themes and write a 1-page memo.
Figure 2 — Example of a project overview; code numbers (e.g., 1001) refer to interview transcripts.
shared online platform (e.g., Google Docs), in which they overview
the coding and describe two or three generative themes supported
by data excerpts. During research meetings, team members over-
view their coding in reference to the memos they wrote, and the
team discusses the coding process more generally. Phase two
continues for three to four iterations, or until the research team
feels they have seen and agree upon a variety of generative themes
related to the research questions. The exact number of transcripts
coded depends on the size of the dataset and the level of initial
agreement established amongst the researchers. The team can move
on when all coders feel comfortable with advancing to the devel-
opment of a codebook. In our experience, this usually involves
coding approximately 30% of all transcripts, but could be less when
working with large datasets.
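To make the memoing routine concrete, here is a minimal, hypothetical sketch (category labels, excerpts, and transcript numbers are invented for illustration) of how one analyst might record a single round of open coding together with the accompanying memo before the weekly meeting:

# One analyst's record for a single round of open coding (all content is hypothetical).
coding_round = {
    "analyst": "Researcher 1",
    "transcripts": ["1001", "1002"],
    "open_codes": [
        {"transcript": "1001", "excerpt": "I don't have my own classroom.",
         "category": "Lack of teaching resources"},
        {"transcript": "1002", "excerpt": "The principal never asks about PE.",
         "category": "Feeling overlooked by administration"},
    ],
    "memo": ("Generative themes this round: (1) marginalization of the subject, "
             "(2) administrator support, (3) coping strategies."),
}

# Shared with the team (e.g., pasted into a shared document) before the weekly meeting.
for code in coding_round["open_codes"]:
    print(f'{code["transcript"]}: "{code["excerpt"]}" -> {code["category"]}')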
Phase Three: Development of a Preliminary
Codebook
After the completion of phase two, one team member reviews the
memos and develops a preliminary codebook (Richards,
Gaudreault, Starck, & Woods, in press). An example codebook
is included in Figure 3, and typically includes first- and second-order
themes, definitions for all themes, and space to code quotations from
the transcripts. Theme definitions provide the criteria against which
quotations are judged for inclusion in the codebook, and thus should
be clear and specific. We code by copy/pasting excerpts from the
transcript files into the codebook and flagging each with the
participant’s code number, the line numbers in the transcript file,
and a reference to the data source (e.g., Interview 1001, 102–105).
This allows for reference back to the data source to gain additional
context for quotations as needed. We always include a “General
(Uncoded)” category where researchers can place quotations that are
relevant, but do not fit anywhere in the existing coding structure.
These quotations can then be discussed during team meetings. Once
compiled, the draft codebook is circulated to the research team for
review and discussed during a subsequent team meeting. Changes
are made based on the team discussion, and a preliminary codebook
is finalized. At this stage we enlist the assistance of a researcher who
is familiar with the project, but not involved in the data analysis, to
serve as a peer debriefer (Lincoln & Guba, 1985). This individual
reviews and comments on the initial codebook, and appropriate
adjustments are made before proceeding.
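For readers who organize their materials digitally, the following is a minimal sketch, not the authors' own tooling, of how the codebook fields described above (first- and second-order themes, definitions, and excerpts tagged with participant code, transcript line numbers, and data source) could be represented; the theme labels and quotation are drawn from the Figure 3 example:

from dataclasses import dataclass, field
from typing import List

@dataclass
class Excerpt:
    participant: str   # participant code number, e.g., "1001"
    lines: str         # line numbers in the transcript file, e.g., "102-105"
    source: str        # data source, e.g., "individual interview"
    text: str          # the quoted passage

@dataclass
class Subtheme:
    label: str
    definition: str    # criteria against which quotations are judged for inclusion
    excerpts: List[Excerpt] = field(default_factory=list)

@dataclass
class Theme:
    label: str
    subthemes: List[Subtheme] = field(default_factory=list)

codebook = [
    Theme(
        label="Subject Marginalization",
        subthemes=[
            Subtheme(
                label="Lack of time and resources",
                definition=("Teacher believes physical education does not matter due to "
                            "lack of teaching contact time, materials, or facilities."),
                excerpts=[Excerpt("1018", "110-112", "individual interview",
                                  "It's kind of rough because I don't have my own classroom.")],
            ),
        ],
    ),
    Theme(label="General (Uncoded)"),  # holding area for relevant but unclassified quotations
]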
Phase Four: Pilot Testing the Codebook
After the initial codebook has been developed, it is tested against
previously uncoded data. During this step, the researchers all code
the same two to three transcripts, and make notes in the researcher
journal related to interesting trends or problems with the codebook.
Weekly research team meetings provide a platform for researchers
to overview and compare their coding and discrepancies are
discussed until consensus is reached. Entries in the researcher
journal are also discussed. These discussions lead to the develop-
ment of coding conventions, which function as rules that guide
subsequent coding decisions. Conventions may be created for
double coding excerpts into two generative themes in rare instances
when both capture the content of a single quotation, and that
quotation cannot be divided in a meaningful way.
Perceived Mattering Codebook (columns: Themes, Subthemes, Definitions, Examples from Transcripts)

Theme: Subject Marginalization

Subtheme: Lack of communication
Definition: Teacher believes physical education does not matter due to lack of communication about issues that affect the physical education environment.
Example: "My stressful day, um probably when things pop up that are not…A lot of my stresses get raised from being an activities director. If the school calls me and says now they have to—they have kids who are not coming, they change times, or I have a different schedule. My stuff is very organized and if it's not where I think it's supposed to be and I need it, that's very stressful for me" (1019, 210–217, individual interview)

Subtheme: Lack of time and resources
Definition: Teacher believes physical education does not matter due to lack of teaching contact time and resources such as materials, equipment for PE, or teaching facilities.
Examples: "It's kind of rough because I don't have my own classroom. I don't have my own computer up there. I don't have a room that I can make into a welcoming environment so that's kind of rough" (1018, 110–112, individual interview)
"Right now that class is more just like babysitting. It's just a study hall, kind of boring. I don't have a classroom I'm in the gym balcony where the bleachers are at. I don't have space the kids complain" (1018, 120–122, focus group)

Subtheme: Lack of support
Definition: Teacher believes physical education does not matter due to situations in which the physical educator does not feel support for ideas or initiatives.
Examples: "I think the colleagues, it wouldn't matter either way outside of the P.E. teachers, and I think the administration wouldn't care either way." (1018, 348–350, individual interview)
"At the elementary level that would be a big issue. As they get a little older, you know middle school, high school it's not as much probably fun. They don't see it in their eyes as much fun. The students themselves probably wouldn't care, there'd be a handful." (1019, 307–309, focus group)
Figure 3 — Example codebook including themes, subthemes, definitions of subthemes, and quotations from the dataset.
Conventions can also specify priority in the use of generative
themes. In Figure 3, for example, there are generative themes for
both “lack of support” and “lack of communication” related to
subject marginalization. Lack of communication could be consid-
ered a way in which support is limited, but because there is a
specific category for lack of communication, it would receive
priority when coding. Modifications are made to the codebook
as needed during these meetings, and an updated codebook is
produced to guide subsequent analysis. The pilot testing continues
for three to four rounds of coding, or until the research team feels
confident in the codebook. Once the team feels ready to move on,
they have a final discussion of the codebook in light of the pilot
testing and make adjustments. The peer debriefer (Lincoln & Guba,
1985) then reviews the evolving codebook and recommends
changes prior to the final coding process.
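As a small illustration of how a coding convention can function as an explicit rule (the priority order below is a hypothetical convention modeled on the Figure 3 discussion, not a prescription from the article), the decision could even be written down as executable logic so it is applied the same way by every analyst:

# Coding convention (hypothetical): when an excerpt could fit more than one
# generative theme, the more specific theme takes priority over the broader one.
PRIORITY = ["Lack of communication", "Lack of time and resources", "Lack of support"]

def apply_priority(candidate_themes):
    """Return the single theme to record for an excerpt, following the agreed convention.

    Double coding is reserved for rare cases where no convention applies and the
    excerpt cannot be meaningfully divided; those cases go to team discussion.
    """
    for theme in PRIORITY:
        if theme in candidate_themes:
            return theme
    return "General (Uncoded)"  # holding category for relevant but unclassified excerpts

# Example: an excerpt that could be read as either subtheme is recorded under
# "Lack of communication", the more specific of the two.
print(apply_priority({"Lack of support", "Lack of communication"}))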
Phase Five: Final Coding Process
In the final phase of coding the adjusted codebook is applied to
all project data, including that which had been previously coded
during the formative phases of codebook development. While the
researcher triangulation involved when using multiple coders can
increase “validity2” in qualitative research, some have argued that
it has the potential to reduce “reliability” because of inconsisten-
cies in coding across analysts (Olson et al., 2016). As a result, some
qualitative researchers have introduced measures of inter-coder
reliability in an attempt to quantify agreement between coders
(Neuendorf, 2017). While acknowledging these perspectives, we
struggle with efforts to apply the quantitative principles of reliabil-
ity and validity to qualitative data analysis (Patton, 2015). We
prefer to approach the issue of coder agreement, and the broader
notions of trustworthiness and credibility, by establishing a clear
protocol and codebook (Gibbert et al., 2008) through previous steps
of CQA, and then dialogue through and reach consensus
on coded data. This is done either through consensus coding or
split coding. Regardless of the strategy chosen, coding conventions
developed during previous phases are applied to the coding process.
Analysts continue to make notes in the researcher journal related to
problems with the generative themes, or interesting patterns in the
data, and issues are discussed during weekly research meetings.
We continue to apply the constant comparative method (Strauss &
Corbin, 2015) at this stage as modifications are made to the code-
book to reflect ongoing insights developed in the coding process.
Consensus coding is the more rigorous, but more time-
consuming form of final coding. It is likely the more effective
approach when working in larger groups where coding consistency
concerns are more abundant (Olson et al., 2016). During each
iteration of coding, team members code the same two to three
transcripts into the codebook. Then, during research team meet-
ings, each coded statement is compared across members of the
research team. Disagreements are discussed until the group reaches
consensus. Split coding relies more heavily on the establishment of
clarity through the preliminary coding phases and the coding
conventions that have been developed (Gibbert et al., 2008). While
less rigorous than consensus coding, split coding is also less time
consuming and manageable within smaller teams. During each
iteration of coding, team members code two to three different
transcripts. As a result, only one member of the team will code each
transcript. Then, during research meetings, questions or concerns
related to particular excerpts are discussed. Split coding culminates
with each team member reviewing all coded excerpts in the
codebook, and disagreements are discussed to consensus.
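The comparison step in consensus coding could be organized as in the sketch below (the coder assignments and theme labels are hypothetical); note that disagreements are simply listed for discussion rather than scored, consistent with the authors' preference for dialogue over quantified inter-coder reliability:

from collections import defaultdict

# Hypothetical codings of the same transcript by two analysts:
# excerpt identifier (participant:lines) -> generative theme assigned.
coder_a = {
    "1001:102-105": "Lack of communication",
    "1001:110-112": "Lack of time and resources",
    "1001:348-350": "Lack of support",
}
coder_b = {
    "1001:102-105": "Lack of communication",
    "1001:110-112": "Lack of support",
    "1001:348-350": "Lack of support",
}

def disagreements(*codings):
    """Collect excerpts coded differently, to be talked through until consensus."""
    themes_by_excerpt = defaultdict(set)
    for coding in codings:
        for excerpt, theme in coding.items():
            themes_by_excerpt[excerpt].add(theme)
    return {e: t for e, t in themes_by_excerpt.items() if len(t) > 1}

for excerpt, themes in disagreements(coder_a, coder_b).items():
    print(f"Discuss at team meeting: {excerpt} coded as {sorted(themes)}")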
Phase Six: Review the Codebook and Finalize the
Themes
After all of the transcripts have been coded using consensus
coding or split coding, the research team meets one final time to
review the codebook. During the meeting, the codebook is
developed into a thematic structure comprised of themes and
associated subthemes that describe participants’ perspectives.
The thematic structure is reviewed and approved by all members
of the research team, and the final agreed upon structure forms the
basis for the results that will be presented as part of the manuscript.
Importantly, through the earlier stages of CQA, all members of
the research team have had a hand in shaping and agree upon the
themes that are presented. This process, therefore, capitalizes on
the enhanced trustworthiness provided by multiple analysts,
while minimizing issues related to coder variability, without
attempting to quantify the qualitative data analysis process
(Patton, 2015).
Conclusions and Final Thoughts
The purpose of this article is to provide an overview of a structured,
rigorous approach to CQA while attending to challenges that stem
from working in team environments. While this article has focused
primarily on the data analysis process, effective analysis begins at
the design phase when researchers pose research questions, decide
on methods, and identify participants (Patton, 2015). After data
have been collected, the six-phase CQA process is adopted to make
meaning through the formation of generative themes. This process
integrates existing approaches to qualitative research (Glaser &
Strauss, 1967; Miles & Huberman, 1994; Patton, 2015), and
contributes to the emerging literature that seeks to provide practical
examples of qualitative data analysis (e.g., Braun & Clarke, 2006;
Cornish et al., 2014; Hall et al., 2005). It provides a structured and
rigorous approach that enhances transparency through the data
analysis process (e.g., Kapiszewski & Kirilova, 2014; Moravcsik,
2014), while capitalizing on the development of a codebook and
multiple researchers’ perspectives (Gibbert et al., 2008).
In considering qualitative data analysis, Woods and Graber
(2016) explain, “ultimately, it is the responsibility of the investi-
gator to select those procedures that best meet the philosophic
orientation of the study, the purpose of the investigation, and the
methods that were used to collect the data” (p. 30). Regardless of
the particular approach taken, all qualitative researchers are chal-
lenged to ensure methodological rigor and transparency, and CQA
provides one way to demonstrate inclusive collaboration among
researchers. The coding, memoing, and pilot testing of the code-
book provide multiple layers where all researchers have opportu-
nities to share their perspectives. The audit trail maintained through
ongoing discussions and the researcher journal also enhances
transparency and allows for the process to be documented and
adapted for use across multiple research projects.
We find that CQA can aid in the management of large,
qualitative datasets by providing a structured and phasic approach
to analysis. This can be particularly helpful for graduate students,
early career researchers, and diverse research teams who may be
struggling to identify rigorous data analysis procedures that meet
the needs of all researchers (Cornish et al., 2014). The step-by-step
nature of the approach also has applicability for those coordinating
groups of researchers, or analysts who want to adopt a rigorous,
systematic, and defensible process that can be implemented with
fidelity on a consistent basis. The process can further be adapted for
those who prefer to analyze data manually, or through qualitative
data analysis software.
In order to enhance transparency, researchers should be spe-
cific about the methods used when analyzing data (Moravcsik,
2014). This can be done, in part, by identifying and implementing
with fidelity a practical guide to analysis, such as the one advocated
in this paper, or other examples in the literature (e.g., Braun &
Clarke, 2006; Cornish et al., 2014; Hall et al., 2005). The process
can then be specifically identified and cited in the methods, along
with an explanation of any adaptations or deviations from original
articulation. To further transparency, researchers may also com-
municate why they use collaboration in qualitative research, and
how they believe it enhances study results. In future qualitative
methodology discussions, researchers should continue to consider
more nuanced understandings of how collaboration enhances
qualitative research. These conversations have the potential to
capitalize on the benefits associated with multiple analysts, and
thus could aid the design of future research.
Notes
1. While many researchers use terms such as “emergent” or “emerging”
when discussing themes and the processes through which they are devel-
oped (Taylor & Ussher, 2001), this language implies that the researcher
plays a generally passive role in the creation of themes, or “if we just look
hard enough they will ‘emerge’ like Venus on the half shell” (Ely, Vinz,
Downing, & Anzul, 1997, p. 205). We, therefore, refer to themes as being
generative so to emphasize the active role researchers play in generating
them through qualitative data analysis.
2. While we agree with the perspective of Patton (2015), who is reluctant
to apply the quantitatively oriented terms of “reliability” and “validity” to
discussions of qualitative data analysis, we use them here because they
are adopted by Olson and colleagues (2016). Our intent is to differentiate
our desire to enhance trustworthiness and credibility from inter-coder
agreement, which is more quantitatively driven.
References
American Political Science Association. (2012). Guide to professional
ethics in political science (2nd ed.). Washington, DC: Author.
Boyatzis, R.E. (1998). Transforming qualitative information: Thematic
analysis and code development. Thousand Oaks, CA: Sage.
Braun, V., & Clarke, V. (2006). Using thematic analysis in psychology.
Qualitative Research in Psychology, 3, 77–101. doi:10.1191/
1478088706qp063oa
Brewer, J., & Hunter, A. (1989). Multimethod research: A synthesis of
styles. Thousand Oaks, CA: Sage.
Corbin, J., & Strauss, A. (1990). Grounded theory research: Procedures,
canons, and evaluative criteria. Qualitative Sociology, 13, 3–21.
doi:10.1007/BF00988593
Cornish, F., Gillespie, A., & Zittoun, T. (2014). Collaborative analysis of
qualitative data. In U. Flick (Ed.), The Sage handbook of qualitative
data analysis (pp. 79–93). Thousand Oaks, CA: Sage.
Ely, M., Vinz, R., Downing, M., & Anzul, M. (1997). On writing
qualitative research: Living by words. London, UK: Routledge.
Gibbert, M., Ruigrok, W., & Wicki, B. (2008). What passes as a rigorous
case study? Strategic Management Journal, 29, 1465–1474. doi:
10.1002/smj.722
Glaser, B.G., & Strauss, A. (1967). The discovery of grounded theory:
Strategies for qualitative research. Chicago, IL: Aldine.
Guba, E. (1981). Criteria for assessing the trustworthiness of naturalistic
inquiry. Educational Technology Research and Development, 29(2),
75–91.
Hall, W.A., Long, B., Bermback, N., Jordan, S., & Patterson, K. (2005).
Qualitative teamwork issues and strategies: Coordination through
mutual adjustment. Qualitative Health Research, 15, 394–410.
PubMed doi:10.1177/1049732304272015
Hemphill, M.A., Richards, K.A.R., Templin, T.J., & Blankenship, B.T.
(2012). A content analysis of qualitative research in the Journal of
Teaching in Physical Education from 1998 to 2008. Journal of Tea-
ching in Physical Education, 31, 279–287. doi:10.1123/jtpe.31.3.279
Kapiszewski, D., & Kirilova, D. (2014). Transparency in qualitative
security studies research: Standards, beliefs, and challenges. Security
Studies, 23, 699–707. doi:10.1080/09636412.2014.970408
Lincoln, Y.S., & Guba, E. (1985). Naturalistic inquiry. New York, NY: Sage.
Miles, M.B., & Huberman, A.M. (1994). Qualitative data analysis: An
expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Moran-Ellis, J., Alexander, V.D., Cronin, A., Dickinson, M., Fielding, J.,
Sleney, J., & Thomas, H. (2006). Triangulation and integration:
Processes, claims and implications. Qualitative Research, 6, 45–59.
doi:10.1177/1468794106058870
Moravcsik, A. (2014). Transparency: The revolution in qualitative
research. Political Science and Politics, 47, 48–53. doi:10.1017/
S1049096513001789
Neuendorf, K. (2017). The content analysis guidebook (2nd ed.).
Thousand Oaks, CA: Sage.
Olson, J.D., McAllister, C., Grinnell, L.D., Walters, K.G., & Appunn, F.
(2016). Applying constant comparative method with multiple investi-
gators and inter-coder reliability. The Qualitative Report, 21(1), 26–42.
Patton, M.Q. (2015). Qualitative research and evaluation methods (4th
ed.). Thousand Oaks, CA: Sage.
Rhoades, J.L., Woods, A.M., Daum, D.N., Ellison, D., & Trendowski,
T.N. (2016). JTPE: A 30-year retrospective of published research.
Journal of Teaching in Physical Education, 35, 4–15. doi:10.1123/
jtpe.2014-0112
Richards, K.A.R., Gaudreault, K.L., Starck, J.R., & Woods, A.M. (in
press). Physical education teachers’ perceptions of perceived matter-
ing and marginalization. Physical Education and Sport Pedagogy.
Richards, L. (1999). Qualitative teamwork: Making it work. Qualitative
Health Research, 9, 7–10. doi:10.1177/104973299129121659
Shenton, A.K. (2004). Strategies for ensuring trustworthiness in qualitative
research projects. Education for Information, 22, 63–75. doi:10.3233/
EFI-2004-22201
Sin, C.H. (2007). Using software to open up the “black box” of qualitative
data analysis in evaluations. Evaluation, 13, 110–120. doi:10.1177/
1356389007073684
Strauss, A., & Corbin, J. (2015). Basics of qualitative research: Techni-
ques and procedures for developing grounded theory (4th ed.).
New York, NY: Sage.
Taylor, G.W., & Ussher, J.M. (2001). Making sense of S&M: A discourse
analytic account. Sexualities, 4, 293–314. doi:10.1177/136346001
004003002
Taylor, S., Bogdan, R., & DeVault, M.L. (2015). Introduction to qualita-
tive research methods: A guidebook and resource (4th ed.).
New York, NY: Wiley.
Woods, A.M., & Graber, K. (2016). Interpretive and critical research:
A view through a qualitative lens. In C.D. Ennis (Ed.), Routledge
handbook of physical education pedagogies (pp. 21–33). New York,
NY: Routledge.
Week 7 – Assignment: Signature Assignment: Design a Qualitative Study
Turnitin™ enabled. This assignment will be submitted to Turnitin™.
Instructions
A template is provided below for this Signature Assignment. Using the template provided and your relevant discussions from previous assignments in this course, with refinements from your instructors’ feedback, as appropriate: construct a proposed qualitative research plan. Your plan should reflect the features of qualitative research and the rationale for selecting a specific research design. Remember to support your work with citations.
Problem Statement (with recommended revisions)
Provide a clear justification with evidence on why this study is relevant to your field and worthy of doctoral-level study. Support your efforts using 3 scholarly sources published within the past 5 years to ensure relevancy. Remember, the problem statement should reflect your degree type (applied or theory-based).
Purpose Statement (with recommended revisions)
Apply the script introduced in this course and your instructor’s feedback to produce an accurate and aligned purpose statement.
Research Questions (at least two questions)
The qualitative research questions must be framed to deeply probe and investigate the problem. How, why, and what strategies are the best terms to include in your research questions.
Methodology and Design (with the rationale)
Defend your choice to use the qualitative methodology to research your identified problem. Synthesize 2 or 3 sources to support your arguments.
Defend your choice to use a specific qualitative research design. Synthesize 2 or 3 sources to support your arguments.
Data Collection (outline and defend)
Explain how and why you will select participants from a specific population. Include citations for the identified population and the sampling method.
Describe data collection steps.
Ethical protection of human subjects
Data Analysis (include steps)
Logically define the steps in data analysis
Describe how the four elements of trustworthiness could be addressed
References Page in APA Format
Length: 6-10 pages
References: 15-20 peer-reviewed resources.
Your assignment should demonstrate thoughtful consideration of the ideas and concepts presented in the course and provide new thoughts and insights relating directly to this topic. Your response should reflect scholarly writing and current APA standards. Be sure to adhere to Northcentral University’s Academic Integrity Policy.
References
Amankwaa, L. (2016). Creating protocols for trustworthiness in qualitative research. Journal of Cultural Diversity, 23(3), 121-127
Belotto, M. J. (2018). Data analysis methods for qualitative research: Managing the challenges of coding, interrater reliability, and thematic
Castleberry, A., & Nolen, A. (2018). Methodology matters: Thematic analysis of qualitative research data: Is it as easy as it sounds?
Connelly, L. M. (2016). Understanding research. Trustworthiness in qualitative research. MEDSURG Nursing, 25, 435-436
Jensen, E., & Laurie, C. (Academic). (2017). An introduction to qualitative data analysis [Video file]
Kristensen, G. K., & Ravn, M. N. (2015). The voices heard and the voices silenced: Recruitment processes in qualitative interview studies
Richards, K. A. R., & Hemphill, M. A. (2018). A practical guide to collaborative qualitative data analysis. Journal of Teaching in Physical Education, 37(2), 225-231
Yakut Çayir, M., & Saritaş, M. T. (2017). Computer assisted qualitative data analysis: A descriptive content analysis (2011 – 2016)
Linda Amankwaa, PhD, RN, FAAN
Abstract: Experienced and novice researchers plan qualitative proposals where evidence of rigor must be provided within the document. One option is the creation of a trustworthiness protocol with details noting the characteristic of rigor, the process used to document the rigor, and a timeline directing the planned time for conducting trustworthiness activities. After reviewing several documents, an actual plan for conducting trustworthiness was not found. Thus, these authors set out to create a trustworthiness protocol designed not only for the dissertation, but also as a framework for others who must create similar trustworthiness protocols for their research. The purpose of this article is to provide a reference for the trustworthiness plan and a dissertation example, and to showcase a trustworthiness protocol that may serve as an example for other qualitative researchers embarking on the creation of a trustworthiness protocol that is concrete and clear.
Key Words: Trustworthiness, Research Protocols, Qualitative Research
Creating Protocols for Trustworthiness in Qualitative Research
Anything perceived as being of low or no value is
also perceived as being worthless, unreliable, or
invalid. Research that is perceived as worthless
is said to lack rigor. This means findings are not worth
noting or paying attention to, because they are unreliable.
To avoid this argument, proof of reliability and validity
in qualitative research methods is required. However,
some researchers have suggested that reliability and
validity are not terms to be used to explain the usefulness
of qualitative research. They believe that those terms are
to be used to validate quantitative research (Altheide &
Johnson, 1998; Leininger, 1994). Morse (1999) expressed
concern about qualitative research losing value, emphasizing that when qualitative researchers fail to recognize the crucial importance of reliability and validity in qualitative methods, they are also mistakenly supporting the idea that qualitative research is defective and worthless, lacking in thoroughness, and of no empirical value.
Guba and Lincoln (1981) stated that “All research must have ‘truth value’, ‘applicability’, ‘consistency’, and ‘neutrality’ in order to be considered worthwhile.” They concluded that the end result of establishing rigor, or “trustworthiness” (the analogous term for rigor in qualitative research), for each method of research requires a different approach. It was noted by Guba and Lincoln (1981),
Linda Amankwaa, PhD, RN, FAAN, is an Associate
Professor in the Department of Nursing at Albany State University in Albany, GA 31705. Dr. Amankwaa may be reached at: 229-430-4731 or at: Linda.Amankwaa@asurams.edu.
within the rationalistic paradigm, criteria to reach the
goal of rigor are internal validity, external validity, reliability, and objectivity. They proposed use of terms such as credibility, fittingness, auditability, and confirmability in qualitative research to ensure “trustworthiness” (Guba & Lincoln, 1981). Later, these criteria were changed to credibility, transferability, dependability, and confirmability (Lincoln & Guba, 1985).
Lincoln and Guba (1985) suggested that the value of a
research study is strengthened by its trustworthiness. As
established by Lincoln and Guba in the 1980s, trustworthiness involves establishing:
• Credibility – confidence in the ‘truth’ of the findings
• Transferability – showing that the findings have applicability in other contexts
• Dependability – showing that the findings are consistent and could be repeated
• Confirmability – a degree of neutrality or the extent to which the findings of a study are shaped by the respondents and not researcher bias, motivation, or interest.
For purposes of this discussion, this classic work is
used to frame trustworthiness actions and activities to
create a protocol for qualitative studies. Nursing faculty
and doctoral nursing students who conduct qualitative
research will find this reference useful.
Credibility Activities
Lincoln and Guba (1985) described a series of techniques
that can be used to conduct qualitative research that attains the criteria they outlined. Techniques for establishing credibility as identified by Lincoln and Guba (1985) are: prolonged engagement, persistent observation, triangulation, peer debriefing, negative case analysis, referential adequacy, and member-checking. Typically member checking is viewed as a technique for establishing the validity
of an account. Lincoln and Guba posit that this is the most
crucial technique for establishing credibility.
Transferability Activities
One strategy that can be employed to facilitate transfer-
ability is thick description (Creswell & Miller, 2000; Lincoln
& Guba, 1985). Thick description is described by Lincoln
and Guba as a way of achieving a type of external valid
ity. By describing a phenomenon in sufficient detail one
can begin to evaluate the extent to which the conclusions
drawn are transferable to other times, settings, situations,
and people. Since, as stated by Merriam (1995) it is the
responsibility of the consumer of research to determine
or decide if and how research results might be applied
to other settings, the original researcher must provide
detailed information about the phenomenon of study to
assist the consumer in making the decision. This requires
the provision of copious amounts of information regarding every aspect of the research. The investigator will
include such details as the location setting, atmosphere,
climate, participants present, attitudes of the participants
involved, reactions observed that may not be captured on
audio recording, bonds established between participants,
and feelings of the investigator. One word descriptors will
not suffice in the development of thick description. The
investigator in essence is telling a story with enough detail
that the consumer/reader obtains a vivid picture of the
events of the research. This can be accomplished through
journaling and maintaining records whether digital or
handwritten for review by the consumer/reader.
Confirmability Activities
To establish confirmability Lincoln and Guba (1985)
suggested confirmability audit, audit trail, triangulation,
and reflexivity. An audit trail is a transparent description of
the research steps taken from the start of a research project
to the development and reporting of findings (Lincoln &
Guba). These are records that are kept regarding what was
done in an investigation. Lincoln and Guba cite Halpern’s
(1983) categories for reporting information when developing an audit trail:
“1) Raw data – including all raw data, written field notes, unobtrusive measures (documents); 2) Data reduction and analysis products – including summaries such as condensed notes, unitized information and quantitative summaries and theoretical notes; 3) Data reconstruction and synthesis products – including structure of categories (themes, definitions, and relationships), findings and conclusions and a final report including connections to existing literatures and an integration of concepts, relationships, and interpretations; 4) Process notes – including methodological notes (procedures, designs, strategies, rationales), trustworthiness notes (relating to credibility, dependability and confirmability) and audit trail notes; 5) Materials relating to intentions and dispositions – including inquiry proposal, personal notes (reflexive notes and motivations) and expectations (predictions and intentions); 6) Instrument development information – including pilot forms, preliminary schedules, observation formats” (page#).
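One workable way to operationalize Halpern's categories, offered here only as an illustrative sketch (the folder names are assumptions, not part of Lincoln and Guba's text), is to mirror them in the project's file structure so that every record produced during the study has an obvious home:

import os

# Illustrative audit-trail layout following Halpern's six reporting categories
# (folder names are assumptions made for this sketch).
AUDIT_TRAIL = {
    "1_raw_data": ["field_notes", "recordings", "documents"],
    "2_data_reduction_and_analysis": ["condensed_notes", "summaries", "theoretical_notes"],
    "3_reconstruction_and_synthesis": ["themes_and_definitions", "findings", "report_drafts"],
    "4_process_notes": ["methodological_notes", "trustworthiness_notes"],
    "5_intentions_and_dispositions": ["proposal", "reflexive_notes", "expectations"],
    "6_instrument_development": ["pilot_forms", "interview_schedules", "observation_formats"],
}

def create_audit_trail(root="audit_trail"):
    """Create one folder per category so records accumulate in a transparent structure."""
    for category, subfolders in AUDIT_TRAIL.items():
        for sub in subfolders:
            os.makedirs(os.path.join(root, category, sub), exist_ok=True)

create_audit_trail()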
Using multiple data sources within an investigation to
enhance understanding is called triangulation. Researchers
see triangulation as a method for corroborating findings
and as a test for validity (Lincoln & Guba, 1985). Rather
than seeing triangulation as a method for validation or veri
fication, qualitative researchers generally use this technique
to ensure that an account is rich, robust, comprehensive
and well-developed (Lincoln & Guba, 1985).
Denzin (1978) and Patton (1999) identify four types of
triangulation: methods triangulation, source triangulation, analyst triangulation, and theory/perspective triangulation.
They suggested that methods triangulation involves checking out the consistency of findings generated by different
data collection methods. Triangulation of sources is an
examination of the consistency of different data sources
from within the same method (i.e. at different points in
time; in public vs. private settings; comparing people with
different viewpoints).
Another of the four types identified by Denzin and Patton is analyst triangulation. This is the use
of multiple analysts to review findings or using multiple
observers and analysts. This provides a check on selective
perception and illuminate blind spots in an interpretive
analysis. The goal is to understand multiple ways of see
ing the data. Finally, they described theory/perspective
triangulation as the use of multiple theoretical perspectives
to examine and interpret the data.
According to Lincoln and Guba (1985), reflexivity is “an attitude of attending systematically to the context of knowledge construction, especially to the effect of the researcher, at every step of the research process.” They suggested the following steps to develop reflexivity: 1) Design research that includes multiple investigators. This fosters dialogue, leads to the development of complementary and divergent understandings of a study situation, and provides a context in which researchers' (often hidden) beliefs, values, perspectives, and assumptions can be revealed and contested. 2) Develop a reflexive journal. This is a type of diary in which a researcher makes regular entries during the research process. In these entries, the researcher records methodological decisions and the reasons for them, the logistics of the study, and reflections upon what is happening in terms of one's own values and interests. Diary keeping of this type is often very private and cathartic. 3) Report research perspectives, positions, values, and beliefs in manuscripts and other publications. Many believe that it is valuable and essential to briefly report in manuscripts, as best as possible, how one's preconceptions, beliefs, values, assumptions, and position may have come into play during the research process.
Dependability Activities
To establish dependability, Lincoln and Guba (1985) suggested a technique known as an inquiry audit. Inquiry audits are conducted by having a researcher who is not involved in the research process examine both the process and the product of the research study (Lincoln & Guba, 1985). The purpose is to evaluate the accuracy of the study and whether the findings, interpretations, and conclusions are supported by the data (Lincoln & Guba, 1985).
Creating a Protocol for Qualitative Researchers
The creation of a protocol for establishing trustworthiness within qualitative research is essential to rigor. Further, we note that researchers rarely document what their trustworthiness plan or protocol consisted of within research documents. Thus, we posit here that creating such a protocol prior to the initiation of the research study is essential to revealing trustworthiness within the research process. By creating this plan a priori, the rigor of qualitative research is made apparent.
The history of and need for this article stem from a doctoral dissertation search for examples of trustworthiness protocols to guide the completion of trustworthiness within doctoral qualitative research. Since none could be found, discussions led the researcher to create a table that could be used by those who are planning qualitative studies. Another interesting point is that qualitative researchers, unlike quantitative researchers, rarely create protocol guidelines.
The establishment of trustworthiness protocols in qualitative research requires the use of several techniques. The protocol should be detail-specific so that researchers have a guideline for trustworthiness activities. Such a protocol guides prospective qualitative researchers in their quest for rigor. Several tables are presented here. The first table outlines the main topics within the trustworthiness protocol. The remaining tables outline the suggested activities within the trustworthiness protocol for those creating one.
Table 1 presents the basic criteria for a trustworthiness protocol using Lincoln and Guba (1985). However, researchers may use other models of rigor; creating a table aligned with the planned model of rigor is the recommendation. The following tables are examples of a “created” protocol with examples of very specific activities related to each trustworthiness criterion.
Summary
In summary, trustworthiness is a vital component of the research process. Attending to the language of trustworthiness and the important activities of reliability adds to the comprehensiveness and the quality of the research product. This discussion heralds the new idea that trustworthiness must be planned ahead of time with a protocol. This protocol must include the dates and times of trustworthiness actions. We contend that researchers can use the protocol by adding two columns which specify the date of the planned trustworthiness action and the date the action was actually completed. This information can then be included in the audit trail, thus authenticating the work of the qualitative researcher and the rigor of the research.
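As a rough sketch of the two-column idea above, and not part of the original article, the protocol could be kept as a simple structured computer file so that the dated record can be placed in the audit trail; every criterion, activity, and date below is a hypothetical placeholder.

# Hypothetical sketch: a trustworthiness protocol with planned and completed
# dates, written to a CSV file that can later be included in the audit trail.
# All activities and dates are illustrative placeholders.
import csv
from datetime import date

protocol_rows = [
    # (criterion, planned activity, planned date, completed date or None)
    ("Credibility", "Peer debriefing meeting after interview 1", date(2024, 3, 4), None),
    ("Transferability", "Journal entry reviewing thick description", date(2024, 3, 11), None),
    ("Dependability", "Inquiry audit of coding decisions", date(2024, 4, 1), None),
    ("Confirmability", "Triangulation of interview and document data", date(2024, 4, 15), None),
]

with open("trustworthiness_protocol.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["criterion", "activity", "planned_date", "completed_date"])
    for criterion, activity, planned, completed in protocol_rows:
        writer.writerow([
            criterion,
            activity,
            planned.isoformat(),
            completed.isoformat() if completed else "",
        ])

In such a sketch, the completed_date column would be filled in as each action is carried out, mirroring the planned-versus-actual comparison described above.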
REFERENCES
Altheide, D., & Johnson, J. (1998). Criteria for assessing interpretive validity in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Collecting and interpreting materials (pp. 283-312).
Creswell, J., & Miller, D. (2000). Determining validity in qualitative inquiry. Theory Into Practice, 39(3), 125-130.
Denzin, N. (1978). Sociological methods. New York, NY: McGraw-Hill.
Guba, E., & Lincoln, Y. (1981). Effective evaluation: Improving the usefulness of evaluation results through responsive and naturalistic approaches. San Francisco, CA: Jossey-Bass.
Leininger, M. (1994). Evaluation criteria and critique of qualitative and interpretive research. Qualitative Inquiry, 1, 275-279.
Lincoln, Y. S., & Guba, E. G. (1985). Naturalistic inquiry. Newbury Park, CA: Sage Publications.
Morse, J. (1999). Myth #3: Reliability and validity are not relevant to qualitative inquiry. Qualitative Health Research, 9, 717.
Patton, M. Q. (1999). Enhancing the quality and credibility of qualitative analysis. HSR: Health Services Research, 34(5, Pt. 2), 1189-1208.
Table 1. Basic Trustworthiness Criteria (Lincoln & Guba, 1985)
Criterion: Technique(s)
Credibility: peer debriefing, member checks, journaling
Transferability: thick description, journaling
Dependability: inquiry audit with audit trail
Confirmability: triangulation, journaling
Table 2. Credibility: Recommended Activities/Plan

Peer debriefing/debriefer
1. Write the plan within the proposal.
2. Commission a peer to work with the researcher during the time of interviews and data collection.
3. This person must complete an attestation form to work with the researcher. Plan to meet with this person after each interview.
4. During visits with the peer debriefer, the researcher and peer discuss interviews, feelings, actions of subjects, thoughts, and ideas that present during this time. Discuss blocking, clouding, and other feelings of the researcher. Discuss dates and times needed for these activities. Meet once a week for 30 minutes to an hour.
5. Journal these meetings. Write about thoughts that surfaced and keep these dated for research and evaluation during data analysis.
6. These records need to be computer files so that you may use this information within data analysis.

Member checks
1. Outline the different times and reasons you plan to conduct member checks or collect feedback from members about any step in the research process.
2. Member checks will consist of communication with members after significant activities.
3. These activities may include interviews, data analysis, and other activities.
4. Within two weeks of the interview, send members a copy of their interview so that they can read it and edit it for accuracy.
5. Within two weeks of data analysis completion, members will review a copy of the final themes.
6. Members are asked the question, “Does the interview transcript reflect your words during the interview?”
7. Choose negative cases and cases that follow the pattern.
8. Be sure these checks are recorded and kept as computer files so that you may use this information in data analysis.

Journaling plans
1. Journaling will begin with the writing of the proposal.
2. Journaling will be conducted after each significant activity, including each interview, weekly during analysis, after peer debriefing visits, and after theme production.
3. Journals will be audited by the research auditor.
4. Journals will include dates, times, places, and persons on the research team.
5. Journals need to be computer files so that you may use them in data analysis.

Protocol
Create a timeline with planned dates for each activity related to credibility before commencing the study. This protocol with dates and activities should appear in the appendix.
Table 3. Transferability

Thick description
Actions for this activity include:
1. Reviewing crafted questions with the peer reviewer for clarity.
2. Planning questions that call for extended answers.
3. Asking open-ended questions that solicit detailed answers.
4. Interviewing in such a way as to obtain a detailed, thick, and robust response.
5. The object is to reproduce the phenomenon of research as clearly and in as much detail as possible.
6. This action is replicated with each participant and with each question (sub-question) or statement.
7. This continues until all questions and sub-questions are discussed.
8. The peer reviewer, along with the researcher, reviews responses for thickness and robustness.
9. There are two issues related to thick description here. The first is receiving thick responses (not one-sentence paragraphs). The second is writing up the responses of multiple participants in such a way as to describe the phenomena as a thick response.

Journaling
Actions for this activity include:
1. Planning journal work in advance is an option, such that the researcher decides on what dates and how often journaling will occur.
2. Journaling after each interview is common.
3. Journaling after peer-review sessions.
4. Journaling after a major event during the study.
5. Journal entries should be discussed with the peer reviewer so that thoughts and ideas gleaned during research activities can be connected to participants' experiences.
6. Journals can be maintained in various formats. Information for the journal can be received in the form of emails, documents, recordings, or note cards/note pads. We recommend that the researcher decide on one of these options.
7. Journaling includes dates of actions related to significant and insignificant activities of the research.
8. The journal may start on the first date a decision is made to conduct the research.
9. Journaling ends when the research is completed and all participants have been interviewed.
10. As with each of the concepts here, create a timeline with a date-line protocol for each activity before commencing the study.

Protocol
Create a timeline with planned dates for each activity related to transferability before commencing the study. This protocol with dates and activities should appear in the appendix.
Table 4. Dependability

Audit trail
Components of the audit trail include:
1. Make a list of the documents planned for audit during the research work.
2. Commission the auditor based on the plan for the study.
3. Decide audit trail review dates and times.
4. See the auditor information below.
5. Write up the audit trail results in the journal.

Journaling
Actions for this activity include:
1. Planning journal work in advance is an option, such that the researcher decides on what dates and how often journaling will occur.
2. Journaling after each interview is common.
3. Journaling after peer-review sessions.
4. Journaling after a major event during the study.
5. Journal entries should be discussed with the peer reviewer so that thoughts and ideas gleaned during research activities can be connected to participants' experiences.
6. Journals can be maintained in various formats. Information for the journal can be received in the form of emails, documents, recordings, or note cards/note pads. We recommend that the researcher decide on one of these options.
7. Journaling includes dates of actions related to significant and insignificant activities of the research.
8. The journal may start on the first date a decision is made to conduct the research.
9. Journaling ends when the research is completed and all participants have been interviewed.

Auditor
1. The auditor reviews the documents for authenticity and consistency.
2. The auditor may be a colleague or someone unfamiliar with the research, such that activities can be questioned for clarity.
3. The auditor should have some comprehension of the research process.
4. Planning in advance for the time commitment as an auditor is crucial.
5. The auditor should provide constructive feedback on processes in an honest fashion.
6. The auditor, researcher, and participants should speak the same language.
7. The auditor must be able to create and maintain audit trail documents.

Protocol
Create a timeline with planned dates for each activity related to dependability before commencing the study. This protocol with dates and activities should appear in the appendix.
Table 5. Confirmability

Triangulation
1. Determine the triangulation methods.
2. Document triangulation plans within the journal.
3. Discuss triangulation results with the peer reviewer.
4. Decide if further triangulation is needed.
5. Write up the triangulation results.

Journaling
Actions for this activity include:
1. Planning journal work in advance is an option, such that the researcher decides on what dates and how often journaling will occur.
2. Journaling after each interview is common.
3. Journaling after peer-review sessions.
4. Journaling after a major event during the study.
5. Journal entries should be discussed with the peer reviewer so that thoughts and ideas gleaned during research activities can be connected to participants' experiences.
6. Journals can be maintained in various formats. Information for the journal can be received in the form of emails, documents, recordings, or note cards/note pads. We recommend that the researcher decide on one of these options.
7. Journaling includes dates of actions related to significant and insignificant activities of the research.
8. The journal may start on the first date a decision is made to conduct the research.
9. Journaling ends when the research is completed and all participants have been interviewed.

Protocol
Create a timeline with planned dates for each activity related to confirmability before commencing the study. This protocol with dates and activities should appear in the appendix.
Copyright of Journal of Cultural Diversity is the property of Tucker Publications, Inc. and its
content may not be copied or emailed to multiple sites or posted to a listserv without the
copyright holder’s express written permission. However, users may print, download, or email
articles for individual use.