Practice Gap Analysis and Recommended Improvement with RCA²
Your Practice Gap Analysis and Recommended Improvement with RCA² will be determined by the clinical practice gap that you have identified in your practicum setting. This document will include an overview of the problem; how you determined the root cause of the gap; a policy, regulatory, and literature review; an evidence-based practice recommendation; the impact on nursing care, staffing, and finances; identification of the data that will measure progress; and a proposed outcome evaluation. The written report follows APA document guidelines and is a minimum of 10 and a maximum of 12 pages in length (excluding title page, abstract, graphics, and references). Peer-reviewed, evidence-based literature support is required and must be no older than 5 years.
I. Introduction – Describe the organization, its structure and table of organization, leadership culture and governance model, model of nursing care, and the position of the nursing department within the organization. Describe the department and/or unit, leadership structure, staffing, unit-based governance model, communication styles, and perception of teamwork. Describe the culture of safety within the organization and any public benchmarks of performance or ratings.
II. Current State – Describe how you organized your stakeholder team and their relation to the problem/event (how they know the details of the problem/event). Assess and clarify the current knowledge of the policy or practice. Identify and understand any variabilities or reasons for performance deviance (practice drift). Develop your fishbone diagram or timeline to visualize the 5-Whys of the root cause determination. Classify the event.
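The 5-Whys drill-down described above can be sketched programmatically. The event and every "why" answer below are invented placeholders for illustration only; substitute the details of your own practice gap.

```python
# Minimal 5-Whys walk-through for a hypothetical practice-gap event.
# The event and each answer below are invented examples, not real data.
event = "Patient missed a scheduled dose of an antibiotic"
whys = [
    ("Why was the dose missed?", "The nurse was not alerted at the due time."),
    ("Why was there no alert?", "The eMAR reminder was silenced on that unit."),
    ("Why was it silenced?", "Staff reported alarm fatigue from frequent alerts."),
    ("Why were alerts so frequent?", "Alert thresholds were never tuned after go-live."),
    ("Why were they never tuned?", "No owner was assigned for alert configuration."),
]

def print_five_whys(event, whys):
    """Print the causal chain; the final answer is the candidate root cause."""
    print(f"Event: {event}")
    for depth, (question, answer) in enumerate(whys, start=1):
        print(f"{depth}. {question} -> {answer}")
    print(f"Candidate root cause: {whys[-1][1]}")

print_five_whys(event, whys)
```

Note how the final answer points at a system-level cause (no assigned owner) rather than an individual's behavior; that is the level of depth the root cause determination should reach.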
III. Literature, Policy, and Regulatory Review – Describe the current state of practice as compared to the practices identified in the literature, policy, and regulatory review. How does this impact patient safety? Provide key points on why this is an opportunity for quality improvement (this is where you include discussion points on how the policy or practice needs to be changed in order to improve the outcomes of patient care, efficiency, and/or staff or patient safety).
IV. Recommended Actions/Change – Describe the plan and the team (staff) responsible for implementing the changes to the policy or practice and how they will be educated to make the changes. Include best-practice citations from the literature to support recommendations. Describe how the plan will be carried out and which data will be collected to measure change in practice. Describe how the outcomes will be evaluated and acted on from what is learned. How will this change be replicated, if needed, throughout the organization? If replication is not indicated, describe why.
a. Staffing and Financial Impact – Describe the cost-benefit analysis and include information in this section that informs the leadership team of the return on investment and financial considerations required to successfully implement the change in the policy or practice. When you discuss the cost-benefit analysis, include answers to the following questions:
i. What changes to staffing are required (if any), and/or what changes to the method of delivery of nursing care?
ii. What is the impact if these changes are not incorporated?
iii. What are the possible costs of pursuing the recommendations for change? Consider equipment, technology, and all other resources needed.
iv. How will the costs compare with the benefits (cost-benefit analysis)?
v. Is there a financial impact if the policy or practice is not changed?
vi. Is there more than one option to achieve the implementation of the changes, and if so, what are those options and how do they compare in cost?
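As a purely illustrative sketch of the arithmetic behind questions iii–v, the figures below are invented placeholders, not estimates for any real project; replace them with the costs and benefits you identify in your own setting.

```python
# Hypothetical cost-benefit figures for a practice change (all numbers invented).
one_time_costs = 12_000      # e.g., equipment and technology purchases
training_costs = 8_000       # e.g., staff education hours
annual_benefit = 30_000      # e.g., avoided adverse-event and rework costs

total_cost = one_time_costs + training_costs
net_benefit = annual_benefit - total_cost            # first-year net benefit
benefit_cost_ratio = annual_benefit / total_cost     # > 1 favors the change
roi_percent = 100 * net_benefit / total_cost         # return on investment

print(f"Total cost:         ${total_cost:,}")
print(f"First-year net:     ${net_benefit:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"ROI:                {roi_percent:.0f}%")
```

A leadership team will usually want both the ratio (question iv) and the cost of inaction (question v), so the same arithmetic can be rerun with the estimated cost of leaving the policy unchanged.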
V. Summary – Summarize the key points that you want the leadership team to consider so that they will adopt your recommendations for change of the policy or practice. This is a concise section that is your opportunity to advocate for your change recommendations.
VI. Appendices – Include the graphic depiction of the 5-Why analysis.
VII. Professional and Scholarly Writing – This will be the last page of your report, and it must include all of the evidence-based practice literature that you referenced in support of your recommendations. The reference list begins on a separate page and follows APA format.
Helpful tips and resources for writing a comparison and contrast analysis essay:
· Write in APA format.
· Provide a cover page with the title of the assignment, your name, the course name, your professor's name, and the date.
· Use evidence-based, peer-reviewed best-practice literature published within the last 5 years to support your viewpoints.
· Provide a reference page in APA format.
· Check for spelling and grammar errors before submitting the paper (read it out loud) or use the Writing Fellows for assistance.
· Website with helpful tips on writing essays: https://writingcenter.fas.harvard.edu/pages/strategies-essay-writing
· Make sure each paragraph is constructed with a topic sentence, supporting details, and a concluding sentence. Each paragraph should have a minimum of 3 sentences and no more than 5-6 sentences.
· This assignment should be 5-7 pages, excluding title page, diagram page, and references.
Peer-reviewed articles to support this assignment:
Dash, S., Shakyawar, S., Sharma, M., & Kaushik, S. (2019). Big data in healthcare: Management, analysis, and future prospects. Journal of Big Data, 6(54), 1–25. https://doi.org/10.1186/s40537-019-0217-0
Groot, W. (2021). Root cause analysis – what do we know? Maandblad voor Accountancy en Bedrijfseconomie, 95(1/2), 87–93. https://doi.org/10.5117/mab.95.60778
National Patient Safety Foundation. (2016). RCA²: Improving root cause analyses and actions to prevent harm (Version 2). Boston, MA. https://www.ihi.org/resources/Pages/Tools/RCA2-Improving-Root-Cause-Analyses-and-Actions-to-Prevent-Harm.aspx
Niraula, S. R. (2019). A review of research process, data collection and analysis. Insights in Biology and Medicine, 3, 001–006. https://www.heighpubs.org/hjbm/ibm-aid1014.php
Volden, G. H. (2019). Assessing public projects' value for money: An empirical study of the usefulness of cost–benefit analyses in decision-making. International Journal of Project Management, 37(4), 549–564. https://doi.org/10.1016/j.ijproman.2019.02.007
RUBRIC NUR 649 Assignment #4: Practice Gap Analysis and Recommended Improvement with RCA²

Criterion | EXEMPLARY | PROFICIENT | DEVELOPING | NEEDS IMPROVEMENT
Introduction (Step 1) | 15: Comprehensively describes all aspects of the organization, unit-based governance, perceptions of teamwork, culture of safety, and public benchmarks | 12: Describes all aspects of the organization, unit-based governance, perceptions of teamwork, culture of safety, and public benchmarks | 10: Describes some aspects of the organization AND/OR unit-based governance, AND/OR perceptions of teamwork, AND/OR culture of safety, AND/OR public benchmarks | 0-10: Does not provide a clear or logical description of the organization AND/OR unit-based governance, AND/OR perceptions of teamwork, AND/OR culture of safety, AND/OR public benchmarks
Current State (Step 2) | 20: Comprehensively describes the team development, assessment of current knowledge, variabilities of performance, and classification of the event. Includes a detailed graphic depiction of the problem/event of the 5-Whys root cause determination | 18: Describes the team development, assessment of current knowledge, variabilities of performance, and classification of the event. Includes a graphic depiction of the problem/event of the 5-Whys root cause determination | 15: Minimal description of the team development, assessment of current knowledge, variabilities of performance, and classification of the event. Includes a basic graphic depiction of the problem/event of the 5-Whys root cause determination | 0-15: Does not provide multiple points of description of the team development, assessment of current knowledge, variabilities of performance, and classification of the event AND/OR missing a graphic depiction of the problem/event of the 5-Whys root cause determination
Literature, Policy, and Regulatory Review (Step 3) | 20: Comprehensive comparison of the current state, impact to patient safety, and key points of opportunity for quality improvement | 18: Comparison of the current state, impact to patient safety, and key points of opportunity for quality improvement | 15: Minimal comparison of the current state, impact to patient safety, and key points of opportunity for quality improvement | 0-15: Does not provide a related AND/OR logical comparison of the current state, impact to patient safety, and key points of opportunity for quality improvement
Recommended Actions/Change (Step 4) | 20: Comprehensive description of the plan, the team who will implement the actions/change, cost/benefit analysis with staffing and financial impact, the data that will be collected and evaluated, and replication throughout the organization if applicable | 18: Description of the plan, the team who will implement the actions/change, cost/benefit analysis with staffing and financial impact, the data that will be collected and evaluated, and replication throughout the organization if applicable | 15: Minimal description of the plan, the team who will implement the actions/change, cost/benefit analysis with staffing and financial impact, the data that will be collected and evaluated, and replication throughout the organization if applicable | 0-15: Does not provide a logical or related description of the plan, the team who will implement the actions/change, cost/benefit analysis with staffing and financial impact, the data that will be collected and evaluated, and replication throughout the organization if applicable
Summary (Step 5) | 10: Comprehensive and concise summary of the key points to advocate for change recommendations | 8: Summary of the key points to advocate for change recommendations | 7: Minimal summary of the key points to advocate for change recommendations | 0-7: Does not provide a logical, related, AND/OR concise summary of key points to advocate for change recommendations
Appendices (Step 6) | 5: Comprehensive and detailed graphic depiction of 5-Why analysis | 4: Detailed graphic depiction of 5-Why analysis | 3: Minimal graphic depiction of 5-Why analysis | 0: Missing graphic depiction of 5-Why analysis
Professional and Scholarly Writing (Step 7) | 10: All articles are peer reviewed and within 5 years of publication. Fewer than 3 errors in spelling, grammar, language, APA format, readability | 7.5: At least 2 articles are peer reviewed and within 5 years of publication. 3-5 errors in spelling, grammar, language, APA format, readability | 5: Articles utilized are not peer reviewed AND/OR are not within 5 years of publication. 5-7 errors in spelling, grammar, language, APA format, readability | 0-5: Articles are not peer reviewed AND are more than 5 years from publication. More than 7 errors in spelling, grammar, language, APA format, readability
NURS-649: Practice Gap Analysis and Recommended Improvement with RCA² Poster Rubric

CRITERIA | EXCEPTIONAL | PROFICIENT | DEVELOPING | NEEDS IMPROVEMENT
CONTENT | 40 Points: Exceeds expectations & includes all main components: background, current state, lit/policy/reg review, recommended action/change, implications, summary, references | 30 Points: One of the main components is missing or underdeveloped: background, current state, lit/policy/reg review, recommended action/change, implications, summary, references | 20 Points: Two or more of the main components are missing or underdeveloped: background, current state, lit/policy/reg review, recommended action/change, implications, summary, references | 10 Points: Most or all of the main components are missing or underdeveloped: background, current state, lit/policy/reg review, recommended action/change, implications, summary, references
ORGANIZATION | 20 Points: Exceeds expectations & includes all main components: professional display, graphics, logical visual flow, organized, easy readability | 15 Points: One of the following is missing: professional display, graphics, logical visual flow, organized, easy readability | 10 Points: Two of the following are missing: professional display, graphics, logical visual flow, organized, easy readability | 5 Points: Most or all of the following are missing: professional display, graphics, logical visual flow, organized, easy readability
STYLE/TIMELINESS | 20 Points: Exceeds expectations & includes all main components in creativity, aesthetic appeal, grammar, APA, and timeliness of submission | 15 Points: One of the following criteria is missing: creativity, aesthetic appeal, grammar, APA, or timeliness of submission | 10 Points: Two of the following criteria are missing: creativity, aesthetic appeal, grammar, APA, or timeliness of submission | 5 Points: Most or all of the following criteria are missing: creativity, aesthetic appeal, grammar, APA, or timeliness of submission
VOICE/VIDEO NARRATION | 20 Points: Exceeds expectations & includes all main components: easy to understand, clear with correct pauses, professional language, identifies major headings and key points, audible with good video/audio quality | 15 Points: One of the following criteria is missing: easy to understand, clear with correct pauses, professional language, identifies major headings and key points, audible with good video/audio quality | 10 Points: Two of the following criteria are missing: easy to understand, clear with correct pauses, professional language, identifies major headings and key points, audible with good video/audio quality | 5 Points: More than three of the following criteria are missing: easy to understand, clear with correct pauses, professional language, identifies major headings and key points, audible with good video/audio quality
Root cause analysis – what do we know?
Wendy Groot
Received 14 November 2020 | Accepted 22 January 2021 | Published 10 March 2021

Root cause analysis (RCA) provides audit firms, regulators, policy makers and practitioners the opportunity to learn from past adverse events and prevent them from reoccurring in the future, leading to better audit quality. Recently approved regulations (ISQM1) make RCA mandatory for certain adverse events, making it essential to learn how to properly conduct an RCA. Building on the findings and recommendations from the RCA literature from other industries where RCA practice is more established, such as the aviation and healthcare industries, audit firms can implement an adequate and effective RCA process. Based on the RCA literature, I argue that audit firms would benefit from a systems-based approach and establishing a no-blame culture. Audit firms can use the insights from other professions to effectively establish an RCA process. Furthermore, the paper informs the audit profession on the developments regarding RCA.

Keywords: root cause analysis; audit firms; systems thinking; no blame culture
Root cause analysis (RCA) is the process of identifying the causes of adverse events (e.g. inspection findings, audit failures, restatements, litigation) and preventing these root causes from happening again in the future (e.g. Leveson et al. 2020; Percarpio et al. 2008; Wu et al. 2008). Recently adopted regulations mandate that audit firms establish RCA procedures and identify remedial action to prevent the root causes from reoccurring (ISQM1, IAASB 2020). The standard will come into effect on December 15, 2022. ISQM1 describes the main objective of RCA as understanding the underlying circumstances or attributes causing the adverse event. These attributes can be linked to prior research on audit quality through the audit quality indicators (AQI's) (DeFond and Zhang 2014; Francis 2011; Knechel et al. 2013). The AQI's may lead to relevant areas where root causes can be examined and, vice versa, identified root causes might lead to AQI's (AFM 2017; PCAOB 2014). Several audit firms review AQI's as part of the RCA, such as the number of audit hours, partner tenure, and percentage of partner time, as these aspects are known to influence audit quality and may help to identify the root cause (FRC 2016). When root causes are identified, the current literature on AQI's can help identify proper remedial action (Nolder and Sunderland 2020). For example, prior research shows that critical thinking can be improved by prompting a deliberative mindset (Griffith et al. 2015) or a systems-thinking perspective (Bucaro 2019). Literature from other professions, such as healthcare and aviation, shows that linking safety or quality indicators to the RCA also benefits system-wide learning (Chang et al. 2005; O'Connor and O'Dea 2007; Taitz et al. 2010; Wiegmann and Shappell 2001).

The purpose of this article is to provide insight in the background of RCA and RCA practice in the audit profession. It is important to gain more insight, as RCA provides audit firms the opportunity to learn from past adverse events and prevent them in the future. Regulators find that in investigating adverse events audit firms do not reach the level of depth needed to identify the root cause (AFM 2020; FRC 2016; Nolder and Sunderland 2020). Gaining more insight in the context, or system, in which the root causes emerged would provide a more in-depth understanding. Organizations form complex systems, consisting of underlying relationships between humans, technology and their surroundings (Grant et al. 2018). If the system in which the adverse events emerged is not altered, more adverse events are likely to occur (Dien et al. 2014; Labib 2015).

Furthermore, to conduct a valuable RCA, a safe environment is needed, where those involved with the adverse event feel free to speak up (Iedema et al. 2006; Wu et al. 2008). I find that audit firms should provide more clarity on how RCA findings could impact the individuals involved in the adverse event, to encourage a safe environment and no-blame culture. Those involved might be reluctant to be open about their experience when their openness could lead to disciplinary, legal or institutional actions, resulting in possibly missing essential insights.

In the second section of this paper I describe the RCA process. In the third section, I elaborate on the use of RCA in other professions, after which I reflect on the current situation of the audit profession. In the fourth section, I conclude with a summary.

Copyright Wendy Groot. This is an open access article distributed under the terms of the Creative Commons Attribution License (CC-BY-NC-ND 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. Maandblad voor Accountancy en Bedrijfseconomie 95(1/2) (2021): 87–93. https://doi.org/10.5117/mab.95.60778. https://mab-online.nl
2. The RCA process

The RCA process aims to understand why an adverse event came about (e.g. Bagian et al. 2002; Benner 1975; Percarpio et al. 2008). As noted, examples of adverse events in the audit profession are litigation, audit failures, inspection findings, or restatements. Generally, the RCA process consists of five separate steps: defining the problem; collecting the data; analyzing the data; identifying root causes; and identifying remedial action (Mahto and Kumar 2008; Percarpio et al. 2008; Rooney and Vanden Heuvel 2004). To conduct these steps, the audit firm needs to appoint an investigation team, to which I refer as the RCA team.

Using an example from audit practice, I illustrate the steps outlined above, starting with what happened – defining the problem. A regulatory inspection finds that the auditor failed to sufficiently assess and challenge the assumptions in the cash flow forecasts of X's management for the audit of the goodwill impairment. The problem definition for the ensuing RCA could be formulated as: the audit of the goodwill at X failed to meet the standards (obtaining sufficient appropriate audit evidence – ISA 200.17 and ISA 540). To investigate how this happened, the RCA team could review the working papers regarding goodwill, to gain insight on the work done or, to be more accurate, the work documented – collecting the data. Data on the planning of the engagement (planned and worked hours) provides information about the audit team's capacity and helps to contextualize the issue. Furthermore, the RCA team conducts interviews with the engagement team and involved specialists, to learn about the adverse event's circumstances and the perceptions of those involved. After the data is gathered and the interviews are conducted, the RCA team analyzes the observations – analyzing the data. There are several (qualitative and quantitative) tools that can be used to analyze the data and formulate causal factors (possible contributors to the adverse event). In the case of the goodwill impairment, identified causal factors can be, for example: high workload, no coaching on the job, insufficient training, or lack of professional skepticism. After the data are analyzed and visualized, the RCA team drills the causal factors down to the underlying roots of the adverse event – identifying root causes. The main objective of this step is to distinguish the symptoms from the actual root causes, since merely addressing the symptoms would not prevent the problem from happening again (Mahto and Kumar 2008). Let's assume that the engagement team failed to gather counter evidence on the management assumptions. The root cause of this problem could be a 'check the box' mentality triggered by the use of extensive checklists in the audit guidance. When the root causes are properly identified, measures are formulated to prevent the adverse event from reoccurring (Percarpio et al. 2008; Wu et al. 2008) – identifying remedial action. The literature on AQI's can help with formulating appropriate remedial measures. For example, measures to trigger a deliberative mindset, leading to a considerate or skeptical state (Griffith et al. 2015), or decision aids prompting a systems-thinking perspective, leading to a more holistic approach of an organization's business processes (Bucaro 2019).
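The five steps named above can be sketched as an ordered checklist. The step names come from the text; the findings recorded for each step are loose paraphrases of the goodwill-impairment illustration and are hypothetical, not a prescribed implementation.

```python
# The five RCA steps named in the text, modeled as an ordered checklist.
# The findings recorded for each step are hypothetical illustrations.
RCA_STEPS = [
    "defining the problem",
    "collecting the data",
    "analyzing the data",
    "identifying root causes",
    "identifying remedial action",
]

def run_rca(findings_by_step):
    """Walk the steps in order, flagging any step without recorded findings."""
    report = {}
    for step in RCA_STEPS:
        if step not in findings_by_step:
            raise ValueError(f"RCA incomplete: no findings for '{step}'")
        report[step] = findings_by_step[step]
    return report

# Hypothetical goodwill-impairment example loosely following the article.
report = run_rca({
    "defining the problem": "Goodwill audit failed to meet ISA 200.17/540",
    "collecting the data": "Working papers, planned vs. worked hours, interviews",
    "analyzing the data": "Causal factors: high workload, no coaching, training gaps",
    "identifying root causes": "'Check the box' mentality from extensive checklists",
    "identifying remedial action": "Prompt a deliberative, systems-thinking mindset",
})
```

The point of the sequential check is the article's own: skipping a step, or stopping at causal factors without drilling to root causes, leaves only symptoms addressed.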
3. Promising practices regarding the RCA

This section explores the RCA literature from other industries where RCA is a more established phenomenon, aviation and healthcare. This review reveals two promising practices which are relevant for the audit profession, namely systems thinking and a no-blame culture.
3.1 Systems thinking

Although it is important to gain an in-depth understanding to identify the root causes, audit firms do not seem to have reached this level (AFM 2020; FRC 2016; Nolder and Sunderland 2020). For example, audit firms commonly identify the lack of professional skepticism as a root cause (AFM 2020; FRC 2016; Nolder and Sunderland 2020). However, the lack of professional skepticism is not a root cause, as it is merely a description (a symptom) of the auditor's behavior (Nolder and Sunderland 2020). To identify the root cause, the RCA team has to understand the context in which the auditor lacked professional skepticism, to explain why this occurred. This understanding requires systems thinking (Dien et al. 2014; Leveson 2020). Organizations, such as audit firms, form a complex system, consisting of underlying relationships between humans, technology and their surroundings (Grant et al. 2018). When the systems in which the adverse events emerged continue to exist, this setting could be expected to cause more adverse events (Dien et al. 2014; Labib 2015). It is, therefore, essential that the RCA focusses on the system in which the adverse event has occurred (Besnard and Hollnagel 2014; Leveson et al. 2020).
3.1.1 System-based RCA tools

Prior literature from other professions finds that the tools used in RCA to analyze the data and identify root causes are often based on linear models (Besnard and Hollnagel 2014; Peerally et al. 2016). Linear models imply a causal chain of events, leading to the root cause which induced the adverse event. Such a chain of events leaves no room for the impact of interdependencies between technical, human, and organizational components. The complex reality does not fit well with linear models (Leveson et al. 2020), as they simplify reality to an extent that they might paint an incomplete, or even untrue, picture. The linear narrative leads to a reductionist view of reality, with the risk of focusing on the apparent issues and not addressing flaws in the system as a whole (Dien et al. 2004; Peerally et al. 2016).

Furthermore, the RCA practice tends to emphasize the search for 'the' root cause (Wu et al. 2008). This tendency is simply implied by the singular form of the name root cause analysis, but it is also facilitated by some RCA tools (Peerally et al. 2016). Both the simplistic perspective of reality and 'the one root cause' lead to a view in which the system is not adequately addressed. When the systems in which the adverse events emerged continue to exist, it could be expected to cause more adverse events (Labib 2015). It is, therefore, essential that the RCA focusses on the broader system in which the adverse event has occurred (Besnard and Hollnagel 2014; Leveson et al. 2020).

The ISQM1 does emphasize the non-linear nature of the RCA process, but does not provide further guidance (ISQM1, IAASB 2020). The use of system-based RCA tools could help with gaining an in-depth understanding of the system needed to properly identify the root causes. However, (transparency) reports show that tools such as the 5-Why method and the cause-and-effect diagram (or the fishbone) are used most often in the audit profession (CAANZ 2019; FRC 2016; NBA 2019; PCAOB 2014). Although these tools might prove helpful in analyzing the data and identifying the root causes, they carry the risk of creating a linear narrative and a reductionist view, leading to incorrect root causes or insufficient levels of depth (Dien et al. 2004; Muir et al. 2016; PCAOB 2014; Peerally et al. 2016). The RCA practice in the audit profession would benefit from system-based RCA tools, which are emerging in other professions.
3.1.2 Collaborative systems

The audit profession does not exist in a vacuum, but functions in an interdependent environment. The profession consists of audit firms, global networks, clients (including audit committees, several layers of management and/or internal audit), regulators, professional bodies for auditors and educational institutions. Each of these components is a system unto itself, with (a certain) managerial and operational independence – the components are collaborative systems (Maier 1998).

The importance of addressing the collaborative systems in RCA is acknowledged in healthcare (Leveson 2020) and illustrated by the RCA practices in the aviation industry. Aviation consists of airlines, airports, the Federal Aviation Administration, aircraft manufacturers, and so on (Maier 1998). When adverse events occur in aviation, all the components are investigated and the entire industry is informed about these investigations. This method allows the aviation industry to make changes in the system as a whole, instead of an isolated component of the industry (Leveson et al. 2020). This broad systems approach contributes significantly to the industry's low accident rates (Leveson 2011). If the collaborative systems are not considered, recommendations might be aimed at the wrong level of the system (Wu et al. 2008).
The Commission examining the future of the audit profession, installed by the Ministry of Finance in the Netherlands, emphasizes the significance of the broader system in which the audit firm operates, to acquire high quality audits (CTA 2020). Although the importance of collaborative systems is acknowledged in the Dutch audit profession, it is not fully adopted in RCA practices. The regulator finds that audit firms have developed from focusing on identifying root causes at individual and engagement team levels (AFM 2017), to including the organization-wide impact (AFM 2019; 2020). However, this analysis does not include the level of collaborative systems, nor is there a way of informing the entire audit profession about RCA findings for system-wide learning. For both the inclusion of the collaborative systems in the RCA as well as the reporting on the findings of the RCA, a common vocabulary is needed. The AQI's could help establish this common vocabulary and help with learning on the level of the audit profession (system-wide instead of organization-wide), similarly as, for example, in aviation (O'Connor and O'Dea 2007; Wiegmann and Shappell 2001) and healthcare (Chang et al. 2005; Taitz et al. 2010).
3.2 Establishing a no-blame culture

The RCA's focus on systems as a whole also implies that the investigation does not focus on the individuals involved (Dien et al. 2004; Macrae 2014; Wu et al. 2008). The investigations are to be conducted without blaming the individuals involved, in order to avoid a blame culture and optimize learning (Bik 2019; Iedema et al. 2006). To effectively conduct an RCA, the involved individuals need to share their experiences uninhibitedly. Blaming the individuals risks creating an unsafe learning environment and creates difficulties for speaking up (Andiola et al. 2020; Gold et al. 2014; Kadous et al. 2019; Nelson et al. 2016). Also, when the RCA targets the individuals rather than the system in which the adverse events occurred, deficiencies in the system are not addressed (e.g. Besnard and Hollnagel 2014; Rasmussen 1997).

The RCA practice, however, does not always reflect this no-blame culture. First, the investigation is conducted after an adverse event has occurred, leading to hindsight bias, risking the investigation teams being overly critical of those involved (Fischhoff 1975). This effect can be reinforced by using local teams, and not including RCA experts, to conduct RCA (Peerally et al. 2016). Second, some RCA tools seem to encourage blame seeking. For example, a tool might entail a checklist which explicitly asks about the individual's sloppy work habits (Livingston et al. 2001). Third, RCA can have consequences for those involved in the form of disciplinary, legal or institutional actions, when that individual bears any fault (Dempsey 2010; Peerally et al. 2016). Prior research, discussed next, provides some measures on how to overcome these three challenges, to support a no-blame culture when conducting RCA.
Overly critical: To prevent the RCA team from being overly critical to those involved in the adverse event, it is important that the RCA team is multidisciplinary, skilled, and properly trained (Macrae 2014; Peerally et al. 2016). In aviation, the safety investigators usually have extensive operational experience, as this experience is seen as essential for the RCA (Macrae 2014). Also, investigations into accidents in aviation have been formally assigned to an independent accident investigation body (Dempsey 2010; Sweeney 1950). This investigation team is likely to be less susceptible to interpersonal relations within the organization or negative hierarchical influences (Percarpio et al. 2008). Also, the investigation team is specifically trained to conduct RCA, increasing the expertise of the RCA team.

Blame-seeking RCA tools: The use of system-based RCA tools helps prevent blaming the individuals, as they focus on improving the system instead of focusing on human error (Peerally et al. 2016), as elaborated in section 3.1.

Consequences for those involved in adverse events: Besides assuring an independent expert investigation team, an investigation body can also provide clarity on the distribution of responsibilities between the bodies that investigate the adverse events and the bodies that impose disciplinary, legal or institutional actions (Dempsey 2010; Kooijmans et al. 2014; Peerally et al. 2016). The sole objective of the body's investigation is to prevent future events from happening, and any proceedings regarding blame are to be conducted separately (Dempsey 2010; Macrae 2014). The same applies to the Dutch Safety Board, which also studies accidents other than in aviation, such as railway, chemical and military incidents (Kooijmans et al. 2014).
Regulatory and transparency reports show that RCAs in
the audit profession are conducted by internal RCA teams,
in most cases (partly) independent from the audit practice
(CAANZ 2019; FRC 2020; NBA 2019). Although often
organized as an independent team, the RCA is conduc-
ted internally within the audit firm, possibly leading to a
higher susceptibility to negative effects from interperso-
nal relationships or hierarchy. Furthermore, although the
AFM reports progress in recent years regarding an open
error climate, they also conclude that creating an open
error climate is still challenging for audit firms, as there
is a lack of clarity on the possible consequences of the
RCA on the individuals involved (AFM 2020). To further
improve the open error climate, the AFM suggests distin-
guishing between permissible and inadmissible mistakes.
Although such a distinction would make individual im-
pact more explicit, it does not clarify the allocation of res-
ponsibilities regarding legal, disciplinary or institutional
actions. Confusion regarding the distribution of responsi-
bilities might lead to conducting RCA to allocate blame
(Dempsey 2010; Peerally et al. 2016).
A possible solution for the independence of RCA teams
and the confusion regarding personal consequences is to
place the responsibility for proceedings regarding blame
elsewhere, outside the RCA team, in line with aviation
and the Dutch Safety Board, as proposed earlier by a study
from TNO, commissioned by the NBA (TNO 2014). The
audit profession at large (e.g. the practitioners, regulators,
academics) needs to consider whether enough measures
are taken to assure the autonomous functioning of the
RCA team and if not, how to develop those measures.
RCA aims to answer the questions of why an adverse
event occurred and how to prevent recurrence
(e.g. Leveson et al. 2020; Percarpio et al. 2008; Wu et
al. 2008). RCA also helps in determining the drivers
of audit quality (ISQM1, IAASB 2020), thereby
strengthening the AQIs. RCA is conducted
by defining the problem; collecting the data; analyzing
the data; identifying root causes; and identifying remedial
actions (Mahto and Kumar 2008; NBA 2019; Rooney and
Vanden Heuvel 2004).
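Purely as a hypothetical illustration (none of the function names or example data below come from the cited sources), the five steps can be sketched as a minimal pipeline:

```python
# Hypothetical sketch of the five RCA steps described above; all names
# and example data are invented for illustration, not from the sources.

def define_problem(event):
    # 1. Define the problem.
    return {"event": event}

def collect_data(problem, evidence):
    # 2. Collect the data (observations gathered by the RCA team).
    return {**problem, "evidence": evidence}

def analyze_data(record):
    # 3. Analyze the data: group observations into fishbone-style
    #    categories (cf. note 9).
    findings = {}
    for category, observation in record["evidence"]:
        findings.setdefault(category, []).append(observation)
    return findings

def identify_root_causes(findings):
    # 4. Identify root causes. In practice this is expert judgment
    #    (e.g. via the 5-Whys); here we naively take the last, deepest
    #    observation in each category.
    return {cat: obs[-1] for cat, obs in findings.items()}

def identify_remedial_actions(root_causes):
    # 5. Identify remedial actions, one per root cause.
    return [f"address: {cause}" for cause in root_causes.values()]

evidence = [
    ("procedures", "checklist treated as box-ticking"),
    ("people", "high workload on engagement team"),
    ("people", "understaffed engagement team"),
]
record = collect_data(define_problem("insufficient challenge of a goodwill impairment"), evidence)
actions = identify_remedial_actions(identify_root_causes(analyze_data(record)))
print(actions)
```

The point is only the shape of the process: each step consumes the previous step's output, so every remedial action is traceable back to an identified root cause.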
Literature from other professions with more established
RCA practices argues for the importance of a systems
approach and the need to comprehend the complex
system in which adverse events occur in order to acquire the
level of depth needed to understand the root cause and
to be able to properly identify remedial actions (e.g. Be-
snard and Hollnagel 2014; Leveson 2004; Peerally 2016;
Rasmussen 1997). RCA tools tend to be based on linear
models, which can foster a reductionist view: reality is
more complex than the model suggests, and systemic failures remain
unaddressed. A linear narrative can also encourage the
search for ‘the one fundamental root cause’ (Wu et al.
2008). Since RCA in audit practice often does not ac-
quire the level of depth needed (AFM 2020; Nolder and
Sunderland 2020), the use of system-based RCA tools
might prove beneficial. Furthermore, within the audit
profession, RCA focuses on a single organization, ra-
ther than the collaborative system (e.g. global networks,
clients and regulators). Without addressing this collabo-
rative system, system-wide learning for the auditing
profession is not established. Moreover, the root cause
and remedial action might be insufficient or aimed at the
wrong level of the system (Wu et al. 2008).
Avoiding blame is important for the RCA process, as
it prompts those involved to speak up during the RCA,
optimizes learning, and creates a safe learning
environment (Iedema et al. 2006). However, several factors
promote a blame environment: hindsight bias, leading
to overly critical judgments of those involved (Fischhoff
1975); possible disciplinary, legal or institutional actions
(Dempsey 2010; Peerally et al. 2016); and blame-seeking
tools (Livingston et al. 2001). In this regard it
is important to establish independent, multi-disciplinary
expert teams, with a clear distribution of responsibilities
regarding disciplinary, legal or institutional actions. The
RCA practice in the audit profession is internally orga-
nized within each audit firm, leading to risks regarding
its independence. Furthermore, there is a lack of clarity
regarding the possible concurrence of the RCA with
disciplinary, legal and institutional actions. The audit
profession needs to further investigate how the possible
independence issue, and the confusion regarding perso-
nal consequences for those involved with the adverse
event, can be mitigated.
W. Groot MSc RA is manager in the PwC National Office and an external PhD candidate at the Vrije Universiteit
Amsterdam, School of Business and Economics – Department of Accounting.
I would like to thank Chris Knoops and the three anonymous reviewers for their comments and feedback. Furthermore,
I want to thank Anna Gold, Herman van Brenk, Dominic Detzen, Arnold Wright, Tjibbe Bosman, Frank Duijm, Arjan
Brouwer, Janneke Timmers, Harm Jan Kruisman and Linsey Groot for their important insights. Your commentaries
have improved this paper greatly.
Notes
1. The IAASB accepted an International Standard on Quality Management for all firms providing financial audits or reviews, or other assurance
engagements. The standard requires audit firms to conduct RCA when deficiencies are identified (IAASB 2020). The audit firms are responsible
for establishing procedures regarding the nature, timing and extent of the RCA process. Furthermore, the firms need to evaluate the severity
and pervasiveness of the deficiencies (art. 41, ISQM1, IAASB 2020), allowing for different types of investigations. When the root causes are
identified, the firms must take remedial actions to prevent these from reoccurring (art. 42, ISQM1, IAASB 2020). The IAASB also suggests that
audit firms conduct RCA of good practices (art. A169, ISQM1, IAASB 2020).
2. Such as the appropriate involvement of the partner (art. A167, ISQM1, IAASB 2020), or sufficient supervision and review of conducted work
(art. A169, ISQM1, IAASB 2020).
3. The insufficient challenging of management in complex and forward-looking estimates, such as goodwill impairments, is used as an
example as it is a regularly recurring finding in the audit quality inspection reports (July 2020) of the FRC.
4. See Livingston’s et al. (2001) book on accident investigation techniques for a comprehensive overview of different methods.
5. Adverse events often have multiple root causes, as the adverse events emerge in systems with interdependent components (Peerally et al. 2016).
To illustrate the complexity: it might be that in the case of the goodwill impairment there was not only the aforementioned 'check-the-box'
mentality, but also a high workload, which led to the engagement team's reluctance to gather counter-evidence on management's assumptions.
Subsequently, the root cause of high workload could be due to a lack of time management skills of the engagement manager or an understaffed
engagement team because the audit firm has difficulties attracting sufficient suitable staff members.
6. A reductionist view means that complex entities are reduced to more fundamental and simpler entities or terms.
7. Peerally et al. (2016) argue that the linear narrative is exacerbated by RCA techniques such as timelines and the 5 Whys, as they tend to encou-
rage a reductionist view.
8. From 2014 up to and including 2018, nine firms provided statutory audits for PIEs: Deloitte, EY, KPMG, PwC (the Big 4) and Accon avm,
Baker Tilly, BDO, Grant Thornton and Mazars (the Next 5). Accon avm, Baker Tilly, and Grant Thornton handed in their permits to conduct
statutory audits of PIEs in 2019. At the moment of analyzing the transparency reports (September 2020) the 2020 reports were not yet available.
I reviewed the transparency reports quite extensively, however, the paper has developed in such a way that the results of the review do not fit
the current scope of this paper. The discussion of the transparency reports in this paper is, therefore, limited. For a comprehensive discussion of
the transparency reports of the Dutch audit firms, see Dick de Waard and Peter Brouwer’s paper in this issue of MAB. De Waard and Brouwer
study to what extent the transparency reports give insight in the audit firm’s audit quality.
Wendy Groot: Root cause analysis – what do we know?
9. Once the RCA team has formed an idea of the most likely causes of the adverse event, the 5-Whys method can be used to drill down
to the root cause (Muir et al. 2016). The cause-and-effect diagram is visualized as a fishbone, in which the bones form categories
(e.g., procedures, people, and culture) and the possible causes are lined up along these bones (Doggett 2005).
10. Examples of system-based RCA tools are the System Theoretic Accident Model and Processes (Leveson 2004) and the Functional Resonance
Analysis Method (Hollnagel 2012), as used in several industries (Dutch Safety Board 2020 and Patriarca et al. 2017); or Causal Analysis based
on Systems Theory (Leveson 2011), as demonstrated for use in healthcare (Leveson 2020).
11. This specific example regards the Systematic Accident Cause Analysis – developed for incidents on offshore installations (Livingston et al. 2001).
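The 5-Whys drill-down described in note 9 can be sketched as a simple loop over a hypothesized cause chain. The chain below is invented for illustration (it echoes the goodwill-impairment example of note 5) and is not a prescribed method:

```python
# Illustrative 5-Whys drill-down: repeatedly ask "why?" until no
# deeper cause is offered. The cause chain is invented for illustration.

cause_of = {
    "audit deficiency on goodwill impairment": "insufficient counter-evidence gathered",
    "insufficient counter-evidence gathered": "high workload on engagement team",
    "high workload on engagement team": "understaffed engagement team",
    "understaffed engagement team": "difficulty attracting suitable staff",
}

def five_whys(event, max_whys=5):
    # Follow the chain of hypothesized causes at most `max_whys` steps.
    cause = event
    for _ in range(max_whys):
        deeper = cause_of.get(cause)
        if deeper is None:
            break
        cause = deeper
    return cause

root = five_whys("audit deficiency on goodwill impairment")
print(root)
```

Note how a single linear chain like this embodies the reductionist risk discussed in note 7: real adverse events typically have several interdependent causes, so a fishbone with multiple such chains is closer to practice.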
- AFM (Autoriteit Financiële Markten) (2017) Quality of PIE audit firms. https://www.afm.nl/en/nieuws/2017/juni/kwaliteitslag-oob
- AFM (Autoriteit Financiële Markten) (2019) Kwaliteit overige OOB-accountantsorganisaties onderzocht. https://www.afm.nl/nl-nl/nieuws/2019/nov/rapport-kwaliteit-overige-oob-accountants
- AFM (Autoriteit Financiële Markten) (2020) De kwaliteitsslag bij de Big 4-accountantsorganisaties onderzocht. Uitkomsten van het onderzoek naar de kwaliteitsgerichte cultuur, de kwaliteitscirkel en kwaliteitswaarborgen. https://www.afm.nl/nl-nl/nieuws/2020/juli/rapport-kwaliteitsslag-big4
- Andiola LM, Downey DH, Westermann KD (2020) Examining climate and culture in audit firms: Insights, practice implications, and future research directions. Auditing: A Journal of Practice & Theory 39(4): 1–29. https://doi.org/10.2308/AJPT-19-107
- Bagian JP, Gosbee J, Lee CZ, Williams L, McKnight SD, Mannos DM (2002) The Veterans Affairs root cause analysis system in action. Journal on Quality Improvement 28(10): 531–545. https://doi.org/10.1016/S1070-3241(02)28057-8
- Benner L (1975) Accident investigations: Multilinear events sequencing methods. Journal of Safety Research 7(2): 67–73.
- Besnard D, Hollnagel E (2014) I want to believe: Some myths about the management of industrial safety. Cognition, Technology and Work 16(1): 13–23. https://doi.org/10.1007/s10111-012-0237-4
- Bik O (2019) Ten considerations for conducting root cause analysis in auditing – Practice note. Foundation for Auditing Research. https://foundationforauditingresearch.org/files/papers/ten-considerations-for-conducting-root-cause-analysis-in-auditing_1558354850_1f9f1838
- Bucaro A (2019) Enhancing auditors' critical thinking in audits of complex estimates. Accounting, Organizations and Society 73: 35–49. https://doi.org/10.1016/j.aos.2018.06.002
- CAANZ (Chartered Accountants Australia and New Zealand) (2019) Improving audit quality using root cause analysis. https://www.charteredaccountantsanz.com/tools-and-resources/client-service-essentials/audit-and-assurance/external-auditors-guide-to-improving-audit-quality-using-root-cause-analysis
- Chang A, Schyve PM, Croteau RJ, O'Leary DS, Loeb JM (2005) The JCAHO patient safety event taxonomy: A standardized terminology and classification schema for near misses and adverse events. International Journal for Quality in Health Care 17(2): 95–105. https://doi.org/10.1093/intqhc/mzi021
- CTA (Commissie toekomst accountancysector) (2020) Vertrouwen op controle. https://www.rijksoverheid.nl/documenten/kamerstukken/2020/01/30/vertrouwen-op-controle-eindrapport-van-de-commissie-toekomst-accountancysector
- DeFond M, Zhang J (2014) A review of archival auditing research. Journal of Accounting and Economics 58(2–3): 275–326. https://doi.org/10.1016/j.jacceco.2014.09.002
- De Waard DA, Brouwer PGJ (2021) Vier jaar transparantie geanalyseerd: Exploratief onderzoek naar transparantieverslagen van accountantsorganisaties. Maandblad voor Accountancy en Bedrijfseconomie 95(1–2): 47–55. https://doi.org/10.5117/mab.95.60954
- Dempsey PS (2010) Independence of aviation safety investigation authorities: Keeping the foxes from the henhouse. Journal of Air Law and Commerce 75: 223–284. https://scholar.smu.edu/jalc/vol75/iss1/10
- Dien Y, Llory M, Montmayeul R (2004) Organisational accidents investigation methodology and lessons learned. Journal of Hazardous Materials 111(1–3): 147–153. https://doi.org/10.1016/j.jhazmat.2004.02.021
- Doggett A (2005) Root cause analysis: A framework for tool selection. Quality Management Journal 12(4): 34–45. https://doi.org/10.1080/10686967.2005.11919269
- Dutch Safety Board (Onderzoeksraad voor Veiligheid) (2020) Verborgen gebreken? Lessen uit de instorting van het dak van het AZ-stadion. https://www.onderzoeksraad.nl/nl/page/14903/verborgen-gebreken-lessen-uit-de-instorting-van-het-dak-van-het-az
- Fischhoff B (1975) Hindsight is not equal to foresight: The effect of outcome knowledge on judgment under uncertainty. Journal of Experimental Psychology: Human Perception and Performance 1(3): 288–299. https://doi.org/10.1037/0096-1523.1.3.288
- Francis JR (2011) A framework for understanding and researching audit quality. Auditing: A Journal of Practice & Theory 30(2): 125–152. https://doi.org/10.2308/ajpt-50006
- FRC (Financial Reporting Council) (2016) Audit quality thematic review – Root cause analysis: A review of the six largest UK audit firms. https://www.frc.org.uk/getattachment/dc0bba94-d4cd-447c-b964-bad1260950ec/Root-Cause-Analysis-audit-quality-thematic-report-Sept-2016
- FRC (Financial Reporting Council) (2020) Results of FRC audit inspections. https://www.frc.org.uk/news/july-2020/results-of-frc-audit-inspections
- Gold A, Gronewold U, Salterio SE (2014) Error management in audit firms: Error climate, type, and originator. The Accounting Review 89(1): 303–330. https://doi.org/10.2308/accr-50592
- Grant E, Salmon PM, Stevens NJ, Goode N, Read GJ (2018) Back to the future: What do accident causation models tell us about accident prediction? Safety Science 104: 99–109. https://doi.org/10.1016/j.ssci.2017.12.018
- Griffith EE, Hammersley JS, Kadous K, Young D (2015) Auditor mindsets and audits of complex estimates. Journal of Accounting Research 53(1): 49–77. https://doi.org/10.1111/1475-679X.12066
Maandblad voor Accountancy en Bedrijfseconomie 95(1/2): 87–93
https://mab-online.nl
- Hollnagel E (2012) FRAM: The functional resonance analysis method: Modeling complex socio-technical systems. Farnham, UK: Ashgate Publishing.
- IAASB (International Auditing and Assurance Standards Board) (2020) International standard on quality management 1 – approved by IAASB (Vol. 1). https://www.ifac.org/system/files/meetings/files/20200914-IAASB-Agenda-Item-2-A.4-ISQM-1-Final-Approved-Draft-Updated-Marked-From-Agenda-Item-2-A.2-FINAL
- Iedema RAM, Jorm C, Long D, Braithwaite J, Travaglia J, Westbrook M (2006) Turning the medical gaze in upon itself: Root cause analysis and the investigation of clinical error. Social Science and Medicine 62(7): 1605–1615. https://doi.org/10.1016/j.socscimed.2005.08.049
- Kadous K, Proell CA, Rich J, Zhou YP (2019) It goes without saying: The effects of intrinsic motivational orientation, leadership emphasis of intrinsic goals, and audit issue ambiguity on speaking up. Contemporary Accounting Research 36(4): 2113–2141. https://doi.org/10.1111/1911-3846.12500
- Knechel RW, Krishnan GV, Pevzner M, Shefchik LB, Velury UK (2013) Audit quality: Insights from the academic literature. Auditing: A Journal of Practice & Theory 32(Supplement 1): 385–421. https://doi.org/10.2308/ajpt-50350
- Kooijmans T, Tjong Tjin Tai TFE, De Waard BWN, Hendricksen SEJ, Jansen R (2014) Het gebruik van onderzoeksinformatie en rapporten van de Onderzoeksraad voor Veiligheid in juridische procedures. https://www.onderzoeksraad.nl/nl/media/inline/2019/7/2/rapport_2014_definitief
- Labib A (2015) Learning (and unlearning) from failures: 30 years on from Bhopal to Fukushima an analysis through reliability engineering techniques. Process Safety and Environmental Protection 97: 80–90. https://doi.org/10.1016/j.psep.2015.03.008
- Leveson N (2004) A new accident model for engineering safer systems. Safety Science 42(4): 237–270. https://doi.org/10.1016/S0925-7535(03)00047-X
- Leveson N (2011) Engineering a safer world: Systems thinking applied to safety. https://doi.org/10.7551/mitpress/8179.001.0001
- Leveson N, Samost A, Dekker S, Finkelstein S, Raman J (2020) A systems approach to analyzing and preventing hospital adverse events. Journal of Patient Safety 16(2): 162–167. https://doi.org/10.1097/PTS.0000000000000263
- Livingston AD, Jackson G, Priestley K (2001) Root causes analysis: Literature review. Contract Research Report 325/2001. HSE Books. https://www.hse.gov.uk/research/crr_pdf/2001/crr01325
- Macrae C (2014) Close calls: Managing risk and resilience in airline flight safety. Palgrave Macmillan, UK, 1–24. https://doi.org/10.1057/9781137376121_1
- Mahto D, Kumar A (2008) Application of root cause analysis in improvement of product quality and productivity. Journal of Industrial Engineering and Management 1(2): 16–53. https://doi.org/10.3926/jiem.2008.v1n2.p16-53
- Maier MW (1998) Architecting principles for systems-of-systems. Systems Engineering 1(4): 267–284. https://doi.org/10.1002/(SICI)1520-6858(1998)1:4%3C267::AID-SYS3%3E3.0.CO;2-D
- Muir I, Cano M, Terry A (2016) A tool application model for root cause analysis. In: Baccarani C, Martin J (Eds) 19th Toulon-Verona International Conference Excellence in Services, University of Huelva (Spain), 5 and 6 September 2016, 341–352. http://sites.les.univr.it/eisic/wp-content/uploads/2018/07/Muir-Cano-Terry
- NBA (Koninklijke Nederlandse Beroepsorganisatie van Accountants) (2019) Rapport oorzakenanalyse OOB-accountantsorganisaties. https://www.nba.nl/projecten/in-het-publiek-belang/uitkomsten-oorzakenanalyse/
- Nelson MW, Proell CA, Randel AE (2016) Team-oriented leadership and auditors' willingness to raise audit issues. The Accounting Review 91(6): 1781–1805. https://doi.org/10.2308/accr-51399
- Nolder CJ, Sunderland D (2020) Mapping firms' root cause analyses to audit behavioral research – A way forward. Working paper, Suffolk University.
- O'Connor P, O'Dea A (2007) The U.S. Navy's aviation safety program: A critical review. International Journal of Applied Aviation Studies 7(2): 312–328. http://hdl.handle.net/10379/2581
- Patriarca R, Di Gravio G, Costantino F (2017) A Monte Carlo evolution of the Functional Resonance Analysis Method (FRAM) to assess performance variability in complex systems. Safety Science 91: 49–60. https://doi.org/10.1016/j.ssci.2016.07.016
- PCAOB (Public Company Accounting Oversight Board) (2014) Standing Advisory Group Meeting: Initiatives to improve audit quality – Root cause analysis, audit quality indicators, and quality control standards, June 24–25, 2014. https://pcaobus.org/news-events/events/event-details/pcaob-standing-advisory-group-meeting_772
- Peerally MF, Carr S, Waring J, Dixon-Woods M (2016) The problem with root cause analysis. BMJ Quality and Safety 26(5): 417–422. https://doi.org/10.1136/bmjqs-2016-005511
- Percarpio KB, Watts BV, Weeks WB (2008) The effectiveness of root cause analysis: What does the literature tell us? Joint Commission Journal on Quality and Patient Safety 34(7): 391–398. https://doi.org/10.1016/S1553-7250(08)34049-5
- Rasmussen J (1997) Risk management in a dynamic society: A modelling problem. Safety Science 27(2–3): 183–213. https://doi.org/10.1016/S0925-7535(97)00052-0
- Rooney JJ, Vanden Heuvel LN (2004) Root cause analysis for beginners. Quality Progress 37(7): 45–53. https://asq.org/quality-progress/articles/root-cause-analysis-for-beginners?id=0228b91456514ba490c89979b577abb4
- Sweeney E (1950) Safety regulations and accident investigation: Jurisdictional conflicts of C.A.B. and C.A.A. – Part II. Journal of Air Law and Commerce 17(3): 269–282. https://scholar.smu.edu/jalc/vol17/iss3/3
- Taitz J, Genn K, Brooks V, Ross D, Ryan K, Shumack B, Burrell T, Kennedy P (2010) System-wide learning from root cause analysis: A report from the New South Wales Root Cause Analysis Review Committee. Quality and Safety in Health Care 19(6): 1–6. https://doi.org/10.1136/qshc.2008.032144
- TNO (2014) Een lerende sector: financiële onderzoeksraad? https://www.accountant.nl/globalassets/accountant.nl/in-het-publiek-belang/tno_een_lerende_sector_okt2014
- Wiegmann D, Shappell S (2001) A human error analysis of commercial aviation accidents using the Human Factors Analysis and Classification System (HFACS). Federal Aviation Administration, 16 pp. https://doi.org/10.4324/9781315092898-5
- Wu AW, Lipshutz AKM, Pronovost PJ (2008) Effectiveness and efficiency of root cause analysis in medicine. JAMA – Journal of the American Medical Association 299(6): 685–687. https://doi.org/10.1001/jama.299.6.685
International Journal of Project Management 37 (2019) 549–564
Assessing public projects’ value for money: An empirical study
of the usefulness of cost–benefit analyses in decision-making
Gro Holst Volden
Concept Research Program, Norwegian University of Science and Technology, 7491 Trondheim, Norway
Received 3 June 2018; received in revised form 3 February 2019; accepted 4 February 2019
Available online 23 March 2019
Abstract
Value for money, as measured by cost–benefit analyses (CBAs), is a crucial part of the business case for major public investment projects.
However, the literature points to a range of challenges and weaknesses in CBAs that may cause their degree of usefulness in decision-making to be
limited. The paper presents an empirical study of CBA practice in Norway, a country that has made considerable efforts to promote quality and
accountability in CBAs of public projects. The research method is qualitative, based on a case study of 58 projects. The results indicate that the
studied CBAs are largely of acceptable quality and heeded by decision-makers. Appraisal optimism seems to have been reduced by the
introduction of external quality assurance of CBAs. However, there is a need for a more consistent assessment of the non-monetized benefits,
and for distinguishing them from other decision perspectives such as the achievement of political goals. The paper offers a set of practical
recommendations to increase CBA usefulness further.
© 2019 Elsevier Ltd, APM and IPMA. All rights reserved.
Keywords: Project value; Project appraisal and evaluation; Cost–benefit analysis; Business case
1. Introduction
1.1. Projects ought to be good value for money
The project management community has increasingly
shifted its attention beyond the ‘iron triangle’ of cost, time,
and quality, to take a wider, strategic view of projects. Projects
are implemented to deliver benefits and create value for users,
the parent organization, and/or society at large (Morris, 2013;
Samset and Volden, 2012). Accordingly, project governance
has become an important issue in project research and practice.
It refers to the processes, systems, and regulations that the
financing party must have in place to ensure that relevant and
viable projects are chosen and delivered efficiently (Müller,
2009; Volden and Samset, 2017b).
Williams and Samset (2010) refer to the choice of project
concept as the most important decision that project owners
E-mail address: gro.holst.volden@ntnu.no.
https://doi.org/10.1016/j.ijproman.2019.02.007
0263-7863/00 © 2019 Elsevier Ltd, APM and IPMA. All rights reserved.
make. The choice of concept ought to be approved on the basis
of a business case, in which the expected benefits and strategic
outcomes are described (Jenner, 2015). The business case
provides a rationale for the preferred solution, and is therefore
crucial for future benefits and cost management (Musawir et al.,
2017; Serra and Kunc, 2015).
This paper focuses on the cost–benefit analysis (CBA) which
is often a crucial part of the business case. The CBA concerns the
relationship between resources invested and the benefits that can
be achieved and is a tool to determine the project’s value for
money (i.e. whether it is profitable for society). Specifically, the
aim of a CBA is to compute the net present value (NPV) of a
project or various project alternatives, as defined by Eq. (1):
NPV = \sum_{t=0}^{N} \frac{B_t - C_t}{(1 + i)^t} \qquad (1)

where B_t is social benefit, C_t is social cost, i is the social
discount rate, t is time, and N is the period of analysis. It can be used to rank
550 G.H. Volden / International Journal of Project Management 37 (2019) 549–564
projects unambiguously (Boardman et al., 2011). The decision
rule is to adopt a project if the NPV is positive, or in the case of
several alternatives, to select the project with the highest NPV.
Alternative criteria such as the benefit–cost ratio or internal rate
of return can be applied too, but the NPV is normally
recommended as a metric.
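The NPV computation and decision rule described above can be sketched in a few lines. The following minimal Python illustration uses invented benefit and cost streams and an illustrative 4% social discount rate:

```python
# Sketch of Eq. (1): NPV = sum over t of (B_t - C_t) / (1 + i)^t.
# All project figures below are invented for illustration only.

def npv(benefits, costs, i):
    """Net present value; benefits[t] and costs[t] for t = 0..N."""
    return sum((b - c) / (1 + i) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

# Two hypothetical alternatives over a four-year horizon, 4% discount rate.
alt_a = npv(benefits=[0, 50, 50, 50], costs=[120, 5, 5, 5], i=0.04)
alt_b = npv(benefits=[0, 30, 30, 30], costs=[60, 2, 2, 2], i=0.04)

# Decision rule: adopt only if NPV > 0; among alternatives, pick the highest.
candidates = {"A": alt_a, "B": alt_b}
best = max(candidates, key=candidates.get)
```

In this constructed example both alternatives have a positive NPV, so the decision rule selects the one with the higher value.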
The CBA is particularly relevant for state-funded projects,
as they are regarded in an overall national perspective, rather
than the perspective of particular agencies, regions, or
stakeholder groups. The benefits are interpreted in terms of
the affected people’s willingness to pay for them, and the costs
are defined by the value of the alternative uses of the resources
(Boardman et al., 2011).
The aim of the CBA is to be comprehensive in terms of the
coverage of a project’s impacts (Sager, 2013), and to monetize
them as far as possible. Various techniques have been developed
to elicit the willingness to pay (WTP) for non-market goods.
However, remaining impacts that cannot be monetized must be
described and presented in other ways, to enable decisions to be
made as to whether they are likely to improve or depreciate the
NPV. In some cases, if analysts are unable or unwilling to attribute
a monetary value to key benefits, they may be forced to apply
cost-effectiveness analyses. In such cases, the intention is to
minimize a ratio involving the benefit in physical units and
monetary costs (e.g. cost per life saved). Unlike the CBA, the cost-
effectiveness analysis does not make it possible for the analyst to
conclude that the given project will contribute to social welfare
(Boardman et al., 2011). It is thus a subordinate or second-best
measure of value for money. Additionally, various multicriteria
analyses are sometimes used, but they are not measures of value
for money. In this paper we focus on value for money as measured
by the CBA and not on project analysis in general.
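The contrast between the two metrics can be illustrated with a hedged sketch: a cost-effectiveness ratio ranks alternatives by cost per physical unit of benefit (here, cost per life saved, with invented figures), but unlike the NPV it cannot by itself show whether a project contributes to social welfare:

```python
# Cost-effectiveness ratio as a second-best measure of value for money.
# Figures are invented; "lives saved" stands in for any benefit measured
# in physical units. A lower ratio is better, but the ratio alone cannot
# show whether the project increases social welfare (unlike a positive NPV).

def cost_per_unit(total_cost, effect_units):
    """Cost-effectiveness ratio, e.g. cost per life saved."""
    return total_cost / effect_units

ratio_a = cost_per_unit(total_cost=400e6, effect_units=10)
ratio_b = cost_per_unit(total_cost=250e6, effect_units=8)
preferred = "A" if ratio_a < ratio_b else "B"
```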
A number of authors have highlighted the value for money
perspective and the CBA (e.g. Jenner, 2015; Laursen and Svejvig,
2016; Terlizzi et al., 2017). Governments and professional project
management bodies all require assurance of value for money,
such as the Association for Project Management (2018), the
(former)Office ofGovernmentCommerce (2009), and the Project
Management Institute (2017). Volden and Samset (2017a) studied
project governance frameworks in six OECD countries, and found
that all of the frameworks highlighted the CBA in the front-end of
projects. This is a dominant method of appraisal in the transport
sector, for which many countries have developed guidelines
(HEATCO, 2006; Mackie et al., 2014). Similarly, highlighting the
CBA in the front-end has been used to assess development aid
projects for decades, and is referred to as one of the World Bank’s
signature issues (World Bank, 2010). The appraisal method is also
increasingly used in other sectors.
1.2. The research gap
However, the attention paid to the quality and utility of
CBAs is limited in project research. The broad but fragmented
literature on CBAs, which discusses a number of challenges
and weaknesses, is rarely cited in project management and
project governance literature. This is surprising, as we would
normally expect that the quality of an analysis affects the extent
to which CBAs are used, their recommendations followed and
social benefits realized. We claim that it is not sufficient to
require a CBA to be performed, but that its usefulness must also
be ensured as part of project governance frameworks. A
number of studies have documented the limited impact of
CBAs on political decisions (e.g. Annema, 2013; Eliasson et
al., 2015; Nyborg, 1998). For example, a review of World Bank
projects shows that CBAs are rarely mentioned in policy
documents, and that the percentage of projects justified
following CBAs is declining (World Bank, 2010).
The explanations given in the literature are multifaceted and
involve both analytical and political issues. For example, the
World Bank report notes that only 54% of CBAs were of
acceptable quality, but also that high-quality CBAs were often
disregarded by decision-makers (World Bank, 2010).
In this paper we focus on the analytical issues in terms of the
weaknesses that materialize in CBA reports. Other authors have
focused on issues such as adverse incentives at the decision-
making level that may result in the value for money aspect
being played down when decisions are made (e.g. Sager, 2016).
Decision-making in a democratic setting is inherently complex,
frequently unpredictable, and influenced by other decision
logics than just the rational economic ones. Therefore, as noted
by Samset and Volden (2015), the greatest potential for
improvement might be to strengthen the analytical processes.
1.3. This study
The aim of this study is to increase knowledge about the
quality and usefulness of CBAs as basis for project selection.
We take the perspective of the financing party (the true owner)
who, in the case of public projects, is the entire society and its
taxpayers, as represented by the Cabinet.
We define seven research questions (RQs) that together
cover the main weaknesses in CBAs that have been discussed
in academic literature (cf. Section 2). We want to learn about
the relative prevalence of these weaknesses and to consider to
what extent they reduce the quality and usefulness of analyses.
The seventh and last research question, about whether CBA
recommendations are actually followed (RQ7), is therefore of
particular interest, and we consider it in relation to the other six
questions. The seven questions are as follows:
RQ1: Are the CBAs consistent across projects with respect
to which impacts are included, whether a valuation has been
performed, and parameters and assumptions applied?
RQ2: Are non-monetized impacts assessed and presented
consistently?
RQ3: Are associated uncertainties identified and presented?
RQ4: Are distributional impacts presented as supplementary
information?
RQ5: Are CBAs unbiased? Specifically, is there a difference
between CBAs done by project promoters and CBAs done
by an independent party?
RQ6: Are transparency and clarity acceptable in the reports?
RQ7: Do decision-makers follow the advice presented in the
CBAs?
1 Projects IN Controlled Environments, see www.axelos.com.
To answer these research questions, we apply high-quality
empirical data from Norway. Since 2005, CBAs have been
compulsory in appraisals of the country’s largest public
investment projects under the Ministry of Finance’s Quality
Assurance (QA) scheme. The scheme is presented and
discussed by Volden and Samset (2017b).
The QA scheme applies to public infrastructure projects that
exceed an estimated threshold cost of NOK 750 million (USD
100 million). In those projects, external quality assurance (QA)
of decision documents is required before the Cabinet makes its
choice of project concept. As a basis for the external QA, the
sectoral ministry or agency prepares a conceptual appraisal
(CA) document. The CA is the business case and must include
an assessment of needs and overall requirements, a possibility
study that results in at least three alternative project concepts,
including the zero-investment alternative, and a CBA of these
concepts. The QAs are performed by private consultants
contracted by the Ministry of Finance. The QA team should
review the CA and thereafter present its own independent CBA,
with alternatives ranked on the basis of their estimated value for
money. This implies that for each project there will be two
value for money assessments, one produced by the initiating
ministry or agency and the other by the external quality assurer.
The QA team includes economists who are experts on CBA.
Additionally, the ministries and agencies use highly qualified
people to prepare the CBAs. The CA-QA process takes place at
the same stage in all projects’ life cycle, namely the end of the
pre-study phase. The Norwegian Ministry of Finance has issued
guidelines with a set of overall requirements for CBAs that we
consider to be in line with best practice internationally
(Finansdepartementet, 2005, 2014).
We considered Norway an interesting research case because of
the efforts made to ensure that CBAs are of high quality. According
to Flyvbjerg's (2006) categorization of case study research, Norway
is a 'critical case' (here understood as an assumed best case). Our
findings should be relevant beyond the Norwegian context, our
thinking being that CBA weaknesses observed in this country, with
a project governance scheme that requires high-quality and quality
assured CBAs, will most likely also be a problem in countries
without such a scheme. That said, there may be cultural and other
differences between countries that influence project practices. In a
case study, we must always present reservations concerning
transferability of results across countries.
In Section 2 we present a review of the literature on
weaknesses in CBAs. The review forms the basis for the
framework of analysis applied to study the case CBAs. The
framework is presented in Section 3, and a description of the study
data and methodology is provided in Section 4. In Section 5, we
present and discuss the findings with respect to each research
question. Lastly, in Sections 6 and 7 we present our conclusio
ns
and recommendations, and discuss possibilities for further work.
2. Literature review
Today it is widely recognized that not only programs and
portfolios, but also individual projects, should be linked to
higher-order goals and strategies. The project management
community has been increasingly concerned with how projects
create value and reap benefits (Shenhar et al., 2001; Zwikael
and Smyrk, 2012; Morris, 2013; Breese et al., 2015;
Hjelmbrekke et al., 2017). Whereas some authors focus on
the front-end, others discuss benefits management throughout
the project life-cycle (e.g., Serra and Kunc, 2015; Musawir et
al., 2017).
However, this part of the project management literature is
still young. As noted by Laursen and Svejvig (2016) the
definitions of project benefits and value are sometimes vague
and depend on the perspective chosen. Baccarini (1999)
suggested a distinction between two levels of project success,
i.e. project management success, which concerns delivery, and
product success, which concerns the outcome. Samset (2003)
suggested a triple-level performance test concerning project
outputs, first-order effects for users, and long-term effects for
society. A similar chain of benefits has been suggested by
Zwikael and Smyrk (2012) and Serra and Kunc (2015) and is
also largely in line with PRINCE2®.1 In the framework
suggested by Zwikael and Smyrk (2012) it is also specified
who should be responsible for project success on each level.
The project manager is responsible for success at the
operational level (project management success), the project
owner is responsible for success at the tactical level (project
ownership success) and the funder is responsible for success at
the strategic level (project investment success).
In this paper we focus on the highest level of project success
(i.e., project investment success, in Zwikael and Smyrk’s
terminology) where benefits and costs are compared to
determine the effective ‘return’ on the investment. The CBA
takes an overall societal perspective where all benefits and costs
to affected parties nation-wide ought to be included, and (to the
extent possible) translated into the monetary amount that
people are willing to exchange. This is not the only possible
definition of project investment success (as discussed further in
Section 2.1) but at least it provides a very clear definition.
The project management community has not devoted much
attention to the specificities of the CBA thus far, and we
therefore had to search for other types of literature. The ‘CBA
literature’ is large, with publications in transport sector journals
as well as journals in economics, public policy and other social
sciences.
Many weaknesses and challenges regarding the use of CBAs
have been identified in both theory and practice, in some cases to the
extent that decision-makers do not find them useful or trustworthy.
Such weaknesses may remain undisclosed due to the complex-
ity and often low transparency of the methodology. In the
following subsections we synthesize the literature on the
various weaknesses in CBAs, which may explain decision-
makers’ lack of confidence in this metric. The literature is
fragmented in the sense that different authors focus on entirely
different issues. However, we suggest the following categori-
zation of the weaknesses in CBAs: (1) criticism of the CBAs’
normative fundament, (2) discussion of various measurement
problems, and (3) challenges relating to appraisal optimism.
2.1. The CBA – Its normative fundament
The CBA is a powerful project evaluation tool, primarily
because it is not based on political preferences, and therefore it
can be characterized as a 'neutral tool' (van Wee and Rietveld,
2013). However, this strength is also a weakness because the
CBA only recognizes people’s preferences in their role as
consumers. By contrast, analysis of people’s preferences in their
role as citizens may give a different result (Mouter and Chorus,
2016), as may the use of either planners’ preferences or
decision-makers’ preferences (Mackie et al., 2014). Thus, the
CBA is a framework for measuring efficiency, not equity,
alignment with political goals, or any other definition of social
desirability. Inevitably, the use of WTP implies that more
weight is attached to high-income groups than to low-income
groups (Nyborg, 2014). Furthermore, by focusing on the
aggregate WTP, the CBA disregards the fact that some groups
may be worse off after project completion than they were
previously. The use of aggregate WTP is justified by the
Kaldor-Hicks efficiency criterion, according to which a new
resource allocation would be an improvement for society if the
winners could hypothetically compensate the losers and still be
better off. However, there is no requirement for such
compensation to be given (Nyborg, 2014).
Thus, the CBA is of little help in cases in which the public
sector has clear policy objectives that differ from consumers’
preferences. Nyborg (1998) found this an important reason why
some Norwegian politicians did not trust the CBA, with
politicians on the left of the political axis being most sceptical.
Mouter (2017) has reported similar responses from Dutch
politicians.
A related critique is that the CBA systematically downplays
the welfare of future generations. Decision-makers are increas-
ingly concerned with investments’ sustainability (Eskerod and
Huemann, 2013; Haavaldsen et al., 2014), which requires a
more holistic and long-term perspective than taken in CBAs. In
particular, the use of a discount rate in CBAs implies that
impacts on future generations have low worth today, and this
weakness has been criticized by a number of authors (e.g.
Ackerman, 2004; Næss, 2006; Pearce et al., 2006).
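The discounting concern can be made concrete with a small numerical sketch (the 4% rate and the horizons are chosen purely for illustration):

```python
# Present value today of one unit of benefit occurring t years ahead,
# illustrating how discounting shrinks impacts on future generations.
# The 4% rate and the horizons below are chosen for illustration only.

def discount_factor(i, t):
    return 1 / (1 + i) ** t

pv_50 = discount_factor(0.04, 50)    # ~0.14: a benefit 50 years out
pv_100 = discount_factor(0.04, 100)  # ~0.02: a benefit 100 years out
```

At a 4% rate, a benefit accruing in 50 years counts for roughly 14% of its face value today, and one accruing in 100 years for only about 2%, which is the mechanism the cited critics object to.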
Some researchers have suggested that the CBA should be
replaced by some form of multicriteria analysis that is based on
the preferences of planners or decision-makers, at least in cases
with moral dimensions (Browne and Ryan, 2011, van Wee,
2013). Others have noted that a multicriteria analysis has
weaknesses too, which makes it more subjective and manipu-
lable (Dobes and Bennett, 2009). In our view, both types of
analysis can supplement each other, as they measure different
things. For all projects that either directly or indirectly aim to
contribute to economic growth, the CBA should at least be
partly relevant.
The solution to this weakness most often recommended by
authors is that all the costs and benefits should be presented in a
disaggregated and transparent form that shows how they are
distributed, not just their aggregated effect. When relevant, a
separate overview and discussion of significant distributional
impacts, both within and between generations, should be
provided in the report. In that way, decision-makers would be
able to decide for themselves whether the distributional
impacts are acceptable. The CBA could also be included more
systematically in a broader project evaluation framework that
includes other perspectives than efficiency, such as the Five
Case Model in the UK, in which the economic case is one of the
five cases (HM Treasury, 2013). Another framework, one that
has been very influential in evaluations of development
assistance projects, comprises the five OECD-DAC criteria of
efficiency, effectiveness, impact, relevance, and sustainability
(Samset, 2003). A variant of the latter framework has been
applied in ex post evaluations of Norwegian projects (Volden,
2018).
2.2. Measurement problems
Even if the ethical and normative premises on which the
CBA rests were accepted, the credibility and usefulness of the
results might be low due to various measurement problems
(Atkins et al., 2017). At an early stage, information about the
effects of a project is sparse and depends on many assumptions
(Samset and Volden, 2015). Thus, an early CBA will have
many sources of error, such as omitted impacts, forecasting
errors, and valuation errors. Several studies have indicated that
cost estimates and demand forecasts are highly inaccurate (i.e.
Flyvbjerg et al., 2003; Kelly et al., 2015; Nicolaisen and
Driscoll, 2014; van Wee, 2007). For example, Nicolaisen and
Driscoll (2014) reviewed 12 studies conducted within the
transport sector in various countries and concluded that traffic
forecasts were unreliable, largely due to weaknesses in the
model specifications, combined with low transparency, which
made it difficult for others to observe what had been done.
Prediction and valuation of non-market goods such as
health, safety, and the environment are a particular challenge.
Different studies have revealed very different estimates of
people’s willingness to pay for such goods: for example,
research conducted for a recently published doctoral thesis
revealed huge variation in the estimates of the value of a
statistical life (Elvik, 2017). It should also be noted that
valuation methods differ in what they measure. For example,
while stated preference (SP) methods are designed to capture
the total value, revealed preference (RP) methods estimate only
use values (Boardman et al., 2011). In many cases, inferior
approaches that violate the principle of consumer sovereignty
are used, such as implicit valuation, whereby analysts use the
government’s WTP as a proxy for the population’s WTP. As
discussed by Sager (2013) and by Mouter and Chorus (2016), a
related challenge is that the population’s preferences may be
unstable, and the difference between consumer values and
political opinions may be blurred.
Thus, it is crucial that the uncertainty involved in estimation
is not downplayed (Flyvbjerg et al., 2003). Additionally,
transparency is crucial: Wachs (1989) recommends that all
details of the models and parameters should be available to
anyone who might wish to replicate, verify, or merely critique
the uses of the technical procedures. This implies that the
findings must be presented in a disaggregated form and not
only as a summary indicator (Nyborg, 1998; Næss, 2006).
A further challenge is that the CBA is normally based on a
partial equilibrium model and only measures direct effects. This
is acceptable as long as other markets are competitive, but
following the publication of the SACTRA report in the UK
(Standing Advisory Committee on Trunk Road Assessment,
1999), attention has been paid to market imperfections that may
mean that the full benefits of a transport investment fail to be
included in the CBA. Some authors have indicated that such
wider economic benefits may be considerable (Venables, 2007;
Vickerman, 2008), while others have noted that they may also
be negative (Næss et al., 2017; Small, 1999). Given that these
impacts are not included in the NPV, they must be identified,
discussed, and potentially quantified separately.
More generally, some impacts are inherently difficult to
quantify and monetize. In particular, environmental effects are
often substantially underestimated or ignored in practice,
despite being possible to measure in principle (Ackerman,
2004; Browne and Ryan, 2011; Kelly et al., 2015; Næss et al.,
2017). CBA textbooks and guides make it clear that non-
monetized impacts must be identified, described, and balanced
against the NPV, yet few textbooks give specific guidance on
how this should be done. In practice, the treatment of non-
monetized impacts tends to be random or politically driven as
noted by some authors (e.g. Ackerman, 2004; Mackie and
Preston, 1998).
2.3. Appraisal optimism
The third and last weakness of CBAs is that they are
inherently at risk of bias and manipulation. For example,
Mouter (2017) interviewed decision-makers who said that they
knew how easy it was to affect results by ‘shifting the buttons
in the model’ (Mouter, 2017, p. 1134). As noted by Wachs
(1989), planning is not just analytical, and ‘the most effective
planner is sometimes the one who can cloak advocacy in the
guise of scientific or technical rationality’ (Wachs, 1989, p.
477).
Mackie and Preston (1998) list 21 sources of error and bias
in transport project appraisals and conclude that appraisal
optimism is one of the most important sources. Empirically, it
has been shown that not only are CBAs inaccurate, but also
they are often biased on the optimistic side (Flyvbjerg et al.,
2003; Kelly et al., 2015; Nicolaisen and Driscoll, 2014; van
Wee, 2007; World Bank, 2010).
Significant research has focused on explaining leaders’ and
entrepreneurs’ optimism bias as a feature inherent in human
behaviour. Such people are self-confident and tend to
exaggerate their own abilities and control over a situation.
While some authors describe this behaviour as unconscious
(e.g. Lovallo and Kahneman, 2003), others argue that the
persistence of bias is intentional and driven by a persistent
excess demand for project finance (e.g. Bertisen and Davis,
2008). The persistence of bias can also be explained in terms of
a principal–agent problem (Eisenhardt, 1989), such as when
project promoters, who themselves are not responsible for
funding, compete for discretionary grants from a limited budget
(Samset and Volden, 2015). However, it is difficult to find
conclusive empirical evidence of manipulation, as noted by
Andersen et al. (2016).
A common recommendation to avoid appraisal optimism,
whether or not it is intentional, is to ensure an outside view
(Flyvbjerg, 2009; Lovallo and Kahneman, 2003; Mackie and
Preston, 1998). This can be done by, for example, applying
historical data (e.g. through reference class forecasting) and/or
by having an independent third party perform or review the
CBA. Additionally, systematic ex post evaluations should be
performed to learn about the costs and benefits that can be
expected (Flyvbjerg et al., 2003; Mackie and Preston, 1998;
Volden, 2018).
Additionally, incentives for true speech must be in place. In
this respect, Flyvbjerg et al. (2003) and Samset and Volden
(2015) all recommend that project promoters are made
accountable for financing, risk, and benefits realization, and
that the appraisals are transparent and open to scrutiny. Mouter
(2017) points out that the CBA is often complex and lacks
transparency, which makes it particularly difficult to discover
manipulation. More generally, an overall project governance
framework that takes the risk of front-end agency problems into
account should be in place.
3. Conceptual framework
We argue that the three strands of literature discussed in the
previous section give rise to three broad explanations for why
CBAs may not be considered useful by decision-makers. A
simple conceptual framework is presented in Fig. 1. We have
chosen ‘CBA usefulness’ as the main outcome variable. It is a
multifaceted term that, in meaning, partly overlaps other terms
such as trustworthiness, validity, and credibility (see Patton,
1999, and Scriven, 2015, for a discussion of criteria of merit by
which analyses and evaluations ought to be evaluated). Since
the CBA is specifically intended for decision support, CBA
usefulness is considered from decision-makers’ perspective. To
some extent, the assessment of CBA usefulness will be
subjective and depend on each decision-maker’s preferences,
competencies, and other abilities, but our focus is on
assessments with which most decision-makers are likely to
agree.
In line with the three categories of weaknesses of CBAs
presented in Sections 2.1–2.3, we argue that CBA usefulness is
threatened when (1) the analysis is too narrow in terms of
relevant aspects being included in the business case (only the
CBA alone), (2) the analysis is inconsistent, incomplete, and
uncertainties are underestimated, and (3) the analysis is biased
on the part of the analyst. By contrast, CBA usefulness is high
when these weaknesses are not present.
The next step is to develop a framework for the empirical
analysis, based on the conceptual framework in Fig. 1. In
practice, the relative significance of the weaknesses in CBAs is
largely unknown, as is the extent to which CBAs adhere to the
recommendations provided in the literature to avoid or mitigate
the weaknesses. To date, few empirical studies have
Fig. 1. Three types of weaknesses that lead to low CBA usefulness – a simple conceptual framework.
systematically reviewed CBA reports with respect to their
overall relevance, quality, and credibility. This raises the
question of whether it is possible for governments, through
guidelines, quality standards, and other governance mecha-
nisms, to ensure that CBAs are of high quality and useful to
decision-makers. An interesting case is a recent study of the
quality of CBAs of public projects in the UK (Atkins et al.,
2017), in which the authors mainly focus on the second and
third categories of weaknesses discussed above. The UK has
taken steps to improve project competencies in central
government and has introduced various governance arrange-
ments to improve project performance (Volden and Samset,
2017a). Atkins et al. (2017) find that the CBAs are largely of
acceptable quality, but that some challenges remain, the most
important of which concern the lack of consistency across
projects, and poor transparency and communication. They are
also concerned about possible bias in the cost estimates,
especially in cases in which decisions have been based on early
estimates.
We draw on the most essential recommendations provided
in literature, which, if adhered to, could increase CBA
usefulness. Authors who criticize the normative foundations
of the CBA (cf. Section 2.1) typically recommend that value for
money assessments are supplemented by analyses of the
project’s impact on, for example, equity and sustainability.
Those who discuss measurement problems (cf. Section 2.2)
recommend a certain level of standardization, proper treatment
of non-monetized impacts and uncertainty analyses. Lastly,
those who are worried about appraisal optimism (cf. Section
2.3) recommend an outside view, and measures to ensure
accountability. Common to all of the aforementioned three
groups of authors is that they recommend transparent CBAs.
Fig. 2. Framework of analysis.
Fig. 2 shows our framework for the empirical analysis,
including the seven research questions presented in Section 1.
The use of the CBA in practice, understood as adherence to
its recommendations, is a relevant indicator of CBA usefulness
and is applied in this study (RQ7). We expect, ceteris paribus,
that a CBA is more often adhered to when it is of high quality.
However, it should be noted that adherence is not a perfect
indicator of usefulness. As noted by Scriven (2015), there may
be a number of reasons for lack of adherence to the results of a
high-quality analysis. A thorough treatment of these issues
would lead us beyond the analytical process and into political
decision-making. Hence, for the purpose of this study, we
merely assume that an instrumental decision logic or the
‘rational ideal’ is applied on the part of decision-makers
(Samset and Christensen, 2017) and therefore disregard
problems on the decision-making level, such as self-interest,
the practice of ‘horse trading’, positioning, and power.
The final step in the outcome chain would be ‘realized value
for money’. This, too, would be an interesting indicator
(although similar caution is required). Unfortunately, we do
not have access to ex post data, and therefore this is not a topic
of the empirical study.
4. Methodology
The empirical part of this study is largely qualitative, with
the purpose of exploring, describing, and evaluating CBA
practice within the Norwegian QA scheme. It is a multiple-case
study of 58 Norwegian projects, based on a document review,
interviews, and a review of the decisions made by the Cabinet.
Although we refer to the cases as ‘projects’, all of the
investments are studied in their early phases, in which they
Table 1
Subquestions applied for the review of CBAs.
RQ Subquestion
RQ1 1 Describe the impacts included.
RQ1, RQ2 2 How are impacts treated (especially monetized or not).
RQ1, RQ6 3 Key assumptions and parameter values used to estimate the
NPV (according to a pre-established list).
RQ1, RQ5 4 What is the QA team's reaction to the CBA structure in the CA?
Describe deviations between the two CBAs.
RQ2 5 Analyst’s interpretation of the non-monetized impacts
(‘economic effect’ or other).
RQ2 6 Methodology and measurement scale used to assess non-
monetized impacts.
RQ2 7 Comprehensive analysis of non-monetized impacts? (pages
used in the report)
RQ2 8 Comprehensive analysis of non-monetized impacts?
(researcher’s judgement)
RQ2 9 Non-monetized impacts – whose judgement? (e.g. experts,
stakeholders, decision-maker).
RQ3 10 Type of risk analysis conducted, if any.
RQ3 11 Comprehensive risk analysis (researcher’s judgement)?
Capital cost, benefits, non-monetized separately.
RQ4 12 Distributional impacts or other considerations included along
with the CBA.
RQ4 13 Comprehensive distributional analysis (researcher’s
judgement)?
RQ4 14 Distributional/other decision criteria clearly separated from
CBA (researcher’s judgement)?
All 15 Are the CA and QA in agreement on the recommendation?
RQ4, RQ5 16 Sign (and value?) of NPV of recommended alternative.
RQ4, RQ5 17 Is the recommended alternative the one with highest NPV?
RQ5 18 Is the zero option recommended?
RQ6 19 Overall level of transparency (researcher’s judgement).
RQ6 20 Are models used to simulate impacts?
RQ6 21 If so, are the models explained? (reference to manuals, model
version, etc.)
RQ6 22 Does the report include an executive summary?
RQ6 23 Is the report written in a non-technical language?
(researcher’s judgement)
RQ7 24 Status of the project as of today.
RQ5, RQ7 25 Whose advice is followed, CA or QA?
Table 2
Projects included in the research.
Projects included (sector) N = 58
Road 20
Railway 5
Other transport (sea, coast, mixed) 11
Building 8
Defence 5
ICT 4
Sports event 3
Other 2
555 G.H. Volden / International Journal of Project Management 37 (2019) 549–564
exist only conceptually. The Cabinet might choose the zero-investment alternative, in which case the project proposal will
be rejected. Since very few of the projects have been
completed, no information is available that can be used to
determine the accuracy of the CBAs.
It should be noted that although the main unit of analysis is
the project, we present some of the findings at ‘CBA report’
level (since most of the projects have two CBAs). At a higher
level, one could consider Norway as a case, since all of the
projects have been through CBAs in their front-end phase as
part of the Norwegian QA scheme. However, this study is not
an evaluation of the scheme but rather a study of CBA practice
in a relatively large number of case projects, all of which
belong to this (supposedly) favourable context.
The seven research questions listed in Section 1 were
disaggregated into 25 subquestions that were more specific and
contained indicators for the review of documents, as shown in
Table 1. Some subquestions may contribute to answering more
than one research question (RQ). However, the analysis was
also inductive and open for exploring and describing other
patterns and relationships that were revealed in the process.
Our main data source was the CA and QA reports for the 58
projects, which constituted the total population of projects that
underwent CA and QA in the period 2005–2014, and are thus
representative of projects in all of the major sectors that
undergo QA in Norway. Currently, the transport sector has the
largest number of projects, with most QAs performed on road
projects. Other major categories are building construction,
defence, and ICT projects.2 The projects varied in size,
complexity, purpose, and stakeholders involved, but in general
they were the largest state-funded infrastructure projects in
Norway in the period (Table 2).
For five of the projects (three of them within defence), the
CA document was exempt from public access. For these
projects, we only had access to the QA report and the
presentation of the CA results discussed therein. Thus, we had
access to a total of 111 CBA reports for our 58 projects.
The CA-QA process is followed by an administrative and
political process in government. We established the status of all
projects as of 2016, after the choice of project concept had been
made by the Cabinet. To do this, we conducted a broad
investigation of government documents, with particular focus
on White Papers, to establish Parliament’s ultimate choice of
concept.
Additionally, we held semi-structured interviews with 26
key informants, all of whom were highly experienced within
the field of CBA and had been involved in one or more of the
studied projects. We considered that the interviews provided us
with a deeper understanding, and since they were conducted
after the document reviews, we were able to present some key
findings and ask the interviewees for comments on them. Ten
interviewees were senior ministry officers who commissioned
CBAs from agencies, consultants, and quality assurers. They
represented the decision-making level in this context.
2 Some sectors are exempt from the Ministry of Finance’s scheme, but have their own, similar schemes, such as the energy and petroleum sector, and the hospital sector. These are not included in the study.
The other
16 interviewees were experts from the agencies and the QA
teams and represented the persons who conducted the analyses.
The interview guides were structured around the seven research
questions, and the interviewees were invited to talk freely,
based on their own experiences. It should be noted that the data collected from the interviews did not concern particular projects, but rather the general practice in central government. Each interview lasted 1–2 h.
Table 3
Changes in CBA structure: QA compared with the CA for the same project (most important change registered) (N = 58).
Type of change Number %
No change or minor change 17 29
Impact categories removed 13 22
Impact categories added 8 14
More impacts monetized (formerly non-monetized) 3 5
Impossible to compare due to different approach 12 21
No information 5 9
Total 58 100
A large Excel spreadsheet was used to combine facts, assessments, and notes from the document reviews with the interview transcripts during coding. A
list of the most interesting topics, counts, and possible
relationships was continuously revised as we went through
the material. The resulting themes and categories were not too
different from the initial ones. The findings also included a
number of categorizations, counting of occurrences, and cross-
tabulation. In particular, the responses to subquestions 15 and
25, about whether the QA approved the CA and whose advice
was followed by the decision-makers, were compared with
various quality indicators. The results were also cross-tabulated
against background variables such as project type.
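The cross-tabulation step described above can be sketched as follows. This is a minimal illustration with invented coding data; the column names and values are hypothetical, not taken from the study's actual spreadsheet:

```python
import pandas as pd

# Hypothetical coded records, one row per project (all names and values invented).
projects = pd.DataFrame({
    "sector":        ["Road", "Road", "Railway", "Building", "Defence", "ICT"],
    "qa_approves":   ["yes", "no", "yes", "no", "yes", "no"],          # cf. subquestion 15
    "risk_analysis": ["acceptable", "weak", "acceptable", "weak", "acceptable", "weak"],
})

# Cross-tabulate a quality indicator against whether the QA approved the CA.
table = pd.crosstab(projects["risk_analysis"], projects["qa_approves"], margins=True)
print(table)

# The same call works against background variables such as project type.
by_sector = pd.crosstab(projects["sector"], projects["qa_approves"])
```

Counts in each cell then show whether a given quality indicator co-occurs with QA approval, which is the pattern reported in the findings below.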
All of the steps in the coding process gave considerable
room for the researcher’s own judgement, which might give rise
to concerns about subjectivity and potential bias in our results.
An important mechanism used to secure reliability and validity
was the consultation of reliable sources of information. We
used high quality, publicly available documents, as well as
interviewees who had first-hand experience of CBA practice.
The interviews were transcribed and the interviewees were
subsequently given the opportunity to read and comment on the
transcription. Furthermore, the use of different sources (i.e.
document reviews and interviews, and interviewees with
different perspectives) to illuminate each RQ, proved useful
for revealing any inconsistencies in the data. The coding and
analysis were also discussed with fellow researchers.
5. Presentation and discussion of findings
5.1. CBAs are comprehensive and partly standardized (RQ1)
Our overall assessment based on the document review is that
most of the CBAs are relatively comprehensive, and that
appraisals of similar types of projects generally include the
same impact categories. In particular, payable costs, including
both the capital cost and the maintenance and operating cost,
are thoroughly estimated in most cases. Some benefits are
monetized, most notably payable revenues, time savings, other
consumer benefits, and in some cases also impacts on health
and safety and the environment. Other impacts are treated as
non-monetized impacts in the framework. Overall, only about
half of the CBAs (45% of CAs, 55% of QAs) monetize all or
the most important impacts. The degree of monetization varies
across sectors, but even for road projects, less than 80% of the
CBAs monetize all or the most important impacts. Thus, non-
monetized impacts play a key role in the studied analyses.
Further, the CBAs of road and rail projects are more
standardized than the CBAs of other project types. For
example, whereas some CBAs of building projects only present
and discuss first-order effects for users (e.g. users of a museum,
university, or prison), others discuss long-term, wider benefits,
such as improved national competitiveness due to better
research and education. The interviewees reported that they
were often unsure about whether and how to treat indirect,
long-term impacts, for which no guidelines exist. Generally, the
level of standardization regarding the non-monetized impacts is
low. We return to this problem in Section 5.2.
Some quality assurers claim that the CAs are overly
‘creative’ with regard to the benefits included. This is
particularly the case for non-monetized benefits. Table 3
shows the most common changes made by QAs relative to
the CAs. The good news is that the largest category of changes
is ‘No or minor changes’. There are no clear sector differences.
It can also be shown that ‘No or minor changes’ is correlated
with QAs approving the final recommendation, cf. subquestion
15.
The calculation of an NPV is normally based on a number of parameters and assumptions, and an overview of some of them is given in Table 4. Although it should be possible to vary most
parameters due to, for example, local variation in people’s
WTP, it seems that the observed variation is somewhat higher
than expected. For example, there seems to be much confusion
about the discount rate and how it should vary according to
systematic risk. Similarly, the degree to which real price
adjustment is applied seems arbitrary. Some sectors (e.g.
transport) have their own CBA guidelines that specify key
parameters and values, implying that practice is more consistent
in these CBAs. None of the CBAs included independent
valuation studies to obtain exact WTPs.
Prior to 2014, hardly any parameters had been fixed as
compulsory in the national guidelines issued by the Ministry of
Finance, with the exception of the marginal cost of funds. Since
2014, some additional parameters have been fixed, most
notably the discount rate and the value of a statistical life. In
our view, this has led to a more consistent practice across
CBAs, and should have been considered for other parameters
too, most notably the social cost of carbon.
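To illustrate why fixing parameters matters, consider a stylized NPV calculation in which only the discount rate varies. The 0.2 marginal cost of public funds and the 2–5% discount-rate range are taken from Table 4; the project figures themselves are invented:

```python
def npv(capital_cost, annual_benefit, years, rate, cost_of_funds=0.2):
    """Stylized NPV: the publicly funded capital cost carries a 20% marginal
    cost of public funds; benefits accrue as a constant annuity."""
    cost = capital_cost * (1 + cost_of_funds)
    benefits = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
    return benefits - cost

# An invented project: NOK 1,000m capital cost, NOK 60m annual benefits, 40 years.
for rate in (0.02, 0.04, 0.05):
    print(f"rate {rate:.0%}: NPV = {npv(1000, 60, 40, rate):+.0f} MNOK")
```

With these (invented) figures the NPV is positive at a 2% discount rate but negative at 4–5%, so the analyst's choice of discount rate alone can flip the recommendation — which is exactly the consistency problem that fixing the parameter resolves.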
5.2. Inconsistent handling of non-monetized impacts (RQ2)
Non-monetized impacts are often essential in the CBAs.
However, their interpretation is sometimes unclear and
arbitrary, especially in the CAs. Some findings from the
document review are presented in Table 5. On the one hand, the
ministries and agencies seem to put more efforts into the
analysis of non-monetized impacts than do the quality assurers,
but on the other hand, they have a less clear understanding of
what those impacts actually measure. Many CAs tend to mix
economic impacts with goal achievement and other
Table 4
Selected parameters applied in the CBAs (N = 111).
Parameters Practice observed
Marginal cost of public funds 0.2 (fixed by the Ministry of Finance)
Discount rate Varies within the range 2–5%; later fixed at 4%, declining over time
Value of a statistical life Varies in the range NOK 15–35 million; later fixed at NOK 30 million
Value of time In most cases, average wage is used for business travel, but lower for leisure (in the transport sectors, based on a Norwegian SP study)
Method for calculating residual value Large variations: linear depreciation, market valuation, NPV of remaining net benefit flows, or set to 0
Real price adjustment Large variations: applied by some sectors, only for some impacts
Social cost of carbon Varies within the range NOK 110–400 per ton; later an increasing price path is introduced in some sectors
considerations when presenting non-monetized impacts. Polit-
ical and strategic considerations at various levels (e.g. agency,
sector, region, or a stakeholder group) that extend far beyond
consumer preferences are frequently brought into the discus-
sion of whether the projects are good value for money. In our
view, this is a serious weakness that may lead to wrong conclusions.
Not only the interpretation, but also the choice of
measurement scales varies considerably (e.g. cardinal, ordinal,
or purely qualitative). Most CBAs of road projects apply the
road agency’s recommended framework for assessing five types
of negative effects on nature and the environment, which are
summarized in terms of ‘plusses and minuses’ on a scale
ranging from −4 to +4. CBAs of other project types have a less
systematic approach. Some quality assurers have introduced
their own frameworks for analysing non-monetized impacts,
but these frameworks are not consistent.
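For concreteness, a consistent ordinal treatment along the lines of the road agency's −4 to +4 scale might look as follows. The impact names and scores are invented; the agency's actual framework defines its own five effect types and assessment criteria:

```python
# Non-monetized impacts scored on a common ordinal scale from -4 to +4
# (impact names and scores invented for illustration).

SCALE = {-4: "very large negative", -3: "large negative", -2: "medium negative",
         -1: "small negative", 0: "none", 1: "small positive",
         2: "medium positive", 3: "large positive", 4: "very large positive"}

def validate(scores: dict) -> dict:
    """Reject scores outside the defined scale, so assessments stay comparable."""
    for impact, s in scores.items():
        if s not in SCALE:
            raise ValueError(f"{impact}: {s} is not on the -4..+4 scale")
    return scores

alternative_a = validate({"landscape": -2, "biodiversity": -3, "cultural heritage": -1})
alternative_b = validate({"landscape": -1, "biodiversity": -1, "cultural heritage": 0})

# Present per impact; ordinal scores should be compared, not summed into one total.
for impact in alternative_a:
    print(f"{impact}: A = {SCALE[alternative_a[impact]]}, B = {SCALE[alternative_b[impact]]}")
```

Because the scale is ordinal, scores are compared impact by impact rather than summed into a single number, which keeps the non-monetized assessment clearly separated from the NPV.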
We consider that the documentation of the non-monetized
impacts is sufficient in less than half the CBAs (cf. subquestion 8).
Table 5
Selected findings relating to non-monetized impacts in CBAs, sorted by CAs (N = 53) and QAs (N = 58).
Indicator All (%) CAs (%) QAs (%)
Interpretation/perspective (researcher’s understanding):
Economic impact 56 34 77
Goal achievement, mixed or unclear 44 66 23
Total 100 100 100
Methodology:
Qualitative 22 21 23
‘Plusses and minuses’ 54 46 64
Other scoring or ranking 24 33 13
Total 100 100 100
Comprehensiveness:
Average % of CBA (in terms of page numbers) 22 27 17
Well documented (researcher’s judgement), % ‘yes’ 45 53 36
Generally, the data sources used, the people involved, and the principles for valuation are not well documented. For example, information about whose judgement they are based on is not provided in many cases. Moreover, in general, the development of these impacts over time is not discussed. There are no obvious differences between sectors or project types.
Interestingly, a comprehensive treatment of the non-
monetized impacts in the CA is not correlated with QAs
approving the final recommendation. Only when the CA applies the same interpretation of non-monetized impacts as the QA are the two parties more likely to agree on the final recommendation (and vice versa). This is supported by the interviews and indicates that
quality assurers tend to be suspicious about a thorough
discussion of non-monetized impacts that extend beyond an
economic interpretation.
Interviewees from ministries and agencies acknowledged
that performing the non-monetized part of the CBA is difficult.
One interviewee said, ‘In our sector [defence] we often discuss
the achievement of military goals rather than socio-economic
benefits. I guess we need better guidance on how to distinguish
between a multiple-criteria analysis and a CBA.’ By
contrast, the quality assurers are more loyal to the economic
perspective.
5.3. Uncertainty thoroughly assessed for capital cost, but to a
lesser extent for other impacts (RQ3)
Our document review included an assessment of major
uncertainties relating to costs and benefits, and how these were
assessed and presented. Generally, the studied CBAs were
more concerned about risks to the capital cost than risks to
benefits and other long-term impacts. The reason probably lies
in the QA scheme itself, which requires that stochastic
estimation techniques are applied to estimate the capital cost,
but there are no such requirements for other impacts. Overall,
capital cost uncertainties are well handled in the studied CBAs.
Uncertainties relating to other impacts are more varied and
often superficial. About 60% of the CBAs (CAs and QAs alike)
report sensitivity tests, but such tests are often simple and only
focus on one or two parameters. One analyst said, ‘We have
strict deadlines, and sensitivity testing is just one of the things
that we don’t have time for.’ Uncertainties relating to non-
monetized impacts are rarely discussed in the CBAs. In our
view, more attention should be paid to uncertainties in all
impacts, not just capital cost.
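A one-way sensitivity test of the kind many of the reviewed CBAs lack can be sketched as follows, using a simplified annuity model with invented figures and parameter ranges (not the models used in the studied projects):

```python
# One-way sensitivity test: vary one parameter at a time around a base case
# and record the resulting NPV swing (all numbers invented for illustration).

def project_npv(capital_cost, annual_benefit, rate, years=40):
    discounted = sum(annual_benefit / (1 + rate) ** t for t in range(1, years + 1))
    return discounted - capital_cost

base = {"capital_cost": 1200.0, "annual_benefit": 70.0, "rate": 0.04}
low_high = {   # plausible low/high values per parameter (assumed, not from the study)
    "capital_cost":   (1000.0, 1500.0),
    "annual_benefit": (50.0, 90.0),
    "rate":           (0.02, 0.05),
}

swings = {}
for name, (lo, hi) in low_high.items():
    results = [project_npv(**{**base, name: v}) for v in (lo, hi)]
    swings[name] = max(results) - min(results)   # NPV swing for this parameter

# Rank parameters by impact (the basis of a 'tornado diagram').
for name in sorted(swings, key=swings.get, reverse=True):
    print(f"{name}: NPV swing = {swings[name]:.0f}")
```

Ranking parameters by their NPV swing in this way takes little analyst time, and immediately shows which uncertainties (here, benefits and the discount rate, not only capital cost) dominate the result.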
The combination of uncertainties and irreversible invest-
ments that gives rise to quasi-option values (Boardman et al.,
2011) is discussed briefly and qualitatively in some of the QA
reports. Quasi-option values are typically higher in the zero-
investment alternative, and in some cases this has been used by
quality assurers as an argument for postponing the investment
decision.
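The quasi-option logic can be made concrete with a stylized two-period example (all numbers invented; discounting omitted for simplicity): an irreversible investment costs 100 today, and next year its benefit is revealed to be worth either 150 or 60 with equal probability.

```python
# Stylized quasi-option value of delaying an irreversible investment
# (two periods, invented numbers, no discounting).

cost = 100.0
outcomes = {"good": 150.0, "bad": 60.0}   # present value of benefits, 50/50 odds

# Invest now: committed before the uncertainty resolves.
npv_now = 0.5 * (outcomes["good"] - cost) + 0.5 * (outcomes["bad"] - cost)

# Wait one period: invest only if the good state materializes.
npv_wait = 0.5 * max(outcomes["good"] - cost, 0) + 0.5 * max(outcomes["bad"] - cost, 0)

quasi_option_value = npv_wait - npv_now
print(npv_now, npv_wait, quasi_option_value)   # 5.0 25.0 20.0
```

In this sketch the option to wait is worth 20, which is precisely the kind of value that accrues to the zero-investment alternative when the investment is irreversible and uncertainty will resolve over time.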
Overall, we consider about two-thirds of the CBAs acceptable with regard to identifying and analysing risk (cf.
subquestion 11). QAs perform far better than do CAs
(74% acceptable versus 47%). Interestingly, when a CA is in
the ‘acceptable’ category, the QA approves the final
recommendation more often. This indicates that QAs recognize
a good uncertainty analysis as a crucial quality indicator of the
CBA.
5.4. Other considerations are not clearly distinguished from
value for money (RQ4)
Overall, 47% of CAs present other decision criteria (goal
achievement, distributional analyses etc.) along with the CBA,
whereas only 5% of the QAs do the same (cf. subquestion 14).
We do not find any clear sectoral differences. Generally, the
discussion of distributional impacts is rather superficial, and in
most cases not sufficiently comprehensive. Immediate effects
are discussed more often than are long-term distributional
effects. For example, impacts on future generations are hardly
mentioned in any of the reports. An equally worrying
observation is that when such other considerations are included
in the report, they are in many cases not clearly separated from
the value for money perspective.
As discussed in Section 5.2, benefits for specific groups or
regions are often discussed in the CBAs as if they were net
economic benefits to the country, although they may be a
matter of redistribution. This explains the failure to report
distributional impacts in many of the CBAs, particularly the
CAs. They are already reported as benefits (but the corre-
sponding negative impacts for other groups are not presented).
By contrast, the quality assurers mention that their primary
focus is on value for money, and some seem to ignore decision-
makers’ need for supplementary information altogether. Cross-tabulations show that a CA presenting a broad and holistic decision base correlates with the QA not approving its recommendation.
It should be noted that the distinction between wider
economic benefits and pure distributional effects (i.e. economic
effects that are most likely to be offset elsewhere) is not always
clear. Our interviewees confirmed that performing this part of
the analysis is challenging, and that more research and better
guidance is welcome.
Table 6
Characteristics of the recommended project alternative (N = 58 projects).
Indicator All (%) CAs (%) QAs (%)
Sign of NPV in recommended alternative:
Positive 30 25 36
Negative or zero 70 75 64
Total 100 100 100
The recommended alternative has the highest/least negative NPV, % of the CBAs 55 44 66
Zero alternative recommended, % of the CBAs 11 3 19
5.5. Appraisal optimism has been avoided for NPV estimation,
but may influence the CBA in other ways (RQ5)
Although not always openly stated, there is commonly a
preferred project alternative from the agency’s perspective. One
of the consultants stated: ‘Everyone knows which concept the
CA is hoping for, and it is always the most expensive one.’ This
raises the question of whether the CAs are biased in favour of a
preferred alternative.
In the absence of ex post data, we compared the CBAs done
by agency and quality assurer, in the knowledge that the latter
party was independent of the project. It should be noted that the
quality assurers may introduce new combined alternatives or
adjustments to existing alternatives, for example to make the
zero investment alternative more realistic, which implies that
the sets of project alternatives assessed in the two reports are
not identical. Therefore, instead of pairwise comparisons of
alternatives, we studied the characteristics of each party’s
highest ranked alternative.
Generally, the QAs disagree with the CA recommendations,
either partly or fully, in the majority of projects (33 out of 58).
We have already mentioned that QAs seem to ‘reward’ CAs for
having an appropriate CBA structure and for including a
comprehensive uncertainty analysis, but not for comprehensive
analyses of non-monetized impacts or for presenting a broad
decision base. We also found that there are no striking sectoral
differences: if anything, there seems to be slightly less
disagreement about defence projects. Next, we focus our
discussion on the extent to which CAs are systematically more
optimistic about the projects’ value for money. Specifically, in
the knowledge that QAs put much weight on the NPV, one
could suspect that the CAs present a biased NPV.
From Table 6, it can be seen that the CAs recommended
project alternatives with a negative or zero NPV in 75% of the
cases, whereas the corresponding percentage for the QAs is
slightly lower (64%). Thus, it is apparent that the ministries and
agencies are not concerned about promoting projects with a
negative NPV. Rather, these findings may indicate that the
NPV is not manipulated to make projects appear more
profitable.
It should be noted that in our review of parameters and
assumptions (cf. subquestion 3), we also looked for systematic
differences between the CAs and the QAs. In this case, too, we
did not find any clear indications that the CAs applied more
optimistic parameters. Generally, practice seemed to vary as
much between different quality assurers as between quality
assurers and ministries and agencies.
However, we cannot exclude the possibility that CAs are
biased in terms of the non-monetized impacts, or by excluding
or systematically downgrading the simplest and less costly
alternatives. As shown in the lower part of Table 6, CAs
recommend the alternative with the highest or least negative
NPV less often than do QAs. CAs hardly ever recommend the
zero alternative. One group of projects that attracted our
attention is those for which CA recommends an alternative with
negative NPV and the QA recommends an alternative with
positive NPV (10 projects). In each of these cases, the QA
either preferred a less costly alternative, or downscaled the
alternative recommended by the CA, thus turning a negative
NPV into a positive one.
The findings presented in Table 6 also demonstrate the emphasis that ministries and agencies, and to some extent quality assurers, put on the non-monetized impacts, which are
considered to outweigh a negative NPV in the majority of
cases. In light of the emphasis put on those impacts, the
inconsistent interpretation and treatment of such impacts is
worrying (as discussed previously). Furthermore, there are
indications that the quality assurers do not scrutinize this part of
the CA in the same way as they scrutinize the NPV. One
interviewed quality assurer said, ‘I guess the agencies realize
that any attempts to cheat with numbers will be revealed. It is
easier to get away with the qualitative assessments.’ The
interviewees from the agencies denied that they had manipu-
lated the data. Rather, they accused quality assurers of ignoring
important non-monetized benefits. The interviewees who were
decision-makers stated that they felt uncertain about how to
interpret the reports and which party to believe when the CA
and QA differed. First and foremost, they considered it
important to be able to trust the quality of the CBAs. Some
referred to the QA reports as helpful for determining the quality
of the CAs, but one interviewee said he would have liked the
QA reports to be ‘reviewed by independent experts too’.
5.6. Transparency and communication acceptable, but could be
improved (RQ6)
Transparency and clear communication are crucial to ensure
CBA usefulness. Overall, we judge the level of transparency as
acceptable (cf. subquestion 19) in c.80% of the studied CBAs,
meaning that they are documented in sufficient detail, either in
the main report or in an appendix. However, many reports
could have been improved. Key parameters, such as the
discount rate, price level, and period of analysis, are not always
explicitly stated; for example, 12% of the CBAs do not include
information about the discount rate used. Generally, the QAs
are more transparent than are the CAs. There is also a tendency for the more transparent CBAs to have been produced by inexperienced agencies rather than by, for example, the road and rail agencies, possibly because the former lack a standard framework and therefore need to explain every step of their analysis.
Traffic models and impact models are frequently used by the
transport agencies, and some consultants have developed their
own economic models that produce inputs to the CBAs. These
models are not always well explained in the reports, and often
appear as black boxes. Even experts in the agencies find the
models difficult to understand, as exemplified by one
interviewee, who said, ‘The result of traffic simulations
depends on so many detailed assumptions about the new
road, such as curvature, width, velocity, etc. It is impossible to
understand everything. You just have to trust the model.’ One
quality assurer admitted that he often took the traffic estimates
from the agencies’ models for granted, because it was
impossible to verify them. By contrast, interviewees from
ministries/agencies accused some consultants of treating their own models as business secrets.
Economic impacts are often presented in an aggregate form
in the CBAs. For example, road projects normally generate a
range of emissions to air (NOx, CO2, N2O, and local air
pollution in the form of particulate matter). These are
commonly presented in the reports as ‘environmental costs’,
which obscures their individual impacts.
Furthermore, in all projects, a large number of project-
specific assumptions will have to be set by the analyst. These
are not always well explained in the CBAs. One example is the
assumption made about toll fees on new roads in Norway,
which may affect consumer benefits significantly. In two-thirds
of the road project CAs, it is assumed there are no user fees, and
hardly any of those CAs include an explanation of the reasons
behind this assumption. The QA reports are therefore useful
because they may question key assumptions. They may agree
or disagree with the ministries and agencies, but their
discussions will nevertheless add useful information for
decision-makers. We only find a slightly positive correlation
between the transparency in CAs and the QAs approving the
final recommendation.
In many CBAs, technical language is used, and the reports
are long: reports with 100 pages or more are common. This is
relevant in terms of accessibility because decision-makers
normally face constraints in terms of their expertise and time.
The majority of CBAs (95% of QAs and 63% of CAs) include a
summary. However, most of these summaries are short and
rather superficial. In our view, only c.10% of the reports include a sufficiently informative summary that covers all major impacts (whether monetized or not), uncertainties, distributional impacts and/or other considerations, and the key assumptions on which the results are based.
The interviewed decision-makers confirmed that they often
found it difficult to understand the complexity of CBAs. They
also confirmed that they thought summaries should be more
comprehensive.
5.7. Decision-makers found CBAs more useful when approved
by an independent party (RQ7)
The ultimate test of whether decision-makers find CBAs useful is the extent to which they follow the recommendations in the reports. Certainly, concerns other than value for money
may affect public investment decisions, and traditionally the
CBA has not been very influential in public project decision-
making in Norway. However, it is important to note that the
CBA follows an assessment of public needs and strategies,
implying that the shortlisted alternatives are all considered
relevant to these strategies. We therefore expect political
decision-makers to follow the ranking based on value for
money at least to some extent, given that they have confidence
in the analyses.
Overall, in the majority of cases (c.80%), the Cabinet has chosen to go ahead with either one conceptual alternative or, in
a few cases, several conceptual alternatives to be developed
further into a major construction project. Only in c.20% of the
cases is the zero alternative selected or the project put on hold
or withdrawn. There are no clear differences between project
types. We did a large number of cross-tabulations to shed light
on how CBA quality might have influenced decisions. The
following findings are worth mentioning. A low degree of
monetization does not seem to reduce adherence. Rather,
[Figure 3 appears here: stacked bars comparing Cabinet decisions when the CA and QA agree versus disagree, with outcome categories: one investment alternative to be developed further, several investment alternatives to be developed further, zero investment alternative, put on hold, and proposal withdrawn.]
Fig. 3. Cabinet decisions, based on 58 projects, of which for 25 the two CBAs agree and for 33 they disagree with key recommendations. Percentages for each of the two groups.
decision-makers’ adherence seems to be higher when the CBAs include comprehensive analyses of the non-monetized impacts
and the distributional impacts, and they prefer reports that
present a broad decision base that includes more than value for
money. There is no correlation between adherence and the sign
of the NPV in the recommended alternative, which is another
indication that decision-makers care about the non-monetized
impacts. By contrast, comprehensive risk analysis is not
correlated with adherence. This is partly in contrast to the
quality indicators that QAs seem to emphasize in their
assessment of the CAs.
One finding that attracted our attention was that when CA
recommendations (based on both NPV and the non-monetized
impacts) were approved by the QAs, decision-makers’ adher-
ence was substantially higher. The distinction between cases in
which ministry/agency and quality assurer agreed on the project
ranking and cases in which they disagreed, is shown in Fig. 3.
The Cabinet has followed the recommendation in 92% of the
cases in which the two CBAs are in agreement. By contrast,
[Figure 4 appears here, distinguishing value for money (monetized impacts, assessed and presented in monetary terms, and non-monetized impacts, assessed and presented in other, qualitative ways) from distributional impacts (within-generation equity and between-generation sustainability), goal achievement (assessment based on a set of defined goals), and other relevant perspectives.]
Fig. 4. Suggested early-phase business case – different decision perspectives (not to be added).
when they are not in agreement, the Cabinet has made a clear
choice of concept in only 48% of cases, often in line with CA
recommendations. In the remaining 52% of cases, the Cabinet
has chosen either multiple alternatives or no investment (the
latter often in line with QA recommendations) or has put the
decision on hold. In one case, a sports event, the proposal was
withdrawn following a very critical QA report. These findings
suggest that decision-makers care about more than just value
for money. They also suggest that CBAs are heeded and that a
critical QA can make decision-makers stop and reconsider the
case.
Additionally, we asked interviewees to comment explicitly
on the perceived usefulness of the CBAs. The majority,
especially those who were decision-makers, found the CBAs
useful ‘given that they are of high quality’. One interviewee
said, ‘The existence of two CBAs that come to the same
conclusion is a strong indicator of quality.’ Another inter-
viewee, a consultant, stated that ‘In some cases, politicians need
an excuse for rejecting a hopeless project, and a critical QA
report can be that excuse.’ Also, one interviewee noted that the
QA scheme itself might discourage agencies from coming
forward with poor proposals in the first place. However,
another interviewee reminded us that decision-makers are not
obliged to follow the advice from CBAs, and said, ‘It is nice to
know a project’s value for money, but we cannot make politics
only based on that.’
6. Conclusions
A CBA offers a clearly defined interpretation of project
success, as may be formally required in relation to public
project selection. However, challenges and weaknesses in
CBAs may be overlooked, which implies that decision-makers
may not find them useful and trustworthy. We have studied the
usefulness of CBAs produced as part of compulsory appraisals
of major infrastructure projects. Two types of CBAs are done
for major public projects in Norway, one by the initiating
ministry/agency and one by external quality assurers. Both
types of CBAs rank the project alternatives based on their
estimated value for money. With a few exceptions, they are
openly available to researchers as well as to members of the
wider public.
We expected, and found, that the studied CBAs are largely
of good quality. In particular, the use of
independent quality assurers is normally considered a means to
reduce the risk of appraisal optimism. Also, the risk of
inconsistent, incomplete, and inaccurate estimates should be
limited, given the time and resources spent on the analyses and
the considerable expertise involved. Thus, the study of a
‘critical case’ (Flyvbjerg, 2006) should be useful to explore the
potential for overcoming any CBA weaknesses, and to identify
weaknesses that are more difficult to avoid than others.
6.1. CBAs are heeded by decision-makers
A key finding from our research is that decision-makers
consider CBAs a vital part of the business case for
infrastructure project proposals. This was found through direct
measurement (interviews) as well as indirectly (revealed
adherence to recommendations). This contrasts with the role
of CBAs before the QA scheme was introduced in 2005. In the
past, if CBAs were produced at all, they rarely affected public
project decision-making in Norway (Nyborg, 1998). Generally,
we find that the ministries and agencies invest considerable
resources in their CBAs today. This is in line with findings
from an earlier study (Volden and Andersen, 2018), which
demonstrates that the QA scheme has led to strong efforts in
ministries and agencies to strengthen their project competencies
and governance models at the agency level. However, we
would like to make it clear that we have not proved an effect of
the QA scheme as such.
We find that the Cabinet has almost always approved a
project proposal if it was recommended as good value for
money by the ministry/agency, and endorsed by the quality
assurer. However, if a project proposal was recommended by
the ministry/agency, but not endorsed by the quality assurer, the
Cabinet was more likely to have rejected it or reconsidered it.
This is a clear indication that the CBAs are heeded by decision-
makers. Furthermore, the interviewed decision-makers explicitly
stated that they considered the use of two CBAs a stronger
decision base than the use of just one. This
finding is in line with literature on appraisal optimism that
recommends an external view on the appraisal and planning of
a project (Flyvbjerg, 2009; Lovallo and Kahneman, 2003;
Mackie and Preston, 1998).
Our findings indicate that appraisal optimism has largely
been avoided in NPV estimation (i.e. the third category of
weaknesses in CBAs, cf. Fig. 1). Ministries and agencies
generally do not estimate NPVs as positive more often than do
quality assurers. The fact that an external review will be
performed seems to have a disciplining effect on ministries and
agencies. However, we cannot exclude the possibility that CAs
deliberately downgrade or exclude ‘cheap alternatives’ in some
cases.
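As a minimal illustration of how an independent review can change the sign of an NPV estimate, the sketch below compares two hypothetical estimates of the same project; the cash flows and the 4% discount rate are assumptions for illustration, not figures from the studied projects.

```python
# Minimal NPV sketch: a modest downward revision of annual benefits by an
# external reviewer turns a positive estimate negative. All numbers are
# hypothetical.

def npv(cash_flows, rate):
    """Net present value of cash flows indexed by year (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Ministry/agency estimate vs. a quality assurer's estimate of the same project:
agency_estimate = [-100.0] + [9.0] * 20   # investment, then 20 years of benefits
qa_estimate     = [-100.0] + [7.0] * 20   # same investment, lower annual benefits

print(npv(agency_estimate, 0.04) > 0)  # True: positive value for money
print(npv(qa_estimate, 0.04) > 0)      # False: negative value for money
```

The finding above is that such systematic sign disagreements between the two analyses were not observed, which is the sense in which appraisal optimism in NPV estimation appears to have been avoided.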
Furthermore, the comprehensiveness and consistency of the
analyses are largely at an acceptable level in the studied CBAs
(cf. the second category of weaknesses). This also applies to
transparency, which is essential to reveal all three types of
weaknesses in CBAs and to increase decision-makers’ under-
standing of the analyses. Thus, the situation in Norway is
somewhat more encouraging than that found in the UK by
Atkins et al. (2017), where inconsistency, poor transparency,
and poor communication were serious weaknesses in project
appraisals. Similarly, Annema (2013) found that transparency
in Dutch CBAs was generally poor, despite the introduction of
a new CBA guide that had led to other improvements. An
explanation may be the requirement that QA reports in Norway
should be openly available to the public. Nevertheless, there is
potential for improvement in the Norwegian CBAs with regard
to consistency, uncertainty assessments, and transparency.
To summarize, the following research questions have all
largely been answered with a ‘yes’ response or at least a ‘to an
acceptable extent’ response: RQ1 about CBA consistency
across projects, RQ3 about uncertainties being identified and
presented, RQ5 about unbiased estimates, and RQ6 about
transparency and clarity. This may, in turn, explain why RQ7,
about decision-makers’ adherence to CBA recommendations,
can also be answered with a conditional affirmative.
6.2. Non-monetized impacts need a clearer definition and more
systematic treatment, distinguished from considerations beyond
the project’s value for money
Two remaining weaknesses in CBAs require attention. First,
RQ2 about whether the non-monetized impacts are handled
consistently has been answered negatively. Second, the answer
to RQ4, about distributional impacts and other considerations,
is that such issues are being presented and discussed in CAs,
but they are often mixed with the value for money assessments.
The former finding is much in line with the findings of
Ackerman (2004) and of Mackie and Preston (1998), whereas
the latter finding has not been studied systematically, to our
knowledge.
562 G.H. Volden / International Journal of Project Management 37 (2019) 549–564
It should be noted that the two weaknesses are related. There
may be many pros and cons relating to the project beyond value
for money. Our findings confirm that decision-makers do care
about information beyond value for money assessments.
However, when included, such other considerations are often
incorrectly referred to as non-monetized impacts and ‘added’ to
the NPV. This creates confusion for decision-makers, who
cannot be sure what has been measured (i.e., whether value for
money or some other confounded criterion).
One explanation for such observed weaknesses is that the
non-monetized part of a CBA is a difficult topic – a fact that is
neglected in CBA textbooks and guidelines. However,
differences between ministries/agencies and quality assurers
may also indicate opportunism. This means that ministries and
agencies may deliberately overestimate the non-monetized
impacts by including benefits that are not true economic
benefits, and they could do this in the knowledge that it would
be more difficult for the quality assurers to disprove qualitative
arguments than quantitative arguments. If that is the case, the
problem of appraisal optimism in CBAs may be present after
all, although in another form than expected.
Clearly, methodological improvements as well as guidelines
for assessing non-monetized impacts are required. Addition-
ally, quality assurers must take such impacts seriously.
Assessments of non-monetized impacts ought to be guided by
the question of whether they are likely to improve or
worsen the NPV, not by some other valuation principle
(such as whether they are in line with a set of political goals).
Admittedly, the distinction between consumer preferences and
other perspectives is not easy in practice, but this is also a
challenge in monetization (Sager, 2013; Mouter and Chorus,
2016). If we allow for arbitrary interpretations of the non-
monetized impacts, the pricing versus non-pricing decision
could become an opportunistic one.
As noted by Laursen and Svejvig (2016), the definition of
‘value’ in projects is often vague and may depend on the
perspective taken. The great advantage of value for money as
defined by the CBA is its clarity; the disadvantage is that only
efficiency aspects are covered. We believe that definition should
be accepted in practice, whether impacts are monetized or not.
However, with a
narrow interpretation of non-monetized impacts, it is even
more crucial to balance value for money against other
perspectives or interpretations of social value. Not only should
each project alternative’s distributional impacts be presented as
part of the business case, but we also suggest that each project
alternative’s achievement of relevant goals and strategies be
assessed and presented. Goals and strategies may overlap with
value for money, which would typically be the case when
goals are related to national economic development. In other
cases, goals and strategies may be better aligned with
distributional considerations, and thus in conflict with value
for money considerations. For example, goals could be
defined for the well-being of specific groups or regions,
environmental sustainability or other considerations not well
covered by the CBA (cf. Section 2.1). Basically, goals and
strategies could be related to anything that political decision-
makers care about.
Admittedly, goal alignment is already checked for the
shortlisted alternatives in a CA, but some alternatives will often
score higher than do others, which may be relevant for project
selection. We think it is important that the three (or more)
perspectives are presented separately, as shown in Fig. 4 by the
thick lines between them. Thus, it is clear that although the
monetized and non-monetized impacts should be added to
assess the project’s value for money, the different decision
perspectives should not be added. Instead, any conflicts
between the perspectives should be identified, and the final
balancing between them ought to be done by the decision-
makers. Should there be no conflicts, this will normally be
highly relevant and useful information too. The framework
constitutes a holistic business case that can easily be expanded
to fit with an early-phase version of the Five Case Model
applied in the UK (HM Treasury, 2013) or with the OECD-
DAC criteria (Volden, 2018). This topic is worthy of more
attention from the research community as well as from
governments.
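One way to read the suggested framework is as a record in which the decision perspectives are kept as separate fields and presented side by side, never summed into a single score. The sketch below is our illustrative rendering only; all class and field names are ours, and the example values are hypothetical.

```python
# Illustrative sketch (names and values are ours, not from the paper) of a
# holistic early-phase business case: monetized and non-monetized impacts are
# combined within the value-for-money perspective, but the perspectives
# themselves are kept separate for decision-makers to balance.
from dataclasses import dataclass, field

@dataclass
class ValueForMoney:
    npv_monetized: float                               # monetized impacts, as NPV
    non_monetized: dict = field(default_factory=dict)  # impact -> qualitative +/- judgement

@dataclass
class BusinessCase:
    alternative: str
    value_for_money: ValueForMoney
    distributional_impacts: dict                       # within/between generations, listed
    goal_achievement: dict                             # goal -> assessed achievement
    other_perspectives: dict = field(default_factory=dict)

    def perspectives(self):
        # Presented side by side; any conflicts are identified, and the final
        # balancing is left to the decision-makers (the perspectives are not added).
        return {
            "value for money": self.value_for_money,
            "distributional impacts": self.distributional_impacts,
            "goal achievement": self.goal_achievement,
            "other": self.other_perspectives,
        }

case = BusinessCase(
    alternative="new rail link",
    value_for_money=ValueForMoney(
        npv_monetized=-120.0,
        non_monetized={"landscape": "-", "urban development": "+"}),
    distributional_impacts={
        "within generation (equity)": "benefits northern region",
        "between generations (sustainability)": "reduced emissions"},
    goal_achievement={"regional development": "high"},
)
print(sorted(case.perspectives()))
```

A structure of this kind could be extended with further fields to match an early-phase version of the Five Case Model or the OECD-DAC criteria, as noted above.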
6.3. Recommendations
The findings from our research have provided the basis for a
set of practical recommendations to increase CBA usefulness.
The target group for these recommendations is project owners
and senior officers who are responsible for project governance
frameworks. Although the studied projects are public ones, we
believe that many of the following recommendations are
relevant to private sector organizations too.
1. A number of perspectives beyond value for money may be
relevant to decision-makers. We suggest these perspectives
are defined by decision-makers in advance and included in
the business case. In our study, high-quality CBAs were
often presented alone, forming a business case that was too
narrow.
2. An important purpose of a CBA is to assess a number of
alternative solutions to the problem at hand, including not
only large construction projects but also simple and low-cost
solutions. One should be aware that project promoters may not
have the right incentives to include the latter type of
alternatives.
3. Completeness and consistency are important quality criteria,
which comprise, for example, the impact categories
included, the extent to which impacts are monetized, and
the choice of parameter values. Although all projects are
unique, our findings indicate that there is room for more
standardization.
4. Possible errors and uncertainties need to be identified and
presented as part of the CBA, to the extent that they can
affect the ranking and recommendations.
5. Non-monetized impacts are as relevant as the monetized
impacts. They should not be ignored (as some highly
experienced analysts tended to do in this study), nor should
they be overvalued or mixed with perspectives other than the
value for money perspective.
6. Measures to prevent optimism bias on the part of project
promoters are recommended. Relevant measures, such as
transparency and external quality assurance of reports,
seemed to work well in the studied projects.
7. Although not found to be a problem in this study, analyst
competence and qualifications are key.
8. Understandability and communication (meaning, for exam-
ple, the use of simple language and a readily available
summary) are important aspects of transparency in reports,
and relevant to decision-makers who are not CBA experts.
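Recommendation 4 can be illustrated with a small robustness check: an uncertainty matters for the business case when it can flip the NPV-based ranking of the alternatives. The cash flows and the discount-rate range below are hypothetical assumptions, chosen only to show the mechanism.

```python
# Sketch of recommendation 4 (hypothetical numbers): an uncertainty is worth
# reporting when it can change the ranking. Here we test whether the ranking
# of two alternatives is robust across a range of discount rates.

def npv(cash_flows, rate):
    """Net present value of cash flows indexed by year (year 0 undiscounted)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

alt_a = [-100.0] + [10.0] * 20   # capital-intensive, long-lived benefits
alt_b = [-40.0] + [6.0] * 15     # cheaper, shorter-lived alternative

baseline = npv(alt_a, 0.02) > npv(alt_b, 0.02)
flips = [r for r in (0.02, 0.04, 0.06, 0.08)
         if (npv(alt_a, r) > npv(alt_b, r)) != baseline]

# Non-empty `flips` means the ranking is sensitive to the discount rate and
# the uncertainty should be presented to decision-makers.
print(bool(flips))
```

In this toy example the capital-intensive alternative wins at low discount rates but loses at higher ones, which is exactly the kind of ranking-relevant uncertainty the recommendation asks analysts to surface.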
7. Limitations and further work
The use of case projects from a single country has some
limitations. Therefore, broader conclusions cannot be drawn on
the basis of our findings. In particular, as highlighted in the
governance literature, a project governance scheme ought to be
adapted to a specific context. The experiences gained from the
application of the Norwegian QA scheme may not be
transferable to other countries. An interesting topic for further
study would be a systematic comparison of CBA practices in
countries that have introduced independent review of CBAs.
Further, for the sake of simplicity, we have assumed that
decisions are based on an instrumental decision logic, and we
have not considered adverse incentives on the decision-making
level. The true potential for improving decisions through better
CBAs would probably be moderated by various conditions at
the decision-making level. An extended model ought to be
established to take this into account.
Additionally, it should be noted that we have studied CBAs
in an early project phase. It remains to be seen whether the
projects will actually be good value for money after they have
been implemented. The selected project alternative needs to be
developed further in a detailed planning process before the
project is implemented. In that phase, there is a risk of cost
escalation, and the realization of intended impacts has to be
ensured through active cost and benefits management. An
interesting topic for further research would therefore be to
follow the projects throughout subsequent phases, and to
perform updated CBAs in medias res as well as ex post, to learn
whether the agencies manage to retain their focus on producing
value for money.
Funding
The work was supported by the Concept Research Program
at the Norwegian University of Science and Technology, which
in turn is funded by the Norwegian Ministry of Finance.
Declaration of interest
No conflicts of interest.
Acknowledgements
The author would like to thank her colleagues Ole Jonny
Klakegg, Heidi Bull-Berg and Ola Lædre for interesting
discussions and useful comments to an earlier draft. Further,
she would like to thank the editor and the anonymous reviewers
for their contributions to improving the paper in the review
process.
References
Ackerman, F., 2004. Priceless benefits, costly mistakes: what’s wrong with
cost-benefit analysis. Post-autistic Econ. Rev. 25.
Andersen, B.S., Samset, K., Welde, M., 2016. Low estimates – high stakes:
underestimation of costs at the front-end of projects. Int. J. Manag. Proj.
Bus. 9 (1), 171–193.
Annema, J.A., 2013. The use of CBA in decision-making on mega-projects:
Empirical evidence. In: Priemus, H., van Wee, B. (Eds.), International
Handbook on Mega-Projects. Edward Elgar, Cheltenham, UK,
pp. 291–313.
Association for Project Management, 2018. Resources. https://www.apm.org.
uk/resources/ (retrieved April 2018).
Atkins, G., Davies, N., Kidney Bishop, T., 2017. How to Value Infrastructure:
Improving Cost Benefit Analysis. Institute for Government, London, UK.
Baccarini, D., 1999. The logical framework method for defining project
success. Proj. Manag. J. 30 (4), 25–32.
Bertisen, J., Davis, G.A., 2008. Bias and error in mine project capital cost
estimation. Eng. Econ. 53 (2), 118–139.
Boardman, A., Greenberg, D., Vining, A., Weimer, D., 2011. Cost-Benefit
Analysis. 4th ed. Pearson.
Breese, R., Jenner, S., Serra, C.E.M., Thorp, J., 2015. Benefits management.
Lost and found in translation. Int. J. Proj. Manag. 33, 1438–1451.
Browne, D., Ryan, L., 2011. Comparative analysis of evaluation techniques for
transport policies. Environ. Impact Assess. Rev. 31 (3), 226–233.
Dobes, L., Bennett, J., 2009. Multi-criteria analysis: ‘good enough’ for
government work? Agenda 16 (3).
Eisenhardt, K., 1989. Agency theory: an assessment and review. Acad. Manag.
Rev. 14 (1), 57–74.
Eliasson, J., Börjesson, M., Odeck, J., Welde, M., 2015. Does benefit–cost
efficiency influence transport investment decisions? J. Transport Econ. Pol.
49 (3), 377–396.
Elvik, R., 2017. The Value of Life: The Rise and Fall of a Scientific Research
Programme. Doctoral theses at NTNU 2017:340. NTNU, Trondheim,
Norway.
Eskerod, P., Huemann, M., 2013. Sustainable development and project
stakeholder management: what standards say. Int. J. Manag. Proj. Bus. 6
(1), 36–50.
Finansdepartementet, 2005. Veileder i samfunnsøkonomiske analyser.
Regjeringen, Oslo, Norway.
Finansdepartementet, 2014. Prinsipper og krav ved utarbeidelse av
samfunnsøkonomiske analyser mv. Rundskriv R-109/14. Regjeringen,
Oslo, Norway.
Flyvbjerg, B., 2006. Five misunderstandings about case-study research. Qual.
Inq. 12 (2), 219–245.
Flyvbjerg, B., 2009. Survival of the unfittest: why the worst infrastructure gets
built—and what we can do about it. Oxf. Rev. Econ. Policy 25 (3),
344–367.
Flyvbjerg, B., Bruzelius, N., Rothengatter, W., 2003. Megaprojects and Risk:
An Anatomy of Ambition. Cambridge University Press, Cambridge, UK.
Haavaldsen, T., Lædre, O., Volden, G.H., Lohne, J., 2014. On the concept of
sustainability – assessing the sustainability of large public infrastructure
investment projects. Int. J. Sustain. Eng. 7 (1), 2–12.
HEATCO, 2006. Deliverable 5: Proposal for Harmonised Guidelines. http://
www.kbsz.hu/dokumentumok/20070411_0.2-HEATCO_D5 (retrieved
24 Sept 2018).
Hjelmbrekke, H., Klakegg, O.J., Lohne, J., 2017. Governing value creation in
construction project: a new model. Int. J. Manag. Proj. Bus. 10 (1), 60–83.
HM Treasury, 2013. Public Sector Business Cases Using the Five Case Model
(Green Book Supplementary Guidance on Developing Public Value from
Spending Proposals).
Jenner, S., 2015. Why do projects ‘fail’ and more to the point what can we do
about it? The case for disciplined, ‘fast and frugal’ decision-making. PM
World J. 4 (3).
Kelly, C., Laird, J., Constantini, S., Richards, P., Carbajo, J., Nellthorp, J.,
2015. Ex post appraisal: what lessons can be learnt from EU cohesion
funded transport projects. Transp. Policy 37, 83–91.
Laursen, M., Svejvig, P., 2016. Taking stock of project value creation: a
structured literature review with future directions for research and practice.
Int. J. Proj. Manag. 34 (4), 736–747.
Lovallo, D., Kahneman, D., 2003. Delusions of success: how optimism
undermines executives’ decisions. Harv. Bus. Rev. 81 (7), 56–63.
Mackie, P., Preston, J., 1998. Twenty-one sources of error and bias in transport
project appraisal. Transp. Policy 5, 1–7.
Mackie, P., Worsley, T., Eliasson, J., 2014. Transport appraisal revisited. Res.
Transp. Econ. 47, 3–18.
Morris, P.W.G., 2013. Reconstructing project management reprised: a
knowledge perspective. Proj. Manag. J. 44 (5), 6–23.
Mouter, N., 2017. Dutch politicians’ use of cost-benefit analysis. Transportation
44, 1127–1145.
Mouter, N., Chorus, C., 2016. Value of time – a citizen perspective. Transp.
Res. A 91, 317–329.
Müller, R., 2009. Project Governance: Fundamentals of Project Management.
Gower, New York, NY.
Musawir, A., Serra, C., Zwikael, O., Ali, I., 2017. Project governance, benefit
management, and project success: towards a framework for supporting
organizational strategy implementation. Int. J. Proj. Manag. 35 (8),
1658–1672.
Næss, P., 2006. Cost-benefit analysis of transportation investments: neither
critical nor realistic. J. Crit. Real. 5 (1), 32–60.
Næss, P., Volden, G.H., Odeck, J., Richardson, T., 2017. Neglected and
Underestimated Negative Impacts of Transport Investments. Concept
Report No. 54. Ex ante Academic Publisher, Trondheim, Norway.
Nicolaisen, M.S., Driscoll, P.A., 2014. Ex-post evaluations of demand forecast
accuracy: a literature review. Transp. Rev. 34 (4), 540–557.
Nyborg, K., 1998. Some politicians’ use of cost-benefit analysis. Public Choice
95, 381–401.
Nyborg, K., 2014. Project evaluation with democratic decision-making:
what does cost-benefit analysis really measure? Ecol. Econ. 196,
124–131.
Office of Government Commerce, 2009. Managing Successful Projects
with PRINCE2 (PRINCE2™). 5th ed. TSO (The Stationery Office),
London, UK.
Patton, M.Q., 1999. Enhancing the quality and credibility of qualitative
analysis. Health Serv. Res. J. 34 (5 Pt 2), 1189–1208.
Pearce, D., Atkinson, G., Mourato, P., 2006. Cost-benefit Analysis and the
Environment: Recent Developments. OECD, Paris, France.
Project Management Institute (Ed.), 2017. A Guide to the Project Management
Body of Knowledge (PMBOK® Guide), 6th ed Project Management
Institute, Newton Square, PA.
Sager, T., 2013. The comprehensiveness dilemma of cost-benefit analysis. Eur.
J. Transp. Infrastruct. Res. 13 (3), 169–183.
Sager, T., 2016. Why don’t cost-benefit results count for more? The case of
Norwegian road investment priorities. Urban Plan. Transp. Res. 4 (1), 101–121.
Samset, K., 2003. Project Evaluation: Making Projects Succeed. Tapir
Academic Press. Trondheim, Norway.
Samset, K., Christensen, T., 2017. Ex ante project evaluation and the
complexity of early decision-making. Public Org. Rev. 17 (1), 1–17.
Samset, K., Volden, G.H., 2012. The proposal. In: Williams, T., Samset, K.
(Eds.), Project Governance: Getting Investments Right. Palgrave Macmil-
lan, Basingstoke, UK, pp. 46–80.
Samset, K., Volden, G.H., 2015. Front-end definition of projects: ten paradoxes
and some reflections regarding project management and project governance.
Int. J. Proj. Manag. 34 (2), 297–313.
Scriven, M., 2015. Key Evaluation Checklist (KEC). http://michaelscriven.info/
papersandpublications.html (retrieved 23rd January 2019).
Serra, C.E.M., Kunc, M., 2015. Benefits realisation management and its
influence on project success and on the execution of business strategies. Int.
J. Proj. Manag. 33 (1), 53–66.
Shenhar, A.J., Dvir, D., Levy, O., Maltz, A.C., 2001. Project success: a
multidimensional strategic concept. Long Range Plan. 34, 699–725.
Small, K., 1999. Project evaluation. In: Gómez-Ibáñez, J., Tye, W.B., Winston,
C. (Eds.), Essays in Transport Economics and Policy: A Handbook in
Honor of John R. Meyer. Brookings Institution, Washington, DC.
Standing Advisory Committee on Trunk Road Assessment (SACTRA), 1999.
Transport and the Economy. TSO, Norwich, UK.
Terlizzi, M.A., Albertin, A.L., de Oliveira, H.R., de Moraes, C., 2017. IT
benefits management in financial institutions: practices and barriers. Int.
J. Proj. Manag. 35 (5), 763–782.
Venables, A.J., 2007. Evaluating urban transport improvements: cost-benefit
analysis in the presence of agglomeration and income taxation. J. Transp.
Econ. Pol. 41 (2), 173–188.
Vickerman, R., 2008. Transit investments and economic development. Res.
Transp. Econ. 23 (1), 107–115.
Volden, G.H., 2018. Public project success as seen in a broad perspective:
lessons from a meta-evaluation of 20 infrastructure projects in Norway.
Eval. Prog. Plan. 69, 109–117.
Volden, G.H., Andersen, B., 2018. The hierarchy of public project governance
frameworks: an empirical study of principles and practices in Norwegian
ministries and agencies. Int. J. Manag. Proj. Bus. 11 (1), 174–198.
Volden, G.H., Samset, K., 2017a. Governance of major public investment projects:
principles and practices in six countries. Proj. Manag. J. 48 (3), 90–108.
Volden, G.H., Samset, K., 2017b. Quality assurance in megaproject
management: The Norwegian way. In: Flyvbjerg, B. (Ed.), The Oxford
Handbook of Megaproject Management. Oxford University Press, Oxford,
UK.
Wachs, M., 1989. When planners lie with numbers. J. Am. Plan. Assoc. 55 (4),
476–479.
van Wee, B., 2007. Large infrastructure projects: a review of the quality of
demand forecasts and cost estimations. Environ. Plan. B 34, 611–625.
van Wee, B., 2013. Ethics and the ex ante evaluation of mega-projects. In:
Priemus, H., van Wee, B. (Eds.), International Handbook on Mega-Projects.
Edward Elgar, Cheltenham, UK, pp. 356–381.
van Wee, B., Rietvold, P., 2013. CBA: Ex ante evaluation of mega-projects. In:
Priemus, H., van Wee, B. (Eds.), International Handbook on Mega-Projects.
Edward Elgar, Cheltenham, UK, pp. 269–291.
Williams, T., Samset, K., 2010. Issues in front-end decision making on projects.
Proj. Manag. J. 41 (2), 38–49.
World Bank, 2010. Cost-Benefit Analysis in World Bank Projects. Independent
Evaluation Group, World Bank, Washington, DC.
Zwikael, O., Smyrk, J., 2012. A general framework for gauging the
performance of initiatives to enhance organizational value. Br. J. Manag.
23, 6–22.
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0295
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0295
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0300
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0300
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0300
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0305
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0305
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0310
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0310
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0310
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0310
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0315
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0315
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0320
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0320
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0325
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0325
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0330
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0330
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0335
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0335
http://refhub.elsevier.com/S0263-7863(18)30600-8/rf0335
- Assessing public projects’ value for money: An empirical study of the usefulness of cost–benefit analyses in decision-making
Insights in Biology and Medicine | Open Access | ISSN 2639-6769 | https://www.heighpubs.org
Review Article
A review of research process, data collection and analysis
Surya Raj Niraula*
Professor (Biostatistics), School of Public Health and Community Medicine, B.P. Koirala
Institute of Health Sciences, Dharan, Nepal
*Address for Correspondence: Dr. Surya Raj
Niraula, Postdoc (USA), PhD (NPL), Professor
(Biostatistics), School of Public Health and
Community Medicine, B.P. Koirala Institute
of Health Sciences, Dharan, Nepal, Tel: +977
9842035218; Email: surya.niraula@bpkihs.edu
Submitted: 10 December 2018
Approved: 10 January 2019
Published: 11 January 2019
Copyright: © 2019 Niraula SR. This is an open
access article distributed under the Creative
Commons Attribution License, which permits
unrestricted use, distribution, and reproduction
in any medium, provided the original work is
properly cited
How to cite this article: Niraula SR. A review of research process, data collection and analysis. Insights
Biol Med. 2019; 3: 001-006. https://doi.org/10.29328/journal.ibm.1001014
Research is the process of searching for knowledge. It is a systematic search for pertinent information on a specific topic of interest: a careful investigation or inquiry, especially through the search for new facts in any branch of knowledge [1]. It is a scientific way of getting answers to research questions and testing hypotheses. The research question is based on uncertainty about something in the population. It can be formulated by searching the literature in indexed and non-indexed journals, books, the internet, and unpublished research work. A good research question should follow the FINER criteria, i.e., Feasible, Interesting, Novel, Ethical, and Relevant [2].
The complete research is the whole design, which runs from defining the research problems to writing the report (Figure 1). The research problems are determined on the basis of well-known concepts and theories or previous research findings. The assumptions are stated in the form of hypotheses. The process of inquiry is carried out by interviewing, observing, or recording data, and the collected data are analyzed and interpreted. Basically there are two approaches to data collection: quantitative and qualitative. The quantitative approach views human phenomena as amenable to objective study, i.e., able to be measured; it has its roots in positivism. The quantitative approach involves data collection methods such as structured questionnaires, interviews, and observations, together with other tools. This approach helps investigators to quantify information.
Figure 1: Flow chart of the complete research process: define research problems → review of literature (concepts and theories; previous research findings) → formulate hypothesis → data collection → analysis → interpret data and report.
On the other hand, in-depth interviews and unstructured observations are associated with qualitative research. Socially stigmatized and hidden issues are understood and explored through the qualitative research approach. In fact, the purpose of quantitative research is to measure predetermined concepts or variables objectively and to examine the relationships between them numerically and statistically. Researchers have to choose methods that are appropriate for answering their questions.
Where do data come from?
Basically there are two sources of data: primary and secondary. Secondary data, generally obtained from government departments such as health, education, and population, or collected from the records of hospitals, clinics, and schools, can be utilized for our own research. Secondary sources may also include private and foundation databases, city and county governments, surveillance data from government programs, and federal agency statistics (Census, NIH, etc.). Using secondary data may save survey cost and time, and the data may be accurate if a government agency has collected the information. However, secondary data have several limitations. They may be out of date for what we want to analyze, or may not have been collected long enough to detect trends (e.g., organism patterns registered in a hospital for only 2 months). A major limitation is that the research objectives must be formulated on the basis of the variables available in the data set. There may also be missing information on some observations; unless such missing information is caught and corrected for, the analysis will be biased. Many biases are possible, such as sample selection bias, source choice bias, and dropout.
Primary sources have more advantages than secondary sources of data. The data can be collected through surveys, focus groups, questionnaires, personal interviews, experiments, and observational studies. If we have time for designing the collection instrument, selecting the population or sample, pretesting/piloting the instrument to work out sources of bias, administering the instrument, and collecting/entering the data, then by using a primary source of data collection the researcher can minimize sampling bias and other confounding biases.
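One of the steps above, selecting the sample, can be sketched in a few lines of Python. The sampling frame and sample size below are invented for the example; the point is that drawing a simple random sample without replacement (with a recorded seed) is one concrete way to reduce selection bias:

```python
import random

# Hypothetical sampling frame: IDs of 500 patients in a clinic registry.
population = [f"PT-{i:04d}" for i in range(500)]

# Draw a simple random sample of 50 without replacement;
# a fixed seed makes the draw reproducible for an audit trail.
rng = random.Random(42)
sample = rng.sample(population, k=50)

print(len(sample))       # number of subjects selected
print(len(set(sample)))  # all distinct, since sampling is without replacement
```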
Analysis
Analysis is an important part of research. The analysis of the data depends upon the types of variables and their nature [3]. The first step in data analysis is to describe the characteristics of the variables. The analysis can be organized as follows:
Summarizing data: Data are a collection of values of one or more variables. A variable is a characteristic of the sample that takes different values for different subjects. Values can be numeric, counts, or categories. Continuous numeric variables have numeric meaning, a unit of measurement, and may be fractional, e.g., height, weight, blood pressure, monthly income. Discrete variables are based on a counting process, e.g., the number of students in different classes or the number of patients visiting the OPD each day [4].
If the variables are numeric, they can be explored by plotting histograms, stem-and-leaf plots, box-and-whisker plots, and normal plots to visualize how well the values fit a normal distribution. When the variables are categorical, they can be visualized by pie charts or bar diagrams, or simply by frequencies and percentages.
A statistic is a number summarizing a set of values. Simple or univariate statistics summarize the values of one variable. Effect or outcome statistics summarize the relationship between the values of two or more variables. Simple statistics for numeric variables are:
a) Mean: the average
b) Standard deviation: the typical variation
c) Standard error of the mean: the typical variation in the mean with repeated sampling, i.e., the standard deviation divided by the square root of the sample size.
The mean and standard deviation are the most commonly used measures of central tendency and dispersion, respectively, for normally distributed data (Tables 1, 2). The median (middle value, or 50th percentile) and quartiles (25th and 75th percentiles) are used for grossly non-normally distributed data.
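The summary statistics above can be computed directly with Python's standard library; the blood pressure readings below are invented for the example:

```python
import statistics

# Hypothetical systolic blood pressure readings (mmHg) for 10 subjects.
sbp = [118, 122, 125, 119, 131, 127, 121, 135, 124, 128]

mean = statistics.mean(sbp)                  # central tendency
sd = statistics.stdev(sbp)                   # sample standard deviation
sem = sd / len(sbp) ** 0.5                   # standard error of the mean
median = statistics.median(sbp)              # 50th percentile
q1, q2, q3 = statistics.quantiles(sbp, n=4)  # 25th, 50th, 75th percentiles

# Report mean ± SD for roughly normal data; median and quartiles otherwise.
print(round(mean, 1), round(sd, 1), round(sem, 2), median)
```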
Common statistical tests
Table 1 describes how different tests are applied for different purposes. Simple statistics for categorical variables are the frequency, proportion, or odds ratio. The effect size derived from a statistical model (equation) of the form Y (dependent) versus X (predictor) depends on the types of Y and X.
a) If the model is numeric versus numeric, e.g., SBP versus cholesterol, linear regression with a correlation coefficient can be used to find the relationship between the variables. The effect statistics are the slope and intercept of the fitted line, called the parameters. The correlation coefficient is interpreted in terms of the variance explained by the model, which provides a measure of goodness of fit. Another statistic, the typical or standard error of the estimate, gives the residual error and a basis for measures of validity (with the criterion variable on the Y axis).
b) If the model is numeric versus categorical, e.g., marks in a medical exam versus sex, the test will be a t-test for 2 groups or one-way ANOVA for more than two groups. The effect statistic is the difference between means, expressed as a raw difference, a percent difference, or a fraction of the root mean square error (an average standard deviation of the two groups). Table 2 shows the result of ANOVA for academic performance.
c) If the model is numeric versus categorical with repeated measures at different time intervals, e.g., weight loss (kg) by month, the test will be a paired t-test (2 months) or repeated-measures ANOVA with one within-subject factor (>2 months). The effect statistic is the change in the mean, expressed as a raw change, a percentage change, or a fraction of the pre-test standard deviation.
d) If the model is categorical versus categorical, e.g., smoking habit versus sex, the test will be the Chi-square or Fisher exact test, and the effect statistics are relative frequencies, expressed as a difference in frequencies, a ratio of frequencies (relative risk), or an odds ratio. The relative risk is appropriate for
Table 1: Test statistics based on types of variables.
Y (Response) | X (Predictor) | Model/Test | Effect Statistics
Numeric | Numeric | Regression | Slope, intercept, correlation
Numeric | Categorical | t-test, ANOVA | Mean difference
Categorical | Categorical | Chi-square, Fisher exact | Frequency difference or ratio
Categorical | Numeric | Categorical modeling | Frequency ratio
Table 2: Academic performance in different levels of the MBBS students during 1994 to 1996 (Mean ± SD).
Batches (n) | SLCS | ISS | EES | MBBS I | MBBS II | MBBS III | MBBS IV | MBBS V | MBBS Total
1994 (29) | 74.2±6.3 | 71.9±7.8 | 71.3±2.5 | 67.8±5.2 | 71.0±5.1 | 73.3±5.9 | 69.5±4.3 | 65.8±3.0 | 69.5±4.2
1995 (29) | 75.2±5.1 | 69.9±8.7 | 52.1±4.0 | 67.3±5.4 | 68.0±4.0 | 65.3±3.5 | 65.3±3.5 | 62.3±17.6 | 65.6±5.6
1996 (28) | 76.4±5.3 | 71.2±8.4 | 54.4±4.2 | 69.3±5.1 | 73.2±4.6 | 64.3±3.0 | 65.7±3.2 | 66.5±3.2 | 67.8±3.4
F value | 1.1 | 0.4 | 241.2 | 1.1 | 9.2 | 42.7 | 11.1 | 1.3 | 5.2
P value | NS | NS | <0.0001 | NS | <0.0001 | <0.0001 | <0.0001 | NS | <0.01
Source: Niraula et al., 2006 [6].
cross-sectional or prospective designs: it is the risk of having a certain disease in one group relative to another group. The odds ratio is appropriate for case-control designs and is the cross-product of a 2×2 contingency table.
e) If the model is a nominal category versus ≥2 numeric predictors, e.g., heart disease versus age, sex, and regular exercise, the test will be categorical modeling, and the effect statistics will be the relative risk or odds ratio. This can be analyzed using logistic regression or generalized linear modeling. Most complex models are reducible to t-tests, regression, or relative frequencies.
f) If the model is a controlled trial (numeric versus 2 nominal categories), e.g., strength versus trial versus group, the test will be an unpaired t-test of change scores (2 trials, 2 groups) or repeated-measures ANOVA with within- and between-subject factors (>2 trials or groups). The effect statistic is the difference in the change in the mean, expressed as a raw difference, a percent difference, or a fraction of the pre-test standard deviation [5].
g) If the model adds an extra predictor variable to “control for something” (numeric versus ≥2 numeric), e.g., cholesterol versus physical activity versus age, multiple linear regression or analysis of covariance (ANCOVA) can be used. An example of the use of linear regression analysis to find the significant predictor of MBBS performance is shown in Table 3.
h) If we want to find the degree of association between two numeric variables, we can examine the correlation coefficient, which takes values from -1 to +1. A positive coefficient indicates a positive association, whereas a negative coefficient indicates a negative association. An example of the use of a correlation matrix to show the associations between scores in different classes is given in Table 4.
Generalizing from a sample to a population
We study a sample to estimate the population parameter. The value of a statistic for a sample is only an estimate of the true (population) value, whose precision or uncertainty is expressed using 95% confidence limits. The confidence limits represent the likely range of the true value. There is a 5% chance that the true value lies outside the 95% confidence interval; this is also called the level of significance, the Type I error rate [7,8]. Statistical significance is an old-fashioned way of generalizing, based on testing whether the true value could be zero or null.
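For a roughly normally distributed statistic, an approximate 95% confidence interval is the estimate ± 1.96 standard errors. A minimal Python sketch, with invented data:

```python
import statistics

# Hypothetical fasting glucose values (mg/dL) for a sample of 16 subjects.
glucose = [92, 88, 95, 101, 90, 97, 93, 99, 87, 94, 96, 91, 98, 89, 100, 90]

n = len(glucose)
mean = statistics.mean(glucose)
sem = statistics.stdev(glucose) / n ** 0.5   # standard error of the mean

# Approximate 95% confidence limits for the population mean
# (normal critical value 1.96; for small n a t critical value
# would give a slightly wider interval).
lower, upper = mean - 1.96 * sem, mean + 1.96 * sem

print(round(mean, 2), (round(lower, 1), round(upper, 1)))
```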
Table 3: Stepwise linear regression for predicting MBBS performance.
Model | Unstandardized B | SE | Standardized Beta | P value | Tolerance | VIF
(Constant) | 57.34 | 4.32 | | 0.00 | |
Intermediate in Science Score | 0.145 | 0.06 | 0.253 | <0.02 | 1.0 | 1.0
R² = 0.064, Adjusted R² = 0.053; F(1,84) = 5.77, P < 0.02
Source: Niraula et al., 2006 [6].
Table 4: Correlation matrix of academic performance of MBBS students.
Scores | SLCS | ISS | EES | MBBSI | MBBSII | MBBSIII | MBBSIV | MBBS V
ISS | 0.290*3 | | | | | | |
EES | -0.079 | 0.076 | | | | | |
MBBSI | 0.177 | 0.247*2 | -0.094 | | | | |
MBBSII | 0.167 | 0.208 | 0.025 | 0.770*4 | | | |
MBBSIII | 0.075 | 0.245*2 | 0.647*5 | 0.299*3 | 0.442*5 | | |
MBBSIV | 0.151 | 0.197 | 0.362*4 | 0.632*5 | 0.712*5 | 0.755*5 | |
MBBS V | 0.059 | 0.114 | -0.055 | 0.544*5 | 0.404*5 | 0.224*1 | 0.512*5 |
MBBSS | 0.145 | 0.242*2 | 0.179 | 0.806*5 | 0.789*5 | 0.631*5 | 0.872*5 | 0.7931*5
Source: Niraula et al., 2006 [6].
− Assume the null hypothesis: that the true value is zero (null).
− If the observed value falls in a region of extreme values that would occur only 5% of the time, we reject the null hypothesis.
− That is, we decide that the true value is unlikely to be zero; we can state that the result is statistically significant at the 5% level.
− If the observed value does not fall in the 5% unlikely region, most people mistakenly accept the null hypothesis: they conclude that the true value is zero or null!
− The p value helps us decide whether our result falls in the unlikely region: if p < 0.05, the result is in the unlikely region.
One meaning of the p value is the probability of a more extreme observed value (positive or negative) when the true value is zero. A better interpretation: if we observe a positive effect, 1 − p/2 is the chance that the true value is positive, and p/2 is the chance that the true value is negative. For example, if we observe a 1.5% enhancement of performance with p = 0.08, there is a 96% chance that the true effect is an enhancement and a 4% chance that it is an impairment. This interpretation does not take trivial enhancements and impairments into account. Therefore, when p values must be used, show exact values, not p < 0.05 or p > 0.05. Meta-analysts also need the exact p value (or confidence limits).
If the true value is zero, there is a 5% chance of getting statistical significance: the Type I error rate, or rate of false positives (false alarms). There is also a chance that the smallest worthwhile true value will produce an observed value that is not statistically significant: the Type II error rate, or rate of false negatives (failed alarms). The Type II error rate is related to the sample size. In the old-fashioned approach to research design, we are supposed to have enough subjects to keep the Type II error rate at 20%: that is, the study is supposed to have 80% power to detect the smallest worthwhile effect. If we look at many effects in a study, there is an increased chance of being wrong about at least one of them. Old-fashioned statisticians like to control this inflation of the Type I error rate within an ANOVA to make sure the overall chance is kept to 5%. This approach is misguided.
In summary, the research process begins with defining the research problems, followed by review of the literature, formulation of hypotheses, data collection, analysis, and interpretation, and ends with report writing. Many biases can arise during data collection, and the analysis of research data should be carried out with great care. A researcher who uses statistical tests of significance should report exact p values; better still, report confidence limits instead. The standard error of the mean should be shown only when estimating a population parameter. Usually the between-subject standard deviation should be presented to convey the spread between subjects; in population studies, this standard deviation helps convey the magnitude of differences or changes in the mean. In interventions, also show the within-subject standard deviation (the typical error) to convey the precision of measurement.
Chi-square and Fisher exact tests are used for categorical variables (category versus category). Two numeric variables are examined by the correlation coefficient. For a numeric response versus two categories, the t-test is suitable for normal data; ANOVA should be applied for a numeric response versus ≥2 categorical variables. A multiple regression model is used to find the adjusted effects of all possible predictors (≥2) on a numeric response variable.
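The test-selection rules recapped above can be expressed as a small lookup, useful as a sanity check when planning an analysis (the function name and type labels are invented for the example; the mapping follows Table 1):

```python
def choose_test(response: str, predictor: str) -> str:
    """Map (response, predictor) variable types to a conventional test,
    following Table 1; types are 'numeric' or 'categorical'."""
    table = {
        ("numeric", "numeric"): "linear regression / correlation",
        ("numeric", "categorical"): "t-test (2 groups) or ANOVA (>2 groups)",
        ("categorical", "categorical"): "Chi-square or Fisher exact test",
        ("categorical", "numeric"): "categorical modeling (e.g., logistic regression)",
    }
    return table[(response, predictor)]

print(choose_test("numeric", "categorical"))
```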
References
1. The Advanced Learner’s Dictionary of Current English. Oxford. 1952; 1069.
2. Farrugia P, Petrisor BA, Farrokhyar F, Bhandari M. Practical tips for surgical research: research questions, hypotheses and objectives. Can J Surg. 2010; 53: 278-281.
3. Niraula SR, Jha N. Review of common statistical tools for critical analysis of medical research. JNMA. 2003; 42: 113-119.
4. Reddy MV. Organisation and collection of data. In: Statistics for Mental Health Care Research. 2002; Edition 1: 13-23.
5. Lindman HR. Analysis of Variance in Experimental Design. New York: Springer-Verlag; 1992.
6. Niraula SR, Khanal SS. Critical analysis of performance of medical students. Education for Health. 2006; 19: 5-13.
7. Indrayan A, Gupta P. Sampling techniques, confidence intervals and sample size. Natl Med J India. 2000; 13: 29-36.
8. Simon R. Confidence intervals for reporting results of clinical trials. Ann Intern Med. 1986; 105: 429-435.
RCA2: Improving Root Cause Analyses and Actions to Prevent Harm
National Patient Safety Foundation
268 Summer Street | Boston, MA 02210 | 617.391.9900 | www.npsf.org
Version 2. January 2016
© Copyright 2015 by the National Patient Safety Foundation.
All rights reserved.
Second online publication, Version 2, January 2016.
First online publication June 2015.
This report is available for downloading on the Foundation’s website, www.npsf.org.
This report or parts of it may be printed for individual use or distributed for training
purposes within your organization.
No one may alter the content in any way, or use the report in any commercial context,
without written permission from the publisher:
National Patient Safety Foundation
Attention: Director, Information Resources
280 Summer Street, Ninth Floor [updated August 1, 2016]
Boston, MA 02210
info@npsf.org
About the National Patient Safety Foundation
®
The National Patient Safety Foundation’s vision is to create a world where patients and
those who care for them are free from harm. A central voice for patient safety since
1997, NPSF partners with patients and families, the health care community, and key
stakeholders to advance patient safety and health care workforce safety and dissemi-
nate strategies to prevent harm.
NPSF is an independent, not-for-profit 501(c)(3) organization. Information about the
work of the National Patient Safety Foundation may be found at www.npsf.org.
®
CONTENTS
Objective
Definitions
Events Appropriate for RCA2 Review versus Blameworthy Events
Risk-Based Prioritization of Events, Hazards, and System Vulnerabilities
Close Calls
Timing
Team Size
Team Membership
Interviewing
Analysis Steps and Tools
Actions
Measuring Action Implementation and Effectiveness
Feedback
Leadership and Board Support
Measuring the Effectiveness and Sustainability of the RCA2 Process
Appendix 2. Triggering Questions for Root Cause Analysis
ACKNOWLEDGMENTS
Core Working Group
James P. Bagian, MD, PE
Project Co-Chair
Director, Center for Health Engineering and
Patient Safety, University of Michigan
Doug Bonacum, CSP, CPPS
Project Co-Chair
Vice President, Quality, Safety, and Resource
Management,
Kaiser Permanente
Joseph DeRosier, PE, CSP
Program Manager, Center for Health Engineering
and Patient Safety, University of Michigan
John Frost
President, Safety Engineering Services Inc.
Member, Aerospace Safety Advisory Panel (ASAP)
National Aeronautics and Space Administration
(NASA)
Member, Board of Directors, APT Inc.
Rollin J. “Terry” Fairbanks MD, MS, FACEP, CPPS
Director, National Center for Human Factors in
Healthcare and Simulation Training & Education
Lab, MedStar Institute for Innovation, MedStar
Health
Associate Professor of Emergency Medicine,
Georgetown University
Tejal Gandhi, MD, MPH, CPPS
President and Chief Executive Officer,
National Patient Safety Foundation
Helen Haskell, MA
Founder, Mothers Against Medical Error
President, Consumers Advancing Patient Safety
Patricia McGaffigan, RN, MS
Chief Operating Officer and Senior Vice President,
Program Strategy, National Patient Safety
Foundation
Faye Sheppard RN, MSN, JD, CPPS
Principal, Patient Safety Resources
Expert Advisory Group
John S. Carroll
Professor of Organization Studies and
Engineering Systems, Massachusetts Institute of
Technology
Co-Director, Lean Advancement Initiative at MIT
Michael R. Cohen, RPh, MS, ScD (hon), DPS (hon)
President,
Institute for Safe Medication Practices
Thomas W. Diller, MD, MMM
Vice President and System Chief Medical Officer,
CHRISTUS Health
Noel Eldridge, MS
Senior Advisor, Public Health Specialist, Center for
Quality Improvement and Patient Safety, Agency
for Healthcare Research and Quality
Andrew R. Hallahan, MD
Medical Lead, Patient Safety, Children’s Health
Queensland Hospital and Health Service
Robin Hemphill, MD, MPH
Director, National Center for Patient Safety,
US Department of Veterans Affairs
James P. Keller, Jr., MS
Vice President, Health Technology Evaluation and
Safety,
ECRI Institute
Carol Keohane, MS, RN
Assistant Vice President, Academic Medical
Center’s Patient Safety Organization, CRICO
Maria Lombardi, RN, MSN, CCRN
Clinical Nursing Director, Tufts Medical Center
Robert Schreiber, MD
Medical Director of Evidence-Based Programs,
Hebrew Senior Life Department of Medicine
Medical Director, Healthy Living Center for
Excellence
Clinical Instructor of Medicine, Harvard Medical
School
Julie Spencer, RN, CPHRM
System Director of Risk Management, BSWH Risk
Management
Mary J. Tharayil, MD, MPH
Staff Physician
Brigham and Women’s Hospital
Ailish Wilkie, MS, CPHQ, CPHRM
Patient Safety and Risk Management
Atrius Health
Ronald M. Wyatt, MD, MHA, DMS (hon.)
Medical Director, Healthcare Improvement, Office
of the Chief Medical Officer,
The Joint Commission
NPSF STAFF
Tejal K. Gandhi, MD, MPH, CPPS
President and Chief Executive Officer
Patricia McGaffigan, RN, MS
Chief Operating Officer and
Senior Vice President, Program Strategy
Joellen Huebner
Program Coordinator, Special Projects
Patricia McTiernan, MS
Assistant Vice President, Communications
Elma Sanders, PhD
Communications Manager
The National Patient Safety Foundation gratefully acknowledges James Bagian, MD, PE, and
Doug Bonacum, CSP, CPPS, for their work as co-chairs of this project.
Special thanks are due to Joseph DeRosier, PE, CSP, for lead authorship of this report,
and to Mary Tharayil, MD, MPH, for preparatory research.
The National Patient Safety Foundation gratefully acknowledges
The Doctors Company Foundation for its critical and generous support of this project.
ENDORSEMENTS
The following organizations have endorsed the use of this document as a valuable resource in efforts to create a more effective event analysis and improvement system:
AAMI
AAMI Foundation
Alliance for Quality Improvement and Patient Safety (AQIPS)
American Society of Health-System Pharmacists (ASHP)
Association of Occupational Health Professionals in Healthcare (AOHP)
Atrius Health
Aurora Health Care
Canadian Patient Safety Institute
Children’s Health Queensland Hospital and Health Service
CHRISTUS Health
Citizens for Patient Safety
CRICO | Risk Management Foundation of the Harvard Medical Institutions
The Doctors Company
ECRI Institute
HCA Patient Safety Organization, LLC
Institute for Healthcare Improvement
Institute for Safe Medication Practices
The Joint Commission
Kaiser Permanente
MHA Keystone Center
National Association for Healthcare Quality (NAHQ)
National Council of State Boards of Nursing (NCSBN®)
Tufts Medical Center and Floating Hospital for Children
EXECUTIVE SUMMARY
Millions of patients in the United States are harmed every year as a result of the health
care they receive.(1) The National Patient Safety Foundation (NPSF), with support from The
Doctors Company Foundation, convened a panel of subject matter experts and stakehold-
ers to produce recommended practices to improve the manner in which we can learn from
adverse events and unsafe conditions and take action to prevent their occurrence in the
future. Traditionally, the process employed to accomplish this learning has been called root
cause analysis (RCA), but it has had inconsistent success. To improve the effectiveness and
utility of these efforts, we have concentrated on the ultimate objective: preventing future
harm. Prevention requires actions to be taken, and so we have renamed the process Root
Cause Analysis and Action, RCA2 (RCA “squared”) to emphasize this point. This document
describes methodologies and techniques that an organization or individuals involved in
performing an RCA2 can credibly and effectively use to prioritize the events, hazards, and
vulnerabilities in their systems of care to accomplish the real objective, which is to under-
stand what happened, why it happened, and then take positive action to prevent it from
happening again. It cannot be over-emphasized that if actions resulting from an RCA2 are
not implemented and measured to demonstrate their success in preventing or reducing
the risk of patient harm in an effective and sustainable way, then the entire RCA2 activity
will have been a waste of time and resources.
The purpose of this document is to ensure that efforts undertaken in performing RCA2 will
result in the identification and implementation of sustainable systems-based improve-
ments that make patient care safer in settings across the continuum of care. The approach
is two-pronged. The first goal is to identify methodologies and techniques that will lead
to more effective and efficient RCA2. The second is to provide tools to evaluate individual
RCA2 reviews so that significant flaws can be identified and remediated to achieve the
ultimate objective of improving patient safety. The purpose of an RCA2 review is to iden-
tify system vulnerabilities so that they can be eliminated or mitigated; the review is not
to be used to focus on or address individual performance, since individual performance
is a symptom of larger systems-based issues. Root cause analysis and action team find-
ings must not be used to discipline or punish staff, so that the trust in the system is not
undermined. The maximum benefit for the safety of the patient population occurs when
system-based vulnerabilities are addressed, and this can be compromised if the root cause
analysis and action process is viewed as a witch hunt. It is critical that each organization
define blameworthy events and actions that fall outside the purview of the safety system,
and specify how and under what circumstances they will be handled using administrative
or human resource systems.
Just as a well-performed and well-executed RCA2 must take a systems-based approach, the
same approach is important in formulating a methodology that will achieve these desired
objectives. Since unlimited resources are not available to identify, analyze, and remediate
hazards, it is essential that an explicit risk-based prioritization system be utilized to credibly
and efficiently determine what hazards should be addressed first. A risk-based approach
that considers both the potential harm and the probability of it impacting a patient—as
opposed to a solely harm-based approach—allows efforts to be focused in a manner that
achieves the greatest benefit possible for the patient population as a whole and allows
learning and preventive action to be taken without having to experience patient harm
before addressing a problem. This prioritization system must be a transparent, formal, and
explicit one that is communicated with both internal and external stakeholders.
The most important step in the RCA2 process is the identification of actions to eliminate or
control system hazards or vulnerabilities identified in the causal statements. Teams should
strive to identify stronger actions that prevent the event from recurring or, if that is not
possible, actions that reduce the likelihood of recurrence or lessen the severity of the
consequences should it recur. Using a tool such as the Action Hierarchy will assist teams in
identifying stronger actions that provide effective and sustained system improvement.
The success of any patient safety effort lies in its integration into the fabric of the orga-
nization at all levels. This cannot happen without the active participation of leaders and
managers at all levels. For example, strength of actions should be actively reviewed by
leadership to ensure that teams are identifying strong actions that provide effective and
sustained system improvement. Their participation demonstrates the importance of activi-
ties related to patient safety not just by words but by tangible actions and involvement.
This document answers questions integral to patient safety and the root cause analysis
process including how to:
• Triage adverse events and close calls/near misses
• Identify the appropriate RCA2 team size and membership
• Establish RCA2 schedules for execution
• Use tools provided here to facilitate the RCA2 analysis
• Identify effective actions to control or eliminate system vulnerabilities
• Develop Process/Outcome Measures to verify that actions worked as planned
• Use tools provided here for leadership to assess the quality of the RCA2 process
Recommendations
1. Leadership (e.g., CEO, board of directors) should be actively involved in the root
cause analysis and action (RCA2) process. This should be accomplished by support-
ing the process, approving and periodically reviewing the status of actions, under-
standing what a thorough RCA2 report should include, and acting when reviews do
not meet minimum requirements.
2. Leadership should review the RCA2 process at least annually for effectiveness.
3. Blameworthy events that are not appropriate for RCA2 review should be defined.
4. Facilities should use a transparent, formal, and explicit risk-based prioritization sys-
tem to identify adverse events, close calls, and system vulnerabilities requiring RCA2
review.
5. An RCA2 review should be started within 72 hours of recognizing that a review is
needed.
6. RCA2 teams should be composed of 4 to 6 people. The team should include pro-
cess experts as well as other individuals drawn from all levels of the organization,
and inclusion of a patient representative unrelated to the event should be consid-
ered. Team membership should not include individuals who were involved in the
event or close call being reviewed, but those individuals should be interviewed for
information.
7. Time should be provided during the normal work shift for staff to serve on an RCA2
team, including attending meetings, researching, and conducting interviews.
8. RCA2 tools (e.g., interviewing techniques, Flow Diagramming, Cause and Effect Dia-
gramming, Five Rules of Causation, Action Hierarchy, Process/Outcome Measures)
should be used by teams to assist in the investigation process and the identification
of strong and intermediate strength corrective actions.
9. Feedback should be provided to staff involved in the event as well as to patients
and/or their family members regarding the findings of the RCA2 process.
The National Patient Safety Foundation strongly recommends that organizations across
the continuum of care adopt the recommendations of this report in order to improve their
root cause analyses and bring them to the next level, that of root cause analysis and action,
RCA2, to ensure the most effective prevention of future harm.
INTRODUCTION
Millions of patients are harmed in the United States every year as a result of the health care
they receive.(1) Virtually all health care providers and organizations respond to some events
where patient harm has occurred by investigating the event in question with the intent of
eliminating the possibility or reducing the likelihood of a future similar event. This activity
is commonly referred to as root cause analysis (RCA), although other terms are sometimes
used to describe this process, such as focused review, incident review, and comprehen-
sive system analysis. Some health care organizations have robust RCA processes and have
made huge strides toward improving patient safety, including sharing lessons widely, both
internally and externally, so others can learn from their experience. This is, however, more
the exception than the rule.(2) Currently the activities that constitute an RCA in health care
are not standardized or well defined, which can result in the identification of corrective
actions that are not effective—as demonstrated by the documented recurrence of the
same or similar events in the same facility/organization after completion of an RCA. Some
of the underlying reasons for lack of effectiveness of RCAs in improving patient safety
include the lack of standardized and explicit processes and techniques to:
• Identify hazards and vulnerabilities that impact patient safety and then prioritize
them to determine if action is required
• Identify systems-based corrective actions
• Ensure the timely execution of an RCA and formulation of effective sustainable
improvements and corrective actions
• Ensure follow-through to implement recommendations
• Measure whether corrective actions were successful
• Ensure that leadership at all levels of the organization participate in making certain
that RCAs are performed when appropriate, in a timely manner, and that corrective
actions are implemented to improve patient safety
The National Patient Safety Foundation (NPSF), with support from The Doctors Company
Foundation, convened a panel of subject matter experts and stakeholders to recommend
practices to improve the RCA process in settings across the continuum of care. The term
RCA itself is problematic and does not describe the activity’s intended purpose. First, the
term implies that there is one root cause, which is counter to the fact that health care is
complex and that there are generally many contributing factors that must be considered
in understanding why an event occurred. In light of this complexity, there is seldom one
magic bullet that will address the various hazards and systems vulnerabilities, which means
that there generally needs to be more than one corrective action. Second, the term RCA
only identifies its purpose as analysis, which is clearly not its only or principal objective,
as evidenced by existing regulatory requirements for what an RCA is to accomplish. The
ultimate purpose of an RCA is to identify hazards and systems vulnerabilities so that actions
can be taken that improve patient safety by preventing future harm. The term RCA also
seems to violate the Chinese proverb “The beginning of wisdom is to call things by their
right names,” and this may itself be part of the underlying reason why the effectiveness of
RCAs is so variable. While it might be better not to use the term RCA, it is so embedded in
the patient safety culture that completely renaming the process could cause confusion.
We introduce a more accurate term to describe what is really intended by performing an
RCA, and that is Root Cause Analysis and Action, RCA2 (RCA “squared”), which is the term
used throughout this document. Our discussion describes methodologies and techniques
that an organization or individuals can credibly and effectively use to prioritize the events,
hazards, and vulnerabilities in their systems of care that should receive an RCA2, and then
accomplish the real objective, which is to understand what happened, why it happened,
and what needs to be done(3) to correct the problem, and then to take positive action to
prevent it from happening again.
The actions of an RCA2 must concentrate on systems-level causes and contributing
factors. If the greatest benefit to patients is to be realized, the resulting corrective
actions that address these systems-level issues must not result in individual blaming or
punitive actions. The determination of individual culpability is not the function of a patient
safety system and lies elsewhere in an organization. “Preventing errors means designing
the health care system at all levels to make it safer. Building safety into processes of care is
a much more effective way to reduce errors than blaming individuals.”(4)
If actions resulting from an RCA2 review are not implemented, or are not measured to
determine their effectiveness in preventing harm, then the entire RCA2 activity may be
pointless.
Many organizations do not provide timely feedback to the parties who brought an issue
to the attention of the patient safety organization or those who were personally impacted
by a particular event. When this feedback loop is broken, the staff and patients involved
can easily come to the conclusion that the event either was ignored or that no meaningful
action was taken. In other words, the report of the event, hazard, or vulnerability fell into a
“black hole.” The lack of feedback can have a negative impact on the future involvement of
staff and patients, who may become cynical and distrustful in the belief that their efforts
or experience will not be used to effect change. To reap the greatest benefit for patients
everywhere, the lessons learned from RCA2—including contributing factors and hazards
that were identified, as well as the corrective actions—should be shared as openly as pos-
sible, both within and outside the organization.
Finally, an RCA2 process cannot be successful and have lasting positive effect without
active and tangible leadership support with involvement at all levels, including board
involvement. Leadership demonstrates the real importance that they attach to patient
safety by their level of personal involvement and support.
Objective
The purpose of this document is to provide guidance for performing RCA2 reviews that will
result in the identification and implementation of sustainable and effective systems-based
improvements to make health care safer. The RCA2 approach described in this document
was developed for hospitals, but it is applicable to settings that range from nursing homes
to acute care, doctors’ offices to care units, and from single health care organizations to
large health care systems and patient safety organizations (PSOs).(5) While root cause analy-
sis has typically been used at the hospital level, RCA2 is also applicable at the unit level and
as part of comprehensive unit-based safety programs (CUSP).(6)
The approach presented is two-pronged. The first goal is to identify methodologies and
techniques that will lead to more effective and efficient use of RCA2. The second goal is to
provide tools to health care leaders to evaluate RCA2 reviews so that significant flaws in
individual RCA2 reports can be identified and remediated to achieve the ultimate objec-
tive of improving patient safety. Just as a well-performed, well-executed RCA2 must take a
systems-based approach, the same approach is important in formulating a methodology
that will achieve these desired objectives.
There are many other activities that may need to take place at the same time as RCA2. One
of these is disclosure to the patient or family that an adverse event has occurred. Although
the disclosure may be for the same adverse event for which an RCA2 is being undertaken,
these two processes are independent activities. The disclosure activities should in no way
interfere with the initiation or performance of the RCA2 and, accordingly, further discus-
sion of disclosure is not addressed in this document since it is outside the scope of RCA
improvement.
Definitions
The following definitions were adopted for the discussions and recommendations pre-
sented in this paper:
• Hazard: Potential for harm;(7) a condition precursor to a mishap (adverse event).
• Safety: Freedom from those conditions that can cause death, injury, illness, damage
to or loss of equipment or property, or damage to the environment.(7)
• Quality: The degree to which a set of inherent characteristics fulfills a set of require-
ments.(8)
• Risk: A measure of the expected loss from a given hazard or group of hazards. Risk is a
combined expression of loss severity and probability (or likelihood).(7)
• System: A set of interrelated or interacting elements,(8) any one of which, if changed,
can impact the overall outcome. Some examples of system elements are organizational
culture, technical and equipment-related factors, physical environment, organizational
goals and incentives, and professional performance and standards.
• Close Call/Near Miss: A close call is an event or situation that could have resulted in an
adverse event but did not, either by chance or through timely intervention. Some-
times referred to as near miss incidents.(9)
• Adverse Event: Untoward incident, therapeutic misadventure, iatrogenic injury, or
other occurrence of harm or potential harm directly associated with care or services
provided.(7)
I. IDENTIFYING AND CLASSIFYING EVENTS
Events Appropriate for RCA2 Review versus Blameworthy Events
The purpose of an RCA2 review is to identify system vulnerabilities so that they can be
eliminated or mitigated. RCA2 processes are not to be used to focus on or address individual
health care worker performance as the primary cause of an adverse event, but instead
to look for the underlying systems-level causes that manifest as personnel-related
performance issues. Findings from an RCA2 must not be used to discipline, shame, or
punish staff.
In a 2015 report, the NASA Aerospace Safety Advisory Panel cautions about the inadvisabil-
ity of focusing on individuals and assigning blame:
The releasable nature of NASA mishap reports also creates a vulnerability to focusing
on blame. Generally speaking, all organizations in public view are subject to pres-
sures of answering for errors. These pressures can lead to a focus on fault and assign-
ing blame in a mishap investigation that will inherently inhibit the robustness of an
investigation. Such investigations have two shortcomings: (1) filtered or less-than-
transparent reporting of information, and (2) the inability to discover the true root and
contributing causes. The first can affect the culture of mishap investigation, because
the desire to protect an individual, program, or organization in the short term hinders
risk reduction in the long term. In the second case, disciplinary action associated with
the resultant blame gives a false sense of confidence where it rids the organization of
the problem; however, the root cause likely remains, and latent risk waits patiently for
the next opportunity to strike. . . . In addition, when blame is the focus of the investiga-
tion, the true cause of a mishap can be missed or hidden, thus increasing the risk of
repeating the mishap. This danger is introduced when releasable information is “spun”
to appease short-term public interest. It can contribute to second and third order
negative cultural effects in other areas such as misinterpreting risk and subsequent
incorrect resolution.(10)
It is critical that each organization define blameworthy events and actions that will
be handled using administrative or human resource systems. A common
definition of blameworthy events includes events that are the result of criminal acts,
patient abuse, alcohol or substance abuse on the part of the provider, or acts defined by
the organization as being intentionally or deliberately unsafe.(9,11,12) In the unlikely event
that during a review an RCA2 team discovers that the event is or may be blameworthy, the
team should notify the convening authority and refer the event to the convening author-
ity to be handled as dictated by the local policy. Referral of an event to the convening
authority does not mean that the opportunity to learn from it has been lost or that no
action will ultimately be taken. Referral just means that the primary responsibility to fully
look into the event and formulate and implement corrective actions is assumed by a dif-
ferent organizational entity that will not only look for systems-based solutions, as should
be the case with any safety investigation, but may also take actions that are directed at a
specific individual. Doing so preserves the integrity of a safety system that has committed
to using safety activities for system improvement, not for individual punitive action. This
is important because even the perception that an RCA2 review has led to punitive actions
can permanently and negatively impact the effectiveness of future reviews, as has been
demonstrated in other industries.(13)
To be effective, a risk-based prioritization system must receive reports of adverse events,
close calls, hazards, or system vulnerabilities from staff. Not receiving reports can
negatively impact the ability to estimate the probability that an event or hazard may occur.
Solutions to this include educating staff about reporting, making it easy for staff to report,
taking visible action as a result of reports, and providing feedback to reporters when they
submit reports. When staff members realize that their input makes a difference, they
are more likely to report to improve safety. Reports that do not end up being reviewed
through the RCA2 process still have significant value in improving patient safety.

Developing Trust within the Organization and in the Community

Reports of hazards, vulnerabilities, and adverse events are the fuel for the safety improvement
engine. An organization is made up of people, and if the people in an organization are not
motivated to report, then the organization is at a definite disadvantage. An organization cannot
fix a problem if it does not know that the problem exists.

One of the barriers or disincentives to reporting is fear of negative results for the reporters
themselves or for their colleagues and organization. Adoption of a clear and transparent
organizational policy, and absolute adherence by the organization to faithfully following it,
provides staff clarity as to how the reports they make will be used and the ramifications for
them personally. It is critical to gain the trust of the members of the organization. Implementing
such policies so that employees perceive they are being treated in a fair and consistent manner
is an essential part of developing that trust. Policies that achieve these goals often include
discussions of what activities are viewed as at-risk or blameworthy and are often characterized
as promoting a just culture.

Clear policies, and the rationale behind them, that are openly communicated to the community
also are essential to gain the trust and support of the community at large, which includes
patients. When an organization publicly and concretely states what it will do to promote patient
safety, it makes itself accountable to the community it serves.
Risk-Based Prioritization of Events, Hazards, and
System Vulnerabilities
As resources necessary to identify, analyze, and remediate hazards are not unlimited, it
is essential that an explicit, risk-based prioritization system be utilized so that an orga-
nization can credibly and efficiently determine what hazards should be addressed first.
An explicit, risk-based RCA2 prioritization system is superior to one based solely on the
harm or injury that a patient experienced. In a harm-based approach, currently the most
commonly used, an event must cause harm to a patient to warrant an RCA. A risk-based
system prioritizes hazards and vulnerabilities that may not yet have caused harm so that
these hazards and vulnerabilities can then be mitigated or eliminated before harm occurs.
This thinking is consistent with successful practices in many high-reliability industries, such
as aviation, as well as the recommended approaches of various health care accreditation
organizations.(14,15) (Methodology and examples of risk-based prioritization systems are
shown in Appendix 1.)
Establishing a risk-based prioritization system—and making it transparent to all stakehold-
ers—allows an organization to concentrate on eliminating or mitigating hazards rather
than being distracted by having to explain why they will or will not conduct an RCA. Use
of an explicit, risk-based prioritization methodology lends credibility and objectivity to the
process and reduces the chance of misperception by both internal and external stakehold-
ers that decisions to conduct an RCA are inappropriately influenced by political pressure or
other factors to cover up problems rather than discover what is in the best interests of the
patient.
Risk-based selection criteria should incorporate both the outcome severity or consequence
and its probability of occurrence.(16) An efficient way of doing this is to develop a risk matrix
(see Appendix 1) that has predefined and agreed-upon definitions for the levels of severity
or consequence as well as the probability of occurrence, along with predefined steps that
will be taken when matrix thresholds established by the organization are reached.* When
the definitions for severity or consequence also incorporate events or outcomes that man-
date root cause analysis by accrediting organizations, use of the matrix will ensure compli-
ance with their standards and make the process easier to communicate and operationalize.
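The risk-matrix step described above can be sketched in code. The severity and probability levels, scores, and response thresholds below are illustrative assumptions, not values taken from this report; worked examples of actual matrices appear in Appendix 1.

```python
# Hypothetical risk-matrix triage: scales and thresholds are assumed
# for illustration and would be predefined by each organization.
SEVERITY = {"catastrophic": 4, "major": 3, "moderate": 2, "minor": 1}
PROBABILITY = {"frequent": 4, "occasional": 3, "uncommon": 2, "remote": 1}

def triage(severity: str, probability: str) -> str:
    """Map a (severity, probability) pair to a predefined response."""
    score = SEVERITY[severity] * PROBABILITY[probability]
    if score >= 8:   # assumed threshold for the highest-risk cell group
        return "RCA2 review required"
    if score >= 4:   # assumed threshold for mid-range risk
        return "aggregated review"
    return "track and trend"

print(triage("major", "occasional"))  # score 9: RCA2 review required
```

Because the levels, scores, and responses are all predefined and explicit, a single individual can apply such a matrix consistently without group deliberation, which is the point made in the following paragraph.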
The source (e.g., safety reports) of the information related to events, hazards, or vulnerabili-
ties is not important as long as enough information is received to allow prioritization using
an explicit risk-based prioritization tool.
The actual implementation of the prioritization system should be performed by an individual
and not a committee; an explicit, well-devised prioritization system should not require
group deliberation. Also, the efficiency of the process is enhanced and needless inertia is
eliminated when prioritization of hazards does not have to wait for a group to be convened
and to deliberate. Periodically (e.g., quarterly, semiannually) reviewing a summary of
the recently scored/prioritized events as part of the facility's quality assurance program will
ensure that scoring does not deviate from the approved prioritization system.

* Risk-based selection criteria must meet the requirements of applicable accrediting and regulatory
organizations.
Finally, the prevention of harm is the goal of these efforts. The organization should not be
distracted from taking immediate actions to minimize risk of harm while it is engaged in
the more formal RCA2 process.
Close Calls
Close calls (also called near misses or good catches) should also be prioritized using the
risk matrix: ask what a plausible severity or consequence would be for the event, hazard, or
vulnerability, coupled with the likelihood or probability of the event/hazard scenario
occurring. This plausible outcome is then used as the severity or consequence when applying
the risk matrix to determine the appropriate response (RCA2 or other actions). Some may
believe that since there was no patient injury, close calls do not need to be reported or
investigated. However, close calls occur 10 to 300 times(17) more frequently than the actual
harm events they are the precursors of and provide an organization the opportunity to
identify and correct system vulnerabilities before injury or death occurs. A concern some-
times expressed is that reviewing close calls will increase the workload to an unmanage-
able level. This concern is unwarranted since the organization can construct a risk matrix
(such as the one provided in Appendix 1) to prioritize all events, hazards, and system
vulnerabilities that also accounts for the level of resources required for RCA2 reviews.(11,18,19)
Additionally, an aggregated review of predefined, pre-selected categories of events that
have the potential for a severe outcome can also help keep the workload at an acceptable
level while still providing value.
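The close-call logic described above can be sketched under the same illustrative assumptions as before: the severity and probability scales are hypothetical, and the key point is that the score uses the plausible outcome rather than the harm that actually occurred.

```python
# Illustrative sketch (assumed scales, not from the report): a close
# call is scored on its worst plausible outcome, not on actual harm.
SEVERITY = {"catastrophic": 4, "major": 3, "moderate": 2, "minor": 1}
PROBABILITY = {"frequent": 4, "occasional": 3, "uncommon": 2, "remote": 1}

def close_call_score(plausible_severity: str, probability: str) -> int:
    """Risk score for a close call, using plausible (not actual) severity."""
    return SEVERITY[plausible_severity] * PROBABILITY[probability]

# Example: a wrong-drug close call caught before administration caused
# no harm, but a catastrophic outcome was plausible and could recur.
print(close_call_score("catastrophic", "occasional"))  # 12
```

Scoring the plausible outcome is what lets an organization act on a hazard before any patient is injured, rather than waiting for harm to trigger a review.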
Aggregated Review
Aggregated review is a process of analyzing similar events to look for common causes. For
example, close call events in high frequency event categories that would typically require root
cause analysis (e.g., falls, medication adverse events) are collected and reviewed as a group on
a quarterly or semi-annual basis. Data and information on each event is collected as it occurs
by front line staff who complete forms developed for this purpose. The review team looks for
trends or recurring issues in the data or information associated with the events to identify
system issues needing correction.
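The aggregated-review step can be sketched as a simple trending exercise. The records and field names below are hypothetical stand-ins for the structured forms front-line staff would complete.

```python
from collections import Counter

# Hypothetical close-call records collected by front-line staff on
# structured forms; field names and values are illustrative only.
events = [
    {"category": "fall", "factor": "bed alarm not activated"},
    {"category": "fall", "factor": "wet floor"},
    {"category": "medication", "factor": "look-alike packaging"},
    {"category": "fall", "factor": "bed alarm not activated"},
]

# Quarterly aggregated review: group similar events and count recurring
# contributing factors to surface system issues needing correction.
fall_trends = Counter(e["factor"] for e in events if e["category"] == "fall")
print(fall_trends.most_common(1))  # [('bed alarm not activated', 2)]
```

Even this toy tally shows the intent: a factor that recurs across otherwise separate events points at a system issue that no single-event review would reliably expose.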
II. RCA2 TIMING AND TEAM MEMBERSHIP
Timing
When a hazard is first identified, there needs to be a mechanism in place that promptly
assesses whether actions are required to mitigate the risk to the patient even before the
formal RCA2 process is under way. Immediate actions may include taking care of the patient,
disclosure, making the situation safe, notifying police or security if appropriate, preserv-
ing evidence, and gathering relevant information to fully understand the situation. Also
included may be tasks such as sequestering equipment, securing the scene as needed,
and conducting fact finding. These immediate actions may be performed in parallel to the
initiation of the RCA2 process.
Within 72 hours of the event's occurrence, it should be scored using the facility's approved
risk-based prioritization system. If an RCA2 is required, the review needs to be initiated as
soon as possible following the event in order to capture details while they are still fresh in
the minds of those involved. Starting the event review promptly can be achieved if steps
have been taken ahead of time to ensure staff and resources will be available. Techniques
such as scheduling standing RCA2 team meetings each week, which may be cancelled if not
needed, establish a placeholder and permit meeting space to be reserved.
Requesting that each department or service identify at least one or two staff to be on call
each week to serve on a review team will facilitate timeliness by allowing for the quick
convening of a team if one is needed.
The more rapidly well-thought-out actions are implemented, the less exposure there is for
additional patient injury to occur from the same type of event, hazard, or system vulner-
ability. A number of organizations have recommended that RCA2 type activities be com-
pleted in no longer than 30–45 days.(9,14,20)
Several meetings will be required to complete the RCA2 process. Meetings are typically
1.5 to 2 hours in length, with work required by individual members prior to and between
meetings to complete interviews or locate and review publications and documents. It is
critical that the organization provide adequate resources for the RCA2 process.
Team Size
For the purposes of this document the team is defined as those individuals who see the
RCA2 process through from beginning to end. The work of the team is certainly augmented
and assisted through involvement with a myriad of other individuals (e.g., staff, patients,
and subject matter experts) but the involvement of those individuals may not encompass
all activities in which the team must engage. It is suggested that an RCA2 review team be
limited in size to 4 to 6 members. Rationales for doing so include the likelihood that larger
review teams will use more person-hours to complete the review, increase the difficulty
of scheduling team meetings, and add inertia that reduces the nimbleness of the RCA2
process.
Team Membership
For the purposes of this document, “team members” are those who are assigned by the
organization’s leadership to officially serve on the team, participate in the process by
attending meetings, conduct research and interviews, and identify root cause contributing
factors. These team members also are the individuals who make the determination as to
the final contents, findings, and recommendations of the RCA2 report.
Team membership (see Figure 1) should include a subject matter expert and someone who
is familiar with the RCA2 process but is not familiar with (i.e., is naïve to) the event process
being reviewed. Ideally a single team member will meet more than one team experience
requirement; for example, the subject matter expert may be a front line staff member
who is also capable of serving as the team leader. This may require bringing in experts
from the outside, provided confidentiality protection is not compromised. Managers and
supervisors may serve as team members provided the event did not occur in their area of
responsibility and their subordinates are not team members. This avoids the possibility of
subordinates censoring themselves if their supervisor or manager is present, thus inhibit-
ing free and open communication.
Team members should have a basic understanding of human factors to provide insight
into how people can be set up to fail by improperly designed systems, equipment, devices,
products, and processes. A patient representative, unrelated to any patient or family
member involved in the event undergoing analysis, should be considered for each RCA2
review team, to represent the patient perspective and voice.
Some organizations have experimented with including the patient involved in the adverse
event or their family members on RCA teams, but data supporting this as an effective
method are currently lacking. There are many organizations outside of and within health
care that have prohibited the patient or family members being on RCA teams because of
concern that it inhibits free and open communication.
One team member should be appointed as the team leader and charged with ensuring the
team follows the RCA2 process and completes the work on schedule. The leader needs to
be skilled in the RCA2 process and problem solving in general, and be an effective commu-
nicator. Another team member should be assigned to serve as the recorder. The recorder’s
responsibilities include documenting team findings during the meetings. Less rework will
be required if the recorder uses an LCD projector or similar method to project the team’s
work during the meeting so all team members can review and comment on what is being
generated.
Individuals who were involved in the event should not be on the team because they may
feel guilty and insist on corrective measures that are above and beyond what is prudent, or
they may steer the team away from their role in the event and activities that contributed
to the event. It may also be hard for other team members to ask difficult questions and
have frank discussions with these individuals present in the room. These same reasons
apply to having patients or family members who were involved in the event serve on RCA2
teams. However, it is certainly appropriate and usually vital that involved individuals (staff,
patients, family members) be interviewed by the team, in order to understand
what happened and to solicit feedback on potential corrective actions. Outside individual
and patient involvement with RCA2 reviews should be considered with respect to “federal
statutes, state statutes and case law as well as the readiness and availability of the patient/
family member to participate in a productive manner with the shared goal of significantly
reducing the risk of recurrence of the event and making the system safer.” (21)
It is important to remember that the team is convened to discover what happened, why it
happened, and what can be done to prevent it from happening again. Staff may be drawn
from across the organization and not just from the departments or services intimately
involved with the close call or adverse event being reviewed. Having those intimately
involved in the event on the review team creates a real or perceived conflict of interest
that can negatively impact the success of the RCA2 and must be avoided. In teaching
institutions, trainees (e.g., nursing students and resident physicians) deliver a substantial
portion of patient care, and their incorporation in the RCA2 process, both as team members
and as sources of information, can be invaluable to understanding what happened. They
may also contribute effectively to the formulation of effective and sustainable corrective
actions. Their inclusion may provide a fresh look at existing systems and a deeper
understanding for those involved of how the organization operates, and that can have
future benefits.

Figure 1. RCA2 Team Membership* and Involvement
NOTE: An individual may serve in multiple capacities.

  Role                                                    Team Member?   Interview?
  Subject matter expert(s) on the event or close call     Yes            Yes, if not
  process being evaluated                                                on the team
  Individual(s) not familiar with (naïve to) the event    Yes            No
  or close call process
  Leader who is well versed in the RCA2 process           Yes            No
  Staff directly involved in the event                    No             Yes
  Front line staff working in the area/process            Yes            Yes
  Patient involved in the event                           No             Yes**
  Family of patient involved in the event                 No             Yes**
  Patient representative                                  Yes            Yes

* Strongly consider including facility engineering, biomedical engineering, information
technology, or pharmacy staff on an RCA2 team, as individuals in these disciplines tend
to think in terms of systems and often have system-based mindsets. Including medical
residents on a team when they are available is also suggested.
** This might not be needed for some close calls or events that are far removed from the
bedside (e.g., an incorrect reagent that is used in the lab).
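The membership constraints above and in Figure 1 can be expressed as a handful of mechanical checks. The sketch below is illustrative only; the class and field names are our own, and real team selection remains a leadership judgment:

```python
from dataclasses import dataclass

@dataclass
class Member:
    name: str
    is_subject_matter_expert: bool = False
    is_naive_to_process: bool = False
    was_involved_in_event: bool = False
    supervises_another_member: bool = False

def check_team(members):
    """Return a list of problems with a proposed RCA2 team (empty list = OK)."""
    problems = []
    if not 4 <= len(members) <= 6:
        problems.append("team should have 4 to 6 members")
    if not any(m.is_subject_matter_expert for m in members):
        problems.append("no subject matter expert on the team")
    if not any(m.is_naive_to_process for m in members):
        problems.append("no member naive to the event process")
    for m in members:
        if m.was_involved_in_event:
            problems.append(f"{m.name} was involved in the event; interview instead")
        if m.supervises_another_member:
            problems.append(f"{m.name} supervises another member, inhibiting candor")
    return problems
```

A team that passes these checks still needs human review; the checks only catch the constraints the text states explicitly.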
Serving on a review team should not be “additional work as assigned.” Serving on an
RCA2 review team is “real work” and it should be prioritized, acknowledged, and treated
as such. Time within the normal work schedule needs to be provided for staff to participate
in the review to send a clear message that management values and supports the activity
to improve patient safety. Facilities may want to consider rotating RCA2 team membership
to include staff in all services/departments throughout the facility, including those working
afternoons, nights, and weekends. Permitting all staff to have the opportunity to partici-
pate in the process exposes them to how and why adverse events occur and may bring
about new understanding. In particular, staff may better understand the way that systems
influence how they complete their daily tasks as well as gain a better understanding of the
value of the RCA2 process.
Patient and Family Involvement
The National Patient Safety Foundation’s report Safety Is Personal: Partnering with Patients and
Families for the Safest Care (2014) challenges leaders of health care systems to “involve patients
and families as equal partners in the design and improvement of care across the organization
and/or practice,” and health care clinicians and staff to not only “provide clear information, apol-
ogies, and support to patients and families when things go wrong” but also “engage patients as
equal partners in safety improvement and care design activities.”
What might this level of involvement and engagement look like with respect to root cause
analysis and action reviews? While there is little industry experience regarding the involve-
ment of patients/families in the process of root cause analysis, an article by Zimmerman and
Amori asserts that, when properly handled, involving patients in post-event analysis allows risk
management professionals to further improve their organization’s systems analysis process,
while empowering patients to be part of the solution.(21) The article also acknowledges there are
a number of legal and psychological issues to be considered.
Patients and families are among the most important witnesses for many adverse events,
and organizations are encouraged to interview them if the patient and/or family are able and
willing. This will enable the RCA2 team to gain a more complete understanding of the circum-
stances surrounding the event under consideration and may offer additional perspectives on
how to reduce the risk of recurrence. Consideration should be given to including an uninvolved
patient representative as a member of the RCA2 team. This will help protect the confidentiality
of the process while broadening the perspective on how to further improve organizational per-
formance. This representative may be a member of the organization’s patient and family advi-
sory council (or equivalent) or simply a patient representative selected for this specific RCA2. In
either case, the representative should be unrelated to any patient or family member of a patient
who is involved in the event, should have received education regarding quality and peer review
protections, and should have a signed confidentiality form on file. This can help mitigate the
legal and psychological barriers to direct patient/family involvement in the RCA2 process, while
obtaining the benefit that patient representatives can bring to improvement efforts.
Interviewing
Expertise required for the review that is not already represented or possessed by those on
the team may be obtained through the interview process (tips for conducting these inter-
views are presented in Appendix 3). Individuals who were involved in the event should be
interviewed by the team. Patients and/or the patient’s family, as appropriate, should be
among those interviewed unless they decline. Requesting information from the patient
and family will enable the team to gain a more complete understanding of the circum-
stances surrounding the event under consideration. Patients and/or their family members
provide a unique perspective that would otherwise be unavailable.
III. THE RCA2 EVENT REVIEW PROCESS
Analysis Steps and Tools
Figure 2 graphically describes the RCA2 process from the occurrence of the event through
fact finding, corrective action effectiveness measurement, and feedback to the patient
and/or family, staff in the organization, and externally to the patient safety organization.
The initial fact finding is used to discover what happened and why it happened. The review
process should include the following actions:
• Graphically describe the event using a chronological Flow Diagram or timeline;
identify gaps in knowledge about the event.
• Visit the location of the event to obtain firsthand knowledge about the workspace
and environment.
• Evaluate equipment or products that were involved.
• Identify team-generated questions that need to be answered.
• Use Triggering Questions (see Appendix 2) and team-generated open-ended ques-
tions that can broaden the scope of the review by adding additional areas of inquiry.
• Identify staff who may have answers to the questions and conduct interviews (see
the Interviewing Tips in Appendix 3) of involved parties including staff and affected
patients.
• Include patients, family, or a patient representative as appropriate to ensure a thor-
ough understanding of the facts.
• Identify internal documents to review (e.g., policies, procedures, medical records,
maintenance records).
• Identify pertinent external documents or recommended practices to review (e.g.,
peer reviewed publications, manufacturers’ literature, equipment manuals, profes-
sional organization guidance and publications).
• Identify and acquire appropriate expertise to understand the event under review. This
may require interactions with internal and external sources of expertise (e.g., manu-
facturers, vendors, professional organizations, regulatory organizations).
• Enhance the Flow Diagram (see the sample in Appendix 4) or timeline to reflect the
final understanding of events and where hazards or system vulnerabilities are located.
• Use the flow diagram to compare what happened with what should have happened
and investigate why all deviations occurred.
• Provide feedback to the involved staff and patients, as well as feedback to the organi-
zation as a whole.
Figure 2. Individual RCA2 Process

Event, hazard, or system vulnerability
  Immediate actions are taken to care for the patient, make the situation safe for
  others, and sequester equipment, products, or materials.

Risk-based prioritization (within 72 hours)
  Patient safety, risk, or quality management is typically responsible for the
  prioritization; for consistency, one person is assigned responsibility for applying
  the risk matrix. See Appendix 1.

What happened? Fact finding and flow diagramming (this and the following two phases
should be completed within 30–45 days)
  Multiple meetings of 1.5 to 2 hours may be required to: prepare and conduct
  interviews (see Appendix 3); visit the site; review equipment or devices; and
  prepare the report. Managers/supervisors responsible for the processes or areas
  should be invited to provide feedback for the team's consideration. See Appendix 2
  for suggested Triggering Questions.

Development of causal statements
  See Appendix 6 for the Five Rules of Causation.

Identification of solutions and corrective actions
  Patients/families and managers/supervisors responsible for the process or area
  should be provided feedback and consulted for additional ideas; however, they
  should not have final decision authority over the team's work. See Figure 3 for
  the Action Hierarchy.

Implementation
  A responsible individual with the authority to act, not a team or committee,
  should be responsible for ensuring action implementation.

Measurement
  Each action should have a process or outcome measure identifying what will be
  measured, the expected compliance level, and the date it will be measured. An
  individual should be identified who will be responsible for measuring and
  reporting on action effectiveness.

Feedback
  Feedback should be provided to the CEO/board, service/department, staff involved,
  patient and/or patient's family, the organization, and the patient safety
  organization (if relevant).

NOTE: Typically a single RCA2 team is responsible for the entire review process
through identification of solutions and corrective actions; however, if different
staff are used for these review phases, it is recommended that a core group of staff
from the RCA2 team participate in all phases for consistency and continuity. The
RCA2 team is not usually responsible for implementation, measurement, and feedback.
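The two deadlines in the process (start the review within 72 hours of recognizing that one is needed; complete it within 30–45 days) can be checked mechanically. A minimal sketch, using whole dates and names of our own choosing:

```python
from datetime import date

# Deadlines from the RCA2 process: the review should start within 72 hours of
# recognizing that one is needed and be completed within 30-45 days.
MAX_START_HOURS = 72
MAX_COMPLETE_DAYS = 45

def review_on_schedule(recognized, started, completed):
    """True if an RCA2 review met both deadlines (coarse, whole-day arithmetic)."""
    started_in_time = (started - recognized).days * 24 <= MAX_START_HOURS
    completed_in_time = (completed - recognized).days <= MAX_COMPLETE_DAYS
    return started_in_time and completed_in_time
```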
With the new information acquired through the review process, teams are in a position to
identify contributing factors. Tools such as Cause and Effect Diagramming (a sample is
presented in Appendix 5) and the "Five Whys" may also be used to identify and document
contributing factors, but their use is not mandatory. The Cause and Effect Diagram is an
investigative tool as well as a means to improve communication to stakeholders. Health
care processes are complex, and there are many contributing factors to adverse events or
near misses that, when identified and addressed, will improve patient safety. Review teams
should strive to identify the multiple contributing factors and not stop the analysis when
only a single contributing factor is found. Once identified, contributing factors should be
stated in a manner that focuses on system issues and does not assign blame to one or more
individuals. Applying the Five Rules of Causation (see Appendix 6) to each contributing
factor statement will help ensure that this goal is met. It is important that supporting
evidence or rationale be provided in the report to corroborate why a contributing factor
was selected.
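The requirement that each contributing factor be an evidence-backed, blame-free statement can be modeled as a small record with a naive screen for negative descriptors. The word list and field names below are our own illustration; the authoritative test is the Five Rules of Causation in Appendix 6:

```python
from dataclasses import dataclass, field

# Negative descriptors like these are discouraged by the Five Rules of Causation
# (use specific, not negative, wording); this list is illustrative only.
NEGATIVE_DESCRIPTORS = {"careless", "carelessly", "poorly", "inadequate", "failure"}

@dataclass
class ContributingFactor:
    statement: str
    supporting_evidence: list = field(default_factory=list)

    def problems(self):
        """Return issues with this contributing factor statement (empty = OK)."""
        issues = []
        if not self.supporting_evidence:
            issues.append("no supporting evidence or rationale provided")
        lowered = self.statement.lower()
        for word in sorted(NEGATIVE_DESCRIPTORS):
            if word in lowered:
                issues.append(f"negative descriptor {word!r}; restate per the Five Rules")
        return issues
```

A screen like this cannot judge whether a statement truly shows a cause-and-effect relationship; it only flags the mechanical violations.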
Actions
The most important step in the RCA2 process is the identification and implementation
of actions to eliminate or control system hazards or vulnerabilities that have been
identified in the contributing factor statements. Therefore, review teams should strive
to identify actions that prevent the event from recurring or, if that is not possible, reduce
the severity or consequences if it should recur. Using a tool such as the Action Hierarchy
(see Figure 3) will assist teams in identifying stronger actions that provide effective and
sustained system improvement.(22) The Action Hierarchy, developed by the US Department
of Veterans Affairs National Center for Patient Safety in 2001, was modeled on the National
Institute for Occupational Safety and Health (NIOSH) Hierarchy of Controls,(23)
which has been used for decades in many other industries to improve worker safety.
Teams should identify at least one stronger or intermediate strength action for each
RCA2 review. In some cases it may be necessary to recommend actions classified as weaker
actions in the Action Hierarchy as temporary measures until stronger actions can be imple-
mented. It should be understood that “weaker” actions such as training and policy changes
are often necessary to establish proficiency and expectations, but when used alone are
unlikely to be sufficient to provide sustained patient safety improvements.(24,25)
Keeping Team Members Engaged and Involved
Projecting the RCA2 team’s work using an LCD projector or displaying it on a large flat screen
monitor during meetings is an effective way of keeping team members engaged. Also, using
self-stick notes to construct Flow Diagrams and Cause and Effect Diagrams during meetings
helps ensure everyone has the same level of knowledge about the event and allows efficient
adjustment of diagrams as the understanding of facts changes with addition of new informa-
tion. Both techniques reduce the need for rework after the meeting, thus saving everyone
time.
Figure 3. Action Hierarchy

Stronger Actions (these tasks require less reliance on humans to remember to perform
the task correctly)
• Architectural/physical plant changes: Replace revolving doors at the main patient
  entrance into the building with powered sliding or swinging doors to reduce patient
  falls.
• New devices with usability testing: Perform heuristic tests of outpatient blood
  glucose meters and test strips and select the most appropriate for the patient
  population being served.
• Engineering control (forcing function): Eliminate the use of universal adaptors and
  peripheral devices for medical equipment and use tubing/fittings that can only be
  connected the correct way (e.g., IV tubing and connectors that cannot physically be
  connected to sequential compression devices or SCDs).
• Simplify process: Remove unnecessary steps in a process.
• Standardize on equipment or process: Standardize on the make and model of medication
  pumps used throughout the institution. Use bar coding for medication administration.
• Tangible involvement by leadership: Participate in unit patient safety evaluations
  and interact with staff; support the RCA2 process; purchase needed equipment; ensure
  staffing and workload are balanced.

Intermediate Actions
• Redundancy: Use two RNs to independently calculate high-risk medication dosages.
• Increase in staffing/decrease in workload: Make float staff available to assist when
  workloads peak during the day.
• Software enhancements, modifications: Use computer alerts for drug-drug interactions.
• Eliminate/reduce distractions: Provide quiet rooms for programming PCA pumps; remove
  distractions for nurses when programming medication pumps.
• Education using simulation-based training, with periodic refresher sessions and
  observations: Conduct patient handoffs in a simulation lab/environment, with after
  action critiques and debriefing.
• Checklist/cognitive aids: Use pre-induction and pre-incision checklists in operating
  rooms. Use a checklist when reprocessing flexible fiber optic endoscopes.
• Eliminate look- and sound-alikes: Do not store look-alikes next to one another in
  the unit medication room.
• Standardized communication tools: Use read-back for all critical lab values. Use
  read-back or repeat-back for all verbal medication orders. Use a standardized
  patient handoff format.
• Enhanced documentation, communication: Highlight medication name and dose on IV bags.

Weaker Actions (these tasks require more reliance on humans to remember to perform the
task correctly)
• Double checks: One person calculates dosage, another person reviews their calculation.
• Warnings: Add audible alarms or caution labels.
• New procedure/memorandum/policy: Remember to check IV sites every 2 hours.
• Training: Demonstrate correct usage of hard-to-use medical equipment.

Action Hierarchy levels and categories are based on Root Cause Analysis Tools, VA
National Center for Patient Safety,
http://www.patientsafety.va.gov/docs/joe/rca_tools_2_15. Examples are provided here.
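The Action Hierarchy lends itself to a simple classification plus the rule that every review needs at least one stronger or intermediate strength action. The keyword sets below are a simplification for illustration; assigning a real action to a level is a team judgment, not a lookup:

```python
# Strength levels from the Action Hierarchy (Figure 3). The category labels
# here are shorthand for illustration, not an exhaustive or official taxonomy.
STRONGER = {"physical plant change", "forcing function", "simplify process",
            "standardize equipment", "usability-tested device"}
INTERMEDIATE = {"redundancy", "software alert", "checklist", "simulation training",
                "standardized communication", "reduce distractions"}

def strength(action_category):
    """Classify an action category as stronger, intermediate, or weaker."""
    if action_category in STRONGER:
        return "stronger"
    if action_category in INTERMEDIATE:
        return "intermediate"
    return "weaker"

def review_meets_minimum(action_categories):
    """Each RCA2 review needs at least one stronger or intermediate action."""
    return any(strength(c) in ("stronger", "intermediate") for c in action_categories)
```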
Measuring Action Implementation and Effectiveness
In order to improve patient safety, corrective actions must be implemented and their
effectiveness measured. To ensure that actions are implemented, assign an individual,
not a committee, the responsibility for each action, and set a date by which the action
must be completed. This individual must have the authority to effect change and the
resources or access to resources to implement the action. Multiple individuals or a commit-
tee should not be assigned this responsibility because to do so dilutes accountability and
undermines the probability of successful implementation.
Each action identified by the review team requires at least one measure, which may
be either a process measure or an outcome measure. A process measure may be some-
thing as simple as documenting that the action was implemented. For the overall RCA2
process, it is wise to have a combination of both process and outcome measures. Process
measures confirm the action has been implemented, while outcome measures determine
if the action was effective. The length of time required to implement the measure should
also be considered. For example, if an action required beta testing of new technology to
improve staff use of alcohol-based hand gel before and after each patient encounter, a
potential process measure might be to observe 100 staff-patient encounters over a 7-day
period with an expected compliance rate of 95%. A potential outcome measure for this
same action might be a 20% reduction in hospital-acquired infections (HAI) transmitted
by staff-patient contact. The data for the process measure may be collected more quickly
than the HAI data, and therefore the technology (if effective) may be implemented sooner
to reduce future potential patient harm. Deciding what type of measures to employ is a
risk-based decision: a balance must be struck between the precision and accuracy of
measurement required, the conclusions it will permit, and the downside if effectiveness
is inaccurately determined. Measures should identify what is being measured, by whom,
what compliance level is expected, and a specific date that the measure will be assessed.
An individual, not a committee or group, should be made responsible for ensuring the
action's effectiveness is reviewed. (Appendix 7 provides the Cause, Action, Process/
Outcome Measure Table structure, plus a sample causal statement.) When actions have been
measured, the CEO, review team, patient, and/or patient's family should be provided with
feedback on their effectiveness.

Why Is "Human Error" Not an Acceptable Root Cause?
While it may be true that a human error was involved in an adverse event, the very
occurrence of a human error implies that it can happen again. Human error is inevitable.
If one well-intentioned, well-trained provider working in his or her typical environment
makes an error, there are system factors that facilitated the error. It is critical that
we gain an understanding of those system factors so that we can find ways to remove them
or mitigate their effects.
Our goal is to increase safety in the long term and not allow a similar event to occur.
When the involved provider is disciplined, counseled, or re-trained, we may reduce the
likelihood that the event will recur with that provider, but we don't address the
probability that the event will occur with other providers in similar circumstances.
Wider training is also not an effective solution; there is always turnover, and a
high-profile event today may be forgotten in the future. This is reflected in Figure 3,
the Action Hierarchy, which is based upon safety engineering principles used for over
50 years in safety-critical industries. Solutions that address human error directly
(such as remediation, training, and implementation of policies) are all weaker solutions.
Solutions that address the system (such as physical plant or device changes and process
changes) are much stronger. This is why it's so important to understand the system
factors facilitating human error and to develop system solutions.
Review teams should not censor themselves when it comes to identifying corrective
actions. The team's job is to identify and recommend the most effective actions they can
think of, and it is leadership's responsibility to decide if the benefit likely to be
realized is worth the investment, in light of the opportunity cost and its impact on the
system in general. Only the top leadership of an organization can accept risk for the
organization, and this is a responsibility that should not be delegated to others.
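The text's requirements for a measure (what is measured, by whom, the expected compliance level, and an assessment date) map naturally onto a small record. A sketch with illustrative values; the hand-gel figures follow the example in the text, but the owner and date are invented:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Measure:
    """One process or outcome measure for a corrective action.

    Field names are our own; the document requires only that a measure state
    what is measured, by whom, the expected level, and when it is assessed.
    """
    kind: str                # "process" or "outcome"
    what_is_measured: str
    responsible_person: str  # an individual, never a committee
    expected_level: str
    assess_by: date

# The hand-gel process measure from the text, with invented owner and date.
hand_gel_process = Measure(
    kind="process",
    what_is_measured="observe 100 staff-patient encounters over 7 days",
    responsible_person="infection-prevention nurse (hypothetical)",
    expected_level="95% hand-gel compliance",
    assess_by=date(2024, 7, 1),
)
```

Pairing each action with at least one such record makes the follow-up auditable: the responsible individual and the assessment date are explicit rather than implied.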
Feedback
It is essential that involved staff as well as involved patients/families be provided
feedback of the findings of the RCA2 process, and be given the opportunity to comment
on whether the proposed actions make sense to them. Feedback to the organization as a
whole is also essential in order to create a culture of safety and reporting, permitting staff
to see the improvements that result from these reports.
Leadership and Board Support
For the RCA2 process to be successful it is critical that it be supported by all levels of the
organization including the chief executive officer and the board of directors, as dem-
onstrated by an appropriate investment of resources. Each action recommended by a
review team should be approved or disapproved, preferably by the CEO or alternatively
by another appropriate member of top management. If an action is disapproved, the rea-
son for its disapproval should be documented and shared with the RCA2 team so that the
constraint preventing implementation can be understood and another action developed
by the team to replace it, unless it is otherwise effectively addressed in the action plan.
RCA2 results on significant events as defined by the organization—including the hazards
identified, their causes, and corresponding corrective actions—should be presented
to the board of directors for their review and comment. Figures 3 and 4 present cogni-
tive aids that may be used by CEOs and board members when reviewing RCA2 reports.
These tools will aid the CEO and board in making a qualitative assessment to determine
whether a thorough RCA2 review has been completed. Leaders then need to determine
the applicability of the findings on a broader scale across their organization or beyond and
take further action as appropriate. It is recommended that the review of RCA2
reports be added to the board of directors meeting agenda as a recurring topic as part of
efforts to address enterprise risk management. The visible and tangible involvement of
leadership and the board demonstrates that the process of root cause analysis and action
is important.
Measuring the Effectiveness and Sustainability of the RCA2 Process
It is recommended that the RCA2 program be reviewed annually by senior leadership
and the board for effectiveness and continued improvement. The following are examples
of measures that may be useful:
• Percent of contributing factors written to meet the Five Rules of Causation
• Percent of RCA2 reviews with at least one stronger or intermediate strength action
• Percent of actions that are classified as stronger or intermediate strength
• Percent of actions that are implemented on time
• Percent of actions completed
• Audits or other checks that independently verify that hazard mitigation has been
sustained over time
• Staff and patient satisfaction with the RCA2 review process (survey)
• Response to AHRQ survey questions pertinent to the RCA2 review process
• Percent of RCA2 results presented to the board
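Several of the annual program measures listed above are simple proportions over the year's reviews. A sketch computing two of them; the review record shape is our own:

```python
def program_metrics(reviews):
    """Compute two of the suggested annual RCA2 program measures.

    Each review is a dict with 'action_strengths' (a list of 'stronger',
    'intermediate', or 'weaker') and 'presented_to_board' (bool); this
    shape is an illustration, not a standard format.
    """
    n = len(reviews)
    with_strong = sum(1 for r in reviews
                      if any(s in ("stronger", "intermediate")
                             for s in r["action_strengths"]))
    to_board = sum(1 for r in reviews if r["presented_to_board"])
    return {"pct_with_strong_or_intermediate_action": 100 * with_strong / n,
            "pct_presented_to_board": 100 * to_board / n}
```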
Figure 4. Warning Signs of Ineffective RCA2
If any one or more of the following factors are true, then your specific RCA2 review
or your RCA2 process in general needs to be re-examined and revised because it is
failing:
• There are no contributing factors identified, or the contributing factors lack
supporting data or information.
• One or more individuals are identified as causing the event; causal factors point to
human error or blame.
• No stronger or intermediate strength actions are identified.
• Causal statements do not comply with the Five Rules of Causation (see Appendix 6).
• No corrective actions are identified, or the corrective actions do not appear to
address the system vulnerabilities identified by the contributing factors.
• Action follow-up is assigned to a group or committee and not to an individual.
• Actions do not have completion dates or meaningful process and outcome
measures.
• The event review took longer than 45 days to complete.
• There is little confidence that implementing and sustaining corrective action will
significantly reduce the risk of future occurrences of similar events.
IV. CONCLUSION AND RECOMMENDATIONS
Conclusion
The key to establishing a successful root cause analysis and action process lies in leader-
ship support. The components of a successful program include establishing a transparent
risk-based methodology for triaging events, selecting the correct personnel to serve on the
team, providing the team with the resources and time to complete the review, identifying
at least one stronger or intermediate strength action in each review, and measuring the
actions to assess if they were effective in mitigating the risk. Using tools such as risk-based
prioritization matrices, Triggering Questions, the Five Rules of Causation, and the Action
Hierarchy will aid the team in identifying and communicating causal factors and taking
actions that will improve patient care and safety.
Recommendations
1. Leadership (e.g., CEO, board of directors) should be actively involved in the root
cause analysis and action (RCA2) process. This should be accomplished by support-
ing the process, approving and periodically reviewing the status of actions, under-
standing what a thorough RCA2 report should include, and acting when reviews do
not meet minimum requirements.
2. Leadership should review the RCA2 process at least annually for effectiveness.
3. Blameworthy events that are not appropriate for RCA2 review should be defined.
4. Facilities should use a transparent, formal, and explicit risk-based prioritization sys-
tem to identify adverse events, close calls, and system vulnerabilities requiring RCA2
review.
5. An RCA2 review should be started within 72 hours of recognizing that a review is
needed.
6. RCA2 teams should be composed of 4 to 6 people. The team should include pro-
cess experts as well as other individuals drawn from all levels of the organization,
and inclusion of a patient representative unrelated to the event should be consid-
ered. Team membership should not include individuals who were involved in the
event or close call being reviewed, but those individuals should be interviewed for
information.
7. Time should be provided during the normal work shift for staff to serve on an RCA2
team, including attending meetings, researching, and conducting interviews.
8. RCA2 tools (e.g., interviewing techniques, Flow Diagramming, Cause and Effect Dia-
gramming, Five Rules of Causation, Action Hierarchy, Process/Outcome Measures)
should be used by teams to assist in the investigation process and the identification
of strong and intermediate strength corrective actions.
9. Feedback should be provided to staff involved in the event as well as to patients
and/or their family members regarding the findings of the RCA2 process.
APPENDIX 1. THE SAFETY ASSESSMENT CODE (SAC) MATRIX
This appendix reproduces a modified version of the VA National Center for Patient Safety’s Safety Assessment Code Matrix
as an example of a risk-based prioritization methodology for ranking hazards, vulnerabilities, and events so that an orga-
nization can consistently and transparently decide how to utilize its available resources to determine which risks to study
and mitigate first. Five sample scenarios and their assessments are provided on pages 25–30.
Any event prioritization tool such as the SAC Matrix presented in this appendix should meet local organizational regulatory
requirements and standards as well as those of applicable accrediting and regulatory organizations. For a prioritization
tool’s use to be successful, a system should be instituted to ensure that the tool is updated periodically to reflect changes
in applicable requirements, regulations, and standards.
THE SAFETY ASSESSMENT CODE (SAC) MATRIX
The Severity Categories and the Probability Categories that are used to develop the Safety Assessment Codes
(SACs) for adverse events and close calls are presented in the following, and are followed by information on the
SAC Matrix.
1. SEVERITY CATEGORIES
a. Key factors for the severity categories are extent of injury, length of stay, level of care required for remedy, and
actual or estimated physical plant costs. These four categories apply to actual adverse events and potential events (close
calls). For actual adverse events, assign severity based on the patient’s actual condition.
b. If the event is a close call, assign severity based on a reasonable “worst case” systems level scenario. NOTE: For
example, if you entered a patient’s room before they were able to complete a lethal suicide attempt, the event is
catastrophic, because the reasonable “worst case” is suicide.
Catastrophic
Patients with Actual or Potential: Death or major permanent loss of function (sensory, motor,
physiologic, or intellectual) not related to the natural course of the patient's illness or underlying
condition (i.e., acts of commission or omission). This includes outcomes that are a direct result of
injuries sustained in a fall; or associated with an unauthorized departure from an around-the-clock
treatment setting; or the result of an assault or other crime. Any of the adverse events defined by
the Joint Commission as reviewable "Sentinel Events" should also be considered in this category.
Visitors: A death; or hospitalization of three or more visitors
Staff: A death or hospitalization of three or more staff*

Major
Patients with Actual or Potential: Permanent lessening of bodily functioning (sensory, motor,
physiologic, or intellectual) not related to the natural course of the patient's illness or underlying
conditions (i.e., acts of commission or omission) or any of the following:
a. Disfigurement
b. Surgical intervention required
c. Increased length of stay for three or more patients
d. Increased level of care for three or more patients
Visitors: Hospitalization of one or two visitors
Staff: Hospitalization of one or two staff or three or more staff experiencing lost time or restricted
duty injuries or illnesses
Equipment or facility: Damage equal to or more than $100,000**, ♦

Moderate
Patients with Actual or Potential: Increased length of stay or increased level of care for one or
two patients
Visitors: Evaluation and treatment for one or two visitors (less than hospitalization)
Staff: Medical expenses, lost time or restricted duty injuries or illness for one or two staff
Equipment or facility: Damage more than $10,000, but less than $100,000**, ♦

Minor
Patients with Actual or Potential: No injury, nor increased length of stay nor increased level of care
Visitors: Evaluated and no treatment required or refused treatment
Staff: First aid treatment only with no lost time, nor restricted duty injuries nor illnesses
Equipment or facility: Damage less than $10,000 or loss of any utility without adverse patient
outcome (e.g., power, natural gas, electricity, water, communications, transport, heat and/or air
conditioning)**, ♦
*Title 29 Code of Federal Regulations (CFR) 1960.70 and 1904.8 require each Federal agency to notify the Occupational Safety and
Health Administration (OSHA) within 8 hours of a work-related incident that results in the death of an employee or the in-patient
hospitalization of three or more employees. Volunteers are considered to be non-compensated employees.
**The Safe Medical Devices Act of 1990 requires reporting of all incidents in which a medical device may have caused or contributed to
the death, serious injury, or serious illness of a patient or another individual.
♦The effectiveness of the facility's disaster plan must be critiqued following each implementation to meet The Joint Commission's
Environment of Care Standards.
Based on Department of Veterans Affairs, Veterans Health Administration, VHA Patient Safety Improvement Handbook
1050.01, May 23, 2008. Available at http://cheps.engin.umich.edu/wp-content/uploads/sites/118/2015/04/Triaging-
Adverse-Events-and-Close-Calls-SAC
2. PROBABILITY CATEGORIES
a. Like the severity categories, the probability categories apply to actual adverse events and close calls.
b. In order to assign a probability rating for an adverse event or close call, it is ideal to know how often it occurs
at your facility. Sometimes the data will be easily available because they are routinely tracked (e.g., falls with
injury, Adverse Drug Events (ADEs), etc.). Sometimes, getting a feel for the probability of events that are not
routinely tracked will mean asking for a quick or informal opinion from staff most familiar with those events.
Sometimes it will have to be your best educated guess.
(1) Frequent – Likely to occur immediately or within a short period (may happen several times in 1 year).
(2) Occasional – Probably will occur (may happen several times in 1 to 2 years).
(3) Uncommon – Possible to occur (may happen sometime in 2 to 5 years).
(4) Remote – Unlikely to occur (may happen sometime in 5 to 30 years).
3. How the Safety Assessment Codes (SAC) Matrix Looks

Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1
4. How the SAC Matrix Works. When a severity category is paired with a probability category for either an
actual event or close call, a ranked matrix score (3 = highest risk, 2 = intermediate risk, 1 = lowest risk) results.
These ranks, or SACs, can then be used for doing comparative analysis and for deciding who needs to be notified
about the event.
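The pairing described above is a table lookup combined with the "most conservative" severity rule used in the examples that follow (score the more severe of the actual and potential severities). A minimal illustrative sketch of that lookup; the function and variable names are this sketch's own and are not part of the handbook:

```python
# Illustrative sketch of the SAC lookup. Severity and probability category
# names come from the handbook; everything else here is for illustration only.

SEVERITIES = ["Catastrophic", "Major", "Moderate", "Minor"]  # most to least severe

# One row per probability category, one column per severity category,
# mirroring the matrix above (3 = highest risk, 1 = lowest risk).
SAC_MATRIX = {
    "Frequent":   {"Catastrophic": 3, "Major": 3, "Moderate": 2, "Minor": 1},
    "Occasional": {"Catastrophic": 3, "Major": 2, "Moderate": 1, "Minor": 1},
    "Uncommon":   {"Catastrophic": 3, "Major": 2, "Moderate": 1, "Minor": 1},
    "Remote":     {"Catastrophic": 3, "Major": 2, "Moderate": 1, "Minor": 1},
}

def sac_score(actual_severity, potential_severity, probability):
    """Score using the more severe of the actual and potential severities,
    per the guideline's 'most conservative course' rule."""
    worse = min(actual_severity, potential_severity, key=SEVERITIES.index)
    return SAC_MATRIX[probability][worse]

# Example 1 in this appendix: actual Minor, potential Catastrophic,
# probability Occasional.
print(sac_score("Minor", "Catastrophic", "Occasional"))  # -> 3
```

Run against the other scenarios in this appendix, the same sketch reproduces their results: Example 4 (actual Moderate, potential Major, Occasional) yields 2, so an RCA2 review is not mandated, and Example 5 (actual Minor, potential Moderate, Uncommon) yields 1.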
5. Reporting
a. All known reporters of events, regardless of SAC score (one, two, or three), must receive appropriate and
timely feedback.
b. The Patient Safety Manager, or designee, must refer adverse events or close calls related solely to staff,
visitors, or equipment and/or facility damage to relevant facility experts or services on a timely basis, for assessment
and resolution of those situations.
Using the Safety Assessment Code Matrix: Five Examples
Excerpted and adapted from Bagian JP, Lee CZ, Cole JF, “A Method for Prioritizing Safety Related Actions,” in
Strategies for Leadership: a Toolkit for Improving Patient Safety, developed by the Department of Veterans Affairs
National Center for Patient Safety, sponsored by the American Hospital Association.
EXAMPLE 1
The nursing staff was providing the patient with routine a.m. care. This consisted of showering the
patient in the shower room on the ward. The patient was seated in a chair being washed when
he slid off the chair and hit his face, hip, and shoulder. The patient was examined by the doctor at
7:55 a.m. and transferred to the acute evaluation unit (AEU) for further evaluation. The AEU physi-
cian ordered x-rays. No fractures noted. The patient was returned to the ward where neuro checks
were initiated as per policy and reported as normal.
Severity Determination
The first step in assigning the SAC score is determining the severity of the event. We can see
from the report that no injury was reported after evaluation by x-ray and clinical evaluation on
the ward. Therefore, the actual severity would be rated as minor.
Actual Severity Score = MINOR
However, when one considers the potential for injury, the evaluator could reasonably assess it
as potentially catastrophic. This is true because the evaluator's past experience with similar falls had
demonstrated that the most likely worst case scenario could have resulted in a lethal injury. Therefore,
while the actual severity would be rated as minor the potential severity would be considered to be
catastrophic.
In general, the severity score assigned should be whichever one is the most severe when com-
paring the actual versus the potential/risk thereof (close call) assessment. In this way, the most
conservative course will be selected, which will maximize the potential to prevent future events
of this nature.
Potential
Severity Score = CATASTROPHIC
Probability Determination
The probability determination should be made based on the situation that results in the most
severe severity assessment. The evaluator should base the probability assessment on their
own experience at their facility and locally generated data. This, in most cases, will be the most
subjective portion of the SAC score determination. It should be noted that the SAC Matrix that
is used has been constructed in such a way that it minimizes the impact of this subjectivity.
The purpose of the SAC score process is to provide a framework to prioritize future actions. If
the facility feels that there are circumstances that warrant a more in-depth follow-up than that
which the SAC score indicates, they are free to pursue it.
Based on the experience of the evaluator, the probability of a catastrophic (using the SAC
definition) outcome for a patient of this type whose head struck a hard object as the result of a
fall would be occasional to uncommon. Wanting to be conservative, the occasional assessment
would be selected.
Probability Score = OCCASIONAL
Using the SAC matrix one need only locate the severity rating and then follow down the column
until reaching the row containing the probability score. In this case this would yield the SAC score of
“3.” Notice that even if the probability of the event had been rated as uncommon, the SAC score still
would have been determined to be a “3.”
SAC Score = 3, therefore an RCA2 review would be conducted.
All actual SAC 3 and potential SAC 3 events require that a root cause analysis and action review be
conducted.
Example 1 SAC Matrix
Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1

EXAMPLE 2
YXZ monitor did not trigger an alarm in the Surgical ICU. The problem was observed by the nurses
while they cared for a DNR patient who developed cardiac arrhythmias, but the monitor failed to
trigger the alarm. Since the patient had a DNR order he was not resuscitated.
Severity Determination
The first step in assigning the SAC score is determining the actual severity score for the event.
We can see from the report that the actual outcome of this event was the death of the patient.
While this would definitely be thought of as a catastrophic event, there are other factors to be
considered.
Since the patient was classified as a DNR, and the nurses who were caring for the patient
witnessed the cardiac arrhythmias, the patient's death was not the result of the failure of the alarm
to annunciate the cardiac abnormalities. Instead, there was an appropriate decision made not
to resuscitate based on the DNR order. This then would mean that the actual outcome would be
considered to be a result of the natural course of the patient's disease. As such, the severity code
based on the actual outcome would be N/A (not applicable) and the case would not receive any
further consideration if scoring were to stop at the actual severity.
However, such an action does not take into account the potential/risk thereof (close call)
assessment and does not make common sense. It was purely serendipitous that the patient was a
DNR. Had this not been the case, the death would not have been placed in the natural course
of the disease category. It was probably also serendipitous that the cardiac arrhythmias were
witnessed. This would mean that had this happened in a patient that was not in DNR status,
a catastrophic event may reasonably be construed to have occurred. For these reasons the
severity for this event would be determined to be catastrophic from a potential perspective.
Remember, the severity score assigned should be whichever one is the most severe when com-
paring the actual versus the potential/risk thereof (close call) assessment. In this way, the most
conservative course will be selected, which will maximize the potential to prevent future events
of this nature.
Severity Score = CATASTROPHIC
Probability Determination
The probability determination should be made based on the situation that results in the most
severe severity assessment. The evaluator should base the probability assessment on their own
experience at their facility. This, in most cases, will be the most subjective portion of the SAC
score determination. It should be noted that the SAC Matrix that is used has been constructed
in such a way that it minimizes the impact of this subjectivity. It must be remembered that the
entire purpose of the SAC score process is to provide a framework within which to prioritize
future actions and that a higher rating can be assigned if the facility feels that there are particu-
lar circumstances that warrant more in-depth follow-up.
The probability determination would rely on the experience of the evaluator. For the purposes
of this illustration we will assume that the probability is thought to be uncommon.
Probability Score = UNCOMMON
Using the SAC matrix one need only locate the severity rating and then follow down the column
until reaching the row containing the probability score. In this case this would yield a "3." Notice
that even if the probability of the event had been rated as remote, the SAC score still would have
been determined to be a “3.”
SAC Score = 3, therefore an RCA2 review would be conducted.
Example 2 SAC Matrix
Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1
EXAMPLE 3
An outpatient received an MRI scan and brought his oxygen cylinder into the magnet room, where
it was pulled into the bore of the magnet. The MR technician activated the emergency shutdown,
which turned off all electrical power to the magnet and expelled the liquid helium cooling the mag-
net to atmosphere outside of the building. Neither the patient nor the tech was injured. The magnet
sustained superficial damage but was out of service for 5 days until a contractor could be brought
in to replace the helium. (Appendix 4 provides the Final Flow Diagram for this event.)
Severity Determination
The first step in assigning the SAC score is determining the actual severity score for the event.
We can see from the report that the actual outcome of this event was no injury to either the
patient or staff, superficial damage to the MRI, and loss of business income generated by the
MRI for 5 days.
As such, the severity score based on the actual severity for the patient is minor, for the staff mem-
ber is minor, and for the equipment is moderate when lost income is factored in.
Actual Severity Score = MODERATE
However, such an action does not take into account the potential/risk thereof (close call) assess-
ment. It was by chance or luck that the patient or tech was not injured by the flying oxygen
cylinder as it was pulled into the bore of the magnet or that the MR magnet did not crack. The
most likely worst case scenario for this event is determined to be major to catastrophic. Had the
oxygen cylinder struck the tech or patient in the head it likely would have resulted in death or
permanent loss of function; a likely outcome for the magnet after quenching is cracking from
the thermal shock, and a replacement magnet costs in excess of $100,000. Based on the poten-
tial injury, a severity level of catastrophic was selected.
Potential Severity Score = CATASTROPHIC
Probability Determination
The probability determination should be based on the situation that results in the most severe sever-
ity assessment. In this case it is the probability of ferromagnetic objects being brought into the MRI
magnet room that could result in catastrophic severity. Based on past experience at the facility, this
was assessed to be uncommon (possible to occur, may happen sometime in 2 to 5 years).
Probability Score = UNCOMMON
Using the SAC matrix the score is a "3," which would require that a root cause analysis and action
review be completed.
SAC Score = 3, therefore an RCA2 review would be conducted.
Refer to Appendix 4 for a sample Final Flow Diagram, and Appendix 5 for a Cause and Effect Dia-
gram, of this (fictitious) MRI close call event.
Example 3 SAC Matrix
Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1
EXAMPLE 4
An employee working in Food and Nutrition Service was loading large cans of vegetables into a
flow-through rack in the dry goods storage area. A can slipped and fell, hitting the employee on the
toe. The employee sustained broken bones and was on medical leave for 5 days before returning to
work in a light/limited duty position.
Severity Determination
The first step in assigning the SAC score is determining the actual severity score for the event.
We can see from the report that the actual outcome of this event was an injury that required
time away from work and a limited/light duty assignment when the employee returned to work.
The employee was not wearing safety shoes, which are required for employees performing this
task.
The severity score based on the actual severity for the employee is moderate.
Actual Severity Score = MODERATE
The severity score for most likely worst case scenario for this event is determined to be major
based on the possibility for permanent loss of function.
Potential Severity Score = MAJOR
Probability Determination
The probability determination should be based on the situation that results in the most severe
severity assessment. Based on past experience at the facility this was assessed to be occasional
(probably will occur, may happen several times in 1 to 2 years).
Probability Score = OCCASIONAL
SAC Score = 2, therefore an RCA2 review is not mandated.
Example 4 SAC Matrix
Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1
EXAMPLE 5
An Environmental Management staff member was cleaning a waiting room in the pediatrics hospi-
tal and noticed that there were new potted philodendron plants on the end tables by the couches.
Understanding that philodendrons can be poisonous if ingested, the staff member submitted a
patient safety report.
Severity Determination
The first step in assigning the SAC score is determining the actual severity score for the event.
We can see from the report that the actual outcome of this event was no injury to patients or
employees.
The severity score based on the actual severity for the employee or patient is minor.
Actual Severity Score = MINOR
The severity score for the most likely worst case scenario for this event is determined to be mod-
erate since the risk of fatal poisonings is extremely rare in pediatric patients; however, the plants
contain calcium oxalate, which if ingested could cause inflammation of the mucous membranes
in the mouth or throat.
Potential Severity Score = MODERATE
Probability Determination
The probability determination should be based on the situation that results in the most severe
severity assessment. There has been no experience with pediatric patients eating plants in the
waiting rooms, but there have been reports of patients eating other objects. The best educated
guess is that the probability is remote to uncommon.
Probability Score = UNCOMMON
SAC Score = 1, therefore an RCA2 review is not mandated.
However, just because no RCA2 review was required, action to mitigate the risk was still thought to
be appropriate. The plants were removed from the hospital, and the contract with the vendor was
reviewed and modified to prevent a recurrence.
Example 5 SAC Matrix
Severity and Probability Catastrophic Major Moderate Minor
Frequent 3 3 2 1
Occasional 3 2 1 1
Uncommon 3 2 1 1
Remote 3 2 1 1
APPENDIX 2. TRIGGERING QUESTIONS FOR
ROOT CAUSE ANALYSIS
Developed by Department of Veterans Affairs National Center for Patient Safety. Available at
http://cheps.engin.umich.edu/wp-content/uploads/sites/118/2015/04/Triggering-Questions-for-Root-Cause-Analysis
Introduction
Triggering Questions are used by the RCA2 team to help them consider areas of inquiry that might
otherwise be missed. The questions are initially answered as “yes,” “no,” or “not applicable.” When
questions are answered “no,” it is incumbent upon the team to investigate further to understand
why and determine if corrective actions need to be identified and implemented.
Instructions
• After reviewing the initial Flow Diagram (which is based on what is known about the event
before the RCA2 team’s first meeting), identify and document all questions team members
have about the adverse event or close call. (These are referred to as the team questions.)
• Review the Triggering Questions as a team, with the goal of identifying those questions that
are applicable to the adverse event being investigated.
• Combine the applicable Triggering Questions with the team questions, and as a team identify
where the answers may be obtained. This may include: interviewing staff, reviewing docu-
mentation (e.g., policies, procedures, the medical record, equipment maintenance records),
regulatory requirements (e.g., The Joint Commission, CMS, other accreditation or regulatory
agencies) guidelines (e.g., AORN, ISMP, ECRI Institute), publications, and codes and standards.
• As the investigation progresses, the team may identify additional questions that will need to
be answered.
• By the end of the investigation, the RCA2 team should be able to identify which Triggering
Questions are not applicable and the answers to the remaining questions.
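The instructions above amount to tracking a yes/no/not-applicable answer for each question and following up on every "no." A minimal sketch of that bookkeeping; the data layout and names are this sketch's assumptions, not prescribed by the guideline, and the question list is abbreviated:

```python
# Illustrative sketch: record each Triggering Question's answer as
# "yes", "no", or "n/a". Per this appendix, every "no" answer must be
# investigated further to understand why, and to determine whether
# corrective actions need to be identified and implemented.

answers = {
    "Was the patient correctly identified?": "yes",
    "Were policies and procedures communicated adequately?": "no",
    "Was fatigue properly anticipated?": "n/a",
}

# Questions the team still owes follow-up on.
needs_follow_up = [q for q, a in answers.items() if a == "no"]
print(needs_follow_up)  # -> ['Were policies and procedures communicated adequately?']
```

Keeping the answers in one structure also makes it easy to confirm, at the end of the investigation, that every question is either answered or marked not applicable.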
Triggering Questions
Communication
1. Was the patient correctly identified?
2. Was information from various patient assessments shared and used by members of the treat-
ment team on a timely basis?
3. Did existing documentation provide a clear picture of the work-up, the treatment plan, and the
patient’s response to treatment? (e.g., Assessments, consultations, orders, progress notes, medica-
tion administration record, x-ray, labs, etc.)
4. Was communication between management/supervisors and front line staff adequate? (i.e., Accu-
rate, complete, unambiguous, using standard vocabulary and no jargon)
5. Was communication between front line team members adequate?
6. Were policies and procedures communicated adequately?
7. Was the correct technical information adequately communicated 24 hours/day to the people
who needed it?
8. Were there methods for monitoring the adequacy of staff communications? (e.g., Read back,
repeat back, confirmation messages, debriefs)
9. Was the communication of potential risk factors free from obstacles?
10. Was there a manufacturer’s recall/alert/bulletin issued on the medication, equipment, or prod-
uct involved with the event or close call? If yes, were relevant staff members made aware of this
recall/alert/bulletin, and were the specified corrective actions implemented?
11. Were the patient and their family/significant others actively included in the assessment and
treatment planning?
12. Did management establish adequate methods to provide information to employees who
needed it in a timely manner that was easy to access and use?
13. Did the overall culture of the department/work area encourage or welcome observations, sug-
gestions, or “early warnings” from staff about risky situations and risk reduction? (Also, if this has
happened before what was done to prevent it from happening again?)
14. Did adequate communication across organizational boundaries occur?
Training
15. Was there an assessment done to identify what staff training was actually needed?
16. Was training provided prior to the start of the work process?
17. Were the results of training monitored over time?
18. Was the training adequate? If not, consider the following factors: supervisory responsibility,
procedure omission, flawed training, and flawed rules/policy/procedure.
19. Were training programs for staff designed up-front with the intent of helping staff perform their
tasks without errors?
20. Were all staff trained in the use of relevant barriers and controls?
Fatigue/Scheduling
21. Were the levels of vibration, noise, or other environmental conditions appropriate?
22. Were environmental stressors properly anticipated?
23. Did personnel have adequate sleep?
24. Was fatigue properly anticipated?
25. Was the environment free of distractions?
26. Was there sufficient staff on-hand for the workload at the time? (i.e., Workload too high, too low,
or wrong mix of staff.)
27. Was the level of automation appropriate? (i.e., Neither too much nor not enough.)
Environment/Equipment
28. Was the work area/environment designed to support the function it was being used for?
29. Had there been an environmental risk assessment (i.e., safety audit) of the area?
30. Were the work environment stress levels (either physical or psychological) appropriate? (e.g.,
Temperature, space, noise, intra-facility transfers, construction projects)
31. Had appropriate safety evaluations and disaster drills been conducted?
32. Did the work area/environment meet current codes, specifications, and regulations?
33. Was the equipment designed to properly accomplish its intended purpose?
34. Did the equipment work smoothly in the context of: staff needs and experience; existing proce-
dures, requirements, and workload; and physical space and location?
35. Did the equipment involved meet current codes, specifications, and regulations?
36. Was there a documented safety review performed on the equipment involved? (If relevant, were
recommendations for service/recall/maintenance, etc., completed in a timely manner?)
37. Was there a maintenance program in place to maintain the equipment involved?
38. If there was a maintenance program, did the most recent previous inspections indicate that the
equipment was working properly?
39. If previous inspections pointed to equipment problems, were the corrective actions that were
implemented effective?
40. Had equipment and procedures been reviewed to ensure that there was a good match between
people and the equipment they used or people and the tasks they did?
41. Were adequate time and resources allowed for physical plant and equipment upgrades, if prob-
lems were identified?
42. Was there adequate equipment to perform the work processes?
43. Were emergency provisions and back-up systems available in case of equipment failure?
44. Had this type of equipment worked correctly and been used appropriately in the past?
45. Was the equipment designed such that usage mistakes would be unlikely to happen?
46. Was the design specification adhered to?
47. Was the equipment produced to specifications and operated in a manner that the design was
intended to satisfy?
48. Were personnel trained appropriately to operate the equipment involved in the adverse event/
close call?
49. Did the design of the equipment enable detection of problems and make them obvious to the
operator in a timely manner?
50. Was the equipment designed so that corrective actions could be accomplished in a manner that
minimized/eliminated any undesirable outcome?
51. Were equipment displays and controls working properly and interpreted correctly, and were
equipment settings, including alarms, appropriate?
52. Was the medical equipment or device intended to be reused (i.e., not reuse of a single use
device)?
53. Was the medical equipment or device used in accordance with its design and manufacturer’s
instructions?
Rules/Policies/Procedures
54. Was there an overall management plan for addressing risk and assigning responsibility for risk?
55. Did management have an audit or quality control system to inform them how key processes
related to the adverse event were functioning?
56. Had a previous investigation been done for a similar event, were the causes identified, and were
effective interventions developed and implemented on a timely basis?
RCA2 Improving Root Cause Analyses and Actions to Prevent Harm
APPENDIX 2. TRIGGERING QUESTIONS FOR ROOT CAUSE ANALYSIS • 34
57. Would this problem have gone unidentified or uncorrected after an audit or review of the work
process/equipment/area?
58. Was required care for the patient within the scope of the facility’s mission, staff expertise and
availability, technical and support service resources?
59. Was the staff involved in the adverse event or close call properly qualified and trained to perform their function/duties?
60. Did the equipment involved meet current codes, specifications, and regulations?
61. Were all staff involved oriented to the job, department, and facility policies regarding: safety,
security, hazardous material management, emergency preparedness, life safety management,
medical equipment and utilities management?
62. Were there written up-to-date policies and procedures that addressed the work processes
related to the adverse event or close call?
63. Were these policies/procedures consistent with relevant state and national guidance, regulatory
agency requirements, and/or recommendations from professional societies/organizations?
64. Were relevant policies/procedures clear, understandable, and readily available to all staff?
65. Were the relevant policies and procedures actually used on a day-to-day basis?
66. If the policies and procedures were not used, what got in the way of their usefulness to staff?
67. If policies and procedures were not used, what positive and negative incentives were absent?
Barriers
(Barriers protect people and property from adverse events and can be physical or procedural. Negative/positive pressure rooms are an example of a physical barrier that controls the spread of bacteria/viruses. The pin indexing system used on medical gas cylinders is another example of a physical barrier that prevents gas cylinders from being misconnected. The "surgical time out" is an example of a procedural barrier that protects patients from wrong site, wrong patient, wrong procedure surgeries.)
68. What barriers and controls were involved in this adverse event or close call?
69. Were these barriers designed to protect patients, staff, equipment, or the environment?
70. Was patient risk considered when designing these barriers and controls?
71. Were these barriers and controls in place before the adverse event or close call occurred?
72. Had these barriers and controls been evaluated for reliability?
73. Were there other barriers and controls for work processes?
74. Was the concept of "fault tolerance" applied in the system design? (A fault tolerant system can withstand the failure of one or more barriers without the patient being harmed.)
75. Were relevant barriers and controls maintained and checked on a routine basis by designated
staff?
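The value of layered barriers can be illustrated numerically: if barriers fail independently, an event reaches the patient only when every barrier fails at once, so the combined failure probability is the product of the individual ones. A minimal sketch, using assumed per-barrier failure rates that are purely illustrative and not drawn from the question set above:

```python
# Illustrative sketch: with independent barriers, the chance an event
# penetrates all of them is the product of the individual failure
# probabilities. This is the motivation behind fault-tolerant design.
from math import prod

barrier_failure_probs = [0.1, 0.05, 0.2]   # assumed rates, for illustration only

p_all_fail = prod(barrier_failure_probs)
print(round(p_all_fail, 6))  # 0.001: far lower than any single barrier alone
```

The product falls quickly as barriers are added, which is why a fault tolerant system can absorb the failure of one or more barriers without harm, provided the barriers truly fail independently.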
APPENDIX 3. INTERVIEWING TIPS FOR RCA2 REVIEWS
The goal of the interview process is to discover information about what happened and why that will lead to the identification of system issues and ultimately to effective and sustainable corrective actions.
From the writings of Sidney Dekker, we find that a fundamental question of this process is not "where did people go wrong?" but "why did their action make sense to them at the time?"(26) To answer questions like these and to achieve the goal of the interview process requires effective interviewing skills and close attention to the tips provided below.
• Interviews should be conducted by the RCA2 team immediately after they have identified their interview questions. The preferred method is to conduct interviews in person. In some cases it may be necessary to conduct an interview via telephone; this may be acceptable if the individuals involved know and trust each other.
• After an adverse event, staff should be asked not to discuss the event among themselves, in order to promote the integrity and objectivity of the review process.
• If needed, notify the staff member's/employee's immediate supervisor that the employee will be needed for an interview so that coverage can be arranged. Supervisors should not be present during the interview.
• Interview only one individual at a time, which will permit information to be compared and weighed. Expect differences between descriptions given by different staff when they describe what happened, and use additional information gathered by the team to support the final conclusions.
• Have the team's questions ready so that the required information may be obtained in one session.
• Ask only one or two RCA2 team members to conduct the interview. Approaching the interviewee with a large group may be intimidating and may add to the stress of recounting the event.
• In some cases staff members/employees may wish to have a representative or attorney present during the interview. The institution should set the ground rules for such participation.
• Patients may have family present during their interview.
• If the staff member/employee was involved in the adverse event, be sensitive to this. Let them know that no one is judging them and that the interview is being conducted to identify and implement systems-level, sustainable corrective actions so a similar event does not happen again.
• Express to the patient and/or any family present that you are sorry the event occurred. Explain to them that the review is being conducted to identify system issues and implement sustainable and effective corrective actions, and that the team will not be assigning blame to anyone involved in the event.
• Conduct the interview in the staff member's/employee's area or in an area that may help them relax. Avoid the appearance of summoning them to a deposition or administrative review.
• For interviews of patients and/or family members, conduct the interview at a location that is acceptable to them.
• If practical, match your attire to that of the interviewee, while maintaining a level of professionalism. The goal is to avoid having them feel intimidated.
• Request permission to take notes and explain what the notes will be used for.
• Explain the purpose of the interview. Stress that the RCA2 review team is seeking to identify system issues and not to assign blame to any individuals.
• Effective interview skills make fact finding easier and the staff involved more comfortable with the process. Start with broad, open-ended questions and then narrow them down; move from general interrogatories to specific clarifying questions, and then, where appropriate, to closed questions to clarify your understanding of what has been shared. The process should not feel like an inquisition, and it is essential that you make the interviewee feel as safe as possible.
• Use active listening and reflect what is being said. Build confidence by restating and summarizing what you have heard. Keep an open body posture, good eye contact, and nod appropriately. Demonstrate empathy and be patient. Do not prejudge, lay blame, or interrupt. Tell them that the information obtained during the RCA2 process is protected and confidential and will not be shared outside of the process. Union representatives, if present, should be informed that they are not permitted to talk about what was discussed with anyone other than the employee and RCA2 team members.
• If the interviewee is having difficulty remembering the details surrounding the event, ask them to describe what they normally do when completing the task/procedure that was involved. Drawing a sketch of the process or work area may also trigger their memory.
• Thank the interviewee at the conclusion of the process, provide your contact information in case they remember additional information, and if you sense they need emotional support, be aware of what resources are available to them.
APPENDIX 4. FINAL FLOW DIAGRAM EXAMPLE
All events appearing in this diagram are fictitious. Any resemblance to real events is purely coincidental.
1. Patient (JP) has COPD, is on oxygen (2 lpm), and requires knee surgery.
   • JP could have had his oxygen therapy discontinued for the duration of the MR scan without causing complications.
2. JP reports for a previously scheduled outpatient MRI.
   • There were no notes in the EMR about the patient being on oxygen or whether it could be discontinued for the duration of the scan.
   • JP was not given any informational material about the scan.
3. JP arrives at the MRI suite with his oxygen cylinder.
   • The oxygen cylinder that JP is using looks identical to the MRI safe oxygen cylinders used in the MRI suite. The receptionist didn't question the oxygen cylinder, as it wasn't part of his job, though he sometimes did so to help out; the MRI tech thought that the cylinder had already been switched to an MRI safe cylinder.
4. JP checks in and is asked to change out of his street clothes and put on scrubs. He is also asked to remove any chains, watches, and jewelry.
   • It is the policy to change into scrubs. A changing room is available, along with lockers for patient use.
5. The MR tech escorts JP from the changing room to just outside the entrance of the magnet room. JP still has his oxygen cylinder with him.
   • The MR suite is not designed in accordance with the four zone, dirty (ferrous metal) to clean (no ferrous metal) concept advocated by the American College of Radiology.
6. The MR tech questions JP about jewelry, implants, patches, etc.
   • A standardized form/checklist is used to question all patients about metal objects they may be carrying or have implanted; oxygen cylinders are supposed to be provided by the facility and are not on the form.
   • The protocol is for objects such as gurneys, wheelchairs, and oxygen cylinders to be switched out to MR safe or MR conditional equipment before the MR tech meets the patient.
7. The MR tech is called away in the middle of questioning JP and returns a few minutes later to finish.
   • The tech was called away to answer a question from a physician; while he was taking care of this, the clerk reminded him that they were 3 appointments behind and that maybe they could get caught up over lunch. The day before, staff had been told that their new quality measure was timeliness and patient waiting times.
   • The MR unit was short staffed on this day due to an illness.
8. The MR tech asks JP to follow him into the magnet room. JP does so, pulling the oxygen cylinder behind him.
   • A ferrous metal detector is not provided at the entrance into the magnet room, and handheld scanners are not used. A sign on the door warns to remove all metal before entering.
   • The magnet room does not have piped-in oxygen.
9. As JP approaches the MR table, the oxygen cylinder is drawn into the bore of the magnet, narrowly missing the tech as it flies by him.
   • There are no visual clues or indicators in the room to warn individuals about the increasing magnetic field.
10. The tech activates the emergency MRI shutdown. Engineering/Facilities are called.
   • The tech thought that the oxygen cylinder could explode. He was not aware of the possible safety consequences or equipment damage when the magnet is quenched by instituting an emergency MRI shutdown.
   • The tech did not recall any training being done on emergency shutdowns.
11. A vendor is contacted, the MR unit helium is recharged, and the cracked cowling is replaced.
12. MRI service is resumed approximately 5 days after the event occurred.
APPENDIX 5. CAUSE AND EFFECT DIAGRAM EXAMPLE
Based on the Cause and Effect Diagramming Model from Apollo Root Cause Analysis: A New Way of Thinking by Dean L. Gano (Apollonian Publications, 1999).
All events appearing in this diagram are fictitious. Any resemblance to real events is purely coincidental.
APPENDIX 6. THE FIVE RULES OF CAUSATION
The wording of the rules below is based on The Five Rules of Causation developed by the Department of Veterans Affairs, Veterans Health Administration, appearing in their NCPS Triage Cards™ for Root Cause Analysis (version October 2001, see http://cheps.engin.umich.edu/wp-content/uploads/sites/118/2015/04/Five-Rules-of-Causation ) and their document Root Cause Analysis (RCA), http://nj.gov/health/ps/documents/va_triage_questions . The five rules were adapted from the Federal Aviation Administration technical report "Maintenance Error Causation" by David A. Marx, June 9, 1999.
After the RCA2 team has identified system vulnerabilities, these need to be documented and written up
to comply with the Five Rules of Causation. Applying the rules is not a grammar exercise. When the rules
are met, causal statements will be focused on correcting system issues. Causal statements also have to
“sell” why the corrective actions identified by the team are important. Using the format described in this
appendix will increase the likelihood that the corrective actions will be supported.
Causal statements are written to describe (1) Cause, (2) Effect, and (3) Event. Something (Cause) leads to
something (Effect) which increases the likelihood that the adverse Event will occur.
Example: A high volume of activity and noise in the emergency department led to (cause) the resident
being distracted when entering medication orders (effect) which increased the likelihood that the wrong
dose would be ordered (event).
Rule 1. Clearly show the “cause and effect” relationship.
INCORRECT: A resident was fatigued.
CORRECT: Residents are scheduled 80 hours per week, which led to increased levels of fatigue,
increasing the likelihood that dosing instructions would be misread.
Rule 2. Use specific and accurate descriptors for what occurred, rather than negative and vague
words. Avoid negative descriptors such as: Poor; Inadequate; Wrong; Bad; Failed; Careless.
INCORRECT: The manual is poorly written.
CORRECT: The pump's user manual had 8-point font and no illustrations; as a result, nursing staff rarely used it, increasing the likelihood that the pump would be programmed incorrectly.
Rule 3. Human errors must have a preceding cause.
INCORRECT: The resident selected the wrong dose, which led to the patient being overdosed.
CORRECT: Drugs in the Computerized Physician Order Entry (CPOE) system are presented to the
user without sufficient space between the different doses on the screen, increasing the likelihood
that the wrong dose could be selected, which led to the patient being overdosed.
Rule 4. Violations of procedure are not root causes, but must have a preceding cause.
INCORRECT: The techs did not follow the procedure for CT scans, which led to the patient receiving an air bolus from an empty syringe, resulting in a fatal air embolism.
CORRECT: Noise and confusion in the prep area, coupled with production pressures, increased the likelihood that steps in the CT scan protocol would be missed, resulting in the injection of an air bolus from an empty syringe.
Rule 5. Failure to act is only causal when there is a pre-existing duty to act.
INCORRECT: The nurse did not check for STAT orders every half hour, which led to a delay in the
start of anticoagulation therapy, increasing the likelihood of a blood clot.
CORRECT: The absence of an assignment for designated RNs to check orders at specified times
increased the likelihood that STAT orders would be missed or delayed, which led to a delay in
therapy.
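Rule 2's list of banned descriptors lends itself to a simple automated screen. The sketch below is a hypothetical aid, not part of the RCA2 method itself; the word list and function name are illustrative:

```python
# Hypothetical sketch: flag the vague negative descriptors that Rule 2
# asks teams to avoid in draft causal statements.
BANNED = {"poor", "poorly", "inadequate", "wrong", "bad", "failed", "careless"}

def rule2_flags(statement: str) -> list[str]:
    """Return any banned descriptors found in a draft causal statement."""
    words = {w.strip(".,;:").lower() for w in statement.split()}
    return sorted(words & BANNED)

print(rule2_flags("The manual is poorly written."))  # ['poorly']
```

A team might run draft statements through such a filter before finalizing them, though satisfying the rules still requires human judgment about cause, effect, and event.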
APPENDIX 7. CAUSE, ACTION, PROCESS/OUTCOME MEASURE TABLE
Template (each RCA2 will most likely have multiple CCFs; each CCF may have multiple Actions; each Action may have multiple Process/Outcome Measures):

Cause/Contributing Factor (CCF) Statement #1:
Action 1:
  Action Due Date:
  Date Action Completed:
  Responsible Person:
  Process/Outcome Measure 1 (each Process/Outcome Measure needs to include: what will be measured; how long it will be measured; and the expected level of compliance):
    Date Measured:
    Responsible Person:
    Was the Compliance Level Met? Y/N
    Management concurs with this Action and Process/Outcome Measure: Y/N
    If No, why not? (Answered by Management)
    Is the identification of another action required? Y/N

Causal statement example based on the MRI close call scenario in Appendices 1, 4, and 5:

Cause/Contributing Factor (CCF) Statement #1: The lack of a ferromagnetic detection system at the entrance into the MR magnet room increased the likelihood that the patient's oxygen cylinder would be permitted in the room, resulting in the cylinder being drawn into the bore of the magnet, the magnet being quenched, and the MR room being out of service for 5 days.
Action 1: Install a ferromagnetic detection system at the entrance to all four MRI magnet rooms.
  Action Due Date: April 30, 2015
  Date Action Completed: Pending
  Responsible Person: Ms. B, Facility Engineer
  Process/Outcome Measure 1: Five ferrous objects including an oxygen cylinder will be passed by the ferromagnetic sensors of each detector and 100% will result in alarms sounding in the adjacent MR Control Room.
    Date To Be Measured: May 10, 2015
    Responsible Person: Dr. A, MRI Safety Officer
    Was the Compliance Level Met? To be determined
    Management concurs with this Action and Process/Outcome Measure: Yes
    If No, why not? (Answered by Management)
    Is the identification of another action required? To be determined
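The table's hierarchy (each CCF has one or more Actions, and each Action one or more Process/Outcome Measures) can be sketched as a simple data structure. The field names below are illustrative, not prescribed by RCA2:

```python
# Minimal sketch of the cause -> action -> measure structure the table tracks.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Measure:
    description: str                      # what is measured, duration, expected compliance
    responsible: str
    compliance_met: Optional[bool] = None # None until actually measured

@dataclass
class Action:
    description: str
    due_date: str
    responsible: str
    measures: list[Measure] = field(default_factory=list)

@dataclass
class CausalFactor:
    statement: str                        # written per the Five Rules of Causation
    actions: list[Action] = field(default_factory=list)

ccf = CausalFactor(
    statement="Lack of a ferromagnetic detection system at the MR room entrance ...",
    actions=[Action(
        description="Install ferromagnetic detection at all four magnet rooms",
        due_date="2015-04-30",
        responsible="Facility Engineer",
        measures=[Measure("Five ferrous objects passed the sensors; 100% must alarm",
                          "MRI Safety Officer")],
    )],
)
print(len(ccf.actions), len(ccf.actions[0].measures))  # 1 1
```

Keeping the hierarchy explicit makes it straightforward to verify that no action lacks a measure and no measure lacks a responsible person before the report is closed out.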
REFERENCES
1. Agency for Healthcare Research and Quality. Efforts To Improve Patient Safety Result in 1.3 Million Fewer Patient Harms. AHRQ Publication #15-0011-EF. Rockville, MD: Agency for Healthcare Research and Quality, December 2014. http://www.ahrq.gov/professionals/quality-patient-safety/pfp/interimhacrate2013.html
2. Nasca T, Weiss KB, Bagian JP. Improving clinical learning environments for tomorrow's physicians. New England Journal of Medicine 2014;370(11):991–993.
3. Bagian JP et al. The Veterans Affairs root cause analysis system in action. Joint Commission Journal on Quality and Patient Safety 2002;28(10):531–545.
4. Phimister JR, Bier VM, Kunreuther HC. Accident Precursor Analysis and Management. Washington, DC: National Academy of Engineering, National Academies Press, 2004.
5. Agency for Healthcare Research and Quality. Patient Safety Organization (PSO) Program. http://www.pso.ahrq.gov/
6. Johns Hopkins Medicine, Center for Innovation in Quality Patient Care. The Comprehensive Unit-based Safety Program (CUSP). http://www.hopkinsmedicine.org/innovation_quality_patient_care/areas_expertise/improve_patient_safety/cusp/
7. SAE International. Standard Best Practices for System Safety Program Development and Execution, WIP Standard GEIASTD0010, 2014-08-01. http://standards.sae.org/wip/geiastd0010a/
8. ISO 9000:2005(en), Quality management systems—Fundamentals and vocabulary, 3: Terms and definitions. https://www.iso.org/obp/ui/#iso:std:iso:9000:ed-3:v1:en
9. Department of Veterans Affairs, Veterans Health Administration. VHA Patient Safety Improvement Handbook 1050.01, March 4, 2011. http://www1.va.gov/vhapublications/ViewPublication.asp?pub_ID=2389
10. Aerospace Safety Advisory Panel Annual Report for 2014. Washington, DC: National Aeronautics and Space Administration, January 2015, 15–16. http://oiir.hq.nasa.gov/asap/documents/2014_ASAP_Annual_Report
11. Department of Veterans Affairs, Veterans Health Administration. VHA Patient Safety Improvement Handbook 1050.01, May 23, 2008. http://cheps.engin.umich.edu/wp-content/uploads/sites/118/2015/04/Triaging-Adverse-Events-and-Close-Calls-SAC
12. National Patient Safety Agency, NHS, UK. Root Cause Analysis Investigation Tools: Guide to Investigation Report Writing Following Root Cause Analysis of Patient Safety Incidents. http://www.nrls.npsa.nhs.uk/EasySiteWeb/getresource.axd?AssetID=60180
13. European Transport Safety Council. Confidential Incident Reporting and Passenger Safety in Aviation, May 1996. http://archive.etsc.eu/documents/bri_air3
14. The Joint Commission. Comprehensive Accreditation Manual for Hospitals. CAMH Update 2, January 2015. http://www.jointcommission.org/assets/1/6/CAMH_24_SE_all_CURRENT
15. Accreditation Council for Graduate Medical Education. Clinical Learning Environment Review (CLER) Program. http://www.acgme.org/CLER
16. Bagian JP et al. Developing and deploying a patient safety program in a large health care delivery system: you can't fix what you don't know about. Joint Commission Journal on Quality and Patient Safety 2001;27(10):522–532.
17. Heinrich HW. Industrial Accident Prevention: A Scientific Approach. New York: McGraw-Hill, 1931.
18. Health Quality & Safety Commission New Zealand. Severity Assessment Criteria Tables, http://www.hqsc.govt.nz/our-programmes/reportable-events/publications-and-resources/publication/636/; and Guide To Using the Severity Assessment Code (SAC), http://www.hqsc.govt.nz/assets/Reportable-Events/Resources/guide-to-using-sac-2008
19. New South Wales Health. Severity Assessment Code (SAC) Matrix, 2005. http://www0.health.nsw.gov.au/pubs/2005/sac_matrix.html
20. Conway J, Federico F, Stewart K, Campbell MJ. Respectful Management of Serious Clinical Adverse Events, 2nd ed. IHI Innovation Series white paper. Cambridge, MA: Institute for Healthcare Improvement, 2011.
21. Zimmerman TM, Amori G. Including patients in root cause and system failure analysis: legal and psychological implications. Journal of Healthcare Risk Management 2007;27(2):27–34.
22. Bagian JP, King BJ, Mills PD, McKnight SD. Improving RCA performance: the Cornerstone Award and the power of positive reinforcement. BMJ Quality & Safety 2011;20(11):974–982.
23. Centers for Disease Control and Prevention, National Institute for Occupational Safety and Health. Hierarchy of Controls. http://www.cdc.gov/niosh/topics/hierarchy/
24. Mills PD, Neily J, Kinney LM, Bagian J, Weeks WB. Effective interventions and implementation strategies to reduce adverse drug events in the Veterans Affairs (VA) system. Quality & Safety in Health Care 2008;17(1):37–46.
25. Hettinger AZ et al. An evidence-based toolkit for the development of effective and sustainable root cause analysis system safety solutions. Journal of Healthcare Risk Management 2013;33(2):11–20.
26. Dekker S. The Field Guide to Understanding Human Error. Burlington, VT: Ashgate Publishing, 2006.
Big data in healthcare: management, analysis and future prospects
Sabyasachi Dash, Sushil Kumar Shakyawar, Mohit Sharma, and Sandeep Kaushik
Dash et al. J Big Data (2019) 6:54. https://doi.org/10.1186/s40537-019-0217-0. Open access under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Abstract
'Big data' is massive amounts of information that can work wonders. It has become a topic of special interest for the past two decades because of the great potential hidden in it. Various public and private sector industries generate, store, and analyze big data with an aim to improve the services they provide. In the healthcare industry, sources of big data include hospital records, medical records of patients, results of medical examinations, and devices that are part of the internet of things. Biomedical research also generates a significant portion of big data relevant to public healthcare. This data requires proper management and analysis in order to derive meaningful information; otherwise, seeking a solution by analyzing big data quickly becomes comparable to finding a needle in a haystack. There are challenges associated with each step of handling big data, which can only be surpassed by using high-end computing solutions for big data analysis. That is why, to provide relevant solutions for improving public health, healthcare providers must be fully equipped with appropriate infrastructure to systematically generate and analyze big data. Efficient management, analysis, and interpretation of big data can change the game by opening new avenues for modern healthcare. That is exactly why various industries, including the healthcare industry, are taking vigorous steps to convert this potential into better services and financial advantages. With a strong integration of biomedical and healthcare data, modern healthcare organizations can possibly revolutionize medical therapies and personalized medicine.
Keywords: Healthcare, Biomedical research, Big data analytics, Internet of things, Personalized medicine, Quantum computing

Introduction
Information has been the key to better organization and new developments. The more information we have, the more optimally we can organize ourselves to deliver the best outcomes, which is why data collection is an important part of every organization. We can also use this data to predict current trends of certain parameters and future events. As we become more aware of this, we have started producing and collecting more data about almost everything by introducing technological developments in this direction. Today, we face a situation wherein we are flooded with data from every aspect of our lives, such as social activities, science, work, and health; in a way, the present situation is comparable to a data deluge. Technological advances have helped us generate more and more data, even to a level where it has become unmanageable with currently available technologies. This has led to the creation of the term 'big data' to describe data that is large and unmanageable. In order to meet our present and future social needs, we need to develop new strategies to organize this data and derive meaningful information. One such special social need is healthcare. Like every other industry, healthcare organizations are producing data at a tremendous rate, which presents many advantages and challenges at the same time. In this review, we discuss the basics of big data, including its management, analysis, and future prospects, especially in the healthcare sector.
The data overload
Every day, people working with various organizations around the world are generating
a massive amount of data. The term “digital universe” quantitatively defines such mas-
sive amounts of data created, replicated, and consumed in a single year. International
Data Corporation (IDC) estimated the approximate size of the digital universe in 2005
to be 130 exabytes (EB). The digital universe in 2017 expanded to about 16,000 EB or 16
zettabytes (ZB). IDC predicted that the digital universe would expand to 40,000 EB by
the year 2020. To put this size in perspective, it amounts to roughly 5200 gigabytes (GB) of data for every individual, which exemplifies the phenomenal speed at which the digital universe is expanding. Internet giants like Google and Facebook have been collecting and storing massive amounts of data. For instance, depending on our preferences,
Google may store a variety of information including user location, advertisement prefer-
ences, list of applications used, internet browsing history, contacts, bookmarks, emails,
and other necessary information associated with the user. Similarly, Facebook stores and
analyzes more than 30 petabytes (PB) of user-generated data. Such large amounts
of data constitute ‘big data’. Over the past decade, big data has been successfully used by
the IT industry to generate critical information that can generate significant revenue.
These observations have become so conspicuous that they have eventually led to the birth of a new field of science termed 'Data Science'. Data science deals with various aspects
including data management and analysis, to extract deeper insights for improving the
functionality or services of a system (for example, healthcare and transport system).
Additionally, with the availability of some of the most creative and meaningful ways to
visualize big data post-analysis, it has become easier to understand the functioning of
any complex system. As a large section of society is becoming aware of, and involved in
generating big data, it has become necessary to define what big data is. Therefore, in this
review, we attempt to provide details on the impact of big data in the transformation of
global healthcare sector and its impact on our daily lives.
Defining big data
As the name suggests, ‘big data’ represents large amounts of data that is unmanageable
using traditional software or internet-based platforms. It surpasses the traditionally used
amount of storage, processing and analytical power. Even though a number of definitions
for big data exist, the most popular and well-accepted definition was given by Douglas
Laney. Laney observed that (big) data was growing in three different dimensions namely,
volume, velocity and variety (known as the 3 Vs) [1]. The ‘big’ part of big data is indic-
ative of its large volume. In addition to volume, the big data description also includes
velocity and variety. Velocity indicates the speed or rate of data collection and making it
accessible for further analysis; while, variety remarks on the different types of organized
and unorganized data that any firm or system can collect, such as transaction-level data,
video, audio, text or log files. These three Vs have become the standard definition of big
data. Although other researchers have added several more Vs to this definition [2], the most widely accepted fourth V remains 'veracity'.
The term “big data” has become extremely popular across the globe in recent years.
Almost every sector of research, whether it relates to industry or academics, is generat-
ing and analyzing big data for various purposes. The most challenging task regarding this huge heap of data, which can be both organized and unorganized, is its management. Given that big data is unmanageable with traditional software, we need technically
advanced applications and software that can utilize fast and cost-efficient high-end com-
putational power for such tasks. Implementation of artificial intelligence (AI) algorithms
and novel fusion algorithms would be necessary to make sense from this large amount
of data. Indeed, it would be a great feat to achieve automated decision-making by the
implementation of machine learning (ML) methods like neural networks and other AI
techniques. However, in absence of appropriate software and hardware support, big data
can be quite hazy. We need to develop better techniques to handle this ‘endless sea’ of
data and smart web applications for efficient analysis to gain workable insights. With
proper storage and analytical tools in hand, the information and insights derived from
big data can make the critical social infrastructure components and services (like health-
care, safety or transportation) more aware, interactive and efficient [3]. In addition,
visualization of big data in a user-friendly manner will be a critical factor for societal
development.
Healthcare as a big‑data repository
Healthcare is a multi-dimensional system established with the sole aim of the prevention, diagnosis, and treatment of health-related issues or impairments in human beings.
The major components of a healthcare system are the health professionals (physicians or
nurses), health facilities (clinics, hospitals for delivering medicines and other diagnosis
or treatment technologies), and a financing institution supporting the former two. The
health professionals belong to various health sectors like dentistry, medicine, midwifery,
nursing, psychology, physiotherapy, and many others. Healthcare is required at several
levels depending on the urgency of situation. Professionals serve it as the first point of
consultation (for primary care), acute care requiring skilled professionals (secondary
care), advanced medical investigation and treatment (tertiary care) and highly uncom-
mon diagnostic or surgical procedures (quaternary care). At all these levels, the health
professionals are responsible for different kinds of information such as patient’s medi-
cal history (diagnosis and prescriptions related data), medical and clinical data (like data
from imaging and laboratory examinations), and other private or personal medical data.
Previously, the common practice to store such medical records for a patient was in the
form of either handwritten notes or typed reports [4]. Even the results from a medical
examination were stored in a paper file system. In fact, this practice is really old, with the
oldest case reports existing on a papyrus text from Egypt that dates back to 1600 BC [5].
In Stanley Reiser’s words, the clinical case records “freeze the episode of illness as a story in which patient, family and the doctor are a part of the plot” [6].
With the advent of computer systems and its potential, the digitization of all clinical
exams and medical records in the healthcare systems has become a standard and widely
adopted practice nowadays. In 2003, a division of the National Academies of Sciences,
Engineering, and Medicine known as the Institute of Medicine chose the term “electronic
health records” to represent records maintained for improving the health care sector
towards the benefit of patients and clinicians. Electronic health records (EHR), as defined by Murphy, Hanken and Waters, are computerized medical records for patients: “any information relating to the past, present or future physical/mental health or condition of an individual which resides in electronic system(s) used to capture, transmit, receive, store, retrieve, link and manipulate multimedia data for the primary purpose of providing healthcare and health-related services” [7].
Electronic health records
It is important to note that the National Institutes of Health (NIH) recently announced
the “All of Us” initiative (https://allofus.nih.gov/) that aims to collect one million or more
patients’ data such as EHR, including medical imaging, socio-behavioral, and environ-
mental data over the next few years. EHRs have introduced many advantages for han-
dling modern healthcare related data. Below, we describe some of the characteristic
advantages of using EHRs. The first advantage of EHRs is that healthcare professionals have improved access to the entire medical history of a patient. The information
includes medical diagnoses, prescriptions, data related to known allergies, demograph-
ics, clinical narratives, and the results obtained from various laboratory tests. The recognition and treatment of medical conditions thus becomes more time-efficient due to a reduction in the lag time for previous test results. With time, we have observed a significant decrease
in the redundant and additional examinations, lost orders and ambiguities caused by
illegible handwriting, and an improved care coordination between multiple healthcare
providers. Overcoming such logistical errors has led to reduction in the number of drug
allergies by reducing errors in medication dose and frequency. Healthcare professionals
have also found that access over web-based and electronic platforms improves their medical practices significantly through automatic reminders and prompts regarding vaccinations, abnormal laboratory results, cancer screening, and other periodic checkups. There is greater continuity of care and more timely intervention when communication among multiple healthcare providers and patients is facilitated. EHRs can also be associated with electronic authorization and immediate insurance approvals due to less paperwork. EHRs
enable faster data retrieval and facilitate reporting of key healthcare quality indicators to
the organizations, and also improve public health surveillance by immediate reporting of
disease outbreaks. EHRs also provide relevant data regarding the quality of care for the
beneficiaries of employee health insurance programs and can help control the increas-
ing costs of health insurance benefits. Finally, EHRs can reduce or absolutely eliminate
delays and confusion in the billing and claims management area. The EHRs and internet
together help provide access to millions of health-related medical information critical
for patient life.
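Advantages like the automatic reminders mentioned above can be sketched in a few lines. The record fields and the one-year interval below are purely illustrative assumptions, not drawn from any real EHR system:

```python
from datetime import date

# Hypothetical, minimal EHR records; real EHR schemas are far richer.
patients = [
    {"id": "P1", "last_flu_shot": date(2018, 10, 1)},
    {"id": "P2", "last_flu_shot": date(2019, 9, 15)},
]

def overdue_for_flu_shot(patient, today, interval_days=365):
    """Flag a patient whose last flu shot is older than the interval."""
    return (today - patient["last_flu_shot"]).days > interval_days

today = date(2019, 11, 1)
reminders = [p["id"] for p in patients if overdue_for_flu_shot(p, today)]
print(reminders)  # ['P1'] — only P1 is more than a year overdue
```

A production reminder engine would, of course, query the EHR database directly and cover many screening rules, but the structure is the same: a rule applied over structured patient records.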
Digitization of healthcare and big data
Similar to EHR, an electronic medical record (EMR) stores the standard medical and
clinical data gathered from the patients. EHRs, EMRs, personal health record (PHR),
medical practice management software (MPM), and many other healthcare data com-
ponents collectively have the potential to improve the quality, service efficiency, and
costs of healthcare along with the reduction of medical errors. The big data in health-
care includes the healthcare payer-provider data (such as EMRs, pharmacy prescription,
and insurance records) along with the genomics-driven experiments (such as genotyp-
ing, gene expression data) and other data acquired from the smart web of internet of
things (IoT) (Fig. 1). The adoption of EHRs was slow at the beginning of the 21st century; however, it has grown substantially since 2009 [7, 8]. The management and usage of such
healthcare data has been increasingly dependent on information technology. The devel-
opment and usage of wellness monitoring devices and related software that can gener-
ate alerts and share the health related data of a patient with the respective health care
providers has gained momentum, especially in establishing a real-time biomedical and
health monitoring system. These devices are generating a huge amount of data that can
be analyzed to provide real-time clinical or medical care [9]. The use of big data from
healthcare shows promise for improving health outcomes and controlling costs.
Big data in biomedical research
A biological system, such as a human cell, exhibits a complex interplay of molecular and physical events. In order to understand the interdependencies of various components and
events of such a complex system, a biomedical or biological experiment usually gathers
data on a smaller and/or simpler component. Consequently, it requires multiple simpli-
fied experiments to generate a wide map of a given biological phenomenon of interest.
This indicates that the more data we have, the better we understand biological processes. With this idea, modern techniques have evolved at a great pace. For instance,
one can imagine the amount of data generated since the integration of efficient tech-
nologies like next-generation sequencing (NGS) and Genome wide association studies
(GWAS) to decode human genetics. NGS-based data provides information at depths
that were previously inaccessible and takes the experimental scenario to a completely
Fig. 1 Workflow of Big data Analytics. Data warehouses store massive amounts of data generated from
various sources. This data is processed using analytic pipelines to obtain smarter and affordable healthcare
options
new dimension. It has increased the resolution at which we observe or record biologi-
cal events associated with specific diseases in a real time manner. The idea that large
amounts of data can provide information that often remains unidentified or hidden with smaller experimental methods has ushered in the '-omics' era. The
‘omics’ discipline has witnessed significant progress: instead of studying a single ‘gene’, scientists can now study an organism’s whole ‘genome’ in ‘genomics’ studies within a given amount of time. Similarly, instead of studying the expression or ‘transcription’ of a single gene, we can now study the expression of all the genes, the entire ‘transcriptome’ of an organism, in ‘transcriptomics’ studies. Each of these individual experiments generates a large amount of data with more depth of information than ever before.
Yet, this depth and resolution might be insufficient to provide all the details required to
explain a particular mechanism or event. Therefore, one usually finds oneself analyzing
a large amount of data obtained from multiple experiments to gain novel insights. This
fact is supported by a continuous rise in the number of publications regarding big data
in healthcare (Fig. 2). Analysis of such big data from medical and healthcare systems can
be of immense help in providing novel strategies for healthcare. The latest technologi-
cal developments in data generation, collection and analysis, have raised expectations
towards a revolution in the field of personalized medicine in near future.
Big data from omics studies
NGS has greatly simplified sequencing and decreased the costs of generating
whole genome sequence data. The cost of complete genome sequencing has fallen
from millions to a couple of thousand dollars [10]. NGS technology has resulted in
an increased volume of biomedical data that comes from genomic and transcriptomic
studies. According to an estimate, the number of human genomes sequenced by 2025
could be between 100 million to 2 billion [11]. Combining the genomic and transcrip-
tomic data with proteomic and metabolomic data can greatly enhance our knowledge
about the individual profile of a patient—an approach often ascribed as “individual,
Fig. 2 Publications associated with big data in healthcare. The numbers of publications in PubMed
are plotted by year
personalized or precision health care”. Systematic and integrative analysis of omics
data in conjunction with healthcare analytics can help design better treatment strate-
gies towards precision and personalized medicine (Fig. 3). The genomics-driven experi-
ments e.g., genotyping, gene expression, and NGS-based studies are the major source of
big data in biomedical healthcare along with EMRs, pharmacy prescription information,
and insurance records. Healthcare requires a strong integration of such biomedical data
from various sources to provide better treatments and patient care. These prospects are so exciting that, even though genomic data from patients has many variables to be accounted for, commercial organizations are already using human genome data to help providers make personalized medical decisions. This might turn out to be a game-changer in future medicine and health.
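The kind of integration of genomics-driven and payer-provider data described above can be sketched as a simple join on patient ID. All field names, variants, and the risk rule below are hypothetical illustrations, not a real clinical model:

```python
# A minimal sketch of integrating genomic and clinical records by patient ID.
# Field names and the risk rule are illustrative, not from any real dataset.
genomics = {
    "P1": {"BRCA1_variant": True},
    "P2": {"BRCA1_variant": False},
}
clinical = {
    "P1": {"age": 54, "family_history": True},
    "P2": {"age": 41, "family_history": False},
}

def integrate(genomics, clinical):
    """Join the two sources on patient ID to build a combined profile."""
    profiles = {}
    for pid in genomics.keys() & clinical.keys():
        profiles[pid] = {**genomics[pid], **clinical[pid]}
    return profiles

profiles = integrate(genomics, clinical)
# A toy decision rule over the combined profile (purely illustrative):
high_risk = [pid for pid, p in profiles.items()
             if p["BRCA1_variant"] and p["family_history"]]
print(sorted(high_risk))  # ['P1']
```

In practice this join is the hard part: identifiers, formats, and consent rules differ across sources, which is why healthcare requires the strong data integration the text describes.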
Internet of Things (IoT)
The healthcare industry has not been quick to adapt to the big data movement compared to other industries. Therefore, big data usage in the healthcare sector is still in
its infancy. For example, healthcare and biomedical big data have not yet converged to
enhance healthcare data with molecular pathology. Such convergence can help unravel
various mechanisms of action or other aspects of predictive biology. Therefore, to assess
an individual’s health status, biomolecular and clinical datasets need to be married. One
such source of clinical data in healthcare is ‘internet of things’ (IoT).
In fact, IoT is another big player implemented in a number of other industries includ-
ing healthcare. Until recently, the objects of common use such as cars, watches, refriger-
ators and health-monitoring devices, did not usually produce or handle data and lacked
internet connectivity. However, furnishing such objects with computer chips and sen-
sors that enable data collection and transmission over internet has opened new avenues.
The device technologies such as Radio Frequency IDentification (RFID) tags and readers,
Fig. 3 A framework for integrating omics data and health care analytics to promote personalized treatment
and Near Field Communication (NFC) devices, which can not only gather information but also interact physically, are being increasingly used as information and communication
systems [3]. This enables objects with RFID or NFC to communicate and function as
a web of smart things. The analysis of data collected from these chips or sensors may
reveal critical information that might be beneficial in improving lifestyle, establishing
measures for energy conservation, improving transportation, and healthcare. In fact, IoT
has become a rising movement in the field of healthcare. IoT devices create a continuous
stream of data while monitoring the health of people (or patients) which makes these
devices a major contributor to big data in healthcare. Such resources can interconnect
various devices to provide a reliable, effective and smart healthcare service to the elderly
and patients with a chronic illness [12].
Advantages of IoT in healthcare
Using the web of IoT devices, a doctor can measure and monitor various parameters from clients in their respective locations, for example, home or office. Therefore, through early intervention and treatment, a patient might not need hospitalization, or even a visit to the doctor, resulting in a significant reduction in healthcare expenses. Some
examples of IoT devices used in healthcare include fitness or health-tracking wearable devices, biosensors, clinical devices for monitoring vital signs, and other types
of devices or clinical instruments. Such IoT devices generate a large amount of health
related data. If we can integrate this data with other existing healthcare data like EMRs
or PHRs, we can predict a patient's health status and its progression from a subclinical to a pathological state [9]. In fact, big data generated from IoT has been quite advantageous in several areas in offering better investigation and predictions. On a larger scale, the data from such devices can help in personal health monitoring, modelling the spread of a disease, and finding ways to contain a particular disease outbreak.
The analysis of IoT data, because of its specific nature, would require updated operating software along with advanced hardware and software applications. We would need to manage data inflow from IoT instruments in real time and analyze it by the minute. Stakeholders in the healthcare system are trying to trim down costs and improve the quality of care by applying advanced analytics to both internally and externally generated data.
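Minute-by-minute screening of an IoT vital-sign stream can be sketched as a simple threshold filter. The readings and cut-offs below are illustrative assumptions, not clinical guidance:

```python
# A minimal sketch of screening an IoT heart-rate stream as it arrives.
# Thresholds and readings are made up for illustration only.
def heart_rate_alerts(stream, low=50, high=120):
    """Yield (timestamp, bpm) for readings outside the safe range."""
    for timestamp, bpm in stream:
        if bpm < low or bpm > high:
            yield timestamp, bpm

readings = [("09:00", 72), ("09:01", 130), ("09:02", 45), ("09:03", 80)]
alerts = list(heart_rate_alerts(readings))
print(alerts)  # the 130 and 45 bpm readings are flagged
```

Because the function is a generator, it can consume an unbounded stream one reading at a time, which mirrors the real-time, by-the-minute processing that IoT data demands.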
Mobile computing and mobile health (mHealth)
In today’s digital world, every individual seems obsessed with tracking their fitness and health statistics using the built-in pedometers of their portable and wearable devices, such as smartphones, smartwatches, fitness dashboards, or tablets. With an increasingly
mobile society in almost all aspects of life, the healthcare infrastructure needs remod-
eling to accommodate mobile devices [13]. The practice of medicine and public health
using mobile devices, known as mHealth or mobile health, pervades different degrees of
health care especially for chronic diseases, such as diabetes and cancer [14]. Healthcare
organizations are increasingly using mobile health and wellness services for implement-
ing novel and innovative ways to provide care and coordinate health as well as wellness.
Mobile platforms can improve healthcare by accelerating interactive communication
between patients and healthcare providers. In fact, Apple and Google have developed
devoted platforms like Apple’s ResearchKit and Google Fit for developing research appli-
cations for fitness and health statistics [15]. These applications support seamless interac-
tion with various consumer devices and embedded sensors for data integration. These
apps give doctors direct access to a user's overall health data, so both the user and their doctors know the real-time status of the user's body. These apps and smart devices also help by improving wellness planning and encouraging healthy lifestyles. The users or patients can become advocates for their own health.
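The pedometer statistics these platforms share with providers reduce, at their simplest, to an aggregation over timestamped samples. A minimal sketch, with made-up readings:

```python
from collections import defaultdict

# A minimal sketch of rolling wearable step-count samples up into daily
# totals, the kind of summary an mHealth app might share with a provider.
samples = [
    ("2019-06-01", 3200), ("2019-06-01", 4100),
    ("2019-06-02", 9000),
]

def daily_totals(samples):
    """Sum step-count samples per calendar day."""
    totals = defaultdict(int)
    for day, steps in samples:
        totals[day] += steps
    return dict(totals)

totals = daily_totals(samples)
print(totals)  # {'2019-06-01': 7300, '2019-06-02': 9000}
```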
Nature of the big data in healthcare
EHRs can enable advanced analytics and help clinical decision-making by providing
enormous data. However, a large proportion of this data is currently unstructured in
nature. Unstructured data is information that does not adhere to a pre-defined model or organizational framework. One reason for this may simply be that it can be recorded in a myriad of formats. Another reason for opting for an unstructured format is that structured input options (drop-down menus, radio buttons, and
check boxes) can fall short for capturing data of complex nature. For example, we cannot
record the non-standard data regarding a patient’s clinical suspicions, socioeconomic
data, patient preferences, key lifestyle factors, and other related information in any other
way but an unstructured format. It is difficult to group such varied, yet critical, sources of information into an intuitive or unified data format for further analysis using algorithms to understand and improve the patient's care. Nonetheless, the healthcare industry is required to utilize the full potential of these rich streams of information to enhance
the patient experience. In the healthcare sector, it could materialize in terms of better
management, care and low-cost treatments. We are miles away from realizing the ben-
efits of big data in a meaningful way and harnessing the insights that come from it. In
order to achieve these goals, we need to manage and analyze the big data in a systematic
manner.
Management and analysis of big data
Big data refers to huge amounts of a variety of data generated at a rapid rate. The data gathered from various sources is mostly required for optimizing consumer services rather than consumer consumption. This is also true for big data from biomedical research
and healthcare. The major challenge with big data is how to handle this large volume
of information. To make it available to the scientific community, the data must be stored in a file format that is easily accessible and readable for efficient analysis. In the
context of healthcare data, another major challenge is the implementation of high-end
computing tools, protocols and high-end hardware in the clinical setting. Experts from
diverse backgrounds including biology, information technology, statistics, and math-
ematics are required to work together to achieve this goal. The data collected using the
sensors can be made available on a storage cloud with pre-installed software tools devel-
oped by analytic tool developers. These tools would have data mining and ML functions
developed by AI experts to convert the information stored as data into knowledge. Upon
implementation, this would enhance the efficiency of acquiring, storing, analyzing, and visualizing big data from healthcare. The main task is to annotate, integrate, and present this complex data in an appropriate manner for better understanding. In the absence
of such relevant information, the (healthcare) data remains quite cloudy and may not
lead the biomedical researchers any further. Finally, visualization tools developed by
computer graphics designers can efficiently display this newly gained knowledge.
Heterogeneity of data is another challenge in big data analysis. The huge size and
highly heterogeneous nature of big data in healthcare renders it relatively less inform-
ative using the conventional technologies. The most common platforms for operating
the software framework that assists big data analysis are high power computing clusters
accessed via grid computing infrastructures. Cloud computing is one such system that has
virtualized storage technologies and provides reliable services. It offers high reliability,
scalability and autonomy along with ubiquitous access, dynamic resource discovery and
composability. Such platforms can act as a receiver of data from ubiquitous sensors, as a computer to analyze and interpret the data, and as a provider of easy-to-understand web-based visualizations for the user. In IoT, big data processing and analytics can
be performed closer to data source using the services of mobile edge computing cloud-
lets and fog computing. Advanced algorithms are required to implement ML and AI
approaches for big data analysis on computing clusters. A programming language suit-
able for working on big data (e.g. Python, R or other languages) could be used to write
such algorithms or software. Therefore, a good knowledge of biology and IT is required
to handle the big data from biomedical research. Such a combination of both trades usually fits bioinformaticians well. The most common among the various platforms used for
working with big data include Hadoop and Apache Spark. We briefly introduce these
platforms below.
Hadoop
Loading large amounts of (big) data into the memory of even the most powerful of com-
puting clusters is not an efficient way to work with big data. Therefore, the best logical
approach for analyzing huge volumes of complex big data is to distribute and process
it in parallel on multiple nodes. However, the size of data is usually so large that thou-
sands of computing machines are required to distribute and finish processing in a rea-
sonable amount of time. When working with hundreds or thousands of nodes, one has
to handle issues like how to parallelize the computation, distribute the data, and handle
failures. One of the most popular open-source distributed applications for this purpose is Hadoop [16]. Hadoop implements the MapReduce algorithm for processing and generating
large datasets. MapReduce uses map and reduce primitives to map each logical record in the input into a set of intermediate key/value pairs, while the reduce operation combines all the values that share the same key [17]. It efficiently parallelizes the computation,
handles failures, and schedules inter-machine communication across large-scale clusters
of machines. Hadoop Distributed File System (HDFS) is the file system component that
provides a scalable, efficient, and replica based storage of data at various nodes that form
a part of a cluster [16]. Hadoop has other tools that enhance its storage and processing components; therefore, many large companies like Yahoo and Facebook have rapidly adopted it. Hadoop has enabled researchers to use data sets otherwise impossible
to handle. Many large projects, like the determination of a correlation between the air
quality data and asthma admissions, drug development using genomic and proteomic
data, and other such aspects of healthcare are implementing Hadoop. Therefore, with the implementation of the Hadoop system, healthcare analytics need not be held back by data volume.
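The map and reduce primitives described above can be illustrated in plain Python. This is only a single-machine sketch of the programming model; Hadoop's value lies in distributing these same steps, with fault tolerance, across thousands of nodes:

```python
from itertools import groupby
from operator import itemgetter

# A pure-Python sketch of the MapReduce primitives described above;
# Hadoop distributes these same steps across many machines.
def map_phase(records, mapper):
    """Apply the mapper to each record, producing (key, value) pairs."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def reduce_phase(pairs, reducer):
    """Group pairs by key, then combine all values that share a key."""
    pairs.sort(key=itemgetter(0))  # the shuffle/sort step
    return {key: reducer([value for _, value in group])
            for key, group in groupby(pairs, key=itemgetter(0))}

# Classic word count: map emits (word, 1); reduce sums the counts.
lines = ["big data in healthcare", "big data analytics"]
counts = reduce_phase(
    map_phase(lines, lambda line: [(word, 1) for word in line.split()]),
    sum)
print(counts["big"], counts["data"])  # 2 2
```

In real Hadoop jobs the mapper and reducer are user-supplied in exactly this shape, while the framework handles partitioning the input, the shuffle/sort, and machine failures.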
Apache Spark
Apache Spark is another open source alternative to Hadoop. It is a unified engine for
distributed data processing that includes higher-level libraries for supporting SQL que-
ries (Spark SQL), streaming data (Spark Streaming), machine learning (MLlib) and graph
processing (GraphX) [18]. These libraries help in increasing developer productivity
because the programming interface requires less coding effort and can be seamlessly
combined to create more types of complex computations. By implementing Resilient Distributed Datasets (RDDs), Spark supports in-memory processing of data that can make it about 100× faster than Hadoop in multi-pass analytics (on smaller datasets) [19,
20]. This is especially true when the data size is smaller than the available memory [21]. This indicates that processing truly big data with Apache Spark would require a large amount of memory. Since memory costs more than hard-drive storage, MapReduce is expected to be more cost-effective for large datasets compared to Apache Spark. Similarly, Apache Storm was developed to provide a real-time framework for data stream
processing. This platform supports most of the programming languages. Additionally,
it offers good horizontal scalability and built-in fault-tolerance capability for big data
analysis.
Machine learning for information extraction, data analysis and predictions
In healthcare, patient data contains recorded signals for instance, electrocardiogram
(ECG), images, and videos. Healthcare providers have barely managed to convert such
healthcare data into EHRs. Efforts are underway to digitize patient histories from pre-EHR-era notes and supplement the standardization process by turning static images into
machine-readable text. For example, optical character recognition (OCR) software is one
such approach that can recognize handwriting as well as computer fonts and push digi-
tization. Such unstructured and structured healthcare datasets have untapped wealth of
information that can be harnessed using advanced AI programs to draw critical action-
able insights in the context of patient care. In fact, AI has emerged as the method of
choice for big data applications in medicine. This smart system has quickly found its
niche in decision making process for the diagnosis of diseases. Healthcare professionals
analyze such data for targeted abnormalities using appropriate ML approaches. ML can
filter out structured information from such raw data.
Extracting information from EHR datasets
Emerging ML- or AI-based strategies are helping to refine the healthcare industry's information processing capabilities. For example, natural language processing (NLP) is a rapidly
developing area of machine learning that can identify key syntactic structures in free
text, help in speech recognition and extract the meaning behind a narrative. NLP tools
can help generate new documents, like a clinical visit summary, or to dictate clinical
notes. The unique content and complexity of clinical documentation can be challenging
for many NLP developers. Nonetheless, we should be able to extract relevant informa-
tion from healthcare data using such approaches as NLP.
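A crude sketch of this kind of information extraction can be built with regular expressions alone. Real NLP systems are far more sophisticated; the note text, patterns, and drug list below are entirely hypothetical:

```python
import re

# A minimal, regex-based sketch of pulling structured fields out of a
# free-text clinical note; real NLP pipelines use richer linguistic models.
note = "Patient reports headache. BP 142/91 mmHg. Prescribed lisinopril 10 mg daily."

def extract_blood_pressure(text):
    """Return (systolic, diastolic) if a BP reading appears in the note."""
    match = re.search(r"BP\s+(\d{2,3})/(\d{2,3})", text)
    return (int(match.group(1)), int(match.group(2))) if match else None

def extract_medications(text, formulary=("lisinopril", "metformin", "aspirin")):
    """Match the note against a small, illustrative drug list."""
    return [drug for drug in formulary if drug in text.lower()]

print(extract_blood_pressure(note))  # (142, 91)
print(extract_medications(note))     # ['lisinopril']
```

Patterns like these break down quickly on the "unique content and complexity of clinical documentation" the text mentions, which is exactly why statistical NLP models are needed for free-text narratives.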
AI has also been used to provide predictive capabilities to healthcare big data. For
example, ML algorithms can convert the diagnostic system of medical images into auto-
mated decision-making. Though it is apparent that healthcare professionals will not be replaced by machines in the near future, AI can definitely assist physicians to make
better clinical decisions or even replace human judgment in certain functional areas of
healthcare.
Image analytics
Some of the most widely used imaging techniques in healthcare include computed
tomography (CT), magnetic resonance imaging (MRI), X-ray, molecular imaging, ultra-
sound, photo-acoustic imaging, functional MRI (fMRI), positron emission tomography
(PET), electroencephalography (EEG), and mammograms. These techniques capture
high-definition medical images (patient data) of large sizes. Healthcare professionals such as radiologists and physicians do an excellent job of analyzing these files for targeted abnormalities. However, there is a recognized shortage of such specialized professionals for many diseases. To compensate for this dearth of professionals, efficient systems like the Picture Archiving and Communication System (PACS) have been developed for storing and conveniently accessing medical image and report data [22]. PACSs are popular for delivering images to local workstations, using protocols such as Digital Imaging and Communications in Medicine (DICOM). However, data exchange with a PACS relies on structured data to retrieve medical images, which by nature misses the unstructured information contained in some biomedical images. Moreover, additional information about a patient's health status present in these images may be overlooked: a professional focused on diagnosing an unrelated condition might not notice it, especially while the condition is still emerging. To help in such situations, image analytics is making an impact on healthcare by actively extracting disease biomarkers from biomedical images.
This approach uses ML and pattern recognition techniques to draw insights from mas-
sive volumes of clinical image data to transform the diagnosis, treatment and monitor-
ing of patients. It focuses on enhancing the diagnostic capability of medical imaging for
clinical decision-making.
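One of the simplest building blocks behind such pipelines is segmentation. The sketch below shows intensity thresholding on a made-up 4x4 "scan" and derives a crude size biomarker from the resulting mask; the grid values and threshold are invented for illustration, and real pipelines use learned models rather than a fixed cutoff.

```python
# Toy 4x4 intensity grid standing in for a medical image.
scan = [
    [12, 15, 14, 11],
    [13, 90, 95, 12],
    [11, 92, 88, 14],
    [10, 13, 12, 11],
]

def segment(image, threshold):
    """Return a binary mask marking pixels above the threshold (candidate region)."""
    return [[1 if px > threshold else 0 for px in row] for row in image]

def region_area(mask):
    """Number of flagged pixels: a crude size biomarker for the region."""
    return sum(sum(row) for row in mask)

mask = segment(scan, threshold=50)
print(region_area(mask))  # 4 pixels exceed the threshold
```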
A number of software tools have been developed around functionalities such as generic processing, registration, segmentation, visualization, reconstruction, simulation, and diffusion to perform medical image analysis and uncover hidden information. For example, the Visualization Toolkit (VTK) is freely available software that allows powerful processing and analysis of 3D images from medical tests [23], while SPM can process and analyze five different types of brain images (MRI, fMRI, PET, CT scan, and EEG) [24]. Other software, such as GIMIAS, Elastix, and MITK, supports all types of images. Various other widely used tools and their features in this domain are listed in Table 1. Such bioinformatics-based big data analysis may extract greater insight and value from imaging data to boost and support precision medicine projects, clinical decision support tools, and other modes of healthcare, for example in monitoring new targeted cancer treatments.
Table 1 Bioinformatics tools for medical image processing and analysis

Tools/software and their home pages: VTK (https://vtk.org/), ITK (https://itk.org/), DTI-TK (http://dti-tk.sourceforge.net/pmwiki/pmwiki.php), ITK-Snap (http://www.itksnap.org/pmwiki/pmwiki.php), FSL (https://fsl.fmrib.ox.ac.uk/fsl/fslwiki/), SPM (https://www.fil.ion.ucl.ac.uk/spm/), NiftyReg (http://sourceforge.net/projects/niftyreg), NiftySeg (http://sourceforge.net/projects/niftyseg), NiftySim (http://sourceforge.net/projects/niftysim), NiftyRec (http://sourceforge.net/projects/niftyrec), ANTS (http://picsl.upenn.edu/software/ants/), GIMIAS (http://www.gimias.org/), elastix (http://elastix.isi.uu.nl/), MIA (http://mia.sourceforge.net/), MITK (http://www.mitk.org/wiki), Camino (http://web4.cs.ucl.ac.uk/research/medic/camino/pmwiki/pmwiki.php?n=Main.HomePage), IMOD (https://omictools.com/imod-tool), MRIcron (https://omictools.com/mricron-tool), OsiriX (https://omictools.com/osirix-tool)

[The table compares each tool on supported input image types (MRI, ultrasound, X-ray, fMRI, PET, CT scan, EEG, mammogram), availability of a graphical user interface, and supported functions (generic, registration, segmentation, visualization, reconstruction, simulation, diffusion); the per-tool support matrix was not recoverable from the extracted text.]
Big data from omics
Big data from "omics" studies presents a new kind of challenge for bioinformaticians: robust algorithms are required to analyze such complex data from biological systems, with the ultimate goal of converting this huge body of data into an informative knowledge base. The application of bioinformatics approaches to transform biomedical and genomics data into predictive and preventive health is known as translational bioinformatics, and it is at the forefront of data-driven healthcare. Various kinds of quantitative healthcare data, for example laboratory measurements, medication data, and genomic profiles, can be combined and used to identify new metadata that can support precision therapies [25]. This is why emerging technologies are needed to help analyze this digital wealth. Indeed, highly ambitious multimillion-dollar projects such as the "Big Data Research and Development Initiative" have been launched to enhance the quality of big data tools and techniques for better organization, more efficient access, and smarter analysis of big data. Many advantages are anticipated from processing "omics" data from the large-scale Human Genome Project and other population sequencing projects. In population sequencing projects like 1000 Genomes, researchers have access to an enormous amount of raw data; similarly, the Human Genome Project-based Encyclopedia of DNA Elements (ENCODE) project aimed to determine all functional elements in the human genome using bioinformatics approaches. Here, we list some of the widely used bioinformatics-based tools for big data analytics on omics data.
1. SparkSeq is an efficient, cloud-ready platform based on the Apache Spark framework and Hadoop library that is used for interactive analysis of genomic data with nucleotide precision.
2. SAMQA identifies errors and ensures the quality of large-scale genomic data. This tool was originally built for the National Institutes of Health Cancer Genome Atlas project to identify and report errors, including sequence alignment/map (SAM) format errors and empty reads.
3. ART can simulate profiles of read errors and read lengths for data obtained from high-throughput sequencing platforms, including SOLiD and Illumina.
4. DistMap is another toolkit for distributed short-read mapping on a Hadoop cluster that aims to cover a wider range of sequencing applications. For instance, its BWA mapper can process 500 million read pairs in about 6 h, roughly 13 times faster than a conventional single-node mapper.
5. SeqWare is a query engine based on the Apache HBase database system that enables access to large-scale whole-genome datasets by integrating genome browsers and tools.
6. CloudBurst is a parallel computing model utilized in genome mapping experiments
to improve the scalability of reading large sequencing data.
7. Hydra uses the Hadoop-distributed computing framework for processing large pep-
tide and spectra databases for proteomics datasets. This specific tool is capable of
performing 27 billion peptide scorings in less than 60 min on a Hadoop cluster.
8. BlueSNP is an R package based on the Hadoop platform used for genome-wide association study (GWAS) analysis, primarily aimed at the statistical readouts that establish significant associations between genotype and phenotype datasets. This tool is estimated to analyze 1000 phenotypes across 10^6 SNPs in 10^4 individuals in about half an hour.
9. Myrna, a cloud-based pipeline, provides information on gene expression level differences, including read alignments, data normalization, and statistical modeling.
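The statistical core of a GWAS readout, such as the one BlueSNP parallelizes, can be sketched for a single SNP as a chi-square test on a 2x2 allele-count table. The counts below are invented for illustration, and 3.84 is the 5% critical value of the chi-square distribution with one degree of freedom.

```python
def chi_square_2x2(a, b, c, d):
    """Chi-square statistic for a 2x2 allele-count table:
                 cases   controls
    alt allele     a        b
    ref allele     c        d
    """
    n = a + b + c + d
    num = n * (a * d - b * c) ** 2
    den = (a + b) * (c + d) * (a + c) * (b + d)
    return num / den

# Illustrative counts, not real data: the alt allele is enriched in cases.
stat = chi_square_2x2(60, 40, 40, 60)
print(round(stat, 2))   # 8.0
print(stat > 3.84)      # exceeds the 5% critical value for 1 d.f.: True
```

A real GWAS repeats this (or a regression-based equivalent) across millions of SNPs, which is exactly why Hadoop-style distribution pays off.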
The past few years have witnessed a tremendous increase in disease-specific datasets from omics platforms. For example, the ArrayExpress Archive of Functional Genomics data repository contains information from approximately 30,000 experiments and more than one million functional assays. The growing amount of data demands better and more efficient bioinformatics-driven packages to analyze and interpret the information obtained, and has led to the birth of tools dedicated to such massive amounts of data. Below, we mention some of the most popular commercial platforms for big data analytics.
Commercial platforms for healthcare data analytics
To tackle big data challenges and perform smoother analytics, various companies have implemented AI to analyze published results, textual data, and image data and obtain meaningful outcomes. IBM Corporation is one of the biggest and most experienced players in this sector, providing healthcare analytics services commercially. IBM's Watson Health is an AI platform for sharing and analyzing health data among hospitals, providers, and researchers. Similarly, Flatiron Health provides technology-oriented services in healthcare analytics, especially focused on cancer research. Other big companies, such as Oracle Corporation and Google Inc., are also developing cloud-based storage and distributed computing platforms. Interestingly, in recent years several companies and start-ups have emerged to provide healthcare-based analytics and solutions. Some of the vendors in the healthcare sector are listed in Table 2. Below we discuss a few of these commercial solutions.
AYASDI
Ayasdi is one such major vendor, focusing on ML-based methodologies to provide a machine intelligence platform along with an application framework of proven enterprise scalability. It offers various applications for healthcare analytics, for example to understand and manage clinical variation and to transform the cost of clinical care. It can also analyze and manage how hospitals are organized, conversations between doctors, doctors' risk-oriented treatment decisions, and the care they deliver to patients. In addition, it provides an application for the assessment and management of population health, a proactive strategy that goes beyond traditional risk-analysis methodologies: it uses ML to predict future risk trajectories, identify risk drivers, and propose solutions for the best outcomes. A strategic illustration of the company's analytics methodology is provided in Fig. 4.
Linguamatics
Linguamatics' platform is built on NLP and relies on an interactive text-mining engine (I2E). I2E can extract and analyze a wide array of information; results are obtained up to tenfold faster than with other tools and do not require expert knowledge for data interpretation. The approach can surface genetic relationships and facts from unstructured data. Classical ML requires well-curated data as input to generate clean, filtered results, whereas NLP integrated into EHRs or clinical records facilitates the extraction of clean, structured information that often remains hidden in unstructured input data (Fig. 5).
IBM Watson
This is one of the tech giant IBM's signature platforms, targeting big data analytics in almost every professional sector. The platform uses ML- and AI-based algorithms
Table 2 Some big companies providing big data analysis services in the healthcare sector

IBM Watson Health: services for sharing clinical and health-related data among hospitals, researchers, and providers for advanced research (https://www.ibm.com/watson/health/index-1.html)
MedeAnalytics: performance management solutions, health systems and plans, and health analytics, with a long track record of handling patient data (https://medeanalytics.com/)
Health Fidelity: risk assessment solutions for healthcare organization workflows, and methods for optimization and adjustment (https://healthfidelity.com/)
Roam Analytics: platforms for digging into big unstructured healthcare data to extract meaningful information (https://roamanalytics.com/)
Flatiron Health: applications for organizing and improving oncology data for better cancer treatment (https://flatiron.com/)
Enlitic: deep learning on large-scale clinical test datasets for healthcare diagnosis (https://www.enlitic.com/)
Digital Reasoning Systems: cognitive computing services and data analytics solutions for processing and organizing unstructured data into meaningful data (https://digitalreasoning.com/)
Ayasdi: AI-based platform for clinical variation, population health, risk management, and other healthcare analytics (https://www.ayasdi.com/)
Linguamatics: text-mining platform for extracting important information from unstructured healthcare data (https://www.linguamatics.com/)
Apixio: cognitive computing platform for analyzing clinical data and PDF health records to generate deep information (https://www.apixio.com/)
Roam Analytics: natural language processing infrastructure for modern healthcare systems (https://roamanalytics.com/)
Lumiata: analytics and risk management services for efficient healthcare outcomes (https://www.lumiata.com)
OptumHealth: healthcare analytics, improvements to modern health system infrastructure, and comprehensive, innovative solutions for the healthcare industry (https://www.optum.com/)
Fig. 4 Illustration of application of “Intelligent Application Suite” provided by AYASDI for various analyses
such as clinical variation, population health, and risk management in healthcare sector
Fig. 5 Schematic representation for the working principle of NLP-based AI system used in massive data
retention and analysis in Linguamatics
Fig. 6 IBM Watson in healthcare data analytics. Schematic representation of the various functional modules in IBM Watson's big data healthcare package. For instance, the drug discovery domain involves a highly coordinated network of data acquisition and analysis, spanning database curation to the construction of meaningful pathways for elucidating novel druggable targets
extensively to extract maximum information from minimal input. IBM Watson integrates a wide array of healthcare domains to provide meaningful, structured data (Fig. 6). In an attempt to uncover novel drug targets, specifically in cancer disease models, IBM Watson and Pfizer have formed a productive collaboration to accelerate the discovery of novel immuno-oncology combinations. Watson's deep learning modules, integrated with AI technologies, allow researchers to interpret complex genomic data sets. IBM Watson has been used to predict specific types of cancer based on gene expression profiles obtained from various large data sets, pointing to multiple druggable targets. It is also used in drug discovery programs, integrating curated literature and building network maps to provide a detailed overview of the molecular landscape in a specific disease model.
To analyze diversified medical data, the healthcare domain describes analytics in four categories: descriptive, diagnostic, predictive, and prescriptive. Descriptive analytics describes and comments on the current medical situation, whereas diagnostic analytics explains the reasons and factors behind the occurrence of certain events, for example choosing a treatment option for a patient based on clustering and decision trees. Predictive analytics focuses on forecasting future outcomes by determining trends and probabilities; these methods are mainly built on machine learning techniques and help in understanding the complications a patient may develop. Prescriptive analytics proposes actions toward optimal decision-making, for example the decision to avoid a given treatment based on observed side effects and predicted complications. Integrating big data into healthcare analytics can be a major factor in improving the performance of current medical systems, but sophisticated strategies need to be developed: an architecture of best practices for the different analytics in the healthcare domain is required to integrate big data technologies and improve outcomes. However, there are many challenges associated with implementing such strategies.
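The four categories can be lined up on one deliberately tiny example. The monthly admission counts and the staffing threshold below are invented for illustration; the point is only how the four kinds of question differ.

```python
# Toy series of monthly admission counts (invented data).
admissions = [100, 104, 110, 118, 121, 130]

# Descriptive: what happened? (summarize the past)
mean = sum(admissions) / len(admissions)

# Diagnostic: why did it happen? (here, the average month-over-month change)
deltas = [b - a for a, b in zip(admissions, admissions[1:])]
avg_growth = sum(deltas) / len(deltas)

# Predictive: what is likely next? (naive linear extrapolation)
forecast = admissions[-1] + avg_growth

# Prescriptive: what should we do about it? (a simple staffing rule)
action = "add staff" if forecast > 125 else "hold staffing"

print(mean, avg_growth, forecast, action)
```

Real systems replace each step with far richer models (clustering, decision trees, ML forecasts), but the division of labor among the four categories is the same.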
Challenges associated with healthcare big data
Methods for big data management and analysis are being continuously developed, especially for real-time data streaming, capture, aggregation, analytics (using ML and predictive modeling), and visualization solutions that can help integrate better utilization of EMRs into healthcare. For example, adoption of federally tested and certified EHR programs in the U.S. healthcare sector is nearly complete [7]. However, the availability of hundreds of government-certified EHR products, each with different clinical terminologies, technical specifications, and functional capabilities, has led to difficulties in interoperability and data sharing. Nonetheless, we can safely say that the healthcare industry has entered a 'post-EMR' deployment phase; the main objective now is to gain actionable insights from the vast amounts of data collected as EMRs. Here, we discuss some of these challenges in brief.
Storage
Storing large volumes of data is one of the primary challenges. Many organizations are comfortable with data storage on their own premises, which offers advantages such as control over security, access, and uptime. However, an on-site server network can be expensive to scale and difficult to maintain. With decreasing costs and increasing reliability, cloud-based storage has become the better option, and most healthcare organizations have opted for it. Organizations must choose cloud partners that understand the importance of healthcare-specific compliance and security issues. Additionally, cloud storage offers lower up-front costs, nimble disaster recovery, and easier expansion. Organizations can also take a hybrid approach to their data storage programs, which may be the most flexible and workable approach for providers with varying data access and storage needs.
Cleaning
After acquisition, data needs to be cleansed, or scrubbed, to ensure accuracy, correctness, consistency, relevancy, and purity. This cleaning process can be manual or automated using logic rules to ensure high levels of accuracy and integrity. More sophisticated and precise tools use machine learning techniques to reduce time and expense and to stop foul data from derailing big data projects.
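A rule-based cleansing pass of the kind described above can be sketched as follows. The field names and validity ranges are assumptions made for this example, not clinical standards.

```python
# Per-field validity rules; ranges are illustrative, not clinical guidance.
RULES = {
    "heart_rate": lambda v: 20 <= v <= 250,
    "temp_c": lambda v: 30.0 <= v <= 45.0,
}

def clean(records):
    """Split records into (kept, rejected) using the per-field rules."""
    kept, rejected = [], []
    for rec in records:
        ok = all(rule(rec[field]) for field, rule in RULES.items() if field in rec)
        (kept if ok else rejected).append(rec)
    return kept, rejected

records = [
    {"heart_rate": 72, "temp_c": 36.8},
    {"heart_rate": 999, "temp_c": 36.5},  # sensor glitch
    {"heart_rate": 58, "temp_c": 61.0},   # likely a unit error
]
kept, rejected = clean(records)
print(len(kept), len(rejected))  # 1 2
```

ML-based cleansers learn such rules from data instead of hard-coding them, but the kept/rejected split is the same contract.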
Unified format
Patients produce a huge volume of data that is not easy to capture in traditional EHR formats, as it is knotty and not easily manageable. Big data is especially difficult for healthcare providers to handle when it arrives without proper organization. The need to codify all clinically relevant information surfaced for claims, billing, and clinical analytics; therefore, medical coding systems such as the Current Procedural Terminology (CPT) and International Classification of Diseases (ICD) code sets were developed to represent core clinical concepts. However, these code sets have their own limitations.
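At its simplest, codification maps free-text diagnoses onto standard codes. The sketch below uses a tiny phrase-lookup table with three real ICD-10 codes; the lookup approach and the phrases are illustrative only, and production coding engines are vastly more sophisticated.

```python
# Tiny illustrative subset of ICD-10; not a clinical resource.
ICD10 = {
    "type 2 diabetes": "E11.9",
    "essential hypertension": "I10",
    "acute myocardial infarction": "I21.9",
}

def code_diagnosis(text):
    """Return the first ICD-10 code whose phrase appears in the note, else None."""
    lowered = text.lower()
    for phrase, code in ICD10.items():
        if phrase in lowered:
            return code
    return None

print(code_diagnosis("Patient with Essential Hypertension, stable."))  # I10
```

The limitation the text mentions shows up immediately: any wording not in the table ("high blood pressure") maps to nothing, which is why real coders combine terminologies with NLP.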
Accuracy
Some studies have observed that the reporting of patient data into EMRs or EHRs is not yet entirely accurate [26–29], probably because of poor EHR usability, complex workflows, and an incomplete understanding of why capturing big data well is so important. All of these factors can contribute to quality issues for big data throughout its lifecycle. EHRs are intended to improve the quality and communication of data in clinical workflows, though reports indicate discrepancies in these contexts. Documentation quality might improve through the use of self-report questionnaires that capture patients' own symptoms.
Image pre‑processing
Studies have identified various physical factors that can degrade data quality and lead to misinterpretation of existing medical records [30]. Medical images often suffer from technical barriers involving multiple types of noise and artifacts. Improper handling can also tamper with images, for instance producing delineations of anatomical structures such as veins that do not correlate with the real case. Noise reduction, artifact clearing, contrast adjustment of acquired images, and image quality adjustment after mishandling are some of the measures that can be implemented.
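The simplest of these noise-reduction measures can be sketched as a median filter, shown here in 1-D on a synthetic signal with a single speckle artifact; real image pre-processing applies the same idea in 2-D or 3-D.

```python
def median3(signal):
    """Replace each interior sample with the median of its 3-sample window."""
    out = list(signal)
    for i in range(1, len(signal) - 1):
        out[i] = sorted(signal[i - 1 : i + 2])[1]  # middle of the sorted window
    return out

noisy = [10, 10, 250, 10, 10]   # single speckle artifact at index 2
print(median3(noisy))           # [10, 10, 10, 10, 10]
```

Unlike a mean filter, the median discards the outlier entirely rather than smearing it into its neighbors, which is why median filtering is a standard first pass against speckle noise.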
Security
There have been so many security breaches, hackings, phishing attacks, and ransomware episodes that data security has become a priority for healthcare organizations. In response to an array of observed vulnerabilities, a list of technical safeguards was developed for protected health information (PHI). These rules, termed the HIPAA Security Rules, guide organizations on storage, transmission, authentication protocols, and controls over access, integrity, and auditing. Common security measures such as up-to-date anti-virus software, firewalls, encryption of sensitive data, and multi-factor authentication can save a lot of trouble.
Meta‑data
A successful data governance plan requires complete, accurate, and up-to-date metadata for all stored data. Such metadata would record information like the time of creation, the purpose of and person responsible for the data, and previous usage (by whom, why, how, and when) for researchers and data analysts. This would allow analysts to replicate previous queries, support later scientific studies, and enable accurate benchmarking. It increases the usefulness of data and prevents the creation of "data dumpsters" of little or no use.
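A minimal metadata record of this kind can be sketched as a small data structure with an audit trail. The field names are assumptions for the example, not a governance standard.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DatasetMetadata:
    name: str
    owner: str
    purpose: str
    created: str
    usage_log: list = field(default_factory=list)  # who/why/when audit entries

    def record_use(self, who, why):
        """Append an audit entry so later analysts can replicate the query."""
        self.usage_log.append(
            {"who": who, "why": why,
             "when": datetime.now(timezone.utc).isoformat()}
        )

meta = DatasetMetadata("ed_visits_2019", "analytics_team",
                       "readmission study", created="2019-06-01")
meta.record_use("j.doe", "baseline cohort query")
print(len(meta.usage_log))  # 1
```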
Querying
Metadata would make it easier for organizations to query their data and get answers. However, without proper interoperability between datasets, query tools may not be able to access an entire data repository. In addition, the different components of a dataset must be well interconnected, or linked, and easily accessible; otherwise a complete portrait of an individual patient's health cannot be generated. Medical coding systems such as ICD-10, SNOMED CT, or LOINC must be implemented to reduce free-form concepts to a shared ontology. When the accuracy, completeness, and standardization of the data are not in question, Structured Query Language (SQL) can be used to query large datasets and relational databases.
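Such an SQL query over coded data can be sketched with Python's built-in sqlite3 module. The LOINC code 2345-7 (serum/plasma glucose) is real; the table layout and rows are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # throwaway in-memory database
conn.execute("CREATE TABLE labs (patient_id TEXT, loinc TEXT, value REAL)")
conn.executemany(
    "INSERT INTO labs VALUES (?, ?, ?)",
    [("P-001", "2345-7", 180.0),   # glucose, elevated
     ("P-002", "2345-7", 95.0),    # glucose, normal
     ("P-001", "718-7", 13.5)],    # hemoglobin
)

# All patients with an elevated glucose result, found via the shared ontology.
rows = conn.execute(
    "SELECT patient_id, value FROM labs WHERE loinc = ? AND value > ?",
    ("2345-7", 140.0),
).fetchall()
print(rows)  # [('P-001', 180.0)]
```

Because every lab result carries a standard code rather than free text ("blood sugar", "glucose level"), one parameterized query covers the whole repository.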
Visualization
Clean and engaging visualization of data, with charts, heat maps, and histograms to illustrate contrasting figures, and correct labeling of information to reduce potential confusion, can make it much easier to absorb information and use it appropriately. Other examples include bar charts, pie charts, and scatterplots, each with its own specific way of conveying data.
Data sharing
Patients may receive their care at multiple locations, in which case sharing data with other healthcare organizations becomes essential. If the data is not interoperable, data movement between disparate organizations can be severely curtailed by technical and organizational barriers, leaving clinicians without key information for decisions about follow-up and treatment strategies. Solutions such as the Fast Healthcare Interoperability Resources (FHIR) standard with public APIs, CommonWell (a not-for-profit trade association), and Carequality (a consensus-built common interoperability framework) are making data interoperability and sharing easy and secure. The biggest roadblock to data sharing is the treatment of data as a commodity that can provide a competitive advantage; as a result, both providers and vendors sometimes intentionally block the flow of information between different EHR systems [31].
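Standards such as FHIR work by fixing the shape of exchanged resources. The sketch below builds a minimal FHIR R4-style Patient resource and round-trips it through JSON as an API exchange would; only a few core fields are shown, and the values are invented.

```python
import json

# Minimal FHIR R4-style Patient resource (subset of fields, invented values).
patient = {
    "resourceType": "Patient",
    "id": "example-001",
    "name": [{"family": "Doe", "given": ["Jane"]}],
    "gender": "female",
    "birthDate": "1980-04-12",
}

payload = json.dumps(patient)        # what a public FHIR API would send
received = json.loads(payload)       # what the receiving system parses
print(received["resourceType"], received["name"][0]["family"])  # Patient Doe
```

Because both sides agree on field names like `resourceType` and `birthDate`, the receiving EHR can ingest the record without custom per-vendor translation.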
Healthcare providers will need to overcome every challenge on this list, and more, to develop a big data exchange ecosystem that provides trustworthy, timely, and meaningful information by connecting all members of the care continuum. Time, commitment, funding, and communication will be required before these challenges are overcome.
Big data analytics for cutting costs
To develop a big data-based healthcare system that can exchange big data and provide trustworthy, timely, and meaningful information, we need to overcome every challenge mentioned above, which will require investment of time, funding, and commitment. However, as with other technological advances, the success of these ambitious steps should ease the present burdens on healthcare, especially its costs. It is believed that implementing big data analytics could save healthcare organizations over 25% in annual costs in the coming years. Better diagnosis and disease prediction through big data analytics can reduce costs by decreasing hospital readmission rates: healthcare firms do not yet understand the variables responsible for readmissions well enough, and by determining these relationships, organizations could improve their protocols for dealing with patients and prevent readmissions. Big data analytics can also help optimize staffing, forecast operating room demand, streamline patient care, and improve the pharmaceutical supply chain. All of these factors will ultimately reduce organizations' healthcare costs.
Quantum mechanics and big data analysis
Big data sets can be staggering in size, so their analysis remains daunting even on the most powerful modern computers. For most analyses, the bottleneck lies in the computer's ability to access its memory rather than in the processor [32, 33]; the capacity, bandwidth, and latency requirements of the memory hierarchy outweigh the computational requirements so much that supercomputers are increasingly used for big data analysis [34, 35]. An additional solution is to apply a quantum approach to big data analysis.
Quantum computing and its advantages
Common digital computing uses binary digits (bits) to encode data, whereas quantum computation uses quantum bits, or qubits [36]. A qubit is the quantum version of the classical binary bit: it can represent a zero, a one, or any linear combination (superposition) of those two states [37]. Because n qubits can exist in a superposition of all 2^n classical bit patterns at once, quantum computers can, for suitable problems, work dramatically faster than regular computers: representing a dataset with 2^n points would require 2^n classical processing units, whereas just n qubits suffice on a quantum computer. Quantum computers exploit quantum mechanical phenomena such as superposition and quantum entanglement to perform computations [38, 39].
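The compactness claim can be made concrete with a tiny classical simulation: the state of n qubits is a vector of 2^n complex amplitudes, so even 3 qubits already carry 8 amplitudes. The function below builds the state that applying a Hadamard gate to every qubit of |0...0> would produce; it is a pedagogical sketch, not a quantum algorithm.

```python
import math

def uniform_superposition(n):
    """State vector of n qubits in equal superposition: 2**n amplitudes,
    each 1/sqrt(2**n), so the squared amplitudes sum to 1."""
    dim = 2 ** n
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = uniform_superposition(3)
print(len(state))                                     # 8 amplitudes from 3 qubits
print(abs(sum(a * a for a in state) - 1.0) < 1e-12)   # normalized: True
```

The flip side is also visible: simulating the state classically costs memory exponential in n, which is exactly the resource a real quantum computer would not need to spend.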
Quantum algorithms can speed up big data analysis exponentially [40]. Some complex problems believed to be unsolvable by conventional computing can be solved by quantum approaches. Conversely, current encryption techniques such as RSA, public-key (PK) cryptography, and the Data Encryption Standard (DES), considered unbreakable today, may become irrelevant in the future because quantum computers will get through them quickly [41]. Quantum approaches can also dramatically reduce the information required for big data analysis. For example, quantum theory can maximize the distinguishability of a multilayer network using a minimum number of layers [42]. In addition, quantum approaches require a relatively small dataset to obtain maximally sensitive data analysis compared with conventional (machine learning) techniques. Quantum approaches can therefore drastically reduce the computational power required to analyze big data. Even though quantum computing is still in its infancy and presents many open challenges, it is already being implemented for healthcare data.
Applications in big data analysis
Quantum computing is gaining momentum and appears to be a potential solution for big data analysis. For example, the identification of rare events, such as the production of Higgs bosons at the Large Hadron Collider (LHC), can now be performed using quantum approaches [43]. The LHC generates huge amounts of collision data (1 PB/s) that must be filtered and analyzed. One such approach, quantum annealing for ML (QAML), which combines ML and quantum computing with a programmable quantum annealer, helps reduce human intervention and increase the accuracy of assessing particle-collision data. In another example, a quantum support vector machine was implemented for both the training and classification stages to classify new data [44]. Such quantum approaches could find applications in many areas of science [43]. Indeed, a recurrent quantum neural network (RQNN) was implemented to increase signal separability in electroencephalogram (EEG) signals [45], and quantum annealing was applied to beamlet intensity optimization in intensity-modulated radiotherapy (IMRT) [46]. Further healthcare-related applications of quantum approaches include quantum sensors and quantum microscopes [47].
Conclusions and future prospects
Nowadays, various biomedical and healthcare tools such as genomics, mobile biometric
sensors, and smartphone apps generate large amounts of data. Therefore, it is essential to understand and assess what can be achieved using this data. For example,
the analysis of such data can provide further insights in terms of procedural, technical,
medical, and other types of improvements in healthcare. After reviewing these healthcare procedures, it appears that the full potential of patient-specific or personalized medicine is on its way to being realized. The collective big data analysis of EHRs, EMRs
and other medical data is continuously helping build a better prognostic framework.
The companies providing services for healthcare analytics and clinical transformation are indeed contributing towards better and more effective outcomes.
Page 23 of 25 Dash et al. J Big Data (2019) 6:54
Common goals of these companies include reducing the cost of analytics, developing effective Clinical Decision Support (CDS) systems, providing platforms for better treatment strategies, and identifying and preventing fraud associated with big data. However, almost all of them face regulatory challenges concerning how private data is handled, shared, and kept safe.
The combined pool of data from healthcare organizations and biomedical researchers has resulted in a better outlook on, determination of, and treatment of various diseases. This
has also helped in building a better and healthier personalized healthcare framework.
The modern healthcare community has realized the potential of big data and has therefore implemented big data analytics in healthcare and clinical practices. From supercomputers to quantum computers, these systems are helping to extract meaningful information from big data in dramatically reduced time periods. With high hopes of extracting new and actionable
knowledge that can improve the present status of healthcare services, researchers are
plunging into biomedical big data despite the infrastructure challenges. Clinical trials, joint analysis of pharmacy and insurance claims, and the discovery of biomarkers are all part of novel and creative ways to analyze healthcare big data.
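The joint analysis of pharmacy and insurance claims mentioned above reduces, in its simplest form, to linking records across sources on a shared patient identifier. A minimal sketch, with all field names and records purely illustrative:

```python
# Minimal record-linkage sketch: join pharmacy and insurance claims
# on a shared patient identifier. All field names/data are illustrative.
pharmacy = [
    {"patient_id": "P1", "drug": "metformin"},
    {"patient_id": "P2", "drug": "atorvastatin"},
]
insurance = [
    {"patient_id": "P1", "diagnosis": "type 2 diabetes"},
    {"patient_id": "P3", "diagnosis": "hypertension"},
]

# Index insurance claims by patient for O(1) lookups.
by_patient = {c["patient_id"]: c for c in insurance}

# Inner join: keep only patients present in both sources.
linked = [
    {"patient_id": rx["patient_id"],
     "drug": rx["drug"],
     "diagnosis": by_patient[rx["patient_id"]]["diagnosis"]}
    for rx in pharmacy if rx["patient_id"] in by_patient
]
```

Real-world linkage adds the hard parts this sketch omits: privacy-preserving identifiers, fuzzy matching, and reconciliation of conflicting records.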
Big data analytics helps bridge the gap between structured and unstructured data sources. The shift to an integrated data environment is a well-known hurdle to overcome. Interestingly, the principle of big data relies heavily on the idea that the more information one has, the more insights one can gain from it and the better one can predict future events. Various reliable consulting firms and healthcare companies rightly project that the big data healthcare market is poised to grow at an exponential
rate. However, even in this short span we have witnessed a spectrum of analytics come into use that has had a significant impact on the decision making and performance of the healthcare industry. The exponential growth of medical data from various domains has forced computational experts to design innovative strategies for analyzing and interpreting such enormous amounts of data within a given timeframe. The integration of computational systems for signal processing into both research and clinical practice has also grown. Thus, developing a detailed model of the human body by combining
physiological data and “-omics” techniques can be the next big target. This unique idea
can enhance our knowledge of disease conditions and possibly help in the development
of novel diagnostic tools. The continuous rise in available genomic data, including the inherent hidden errors from experimental and analytical practices, needs further attention. However, there are opportunities at each step of this extensive process to introduce systematic improvements in healthcare research.
The high volume of medical data collected across heterogeneous platforms challenges data scientists to integrate and implement it carefully. It is therefore suggested that a further revolution in healthcare is needed to bring together bioinformatics, health informatics, and analytics to promote personalized and more effective treatments.
Furthermore, new strategies and technologies should be developed to understand the
nature (structured, semi-structured, unstructured), complexity (dimensions and attrib-
utes) and volume of the data to derive meaningful information. The greatest asset of
big data lies in its limitless possibilities. The birth and integration of big data within the
past few years has brought substantial advancements in the health care sector ranging
from medical data management to drug discovery programs for complex human dis-
eases including cancer and neurodegenerative disorders. To quote a simple example
supporting this idea: since the late 2000s, the healthcare market has witnessed
advancements in the EHR system in the context of data collection, management and
usability. We believe that big data will complement and bolster the existing pipeline of healthcare advances rather than replace skilled personnel, subject-matter experts, and intellectuals, as many have argued it might. One can clearly see the transition of the healthcare market from a broad, volume-based domain to a personalized or individual-specific one. It is therefore essential for technologists and professionals to understand this evolving situation. In the coming years, it can be projected that big data analytics will march towards a predictive system. This would mean predicting future outcomes of an individual's health state based on current or existing data (such as EHR-based and omics-based data).
Similarly, it can be presumed that structured information obtained from a certain geographic region might lead to the generation of population-level health information. Taken together,
big data will facilitate healthcare by introducing prediction of epidemics (in relation to
population health), providing early warnings of disease conditions, and helping in the
discovery of novel biomarkers and intelligent therapeutic intervention strategies for an
improved quality of life.
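As a toy illustration of the EHR-based early-warning idea above, a rule-based score over vital signs might look as follows; the thresholds and field names are illustrative assumptions, not taken from any published scoring system:

```python
# Toy early-warning score over EHR-like vitals records.
# Thresholds and field names are illustrative only.
def warning_score(vitals):
    """Return a simple risk score from a dict of vital signs."""
    score = 0
    if vitals["heart_rate"] > 110 or vitals["heart_rate"] < 50:
        score += 2
    if vitals["systolic_bp"] < 100:
        score += 2
    if vitals["temperature_c"] >= 38.5:
        score += 1
    if vitals["spo2"] < 92:
        score += 3
    return score

patients = [
    {"id": "A", "heart_rate": 72, "systolic_bp": 120,
     "temperature_c": 36.8, "spo2": 98},
    {"id": "B", "heart_rate": 118, "systolic_bp": 95,
     "temperature_c": 38.9, "spo2": 90},
]

# Flag patients whose score crosses an (illustrative) alert threshold.
alerts = [p["id"] for p in patients if warning_score(p) >= 4]
```

Production early-warning systems replace such hand-set rules with models learned from longitudinal EHR data, which is precisely where the predictive analytics discussed above comes in.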
Acknowledgements
Not applicable.
Authors’ contributions
MS wrote the manuscript. SD and SKS added significant discussion that greatly improved the quality of the manuscript. SK designed the content sequence, guided SD, SS and MS in writing and revising the manuscript and checked the
manuscript. All authors read and approved the final manuscript.
Funding
None.
Availability of data and materials
Not applicable.
Ethics approval and consent to participate
Not applicable.
Consent for publication
Not applicable.
Competing interests
The authors declare that they have no competing interests.
Author details
1 Department of Pathology and Laboratory Medicine, Weill Cornell Medicine, New York 10065, NY, USA. 2 Center
of Biological Engineering, University of Minho, Campus de Gualtar, 4710-057 Braga, Portugal. 3 SilicoLife Lda, Rua do
Canastreiro 15, 4715-387 Braga, Portugal. 4 Postgraduate School for Molecular Medicine, Warszawskiego Uniwersytetu
Medycznego, Warsaw, Poland. 5 Małopolska Centre for Biotechnology, Jagiellonian University, Kraków, Poland. 6 3B’s
Research Group, Headquarters of the European Institute of Excellence on Tissue Engineering and Regenerative Medicine,
AvePark – Parque de Ciência e Tecnologia, Zona Industrial da Gandra, Barco, 4805-017 Guimarães, Portugal.
Received: 17 January 2019 Accepted: 6 June 2019
References
1. Laney D. 3D data management: controlling data volume, velocity, and variety, Application delivery strategies. Stam-
ford: META Group Inc; 2001.
2. Mauro AD, Greco M, Grimaldi M. A formal definition of big data based on its essential features. Libr Rev.
2016;65(3):122–35.
3. Gubbi J, et al. Internet of Things (IoT): a vision, architectural elements, and future directions. Future Gener Comput
Syst. 2013;29(7):1645–60.
4. Doyle-Lindrud S. The evolution of the electronic health record. Clin J Oncol Nurs. 2015;19(2):153–4.
5. Gillum RF. From papyrus to the electronic tablet: a brief history of the clinical medical record with lessons for the
digital Age. Am J Med. 2013;126(10):853–7.
6. Reiser SJ. The clinical record in medicine part 1: learning from cases. Ann Intern Med. 1991;114(10):902–7.
7. Reisman M. EHRs: the challenge of making electronic data usable and interoperable. Pharm Ther. 2017;42(9):572–5.
8. Murphy G, Hanken MA, Waters K. Electronic health records: changing the vision. Philadelphia: Saunders W B Co;
1999. p. 627.
9. Shameer K, et al. Translational bioinformatics in the era of real-time biomedical, health care and wellness data
streams. Brief Bioinform. 2017;18(1):105–24.
10. Service RF. The race for the $1000 genome. Science. 2006;311(5767):1544–6.
11. Stephens ZD, et al. Big data: astronomical or genomical? PLoS Biol. 2015;13(7):e1002195.
12. Yin Y, et al. The internet of things in healthcare: an overview. J Ind Inf Integr. 2016;1:3–13.
13. Moore SK. Unhooking medicine [wireless networking]. IEEE Spectr. 2001;38(1):107–8, 110.
14. Nasi G, Cucciniello M, Guerrazzi C. The role of mobile technologies in health care processes: the case of cancer sup-
portive care. J Med Internet Res. 2015;17(2):e26.
15. Apple, ResearchKit/ResearchKit: ResearchKit 1.5.3. 2017.
16. Shvachko K, et al. The hadoop distributed file system. In: Proceedings of the 2010 IEEE 26th symposium on mass
storage systems and technologies (MSST). New York: IEEE Computer Society; 2010. p. 1–10.
17. Dean J, Ghemawat S. MapReduce: simplified data processing on large clusters. Commun ACM. 2008;51(1):107–13.
18. Zaharia M, et al. Apache Spark: a unified engine for big data processing. Commun ACM. 2016;59(11):56–65.
19. Gopalani S, Arora R. Comparing Apache Spark and Map Reduce with performance analysis using K-means; 2015.
20. Ahmed H, et al. Performance comparison of spark clusters configured conventionally and a cloud service. Procedia Comput Sci. 2016;82:99–106.
21. Saouabi M, Ezzati A. A comparative between hadoop mapreduce and apache Spark on HDFS. In: Proceedings of the
1st international conference on internet of things and machine learning. Liverpool: ACM; 2017. p. 1–4.
22. Strickland NH. PACS (picture archiving and communication systems): filmless radiology. Arch Dis Child.
2000;83(1):82–6.
23. Schroeder W, Martin K, Lorensen B. The visualization toolkit. 4th ed. Clifton Park: Kitware; 2006.
24. Friston K, et al. Statistical parametric mapping. London: Academic Press; 2007. p. vii.
25. Li L, et al. Identification of type 2 diabetes subgroups through topological analysis of patient similarity. Sci Transl
Med. 2015;7(311):311ra174.
26. Valikodath NG, et al. Agreement of ocular symptom reporting between patient-reported outcomes and medical
records. JAMA Ophthalmol. 2017;135(3):225–31.
27. Fromme EK, et al. How accurate is clinician reporting of chemotherapy adverse effects? A comparison with patient-
reported symptoms from the Quality-of-Life Questionnaire C30. J Clin Oncol. 2004;22(17):3485–90.
28. Beckles GL, et al. Agreement between self-reports and medical records was only fair in a cross-sectional
study of performance of annual eye examinations among adults with diabetes in managed care. Med Care.
2007;45(9):876–83.
29. Echaiz JF, et al. Low correlation between self-report and medical record documentation of urinary tract infection
symptoms. Am J Infect Control. 2015;43(9):983–6.
30. Belle A, et al. Big data analytics in healthcare. Biomed Res Int. 2015;2015:370194.
31. Adler-Milstein J, Pfeifer E. Information blocking: is it occurring and what policy strategies can address it? Milbank Q.
2017;95(1):117–35.
32. Or-Bach Z. A 1,000x improvement in computer systems by bridging the processor-memory gap. In: 2017 IEEE SOI-3D-subthreshold microelectronics technology unified conference (S3S); 2017.
33. Mahapatra NR, Venkatrao B. The processor-memory bottleneck: problems and solutions. XRDS. 1999;5(3es):2.
34. Voronin AA, Panchenko VY, Zheltikov AM. Supercomputations and big-data analysis in strong-field ultrafast optical
physics: filamentation of high-peak-power ultrashort laser pulses. Laser Phys Lett. 2016;13(6):065403.
35. Dollas A. Big data processing with FPGA supercomputers: opportunities and challenges. In: 2014 IEEE computer society annual symposium on VLSI; 2014.
36. Saffman M. Quantum computing with atomic qubits and Rydberg interactions: progress and challenges. J Phys B:
At Mol Opt Phys. 2016;49(20):202001.
37. Nielsen MA, Chuang IL. Quantum computation and quantum information. 10th anniversary ed. Cambridge: Cam-
bridge University Press; 2011. p. 708.
38. Raychev N. Quantum computing models for algebraic applications. Int J Scientific Eng Res. 2015;6(8):1281–8.
39. Harrow A. Why now is the right time to study quantum computing. XRDS. 2012;18(3):32–7.
40. Lloyd S, Garnerone S, Zanardi P. Quantum algorithms for topological and geometric analysis of data. Nat Commun.
2016;7:10138.
41. Buchanan W, Woodward A. Will quantum computers be the end of public key encryption? J Cyber Secur Technol.
2017;1(1):1–22.
42. De Domenico M, et al. Structural reducibility of multilayer networks. Nat Commun. 2015;6:6864.
43. Mott A, et al. Solving a Higgs optimization problem with quantum annealing for machine learning. Nature.
2017;550:375.
44. Rebentrost P, Mohseni M, Lloyd S. Quantum support vector machine for big data classification. Phys Rev Lett.
2014;113(13):130503.
45. Gandhi V, et al. Quantum neural network-based EEG filtering for a brain-computer interface. IEEE Trans Neural Netw
Learn Syst. 2014;25(2):278–88.
46. Nazareth DP, Spaans JD. First application of quantum annealing to IMRT beamlet intensity optimization. Phys Med
Biol. 2015;60(10):4137–48.
47. Reardon S. Quantum microscope offers MRI for molecules. Nature. 2017;543(7644):162.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Big data in healthcare: management, analysis and future prospects