The term risk is used in a variety of ways in everyday speech. We frequently refer to activities
such as rock-climbing or day-trading stocks as risky, or discuss our risk of getting the flu
this coming winter. In the case of rock-climbing and day-trading, "risky" is used to mean
hazardous or dangerous. In the latter case, risk refers to the probability of a defined
outcome (the chance of contracting the flu). Before beginning a discussion of risk assessment, it
is important to provide a clear definition of the term risk and some of the other terminology
used in the risk assessment field.
For our purposes, we will limit our discussion to the risk of unintended incidents
which may threaten the safety of individuals, the environment or a facility's physical assets. In
this setting, we can define a number of terms:
Hazards or threats are existing conditions that may potentially lead to an
undesirable event.
Controls are the measures taken to prevent hazards from causing undesirable events.
Controls can be physical (safety shutdowns, redundant controls, conservative designs,
etc.), procedural (written operating procedures), and can address human factors
(employee selection, training, supervision).
An event is an occurrence that has an associated outcome. There are typically a number
of potential outcomes from any one initial event which may range in severity from trivial
to catastrophic, depending upon other conditions and add-on events.
Risk is composed of two elements, frequency and consequence. Risk is defined as the
product of the frequency with which an event is anticipated to occur and the consequence
of the event's outcome.
Risk = Frequency × Consequence
1.2.
The frequency of a potential undesirable event is expressed as events per unit time,
usually per year. The frequency should be determined from historical data if a significant
number of events have occurred in the past. Often, however, risk analyses focus on
events with more severe consequences (and low frequencies) for which little historical
data exist. In such cases, the event frequency is calculated using risk assessment models.
Consequence can be expressed as the number of people affected (injured or killed),
property damaged, amount of spill, area affected, outage time, mission delay, dollars lost,
etc. Regardless of the measure chosen, the consequences are expressed per event. Thus
the above equation has the units events/year times consequences/event, which equals
consequences/year, the most typical quantitative risk measure.
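As a sketch, the frequency × consequence product can be computed directly; the event names and numbers below are hypothetical, chosen only to illustrate the units (events/year × consequence/event = consequence/year):

```python
# Sketch of the basic risk equation: risk = frequency x consequence.
# The event data below are hypothetical, for illustration only.
events = [
    {"name": "minor spill", "frequency_per_year": 0.5, "consequence_per_event": 10_000},
    {"name": "major spill", "frequency_per_year": 0.001, "consequence_per_event": 5_000_000},
]

for e in events:
    # events/year x consequence/event = consequence/year
    e["risk_per_year"] = e["frequency_per_year"] * e["consequence_per_event"]

# A frequent minor event and a rare major event can carry comparable risk.
print({e["name"]: e["risk_per_year"] for e in events})
```

Note that in this invented example the frequent minor event and the rare major event contribute the same annual risk, which is exactly the kind of comparison the measure enables.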
Risk assessment is the process of gathering data and synthesizing information to develop an
understanding of the risk of a particular enterprise. To gain an understanding of the risk of an
operation, one must answer the following three questions:
What can go wrong?
How likely is it?
What are the consequences?
Before initiating a risk assessment, all parties involved should have a common understanding of
the goals of the exercise, the methods to be used, the resources required, and how the results will
be applied.
Category  | Probability        | Impact
Very low  | Unlikely to occur  | Negligible impact
Low       |                    |
Medium    |                    |
High      | Is likely to occur |
Very high |                    |
Risk Matrix Methods. Risk matrices provide a traceable framework for explicit consideration of
the frequency and consequences of hazards. This may be used to rank them in order of
significance, screen out insignificant ones, or evaluate the need for risk reduction of each hazard.
A risk matrix divides the dimensions of frequency (also known as likelihood or
probability) and consequence (or severity) into typically 3 to 6 categories each. There is little
standardisation in matters such as the size of the matrix, the labelling of the axes etc. To illustrate
this, three different risk matrix approaches are presented below.
In each case, a list of hazards is generated by a structured HAZID technique, and each hazard is
allocated to a frequency and consequence category according to qualitative criteria. The risk
matrix then gives some form of evaluation or ranking of the risk from that particular hazard.
Sometimes risk matrices use quantitative definitions of the frequency and consequence
categories. They may also use numerical indices of frequency and consequence (e.g. 1 to 5) and
then add the frequency and consequence pairs to rank the risks of each hazard or each box on the
risk matrix. In the terms of this guide, this does not constitute quantification (semi or full) and
the technique is still classed as qualitative.
Defence Standard Matrix. This sets out a 6 x 4 risk matrix based on frequency and consequence
definitions as follows. The severity categories are defined as:
Category      | Definition
Catastrophic  | Multiple deaths
Critical      | A single death; and/or multiple severe injuries or severe occupational illnesses
Marginal      | A single severe injury or occupational illness; and/or multiple minor injuries or minor occupational illnesses
Negligible    | At most a single minor injury or minor occupational illness.
Risk class | Interpretation
A          | Intolerable
B          | Undesirable; shall only be accepted when risk reduction is impracticable
C          | Tolerable with the endorsement of the Project Safety Review Committee
D          | Tolerable with the endorsement of normal project reviews
Frequency  | Catastrophic | Critical | Marginal | Negligible
Frequent   | A            | A        | A        | B
Probable   | A            | A        | B        | C
Occasional | A            | B        | C        | C
Remote     | B            | C        | C        | D
Improbable | C            | C        | D        | D
Incredible | C            | D        | D        | D
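The Defence Standard matrix can be encoded as a simple lookup table; the dictionary layout below is just one convenient representation, not part of the standard itself:

```python
# The Defence Standard 6 x 4 risk matrix, encoded as a lookup table.
# Category names follow the text; the dict layout is one convenient
# representation, not prescribed by the standard.
RISK_MATRIX = {
    "Frequent":   {"Catastrophic": "A", "Critical": "A", "Marginal": "A", "Negligible": "B"},
    "Probable":   {"Catastrophic": "A", "Critical": "A", "Marginal": "B", "Negligible": "C"},
    "Occasional": {"Catastrophic": "A", "Critical": "B", "Marginal": "C", "Negligible": "C"},
    "Remote":     {"Catastrophic": "B", "Critical": "C", "Marginal": "C", "Negligible": "D"},
    "Improbable": {"Catastrophic": "C", "Critical": "C", "Marginal": "D", "Negligible": "D"},
    "Incredible": {"Catastrophic": "C", "Critical": "D", "Marginal": "D", "Negligible": "D"},
}

def risk_class(frequency: str, severity: str) -> str:
    """Return the risk class (A = intolerable ... D = tolerable)."""
    return RISK_MATRIX[frequency][severity]

print(risk_class("Occasional", "Critical"))  # -> B
```

Encoding the matrix this way makes the allocation of each hazard to a risk class traceable and repeatable, which is the main benefit the text attributes to risk matrix methods.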
ISO Risk Matrix. This provides a 5 x 5 risk matrix with consequence and likelihood categories
that are easier for many people to interpret.
The ISO matrix uses four types of consequence category (people, assets, environment and
reputation), reflecting current good practice in integrating safety and environmental risk decision
making. The inclusion of asset and reputation risk is more for corporate well-being, but it is
useful because it makes the risk matrix central to the total risk decision process used by companies.
The ISO risk matrix uses more factual likelihood terminology ("has occurred in the operating
company") instead of more general statements ("remote", "likely to occur some time"). Whilst
this makes it easier to apply, it also highlights the difficulty of these approaches for novel
technology, with no operational reliability statistics.
Risk Ranking Matrix. A risk matrix has been proposed for a revision of the IMO Guidelines on
Formal Safety Assessment to assist with hazard ranking. It uses a 7 x 4 matrix, reflecting the
greater potential variation for frequencies than for consequences.
The severity index (SI) is defined as in Figure 2.5.

SI | Severity     | Effects on ship | S (fatalities)
1  | Minor        |                 | 0.01
2  | Significant  |                 | 0.1
3  | Severe       |                 | 1
4  | Catastrophic |                 | 10
FI | Frequency           | Definition | F (per ship year)
7  | Frequent            |            | 10
5  | Reasonably probable |            | 0.1
3  | Remote              |            | 10⁻³
1  | Extremely remote    |            | 10⁻⁵
Applying numeric scales to risk: linear. The next figure doubles the numeric value at each step
on the impact scale. This is perhaps a more useful model, as it gives more weight to risks with a
high impact. A risk with a low probability but a high impact is thus viewed as much more severe
than a risk with a high probability and a low impact. This avoids any averaging out of serious
risks.
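The effect of doubling the impact scale can be sketched numerically; the index values and weights below are illustrative, not taken from the figures:

```python
# Comparing a linear impact scale (1..5) with a doubled scale (1,2,4,8,16).
# Risk score = probability index x impact weight; all values hypothetical.
linear = {1: 1, 2: 2, 3: 3, 4: 4, 5: 5}
doubled = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}

def score(prob_index: int, impact_index: int, scale: dict) -> int:
    return prob_index * scale[impact_index]

# Low probability / high impact vs high probability / low impact:
# on a linear scale the two score the same ...
assert score(1, 5, linear) == score(5, 1, linear)

# ... on a doubled scale the high-impact risk clearly dominates.
print(score(1, 5, doubled), score(5, 1, doubled))  # 16 5
```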
Applying numeric scales to risk: doubled. It is questionable whether the amber risks warrant
separate classification in terms of your response strategy, and it is suggested that you examine
each amber risk and either promote it to red or demote it to green.
Applying numeric scales to risk: demoted/promoted. Cutting your risk categories down in this
way leaves you with two sets of risks requiring a response strategy:
Red risks = Unacceptable. We must spend time, money and effort on a response. This is likely
to be at the level of the individual risk.
Green risks = Acceptable. This does not mean they can be ignored. We will cover them by
means of contingency.
Because hazards are the source of events that can lead to undesirable consequences, analyses to
understand risk exposures must begin by understanding the hazards present. Although hazard
identification seldom provides information directly needed for decision making, it is a critical
step.
Sometimes hazard identification is explicitly performed using structured techniques. Other times
(generally when the hazards of interest are well known), hazard identification is more of an
implicit step that is not systematically performed. Overall, hazard identification focuses a risk
analysis on key hazards of interest and the types of mishaps that these hazards may create. The
following are some of the commonly used techniques to identify hazards.
3.1.
HAZID is a general term used to describe an exercise whose goal is to identify hazards and
associated events that have the potential to result in a significant consequence. For example, a
HAZID of an offshore petroleum facility may be conducted to identify potential hazards with
consequences for personnel (e.g., injuries and fatalities), the environment (e.g., oil spills
and pollution) and financial assets (e.g., production loss/delay). The HAZID technique can be
applied to all or part of a facility or vessel, or it can be applied to analyze operational procedures.
Depending upon the system being evaluated and the resources available, the process used to
conduct a HAZID can vary.
Typically, the system being evaluated is divided into manageable parts, and a team is led through
a brainstorming session (often with the use of checklists) to identify potential hazards associated
with each part of the system. This process is usually performed with a team experienced in the
design and operation of the system being evaluated.
What-if analysis
What-if analysis is a brainstorming approach that uses broad, loosely structured questioning to
(1) postulate potential upsets that may result in mishaps or system performance problems and
(2) ensure that appropriate safeguards against those problems are in place.
This technique relies upon a team of experts brainstorming to generate a comprehensive review
and can be used for any activity or system.
What-if analysis generates qualitative descriptions of potential problems (in the form of
questions and responses) as well as lists of recommendations for preventing problems. It is
applicable for almost every type of analysis application, especially those dominated by relatively
simple failure scenarios.
It can occasionally be used alone, but most often is used to supplement other, more structured
techniques (especially checklist analysis).
Table 3.1 is an example of a portion of a what-if analysis of a vessel's compressed air system.
Checklist analysis
Checklist analysis is a systematic evaluation against pre-established criteria in the form of one or
more checklists. It is applicable for high-level or detailed-level analysis and is used primarily to
provide structure for interviews, documentation reviews and field inspections of the system
being analyzed. The technique generates qualitative lists of conformance and nonconformance
determinations with recommendations for correcting non-conformances. Checklist analysis is
frequently used as a supplement to or integral part of another method (especially what-if
analysis) to address specific requirements.
Table 3.2 is an example of a portion of a checklist analysis of a vessel's compressed air system.
Cargo tanks
…
Compressors
Question: Are air compressor intakes protected against contaminants?
Response: Yes, except for the intake of flammable gases; there is a nearby cargo tank vent.
Recommendation: Consider routing the cargo tank vent to a different location.
…
The HAZOP analysis technique uses special guidewords to prompt an experienced group of
individuals to identify potential hazards or operability concerns relating to pieces of equipment
or systems. Guidewords describing potential deviations from design intent are created by
applying a predefined set of adjectives (e.g., high, low, no) to a predefined set of process
parameters (e.g., flow, pressure, composition). The group then brainstorms potential
consequences of these deviations and if a legitimate concern is identified, they ensure that
appropriate safeguards are in place to help prevent the deviation from occurring. This type of
analysis is generally used on a system level and generates primarily qualitative results, although
some simple quantification is possible. The primary use of the HAZOP methodology is
identification of safety hazards and operability problems of continuous process systems
(especially fluid and thermal systems). For example, this technique would be applicable for an
oil transfer system consisting of multiple pumps, tanks, and process lines.
The HAZOP analysis can also be used to review procedures and sequential operations. Table 3.3
is an example of a portion of a HAZOP analysis performed on a compressed air system onboard
a vessel.
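The guideword-generation step described above can be sketched by crossing adjectives with process parameters; the word lists below are illustrative, not a complete HAZOP guideword set:

```python
# Generating HAZOP deviations by crossing guideword adjectives with
# process parameters. The word lists are illustrative only.
from itertools import product

adjectives = ["no", "high", "low", "reverse"]
parameters = ["flow", "pressure", "temperature", "composition"]

deviations = [f"{adj} {param}" for adj, param in product(adjectives, parameters)]

print(len(deviations))   # 16 candidate deviations
print(deviations[:3])    # ['no flow', 'no pressure', 'no temperature']
```

In practice the team would review each generated deviation (e.g., "no flow"), discard combinations that are not meaningful for the equipment at hand, and brainstorm consequences and safeguards for the rest.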
FMEA is an inductive reasoning approach that is best suited for reviews of mechanical and
electrical hardware systems. This technique is not appropriate to broader marine issues such as
harbor transit or overall vessel safety. The FMEA technique
(1) considers how the failure mode of each system component can result in system performance
problems and
(2) ensures that appropriate safeguards against such problems are in place.
This technique is applicable to any well-defined system, but the primary use is for reviews of
mechanical and electrical systems (e.g., fire suppression systems, vessel steering/propulsion
systems). It also is used as the basis for defining and optimizing planned maintenance for
equipment because the method systematically focuses directly and individually on equipment
failure modes. FMEA generates qualitative descriptions of potential performance problems
(failure modes, root causes, effects, and safeguards) and can be expanded to include quantitative
failure frequency and/or consequence estimates.
Failure mode: A. No start signal when the system pressure is low
Effects (local): Open control circuit; low pressure and air flow in the system
Effects (end): Interruption of the systems supported by compressed air
Causes: Sensor failure or miscalibrated; controller failure or set incorrectly; wiring fault; control circuit relay failure; loss of power for the control circuit
Indications: Low pressure indicated on air receiver pressure gauge; compressor not operating (but has power and no other obvious failure)
Safeguards: Rapid detection because of quick interruption of the supported systems
Recommendations/Remarks: Consider a redundant compressor with separate controls; calibrate sensors periodically in accordance with written procedure

Failure mode: B. No stop signal when the system pressure is high
…
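A worksheet row like the one above can be captured in a simple data structure; the field names below are our own, not a standard FMEA schema:

```python
# One way to capture an FMEA worksheet row, mirroring the columns in the
# compressed-air example above. Field names are illustrative, not a
# standardized schema.
from dataclasses import dataclass, field

@dataclass
class FmeaRecord:
    failure_mode: str
    local_effects: list = field(default_factory=list)
    end_effects: list = field(default_factory=list)
    causes: list = field(default_factory=list)
    indications: list = field(default_factory=list)
    safeguards: list = field(default_factory=list)
    recommendations: list = field(default_factory=list)

record = FmeaRecord(
    failure_mode="No start signal when the system pressure is low",
    causes=["Sensor failure or miscalibrated", "Wiring fault"],
    indications=["Low pressure indicated on air receiver pressure gauge"],
    recommendations=["Consider a redundant compressor with separate controls"],
)
print(record.failure_mode, len(record.causes))
```

Structuring the records this way makes it straightforward to later attach quantitative failure frequency or consequence estimates to each failure mode, as the text notes FMEA can be expanded to do.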
In any effort to identify hazards and assess their associated risks, there must be full consideration
of the interface between the human operators and the systems they operate. Human Factors
Engineering (HFE) issues can be integrated into the methods used to identify hazards, assess
risks, and determine the reliability of safety measures. For instance, hazard identification
guidewords have been developed to prompt a review team to consider human factor design
issues like access, control interfaces, etc.
An understanding of human psychology is essential in estimating the effectiveness of procedural
controls and emergency response systems.
Persons performing risk assessments need to be aware of the human factors impact, and training
for such persons can improve their ability to spot the potential for human contributions to risk.
Risk analysts can easily learn to spot the potential for human error any time human interaction is
an explicit mode of risk control. However, it is equally important to recognize human
contributions to risk when the human activity is implicit in the risk control measure. For
example, a risk assessment of a boiler would soon identify overpressure as a hazard that can
lead to risk of rupture and explosion. The risk assessment might conclude that the combination
of two pressure control measures will result in an acceptably low level of risk. The two measures
are: 1) have a high pressure alarm that will tell the operator to shut down the boiler and vent the
steam, and 2) provide an adequately sized pressure relief valve. The first risk control measure
involves explicit human interaction. Any such control measure should immediately trigger
evaluation of human error scenarios that could negate the effectiveness of the control measure.
The second risk control measure involves implicit human interaction (i.e., a functioning pressure
relief valve does not appear on the boiler all by itself but must be installed by maintenance
personnel.)
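The boiler example can be sketched numerically, assuming an overpressure demand rate, barrier failure probabilities (all hypothetical values, not from the text), and independence between the two control measures:

```python
# Hedged sketch of the boiler example: an overpressure demand rate combined
# with the failure probabilities of two barriers. All numbers are
# hypothetical placeholders.
demand_rate = 0.1              # overpressure demands per year (assumed)
p_alarm_operator_fails = 0.1   # alarm fails OR operator fails to act (assumed)
p_relief_valve_fails = 0.01    # relief valve fails on demand (assumed)

# Rupture requires the demand AND both barriers failing (assumed independent).
rupture_frequency = demand_rate * p_alarm_operator_fails * p_relief_valve_fails
print(f"{rupture_frequency:.0e} ruptures/year")  # 1e-04 ruptures/year
```

The human error scenarios discussed above (missed alarm, wrongly sized or badly installed relief valve) would show up here as larger values of the two barrier failure probabilities, so the calculation makes explicit how much the acceptably low result depends on those human contributions.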
A checklist of common errors or an audit of the management system for operator training are
examples of methods used to address the human error potential and ensure that it also is
controlled.
The purpose of any tool would be to identify the potential for error and identify how the error is
prevented. Does the operator know what the alarm means? Does he know how to shut down the
boiler? What if the overpressure event is one of a series of events (e.g. what if the operator has
five alarms sounding simultaneously)? Did the engineer properly size and specify the relief
valve? Was it installed correctly? Has it been tested or maintained to ensure its function? A
corollary to each of the above questions is required in the analysis: How do you know?
The answer to that last question is most often found in the management system, thus Human
Factors is the glue that ties risk assessment from a technology standpoint to risk assessment
from an overall quality management standpoint.
Analysis of Historical Data. The best way to assign a frequency to an event is to research
industry databases and locate good historical frequency data which relates to the event being
analyzed. Before applying historical frequency data, a thoughtful analysis of the data should be
performed to determine its applicability to the event being evaluated. The analyst needs to
consider the source of the data, the statistical quality of the data (reporting accuracy, size of data
set, etc.) and the relevance of the data to the event being analyzed. For example, transportation
data relating to helicopter crashes in the North Sea may not be directly applicable to Gulf of
Mexico operations due to significant differences in atmospheric conditions and the nature of
helicopter operating practices. In another case, frequency data for a certain type of vessel
navigation equipment failure may be found to be based on a very small sample of reported
failures, resulting in a number which is not statistically valid.
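As a minimal sketch, an event frequency can be estimated from a historical record of events over total exposure time; the counts below are invented for illustration:

```python
# Estimating an event frequency from historical records: events observed
# over total exposure time. The counts below are invented for illustration.
events_observed = 3
exposure_ship_years = 1500.0

frequency = events_observed / exposure_ship_years  # events per ship-year
print(f"{frequency:.1e} events per ship-year")     # 2.0e-03

# With only a handful of events the estimate is statistically weak; a crude
# check is the relative standard error of a Poisson count, 1/sqrt(n).
relative_error = events_observed ** -0.5
print(f"~{relative_error:.0%} relative uncertainty")  # ~58%
```

The uncertainty check illustrates the point made above: a frequency built on a very small sample of reported failures may not be statistically valid.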
When good, applicable frequency data cannot be found, it may be necessary to estimate the
frequency of an event using one of the analytical methods described below.
Event Tree Analysis (ETA). Event tree analysis utilizes decision trees to graphically model the
possible outcomes of an initiating event capable of producing an end event of interest. This type
of analysis can provide
(1) qualitative descriptions of potential problems (combinations of events producing various
types of problems from initiating events) and
(2) quantitative estimates of event frequencies or likelihoods, which assist in demonstrating the
relative importance of various failure sequences.
Event tree analysis may be used to analyze almost any sequence of events, but is most effectively
used to address possible outcomes of initiating events for which multiple safeguards are in line
as protective features.
The following example event tree (Figure 3.3) illustrates the range of outcomes for a tanker
having redundant steering and propulsion systems. In this particular example, the tanker can be
steered using the redundant propulsion systems even if the vessel loses both steering systems.
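The quantification of such an event tree can be sketched as follows; the initiating frequency and branch probabilities are hypothetical, not taken from Figure 3.3:

```python
# Quantifying a simple event tree for the tanker example: an initiating
# event (loss of primary steering) followed by two safeguard branches.
# Frequencies and branch probabilities are hypothetical.
initiating_frequency = 0.01        # losses of primary steering per ship-year (assumed)
p_backup_steering_fails = 0.05     # backup steering fails on demand (assumed)
p_propulsion_steering_fails = 0.1  # steering via redundant propulsion fails (assumed)

f_recovered_by_backup = initiating_frequency * (1 - p_backup_steering_fails)
f_recovered_by_propulsion = (initiating_frequency * p_backup_steering_fails
                             * (1 - p_propulsion_steering_fails))
f_total_loss_of_steering = (initiating_frequency * p_backup_steering_fails
                            * p_propulsion_steering_fails)

# Branch frequencies must sum back to the initiating event frequency.
total = f_recovered_by_backup + f_recovered_by_propulsion + f_total_loss_of_steering
print(f_total_loss_of_steering, abs(total - initiating_frequency) < 1e-12)
```

The sum check at the end is a useful sanity test for any event tree: the outcome frequencies must account for every path from the initiating event.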
Once the successful steps have been identified, the assessor can determine what the person
might do wrong at each step to reach the undesirable result. Some examples of potential
problem areas are:
Written procedures not complete or hard to understand
Instrumentation inoperative or inadequate
Lack of knowledge by the operator
Conflicting priorities
Labeling inadequacies
Policy versus practice discrepancies
Equipment not operating according to design specifications
Communication difficulties
Poor ergonomics
Oral versus written procedures
Making a repair or performing maintenance with a wrong tool
Each of the above situations increases the probability that an individual will err in the
performance of a task. This is important since the next stage in human reliability analysis is
assigning likelihood estimates to human errors. When examining each of the potential human
errors in the context of a scenario, the analysis must systematically look at each step and each
potential error identified. If there are a large number of potential errors, the assessor may decide
to conduct a preliminary screening to determine which errors are less or more likely to occur and
then choose to only assign values to the more likely errors. For determining likelihood, the
assessor can produce qualitative estimates, (e.g., low, medium or high) or quantitative estimates
(e.g., 0.003) using existing human failure databases. From either, it can be determined which
individual errors are most likely to cause an individual's performance to fall short of the
desired result. Upon reviewing the estimates, error reduction strategies can be developed to
minimize the frequency of human error. Minimizing human error will also reduce the
likelihood of the overall scenario occurring.
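A screening-level combination of step error probabilities might look like the following; the step names and HEP values are hypothetical, not from a published database:

```python
# Screening-level human reliability sketch: per-step human error
# probabilities (HEPs) combined into an overall task failure estimate.
# Step names and HEP values are hypothetical.
step_heps = {
    "read alarm correctly": 0.003,
    "select correct procedure": 0.01,
    "execute shutdown steps": 0.005,
}

# Assuming independent errors, task success is the product of step successes.
p_success = 1.0
for hep in step_heps.values():
    p_success *= (1.0 - hep)

p_task_failure = 1.0 - p_success
print(f"{p_task_failure:.4f}")  # -> 0.0179
```

In a real analysis the independence assumption would itself need scrutiny, since conditions such as simultaneous alarms or conflicting priorities tend to make step errors correlated.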
Consequence modeling typically involves the use of analytical models to predict the effect of a
particular event of concern. Examples of consequence models include source term models,
atmospheric dispersion models, blast and thermal radiation models, aquatic transport models and
mitigation models. Most consequence modeling today makes use of computerized analytical
models.
Use of these models in the performance of a risk assessment typically involves four activities:
Characterizing the source of the material or energy associated with the hazard being
analyzed
Measuring (through costly experiments) or estimating (using models and correlations)
the transport of the material and/or the propagation of the energy in the environment to
the target of interest
Identifying the effects of the propagation of energy or material on the target of interest
Quantifying the health, safety, environmental, or economic impacts on the target of
interest
Many sophisticated models and correlations have been developed for consequence analysis.
Millions of dollars have been spent researching the effects of exposure to toxic materials on the
health of animals. The effects are extrapolated to predict effects on human health. A considerable
empirical database exists on the effects of fires and explosions on structures and equipment, and
large, sophisticated experiments are sometimes performed to validate computer algorithms for
predicting the atmospheric dispersion of toxic materials. All of these resources can be used to
help predict the consequences of accidents. But, only those consequence assessment steps
needed to provide the information necessary for decision making should be performed.
The result from the consequence assessment step is an estimate of the statistically expected
exposure of the target population to the hazard of interest and the safety/health effects related to
that level of exposure.
The form of consequence estimate generated should be determined by the objectives and scope
of the study. Consequences are usually stated in the expected number of injuries or casualties or,
in some cases, exposure to certain levels of energy or material release. These estimates
customarily account for average meteorological conditions and population distribution and may
include mitigating factors, such as evacuation and sheltering. In some cases, simply assessing the
quantity of material or energy released will provide an adequate basis for decision making.
Like frequency estimates, consequence estimates may have very large uncertainties. Estimates
that vary by a factor of up to two orders of magnitude can result from (1) basic uncertainties in
3.9.
Once the hazards and potential mishaps or events have been identified for a system or process,
and the frequencies and consequences associated with these events have been estimated, we are
able to evaluate the relative risks associated with the events. There are a variety of qualitative
and quantitative techniques used to do this.
Subjective Prioritization. Perhaps the simplest qualitative form of risk characterization is
subjective prioritization. In this technique, the analysis team identifies potential mishap scenarios
using structured hazard analysis techniques (e.g., HAZOP, FMEA). The analysis team
subjectively assigns each scenario a priority category based on the perceived level of risk.
Priority categories can be:
Low, medium, high;
Numerical assignments; or
Priority levels.
Risk Categorization/Risk Matrix. Another method to characterize risk is categorization. In this
case, the analyst must
(1) define the likelihood and consequence categories to be used in evaluating each scenario
and
(2) define the level of risk associated with each likelihood/consequence category combination.
Frequency and consequence categories can be developed in a qualitative or quantitative manner.
Qualitative schemes (e.g., low, medium or high) typically use qualitative criteria and examples
of each category to ensure consistent event classification. Multiple consequence classification
criteria may be required to address safety, environmental, operability and other types of
consequences. Table 3.5 and Table 3.6 provide examples of criteria for categorization of
consequences and likelihood.
Category | Description  | Definition
1        | Negligible   | Passenger inconvenience, minor damage
2        | Marginal     | Marine injuries treated by first aid, significant damage not affecting seaworthiness
3        | Critical     | Reportable marine casualty
4        | Catastrophic | Death, loss of vessel, serious marine incident
Table 3.5. Consequence criteria
Likelihood     | Description
Low            | The mishap scenario is considered highly unlikely.
Low to Medium  | The mishap scenario is considered unlikely. It could happen, but it would be surprising if it did.
Medium to High | The mishap scenario might occur. It would not be too surprising if it did.
High           | The mishap scenario has occurred in the past and/or is expected to occur in the future.
Table 3.6. Likelihood criteria
Risk Sensitivity. When presenting quantitative risk assessment results, it is often desirable to
demonstrate the sensitivity of the risk estimates to changes in critical assumptions made within
the analysis. This can help illustrate the range of uncertainty associated with the exercise. Risk
sensitivity analyses can also be used to demonstrate the effectiveness of certain risk mitigation
approaches. For example, if increasing the inspection frequency of a piece of equipment could
reduce its failure rate, a sensitivity analysis could be used to demonstrate the difference in
estimated risk levels as the inspection frequency is varied.
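Such a sensitivity analysis can be sketched with an assumed (hypothetical) relationship between inspection frequency and failure rate:

```python
# Sensitivity sketch: how estimated risk responds to an assumed link between
# inspection frequency and equipment failure rate. The model and numbers
# are invented to illustrate the method.
consequence_per_failure = 2_000_000  # dollars lost per failure (assumed)
base_failure_rate = 0.02             # failures/year at 1 inspection/year (assumed)

def failure_rate(inspections_per_year: float) -> float:
    # Assumed model: failure rate falls inversely with inspection frequency.
    return base_failure_rate / inspections_per_year

for n in (1, 2, 4):
    risk = failure_rate(n) * consequence_per_failure  # dollars/year
    print(f"{n} inspections/year -> ${risk:,.0f}/year")
```

Sweeping the assumption in this way shows decision-makers how strongly the risk estimate depends on it, which is the point of the sensitivity exercise.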
4.1.
If a risk or reliability assessment is to efficiently satisfy a particular need, the charter for the risk
assessment team must be well defined. Figure 4.1 contains the various elements of a risk
assessment charter. Defining these elements requires a clear understanding of the reason for the
study, a description of management's needs and an outline of the type of information required
for the study.
Sufficient flexibility must be built into the analysis scope, technical approach, schedule and
resources to accommodate later refinement of any undefined charter element(s) based on
knowledge gained during the study. The risk assessment team must understand and support the
analysis charter; otherwise a useless product may result.
Study Objective. An important and difficult task is concisely translating requirements into study
objectives. For example, if it is necessary to decide between two methods of storing a hazardous
chemical on a vessel, the analysis objective should precisely define that what is needed is the
relative difference between the methods, not a general "determine the risk of these two storage
methods." Asking the risk assessment team for more than is necessary to satisfy the particular
need is counterproductive and can be expensive. For any risk assessment to efficiently produce
the necessary types of results, the requirements must be clearly communicated through well-written objectives.
Scope. Establishing the physical and analytical boundaries for a risk assessment is also a difficult
task. The scope will often need to be proposed by the risk assessment team. Of the items listed in
Figure 4.1, selection of an appropriate level of detail is the scope element that is most crucial to
performing an efficient risk assessment. The risk assessment project team should be encouraged
to use approximate data and gross levels of resolution during the early stages of the risk
assessment. Once the project team determines the areas that are the largest contributors to risk,
more detailed data and finer levels of resolution can be applied to those areas.
There are literally hundreds of diverse risk analysis methods and tools, many of which are highly
applicable to the analysis of marine and offshore systems. Of course, a key to any successful risk
analysis is choosing the right method (or combination of methods) for the situation at hand. A
number of factors influence the choice of analysis approach.
Levels of Analysis. The goal of any risk analysis is to provide information that helps
stakeholders make more informed decisions whenever the potential for losses (e.g., mishaps or
shutdowns) is an important consideration. Thus, the whole process of performing a risk
assessment should focus on providing the type of loss exposure information that decision-makers
will need. The required types of information vary according to many factors, including the
following:
The types of issues being evaluated
The different stakeholders involved
The significance of the risks
The costs associated with controlling the risks
The availability of information/data related to the issue being analyzed
Information needs determine how the analysis should be performed.
The goal is always to perform the minimum level of analysis necessary to provide information
that is just adequate for decision making. In other words, do as little analysis as possible to
develop the information that decision-makers need. Although not always obvious initially,
decision-makers can often make their decisions with risk information that is surprisingly limited
in detail and/or uncertain.
In other cases, very detailed risk assessment models with complicated quantitative risk
characterizations may be necessary. The key is to always begin analyses at as high (i.e., general)
a level as practical and to only perform more detailed evaluations in areas where the additional
analysis will significantly benefit the decision-makers.
More detailed analysis than is necessary not only does not benefit the decision-maker, but also
inappropriately uses time and financial resources that could have been spent implementing
solutions or analyzing other issues.
Figure 4.2 illustrates the concept of performing risk analyses through repetitious layers of
analysis. Each layer of analysis provides more detailed and certain loss exposure information,
but the resources invested in the analysis increase at each level. The filtering effect of each layer
allows only key issues to move into the next more detailed level of analysis. At any point,
sufficient information for decision making may be developed, and the analysis may end at that
level. (Not all levels of analysis will be performed for every issue that arises.) In fact, most
issues will probably be resolved through risk/reliability screening analyses or broadly focused,
detailed analyses. At each level of analysis, the analysis may involve qualitative or quantitative
risk characterizations.
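The layered filtering idea can be sketched as a simple pipeline: each layer applies a coarser or finer screen, and only significant issues pass to the next, more expensive level. The issue names, scores, and thresholds below are invented purely for illustration.

```python
# Illustrative sketch of layered (tiered) risk analysis: each layer costs more,
# but the filtering effect means only key issues reach detailed analysis.
# Issue names, rough risk scores and thresholds are hypothetical.

def screen(issues, threshold):
    """Keep only issues whose rough risk score exceeds this layer's threshold."""
    return [i for i in issues if i["score"] > threshold]

issues = [
    {"name": "pump seal leak", "score": 2},
    {"name": "crane drop over process deck", "score": 7},
    {"name": "gas release at riser", "score": 9},
]

# Layer 1: hazard identification / risk screening (cheap, coarse threshold)
after_screening = screen(issues, threshold=5)

# Layer 2: broadly focused, detailed analysis (only the filtered issues)
after_detailed = screen(after_screening, threshold=8)

print([i["name"] for i in after_screening])
print([i["name"] for i in after_detailed])
```

The analysis can stop at any layer once the surviving list gives decision-makers enough information.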
The following sections briefly describe each level of analysis.
Hazard Identification. Because hazards are the source of events that lead to losses, analyses to
understand loss exposures must begin by understanding the hazards. All risk/reliability analyses
begin at this level (implicitly or explicitly). Analysts with little risk/reliability analysis
experience and some training can successfully perform these types of analyses.
Risk Screening Analysis. In most situations, there are hundreds or even thousands of ways that
losses may occur. Analyzing each of these possibilities individually in detail is not practical in
most instances. Risk screening analyses are high-level (i.e., very general) analyses that broadly
characterize risk levels and identify the most significant areas for further investigation.
Sometimes, this level of analysis is sufficient to provide all of the information that decision
makers need; however, more refined analysis of important issues identified through the risk
screening is most common.
Once the hazards are understood, risk screening should be the next step of any analysis.
Generally, analysts with a modest amount of risk analysis experience and some training can
successfully perform these types of analyses.
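Risk screening of this kind is often done with a qualitative frequency/consequence matrix. The category labels, scores, hazards and ranking rule below are illustrative assumptions, not taken from any particular standard.

```python
# Minimal qualitative risk-matrix screening: rank hazards by frequency and
# consequence category so the most significant go forward for detailed study.
# Categories, scores and hazards are hypothetical.

FREQ = {"rare": 1, "occasional": 2, "frequent": 3}
CONS = {"minor": 1, "serious": 2, "catastrophic": 3}

def risk_rank(frequency, consequence):
    """Combine category indices; a higher rank means a more significant hazard."""
    return FREQ[frequency] * CONS[consequence]

hazards = [
    ("dropped object", "occasional", "serious"),
    ("process leak with ignition", "rare", "catastrophic"),
    ("slip/trip", "frequent", "minor"),
]

ranked = sorted(hazards, key=lambda h: risk_rank(h[1], h[2]), reverse=True)
for name, f, c in ranked:
    print(name, risk_rank(f, c))
```

A screen like this broadly characterizes risk levels without any detailed modeling, which is exactly the role of risk screening analysis.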
Broadly Focused, Detailed Analysis. When specific activities or systems are found to have
particularly significant or uncertain risks, broadly focused, detailed analyses are generally
employed. These analyses use structured tools for identifying the specific combinations of
human errors, equipment failures and external events that lead to consequences of interest. These
analyses may also use qualitative and/or quantitative risk characterizations to help identify the
most appropriate risk management strategies.
Most risk analyses performed are broadly focused, detailed analyses that primarily use
qualitative (or at most, quantitative categorization) risk characterizations. These analyses require
Motivation for analysis. This consideration should be the most important to every analyst.
Performing a risk analysis without understanding its motivation and without having a well-defined purpose is likely to waste valuable resources. A number of issues can shape the purpose
of a given analysis. For example:
What is the primary reason for performing the analysis?
Is the analysis performed as a result of a required policy?
Are insights needed to make risk-based decisions concerning the design or improvement
of an operation or system?
Does the analysis satisfy a regulatory, legal or stakeholder requirement?
Individuals responsible for selecting the most appropriate technique and assembling the
necessary human, technical and physical resources must be provided with a well-defined, written
purpose so that they can efficiently execute the objectives of the analysis.
Types of results needed. The types of results needed are important factors in choosing an
analysis technique. Depending on the motivation for the risk analysis, a variety of results could
be needed to satisfy the study's charter.
Defining the specific type of information needed to satisfy the objective of the analysis is an
important part of selecting the most appropriate analysis technique. The following five categories
of information can be produced from most risk analyses:
List of potential problem areas
List of how these problems occur (i.e., failure modes, causes, sequence)
List of alternatives for reducing the potential for these problems
List of areas needing further analysis and/or input for a quantitative risk analysis
Prioritization of results
Some risk analysis techniques are used solely to identify the critical problem areas associated
with a specific activity or system. If that is the only purpose of the analysis, select a technique
that provides a list or a screening of areas of the activity/system possessing the potential for
some performance problems.
Nearly all of the analysis techniques provide lists of how these problems occur and possible risk
reduction alternatives (i.e., action items). Several of the techniques also prioritize the action
items based on the team's perception of the level of risk associated with the action item.
Types of information available. Two primary conditions define what information is available to
the analysis team:
(1) the current stage of the activity or system at the time of the analysis and
(2) the quality of the documentation and how current it is.
The first condition is generally fixed for any analysis. The stage of life establishes the practical
limit of detailed information available to the analysis team. For example, if a risk analysis is to
be performed on a proposed marine activity, it is unlikely that an organization will have already
produced detailed descriptions of the activity and documented procedures and/or design
drawings for the proposed activity. Thus, if the analyst must choose between the HAZOP
analysis and What-If analysis, this phase-of-life factor would dictate a less-detailed analysis
technique (What-If analysis).
The second condition deals with the quality of the existing documentation and how current it is.
For a risk analysis of an existing activity or system, analysts may find that the design drawings
are not up to date or do not exist in a suitable form. Using any analysis technique with out-of-date information is not only futile but also a waste of time and resources. Thus, if all other factors
point to using a specific technique for the proposed analysis that requires such information, then
the analysts should request that the information be updated before the analysis is performed.
Selecting an approach
Table 4.2 summarizes the risk analysis methods and key characteristics that differentiate the
various methods. The information is summarized in a format to assist in selecting the appropriate
techniques for specific applications.
Often, an assessment is conducted in phases, and it is only necessary to specify the methods to be
used for hazard identification and high-level risk screening analysis to begin the study. As the
scope of more detailed or focused analyses identified during risk screening becomes clear, the
methods for conducting these detailed analyses can be selected.
Summary of methods (Table 4.2): preliminary risk analysis (PRA), what-if/checklist analysis, hazard and operability (HAZOP) analysis.
Summary of methods (Table 4.2, continued):

Fault tree analysis (FTA)
Description: FTA is a deductive analysis technique that graphically models how logical relationships between equipment failures, human errors and external events can combine to cause specific mishaps of interest.
Typical applications: Generally applicable for almost every type of analysis application, but most effectively used to address the fundamental causes of specific system failures dominated by relatively complex combinations of events. Often used for complex electronic, control or communication systems.

Event tree analysis (ETA)
Description: ETA is an inductive analysis technique that graphically models the possible outcomes of an initiating event capable of producing a mishap of interest.
Typical applications: Generally applicable for almost every type of analysis application, but most effectively used to address possible outcomes of initiating events for which multiple safeguards are in place as protective features. Often used for analysis of vessel movement mishaps and propagation of fires/explosions or toxic releases.

Relative ranking/risk indexing
Description: Uses attributes of a vessel, shore facility, port or waterway to calculate index numbers that are useful for making relative comparisons of various alternatives.
Typical applications: Generally applicable to any type of analysis situation as long as a pertinent scoring tool exists.

Coarse risk analysis (CRA)
Description: CRA uses operations/evolutions and associated functions for accomplishing those operations/evolutions to describe the activities of a type of vessel or shore facility. Then, possible deviations in carrying out functions are postulated and evaluated to characterize the risk of possible mishaps, to generate risk profiles in a number of formats and to recommend appropriate risk mitigation actions.
Typical applications: Primarily used to analyze the broad range of operations/evolutions associated with a specific class of vessel. Especially useful when risk-based information is sought to optimize field inspections.

Pareto analysis
Description: Pareto analysis is a prioritization technique, based solely on historical data, that identifies the most significant items among many. This technique employs the 80/20 rule, which states that around 80 percent of the problems are produced by around 20 percent of the causes.
Typical applications: Generally applicable to any type of system, process or activity. Most often used to broadly characterize the most important risk contributors for more detailed analysis.
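The Pareto prioritization described above can be sketched directly from historical incident counts; the cause names and counts below are hypothetical.

```python
# Pareto analysis sketch: from historical incident counts per cause, find the
# smallest set of causes accounting for roughly 80% of recorded problems.
# Cause names and counts are hypothetical.

incident_counts = {
    "mooring line failure": 42,
    "crane mishandling": 25,
    "valve passing": 13,
    "instrument drift": 8,
    "housekeeping": 7,
    "other": 5,
}

def pareto_top(counts, coverage=0.8):
    """Return causes, most frequent first, until `coverage` of the total is reached."""
    total = sum(counts.values())
    picked, covered = [], 0
    for cause, n in sorted(counts.items(), key=lambda kv: kv[1], reverse=True):
        picked.append(cause)
        covered += n
        if covered / total >= coverage:
            break
    return picked

print(pareto_top(incident_counts))
```

The handful of causes returned would then be the candidates for more detailed analysis.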
Summary of methods (Table 4.2, continued): change analysis, common cause failure analysis (CCFA).
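The FTA logic summarized above can be evaluated numerically once the tree is drawn: OR and AND gates combine basic-event probabilities. The gate structure and failure probabilities below are entirely hypothetical, and the basic events are assumed independent.

```python
# Minimal fault tree evaluation sketch. For independent basic events:
# AND gate = product of input probabilities; OR gate = 1 - product of (1 - p).
# The tree (loss of cooling) and all probabilities are hypothetical.

from math import prod

def and_gate(*ps):
    """All inputs must fail (independent events)."""
    return prod(ps)

def or_gate(*ps):
    """Any input failing causes the gate event (independent events)."""
    return 1 - prod(1 - p for p in ps)

# Hypothetical basic-event probabilities (per demand)
p_pump_a = 0.05
p_pump_b = 0.05
p_control_fault = 0.01

# Top event: loss of cooling = (both pumps fail) OR (control system fault)
p_top = or_gate(and_gate(p_pump_a, p_pump_b), p_control_fault)
print(p_top)
```

The same building blocks can model an event tree by multiplying branch probabilities along each path from the initiating event to an outcome.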
4.5.
Once an assessment has been chartered and an approach selected, the risk assessment team can
begin the study effort. The team should follow the approach defined in the charter, and should
arrange for periodic reviews with involved personnel (technical and operations) and
management.
It is critical that the boundaries and conditions set forth in the charter be honored by the team as
the study progresses. If the team determines that changes need to be made to the documented
approach, recommendations should be made to management, and the agreed changes should be
documented.
Periodic reviews are essential to ensure effective transmittal of data and review of the
assumptions and methods used by the risk analysts. The organization must identify a focal point
or focal points who are responsible for coordinating the transmittal of data and review of the
assumptions and techniques applied by the risk analysts and/or risk assessment team. Time must
be allocated for these focal points to conduct this most critical task. If adequate involvement is
not obtained, it is the responsibility of the risk analysts to make the personnel aware of the
potential impact on study validity and/or schedule.
Adequate management reviews should be defined in the charter and conducted throughout the
assessment process. For short studies, it will be adequate to conduct management reviews only at
the times of chartering and presenting results. For longer studies, intermediate management
reviews should be scheduled to review results of various phases of the assessment and to agree
on the path forward based on preliminary findings. The chartering document should be modified
to reflect any agreed changes to study boundaries or approach which arise from these reviews.
Quality reviews should be conducted within the risk analysts' organization to assure that the
study process and deliverables meet established quality criteria. Any shortfalls should be
promptly addressed to assure a high quality service is provided. In some cases, quality programs
may also impact the study. It is important that quality process impacts are identified in the
chartering phase so that they can be incorporated into the study plan and schedule.
Upon conclusion of the risk assessment, final results, conclusions and recommendations should
be documented and approved by the organization.
4.6.
In any decision-making process, there is a tension between (1) the desire for more/better
information and (2) the practicality of improving the information. Even with extraordinary
investment in data collection, significant uncertainty generally remains. So, throughout a
decision-making process, the decision makers and those supplying information must work
together to ensure that efforts to improve data collection (including risk analyses) are only
carried out to an extent proportional to the value of the more refined data obtained through those
efforts. This is why analysts should never jump to highly refined analysis tools without first
trying to satisfy decision-making needs with simpler tools.
Because dealing with uncertainty is inherent in any decision-making process, those involved in
decision making (directly or indirectly) must be aware of the most common sources of
uncertainty: model uncertainty and data uncertainty.
Model uncertainty. The models used in both the overall decision-making framework and in
specific analyses that support decision making (e.g., risk analyses) will never be perfect. The
level of detail in models and defined scope limitations will determine how accurately the model
reflects reality. Often, relatively simple models focusing on the issues that the stakeholders agree
to be most important suffice for decision making. Even if the data were perfect, the model used
would generally introduce some uncertainty into the results.
Data uncertainty. Data uncertainty is an issue that raises much concern during decision making
and can arise from any or all of the following:
The data needed do not exist
The analysts do not know where to collect or do not have the resources to collect the
needed data
The quality of the data is suspect (generally because of the methods used to catalog the
data)
The data have significant natural variability, making use of the data complex
Although steps can be taken to minimize uncertainty in data, all measurements (i.e., data) have
uncertainty associated with them.
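One simple way to keep data uncertainty visible is to carry a range rather than a point value through the risk calculation (Risk = Frequency x Consequence, as defined in Section 1). The frequency bounds and consequence value below are illustrative assumptions.

```python
# Sketch: propagate an uncertain event frequency through the risk calculation
# as an interval instead of a single number. All values are hypothetical.

freq_low, freq_high = 1e-4, 1e-3     # events per year (uncertain data)
consequence = 5_000_000              # assumed loss per event, currency units

risk_low = freq_low * consequence
risk_high = freq_high * consequence

print(f"expected loss: {risk_low:.0f} to {risk_high:.0f} per year")
```

Presenting the interval to decision-makers makes clear whether the uncertainty actually matters for the decision at hand.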
There are a number of things that can go wrong when applying risk assessment techniques. It is
critical that those leading the study are experienced in conducting risk assessments and can steer
Hazards and safety regulations for offshore oil and gas systems
In an ideal world, rules and standards developed to regulate a new industry would be the result of
a systematic evaluation of the hazards and concerns associated with that industry. The potential
risks to be encountered by operators, owners, the public and other impacted groups would be
carefully evaluated, as well as the risks imposed on the natural environment. Following thorough
assessments of risks, a comprehensive and workable set of rules and standards could be
developed which would protect all of the people and natural systems exposed to the new
industry.
In reality, however, rules and standards have seldom been developed in this fashion. At the onset
of an industry's development, the knowledge base does not exist to predict what types of rules
will be needed. Typically, initial regulations and codes are developed to meet the most pressing
needs of the industry and governments involved to enable the new industry to get started.
Requirements usually increase over time in response to events that occur in the industry.
Accidents, environmental incidents and commercial or legal difficulties point to chinks in the
protective armor provided by regulations, and regulators and industry groups rush to fill the gaps
with additional requirements.
This cumulative "adding on" of requirements accurately describes regulatory development for
the oil and gas industry in most countries and for the marine industry. However, with the
emergence of the nuclear industry in the mid-1900s, more systematic approaches to industrial
regulation were developed. Due to the huge perceived risks associated with accidents in the
nuclear industry, it was acknowledged that more predictive methodologies must be used to set
standards for the industry prior to wide-scale development of nuclear facilities. The potential
consequences associated with nuclear incidents were too great to allow operators and regulators
to learn from their mistakes. Many of the predictive risk assessment techniques applied within
the marine and oil and gas industries today originated from the nuclear industry.
5.1.
Offshore oil and gas production systems present a unique combination of equipment and
conditions not observed in any other industry. Although there are few aspects of the industry
which are completely new or novel, the application in an offshore environment can result in new
potential hazards which must be identified and controlled.
Much of the oil and gas processing equipment which is utilized on offshore facilities is similar to
the equipment used onshore for oil production activities or in chemical process plants. Therefore,
many of the hazards associated with the process equipment are well known. However, the
inherent space constraints on offshore structures have resulted in the application of some new
process equipment, and, more importantly, make it difficult to mitigate hazards by separating
equipment, personnel and hazardous materials. Due to the facilities' remote locations, personnel
who operate or service offshore facilities typically live and work offshore for extended periods
of time. In many ways, these aspects of offshore operations are similar to those found in the
shipping industry. However, the operations that take place on offshore oil and gas production facilities are different from those which take place on trading ships.
Another difference between offshore and onshore oil and gas production is the relative
complexity of drilling and construction activities, which contribute significantly to the risk
picture. Due to the remoteness of most offshore facilities and the challenges presented by a
marine environment, drilling and construction projects are typically major undertakings which
42 | P a g e
Equipment-related Hazards:
Rotating equipment hazards
Electrical equipment hazards
Lifting equipment hazards
Defective equipment
Impact by foreign objects
Process-related Hazards:
High pressure liquids and gas
Hydrocarbons under pressure
Temperature (High or very low)
Hydrocarbons and other flammable materials
Toxic substances
Storage of flammable or hazardous materials
Internal erosion/corrosion
Seal or containment failures
Production upsets or deviations
Vent and flare conditions
Ignition sources
Process control failures
Operator error
Safety system failures
Pyrophoric materials
Well-related Hazards:
Pressure containment
Unexpected fluid characteristics (sand, etc.)
Well-servicing activities
External Hazards:
Gas releases
Fires
Dropped objects
Internal Hazards:
Flammable materials/internal fires
Toxic construction materials
Inadequate escape routes and lifesaving equipment
Emergency system failures
Bacterial hazards
Drinking water supply
Food preparation and delivery
Living conditions
Waste disposal
Security hazards
Drilling Operations:
o Personnel Safety
o Rig Operations
   Well control
   Tubular handling
   Lifting operations
o Air and Marine Transport
   Vessel approach and docking or mooring procedures
   Sea and atmosphere conditions
   Severe weather
   Vessel failures
   Diving operations
o Materials Handling
   Rig transfers
   Crane operations
   Storage of drilling equipment and supplies
   Chemical/flammable storage
   Radioactive sources
   Explosives
Construction and Maintenance Operations:
o Personnel Safety
o Marine Transport
   Vessel traffic and mooring
   Sea conditions
   Vessel failures
The U.S. was an early leader in the development of codes and regulations governing oil and gas
development. In more recent years, the U.K. has emerged as a leader in developing performance-oriented requirements. The tables below are not all-inclusive, but summarize the progression of
regulatory development in several key nations. It can be seen that the U.K. has been the most
active in recent years, and many other nations are using U.K. regulations as a model for new
regulatory development.
In the U.K., the Health and Safety Executive (HSE) has jurisdiction over safety regulations for
the offshore oil and gas industry.
Regulation: Offshore Installations (Construction and Survey) Regulations
Driver: Development of the Central North Sea area required larger and more complex offshore facilities.
Description: Followed contemporary industry practice, and required certification demonstrating compliance with prescriptive requirements and periodic surveys of completed installations.

Regulation: Offshore Installations (Safety Case) Regulations
Driver: Implementation of Lord Cullen's recommendations following the Piper Alpha disaster.
Description: For each offshore installation, the operator must prepare a detailed Safety Case describing their safety management system, the measures taken to identify and address all hazards with the potential to cause a major accident, and the evaluation of risks to assure a risk level as low as reasonably practicable.

Regulation: Offshore Installations (Prevention of Fire and Explosion, and Emergency Response) Regulations
Driver: Clarifying Safety Case requirements.
Description: Promotes an integrated risk-based approach to managing fire and explosion hazards and emergency response.

Regulation: Offshore Installations (Design and Construction) Regulations
Driver: Aid in implementing the Safety Case Regulations.
Description: Dispenses with the concept of a Certifying Authority, placing responsibility with the owner or operator (duty holder) to identify safety critical elements and to verify performance through independent review and verification throughout their life cycle.
Regulation: Regulations Concerning Implementation and Use of Risk Analyses in the Petroleum Activities
Driver: Norwegian response to UK Safety Case Regulations.
Description: A brief regulation aimed at improving safety performance through implementation of risk analysis. Operators are required to define acceptable risk and are given flexibility in the methods used to demonstrate the acceptability of their operations.
In Australia, the Department of Minerals and Energy (DME) is the Designated Authority
regarding offshore safety regulations.
Regulation: Australian Safety Case Regime
Driver: Australian response to UK Safety Case Regulations.
Description: Requires submittal of a number of Safety Cases which are similar in content to those required in the UK. Operators are expected to prioritize hazards using QRA, set acceptance criteria, demonstrate that these standards are met, and use cost-benefit analysis to show the risks are ALARP. Non-quantitative approaches may be accepted.
Regulation: Code of Federal Regulations
Driver: Need to provide comprehensive regulatory coverage of the industry.
Description: Provides requirements based largely on API Specifications and Recommended Practices related to structures, process equipment, piping, safety devices and electrical components. Also addresses minimum training requirements. Because hazards associated with offshore systems are considered well known and well analyzed, MMS regulations emphasize design in accordance with good engineering practice and require that operations and maintenance activities follow fundamental safety management principles.

Regulation: Voluntary Safety and Environmental Management Program
Driver: Desire to encourage operators to develop effective safety management systems without the effort and expense of totally redrafting existing regulatory requirements.
Description: Operators are required to implement safety management systems that address 12 key elements. The elements include Hazards Analysis (quantitative risk assessment is not required), Assurance of Quality and Mechanical Integrity of Critical Equipment, Emergency Response and Control, and Audits. Voluntary compliance with this standard is being monitored. If voluntary participation levels are not satisfactory, regulatory solutions will be pursued.

Regulation: State Regulations
Driver: Varied.
Description: With the exception of offshore California and Alaska, state regulations are prescriptive, minimal, and focused on environmental protection and safety of well design. With the exception of requirements for a structural risk analysis offshore California, there are no requirements for the use of risk analysis.
Future trends
Although regulatory requirements which apply to offshore oil and gas development are still quite
different from nation to nation, a degree of uniformity is beginning to emerge in the approach
operators are taking toward project development, design and risk assessment. The dominance of
the major operators in the newest areas of offshore development has played a major role in this
progression. Many of the risk assessments and safety studies that are now required for North Sea
developments in response to Safety Case legislation are becoming corporate standards for the
large global operators.
Ongoing improvement in the safety of offshore facilities relies upon a union of good regulations
and industry codes and standards. Modern regulations are generally becoming more performance
oriented, requiring operators to demonstrate the effectiveness of their safety management
techniques.
More and more, operators are being given the opportunity to demonstrate, typically by means of
risk assessments, the acceptability of new or novel approaches. Industry codes and standards
which are continually improved remain a critical tool for operators to document practices which
have been shown to produce acceptable results and to share learning from new experiences and
approaches.
6.1.
Design methodology consists of a formal description of the design process, its premises,
objectives and procedures. One of its essential foundations is the approach of systems analysis
which became known and rapidly spread after the 1950s.
The prevailing design procedure was well captured in the image of the famous design spiral.
This schema correctly depicts the iterative nature of design, but overemphasizes an apparently
prescribed sequence of design steps. In practice the procedure varies from case to case and is
much more flexible, given that provisional assumptions permit starting subtasks independently.
At a later stage when concurrent engineering was pursued, the design team actually endeavored
to perform several design subtasks simultaneously. Nevertheless the design spiral served well as
guidance in coordinating design activities.
By about 1970 the methods of systems analysis had matured in many other applications and
began to make a profound and lasting impact on ship design methodology. System analysis
serves as a decision-making approach in the analysis, design and operation of large, complex
systems. It can equally well be applied to ships, their subsystems and to the fleet or transport
system of which the ship may be a part.
The approach of systems analysis made a deep impact on ship design methodology, not only
because of its greater rigor, but also because it facilitated a coordinated division of labor in the
design team. The introduction of computer aids in design enabled each designer to perform a
greater share of subtasks in the design process and thus necessitated a reorganization of the
division of labor in design. The subtasks of design attained greater scope and granularity,
increasing the responsibilities of the individual team member. But systems analysis also provides
criteria and methods for harmonizing the results of subsystem design in consonance with overall
system performance.
Thus the system approach has been providing a common platform for many new developments
and innovative design techniques for many decades. The degree of change in ship design
methodology during several decades was significant and must be rated by the sum of many
individual innovations in this general framework.
Economic efficiency. Economy remains, together with safety, one of the most essential goals of commercial ship
design. There is no doubt that significant improvements were made in the economic efficiency of
ships. The economic assessment of alternatives has become a routine matter in early design
stages. The computer made it even more feasible to get an immediate evaluation of economic
performance for a proposed design. Design decisions thus have become more transparent and
more rational. The sometimes superficially conflicting requirements of economy and safety can
usually be reconciled by quantification. Several approaches exist for making these criteria more
commensurable. The future trends are the design for lower lifecycle cost, i.e., shipbuilding and
operating cost as well as the design for better product quality, i.e., improved functionality,
performance and reliability of the ship. The reduction of the lead time for design and production
to achieve shorter time to market is obviously also an important issue.
Ship safety and risk assessment. Ship safety requirements are as essential to shipping ventures
as economic objectives. This concerns the safety of human lives, the risks of damage to or loss of
ship and cargo, and the hazards to the environment. In fact it is the art of ship design to find
solutions meeting both economic and safety requirements without compromising any safety
Today ship design can be viewed as an ad hoc process. It must be considered in the context of
integration with other design development activities, such as production, costing, quality control,
and others. In that context, it is possible for the designer to work on a difficult product, requiring high material or labor cost, and containing design flaws that the production engineers have to correct, or send back for redesign, before production. Any adjustment required after the design
stage will result in a high penalty of extra time and cost. Deficiencies in the design of a ship will
influence the succeeding stages of production. In addition to designing a ship that fulfils
production requirements, it is also desirable to design a ship that satisfies risk, performance, cost,
and customer requirements criteria. More recently, environmental concerns, safety, passenger
comfort, and life-cycle issues are becoming essential parts of the current shipbuilding industry.
With this paradigm, the selected design will be a producible, cost-effective, safe, clean, and
functionally efficient design. This will enable shipyards to obtain great rewards, such as reduced construction time and costs, shorter lead times, improved product quality, simpler products, and sustainable competitive advantages in the shipbuilding market.
Throughout the engineering disciplines, many design processes have been developed in order to
correct the inadequacies of the designs during the ship design stages. This is the process of proactively designing products to optimize all the functions throughout the life of the product.
Design for cost. Design to cost is a management strategy and supporting methodologies to
achieve an affordable product by treating target cost as an independent design parameter that
needs to be achieved during the development of a product. Design to cost is an area which has
attracted much attention recently. The objective with this strategy is to make the design converge
to an acceptable cost, rather than to let the cost converge to design. Design to cost can produce
massive savings on product cost before production begins. The basic concept is to estimate the
manufacturing cost during the conceptual and early design stages in order to achieve the
following objectives:
To identify the model parts that might cause high manufacturing costs;
To reduce the number of parts, to minimize the possibility of a defective part or an assembly error;
To improve the accessibility for testing or inspection of the components of the product;
To use modular design for components with greater probability of replacement, to facilitate assembly/disassembly;
To adapt the interior of passenger ships to varying passenger needs and comfort requirements;
To adapt ships to new operational tasks, such as the conversion of tankers into FPSOs.
For complex ships, the cost related to refurbishment can reach the order of magnitude of the
original investment. Even if retrofitting is in many cases not driven by structural aspects, the
structure of a ship is often affected by changes in the outfitting part.
Design for robustness. Robustness is defined as insensitivity, or stability, with respect to
uncontrollable parameters and is becoming a standard concept, particularly for innovative
designs. Many input parameters (e.g. loads, material data, thickness) that are held constant
during the optimization process are subject to uncertainties, causing variations of the values in
the criteria set and/or violations of constraints. These parameters can also be costly to control.
One way to handle the uncertainty is to introduce safety margins on the constraints, but this
leads to a reduction of the design space. Robust design has been developed with the expectation
that an insensitive design can be obtained.
The robust design method greatly improves engineering productivity. Variation reduction is
universally recognized as a key to reliability and productivity improvement. There are many
approaches to reducing the variability, each one having its place in the product development
cycle. The robustness strategy provides the crucial methodology for systematically arriving at
solutions that make designs less sensitive to various causes of variation. It can be used for
optimizing product design as well as manufacturing process design.
6.3.
Recent research advances in the economics of maritime transport address issues such as the
value of ships, design methods to maximize this value for stakeholders, shipbuilding as a
service, and ship speed.
World trade figures clearly show that without seaborne shipping, world trade would not be
possible on the scale necessary for the modern world to function. Around 90% of world trade is
carried by the international shipping industry, accounting for 4.5 trillion USD of exported
goods. According to the same statistics, this trade generates 380 billion USD in freight rates,
equivalent to about 5% of total world trade.
These figures indicate the efficiency of shipping. The ratio between total freight rates and the
value of goods transported shows that, on average, less than 10% of the value of goods is
spent on their carriage by sea.
When barriers are designed and integrated in the systems one has to pay specific attention to the
nature of the target of a hazard: the ship, the cargo, the crew or other humans involved, the
environment. Different hazards and targets require different barriers.
Safety management is therefore a continuous process of assessing safety barriers. The existing
barriers are monitored constantly, and so are the safety-critical systems themselves. The focus
here is on missing barriers, resulting from insufficient risk assessment or from changes in the
systems.
After an accident, an analysis of the function of the pre-installed safety barriers is carried out.
These barriers were not always installed on the basis of previous experience; they may also
have been installed on the basis of personal judgment. It is therefore vital to analyze whether
the safety barriers in each system are adequately dimensioned. Accidents, unfortunately, are
practical tests of our barriers: if the barriers did not function, we have to improve them.
Before we install safety barriers we assess our systems. Risk management is a complex process.
It consists of the following phases:
Risk analysis and estimation
Risk assessment
Risk management and control
During the analysis the vital components of technical/operational systems and potential hazards
endangering the functionality of these systems are identified.
The next step is concerned with estimating the frequencies with which these hazards appear
and the resulting consequences. During risk assessment, suitable Risk Control Options (RCOs)
are identified and evaluated, and the most appropriate Risk Control Measures (RCMs) are
selected. The selected RCMs are the barriers that should prevent a hazard from hampering the
vital components of our technical/operational systems.
In order to facilitate the approval of novel designs or novel systems, there is a need for different
risk evaluation criteria, sometimes also referred to as risk acceptance criteria or risk tolerability
criteria. The actual approval process may be considered independent from the risk evaluation
criteria themselves.
Risk-based design is an alternative to the present prescriptive rules, replacing the actual design
regulation by goals and functional requirements. The risk-based ship system approval process
requires suitable evaluation criteria. These criteria may be defined for the overall ship level, but
also for specific ship functions. For the risk-based design of a specific ship system, evaluation
criteria for this system may be provided. The relation between the overall risk and the risk
contribution of a specific system is defined by the risk model and the risk analyses that have
been performed as part of the risk-based design and approval process.
Based on the ALARP principle, a general procedure for how to derive risk evaluation criteria for
ship functions may be described as follows:
develop a risk model, including all scenarios that are affected by the function in question;
use the decision criteria for cost-effectiveness for the function in question;
derive the target reliability or availability by cost-effectiveness criteria;
use the optimum reliability as a target for the function that is analyzed.
This procedure is a simplified FSA limited to the relevant function and it is implicitly assumed
that the risk level is in the ALARP area, rendering cost-effectiveness criteria applicable.
It may be noted that risk evaluation criteria derived in this way may not be the dimensioning
(i.e. governing) requirement for the function in question.
Following the public inquiry into the Piper Alpha accident, the responsibilities for offshore
safety regulations were transferred from the Department of Energy to the Health and Safety
Commission (HSC) through the Health and Safety Executive (HSE) as the single regulatory
body for offshore safety. In response to the accepted findings of the Piper Alpha inquiry, the
HSE Offshore Safety Division launched a review of all offshore safety legislation and
implemented changes. The changes sought to replace legislation that was seen as prescriptive
with a more goal setting regime. The mainstay of the regulations is the Health and Safety at
Work Act. Under that act, a draft of the offshore installation (safety case) regulations was
produced. It was then modified, taking into account comments arising from public consultation.
The regulations came into force in two phases:
(a) at the end of May 1993 for new installations and
(b) in November 1993 for existing installations.
The regulations require operational safety cases to be prepared for all offshore installations. Both
fixed and mobile installations are included. Additionally, all new fixed installations require a
design safety case. For mobile installations, the duty holder is the owner.
The HSE framework for decisions on the tolerability of risk has three regions: (a) intolerable, (b)
as low as is reasonably practicable (ALARP), and (c) broadly acceptable. Offshore operators
must submit operational safety cases for all existing and new offshore installations to the HSE
Offshore Safety Division for acceptance. An installation cannot legally operate without an
accepted operational safety case. To be acceptable, a safety case must show that hazards with the
potential to produce a serious accident have been identified and that associated risks are below a
tolerability limit and have been reduced ALARP. For example, the occurrence likelihood of
events causing a loss of integrity of the safety refuge should be less than 10⁻³ per platform-year
and associated risks should be reduced to an ALARP level.
It should be noted that the application of numerical risk criteria may not always be appropriate
because of uncertainties in inputs. Accordingly, acceptance of a safety case is unlikely to be
based solely on a numerical assessment of risk.
Fires and explosions may be the most significant hazards with potential to cause disastrous
consequences in offshore installations. Prevention of fire and explosion and emergency response
regulations (PFEER) were developed in order to manage fire and explosion hazards and the
corresponding emergency responses that protect persons from their effects. A risk-based
approach is used to deal with problems involving fire and explosion and emergency response.
PFEER supports the general requirements by specifying goals for preventive and protective
measures to manage fire and explosive hazards, to secure effective emergency response, and to
ensure compliance with regulations by the duty holder. Management and administration
regulations (MAR) were introduced to cover areas such as notification to the HSE of changes of
owner or operator, and the functions and powers of offshore installation managers. MAR applies to
both fixed and mobile offshore installations (excluding sub-sea offshore installations).
The importance of safety of offshore pipelines has also been recognized. As a result, pipeline
safety regulations (PSR) were introduced to embody a single integrated, goal-setting, risk-based
approach to regulations covering both onshore and offshore pipelines.
After several years of experience, the safety case regulations were amended in 1996 to include
verification of safety-critical elements, and the Offshore Installations and Wells (Design and
Construction, etc.) Regulations were introduced.
Over the past several years, innovative vessel concepts have been built by major operators. We
discuss the merits of these designs and under what conditions they provide advantages over
existing vessels.
Large deadweight PSVs. In many deepwater scenarios, mud supply is currently a bottleneck.
This is only expected to get worse with water depth. A mud change occurs at the request of the
drillers when they need a change in mud composition or density. An industry rule of thumb for a
typical deepwater mud change volume is around 950 cubic meters; as such, vessels have been
designed and built around this standard. PSVs in the current fleet built before 2005 have an
average deadweight of 1,000 tons, while PSVs built between 2005 and 2010 have an average
deadweight of 2,500 tons. Pushing the boundary of this trend toward increasing deadweight have
been vessels explicitly designed to serve more than one drilling platform. These vessels
incorporate mud capacities that are multiples of the standard 950 cubic meters mud change
volume. Vessels with extremely large mud capacities may become attractive for either supply
scenarios that include multiple deepwater drilling rigs or rigs in extremely deepwater where the
mud requirements to fill the riser are very high.
Faster and larger FSIVs. In 2008, Seacor Marine launched a twin-hulled catamaran FSIV
capable of speeds of up to 40 knots. At such speeds, the vessel is intended to compete
with helicopters for crew transfer. Despite being significantly faster than other OSVs, this ship
and her sister ship have not succeeded in displacing helicopter crew transport. According to
industry interviews, most platform operators prefer to send crew out to platforms on helicopters,
and will likely not change their mind in the near future. The main advantage of an extremely fast
crew boat is reduced crew transport cost when compared to a helicopter, while the disadvantages
include paying crew for an extended crew-boat ride and long crew-boat ride recovery periods for
platform personnel. In addition, highly-trained technical crew are often required on short notice.
Even as a contingency vessel, a faster FSIV does not offer significant advantages over a
traditional PSV, let alone a standard CSV. As the contingencies a crew-boat can handle probably
do not occur more than once every couple weeks, it is unlikely that faster FSIVs will provide any
significant advantage over traditional CSVs. The only possible niche for fast crew-boats is in the
delivery of extremely low-cost personnel to highly-manned and tightly-clustered production and
drilling platforms very far from shore. These conditions presently only exist in very few
deepwater fields, mainly off the coast of Brazil. Even these CSV opportunities are extremely
limited by vessel motions, which are severe at high speeds and can be very uncomfortable for
crew. As such, we expect only innovative hull shapes, such as Small Waterplane Area Twin Hull
Craft (SWATHs), that significantly reduce ship motions to offer feasible crew transport
solutions.
Redundancy. In the recent past, major oil companies have focused increasingly on reliability and
incident avoidance. In the wake of the BP Macondo spill, accident avoidance will be intensified.
Even before the Macondo spill, most newbuild OSVs were expected to be DP II for almost all
service types. In the future, almost all OSVs will be expected to not only be built, but also
operated, according to DP II standards, and some oil companies are already requesting DP III
vessels or DP II vessels that are easily upgradeable to DP III. The demand for redundancy is so
great that even crewboats are being outfitted with DP II systems.
Automation. Aside from specialized large vessels, OSVs are typically built to minimum
manning standards by staying below 6,000 GRT. As even standard PSVs are getting significantly
more complex, outfitted with DP systems, advanced liquid cargo handling systems, and often
Diesel Electric propulsion, while the number of crewmembers stays constant, automation is
becoming increasingly important.
Forecasting is the process of making statements about events whose actual outcomes (typically)
have not yet been observed. A commonplace example might be estimation of some variable of
interest at some specified future date. Prediction is a similar, but more general term. Both might
refer to formal statistical methods employing time series, cross-sectional or longitudinal data, or
alternatively to less formal judgemental methods. Usage can differ between areas of application:
for example, in hydrology, the terms "forecast" and "forecasting" are sometimes reserved for
estimates of values at certain specific future times, while the term "prediction" is used for more
general estimates, such as the number of times floods will occur over a long period.
Risk and uncertainty are central to forecasting and prediction; it is generally considered good
practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be
up to date in order for the forecast to be as accurate as possible.
Formal strategic planning calls for an explicit written process for determining the firm's long-range objectives, the generation of alternative strategies for achieving these objectives, the
evaluation of these strategies, and a systematic procedure for monitoring results. Each of these
steps of the planning process should be accompanied by an explicit procedure for gaining
commitment. The need for commitment is relevant for all phases. The specification of objectives
should be done before the generation of strategies which, in turn, should be completed before the
evaluation. The monitoring step is last.
The various steps of the planning process are described below along with some formal
techniques that can be used to make each step explicit. This discussion is prescriptive; it suggests
how planning should be done. Numerous accounts are available of how formal strategic planning
is done.
Specify Objectives: Formal planning should start with the identification of the ultimate
objectives of the organization. Frequently, companies confuse their objectives (what they
want and by when) with their strategies (how they will achieve the objectives). The
analysis and setting of objectives has long been regarded as a major step in formal
strategic planning. Informal planners seldom devote much energy to this step.
Unfortunately, the identification of objectives is a difficult step for organizations; it is
even difficult for individuals. The difficulties in setting objectives have led some observers
to recommend that formal planners ignore this step. The recommendation here is just the
opposite. Significant time
and money should be allocated to the analysis of objectives. This difficult step might be
aided by use of an outside consultant to help the group focus only upon the objectives.
Generate Alternative Strategies: A strategy is a statement about the way in which the
objectives should be achieved. Strategies should be subordinate to objectives. That is,
they are relevant only to the extent that they help to meet the objectives. This advice is
obvious but often ignored. The generation of alternative strategies helps to avoid this
problem. It recognizes explicitly that the objectives may be achieved in many different
ways. Strategies should first be stated in general terms. The more promising strategies
should be explained in more detail.
Evaluate Alternative Strategies: Once sufficient strategies have been proposed, the
evaluation of alternatives can begin. This requires a procedure by which each alternative
plan is judged for its ability to meet the objectives of the organization. Such a process is
The main purpose of the theoretical methods and practical approaches to work with probabilities is to
elicit the conditional likelihood of the states that result from the chance points, as well as those
associated with the events in each lottery. The set of all these estimates forms the probability structure
of a problem, whereas the process of collecting these probabilities is called probability quantification
of uncertainty.
There are problems where the probabilities of some states in the decision table or some chance
points in the decision tree may be directly elicited subjectively by the DM (or by an expert to whom
this task has been assigned by the DM) using all the available information. Of course, if information
Decision theory
Choosing between alternatives under risk and uncertainty is a matter of professional and personal
importance. There are decision problems that strongly affect the decision maker, and which are
very complicated mainly due to the large amount of information that has to be processed. These
situations ask for systematic techniques for rational choice that analyze the available information
step by step, take into account the objectives of the individual in the problem and in the same
time are easy to use and do not require complicated and highly specialized theoretical knowledge
from the decision maker. Decision theory (DT) has established itself as a well-developed and
easily applicable quantitative analysis approach to support choices between uncertain
alternatives. It is based on utility theory and is part of operations research, the scientific discipline that
evolved after World War II. Its key feature is the ability to define an adequate decision criterion
that accounts for subjective preferences, risk attitude and expectations. Decisions reflect the
opinion of the one who makes them, which is why they are considered correct only for that person.
Modern DT applies successfully in situations that obey all of the following four conditions:
1. A problem must exist.
2. The problem must be important for the decision maker.
3. There should be resources available to apply DT.
4. The problem must be difficult, i.e.:
a) the problem requires the analysis of a great amount of information;
b) the analysis is performed according to a set of criteria;
c) there is uncertainty in the problem;
d) the decision must be made by a group of people.
There exist other approaches to individual decision making, such as interactive multi-objective
programming, the analytical hierarchy process (AHP), Markov decision processes, Markov
flows over graphs, Pareto analysis, multi-criteria decision making (MCDM), fuzzy logic, etc.
The essence of these techniques, and their comparison with the decision techniques presented
here, will be explained during the lecture.
Several facts supporting the use of DT can be outlined:
1. If one is aware of the procedures for making correct decisions, then it is possible to comment
on, analyze and eventually criticize the decisions of other people.
2. DT may help improve the quality of decisions.
3. DT helps prevent regret over a decision.
4. DT allows, and requires, the DM to use all of the available information.
5. Documenting decisions according to the paradigms of DT is, in a modern constitutional state,
a way to protect the people authorized to make choices from later accusations.
6. If DT becomes an obligatory legal standard in public decision making, then it can substantially
decrease (to bearable levels) the devastating effect corruption has on national and international
economies and on society.
DT has been applied in various areas, e.g. industry, transport, marketing, strategic
management, public healthcare, etc., which demonstrates its broad capabilities as a decision
support tool. Quantitative decision analysis has yet to realize its full potential in areas such as
law, medicine and engineering.
80 | P a g e
In real life problems, DMs face the necessity to choose between several courses of action, which
in turn leads to another choice in time, and so on. The possible consequences of the choice form
a set X, and the DM receives exactly one of them, regardless of her wishes. What the DM may,
and should, do is choose exactly one alternative out of a set L of possible alternatives. The
consequences of the choice of an alternative from L depend on random events, called states,
which are also outside the control of the DM. As a rule, a single event occurs, and it defines the
consequence. It is obvious that the consequences are a result of the DM's choice and, at the
same time, are defined by the combination
of random factors that model a given state. The structure of consequences should be defined so
as to describe all aspects of the problem that are of importance for the DM. It should also show
the extent to which the result of the decision meets all significant objectives of the DM,
described by measurable parameters. That is why a typical form for the consequences is a
multi-dimensional vector whose coordinates (components) are equal to the values of these parameters.
There are specific objectives that must be taken into account in each situation. There are
philosophical trends, which assume that everything is connected to everything. That is why
choosing an alternative affects the world and the future. Most of these effects are practically
negligible. On the other hand, the more effects analyzed the more difficult the choice that
balances these effects in the best possible way. For practical purposes one has to outline the part
of the world over which the influence of the choice shall be analyzed. This part of the world
combines objectives, alternatives and consequences and is called the decision context. The
wider the context, the more alternatives are defined and the more global the objectives become,
all of which makes the choice much harder. Three aspects are crucial in choosing the correct
decision context.
The first one is the third-type error (solving the wrong problem).
Based on the environmental exposure (inside and out), the material of construction, the
heat-treated condition, the operating parameters and other factors, equipment may be subject to
one or more types of damage. Corrosion, erosion, pitting, crevice or under-deposit attack, stress
corrosion cracking, and fatigue are examples of typical types of damage that are measurable.
Predictive maintenance such as gauging, pit depth measurement and visual examination is used
to monitor the extent and progression of damage.
Past experience, previous survey data, and models for corrosion and other mechanisms are useful
for determining the potential existence of a damage mechanism, and an approximation of the rate
of damage. A most important consideration is that the rate is rarely known with certainty due to
variations in the rate (which may average out over time), and especially due to insufficient or
inaccurate data. Even if gaugings have been performed, the corrosion in localized areas that were
not gauged may greatly exceed the measured rate. Therefore, damage rates determined by
gauging should be compared to damage rates from models or other sources of information. Once
the validity of available data is evaluated, a final estimate should be made of the potential for
variation of damage rates from the measured or expected rate.
As new information is gathered from surveys, the estimate of the variation in the damage rate
can be updated and refined.
An analytical tool known as Bayes' Theorem is commonly used to evaluate problems such as
this.
The state or condition of a thing is unknown, and there are tests that can be conducted to learn
more about it. However, the test results themselves are uncertain. Having performed the test,
Bayes' Theorem allows one to determine logically how much was actually learned from the test.
In Bayes' Theorem, the knowledge of the thing before the test is called the prior probability,
the accuracy of the test is called the conditional probability, and the final result after the test is
called the posterior probability. These are illustrated in the flow diagram below.
Structural reliability. In a previous chapter, it was determined how rapidly an equipment item
might be deteriorating, based both on the expected rate of damage, and based on the
consideration that the damage rate might be worse. In the next step, the actual amount of damage
is determined (from rate and age), and this is compared to the amount of damage the equipment
is designed to withstand. This comparison is related to the likelihood of failure, and analytical
methods are available to quantify this value.
The methods used vary from complicated to quite simple; however, there is generally a trade-off
in accuracy and credibility as one goes from the complex to the simple. One possibility is to use
simplified models that are calibrated to the generic, or average, or typical failure rate for
the equipment being studied.
Note that the above evaluation can provide an estimate of the likelihood of failure; however, it
may not assure that the equipment is in compliance with all applicable laws and regulations. For
example, the ASME pressure vessel code is not based on risk, except in an indirect way. Thus
the likelihood of failure of a vessel that is just above the minimum allowable wall thickness
(MAWT) is not very much different from one that is just below the MAWT, but the latter case
has an additional consequence of possible fines or citations.
Consequence of failure. Determination of the consequence of failure on an offshore installation
requires special considerations compared to onshore facilities, due to the proximity of equipment
and relative lack of escape routes.
Some of the methods typically employed are: a release/dispersion model (usually a software
package, highly analytical), a Failure Modes, Effects, and Criticality Analysis (FMECA, a more
subjective approach), or the use of event trees to allow consideration of multiple potential
outcomes.
A major consideration is determining the units in which consequence will be measured. Some typical
measures (all per event) are:
Area (affected by fire/explosion)
Area (affected by toxic fumes)
Environmental damage (barrels of oil spilled)
Safety (deaths, injuries)
Costs (can include most consequences on a common basis)
Risk evaluation and risk management. Completion of the analysis and building of the Risk
Based Survey Plan is accomplished in the final step. The likelihood of failure and the
consequence of failure are simply multiplied to determine the risk. Typically, on completion of
the first Risk Based Survey analysis, the equipment is ranked in order of decreasing risks and
examined on this basis. This allows performance of a baseline and acts as a check on all data and
assumptions made during the analysis.
The next step (or this is sometimes done as the first step) is to increment the age of the
equipment by a certain number of years, and/or increment the inspection count by one. This
allows what-if planning for determining optimal times and locations for surveys.
Figure 9.4.
Risk assessment is a well-developed field which many operators are currently applying to
improve their operations and reduce their risk exposure. In the offshore oil and gas industry,
some progressive regulators have encouraged the application of risk assessment techniques by
enacting performance based safety regulations which require operators to demonstrate reduced
risk levels. In many areas of the offshore and marine industries there is a dichotomy: operators
must still comply with prescriptive old-style regulations while being encouraged on other
fronts to develop a risk-based approach to safety.
This chapter has attempted to paint a picture of the current state of risk assessment application in
these industries and to provide some basic information to guide those who would like to apply
risk assessment techniques. There are many challenging issues that organizations must address as
they begin to incorporate risk assessment into their businesses:
What are my risk acceptance criteria?
What types of internal guidelines are needed to assure consistency in the approach and
quality of risk assessments we conduct?
Risk assessment techniques can be applied in almost all areas of the offshore oil and gas and
marine industries. Corporations know that to be successful they must have a good understanding
of their risks and how the risks impact the people associated with their operations, their financial
performance and corporate reputation. More and more, regulators are striving to use risk-based
approaches in formulating new regulations. The ability to conduct meaningful risk assessments
continues to improve as more and better data are collected, and computer applications become
more accessible.
The four key areas where risk assessment has been seen to be useful are:
identifying hazards and protecting against them
improving operations
efficient use of resources
developing or complying with rules and regulations
10.1.
The primary goal of many risk assessments is to identify the hazards that are involved in a
particular process or system and to develop adequate safeguards to prevent or reduce negative
consequences from the related hazardous events. As previously discussed, the first step in
performing a risk assessment is hazard identification. Whether done in an explicit or implicit
form, this step provides an understanding of the basic hazards (e.g., high temperatures, toxic
chemicals, rotating machinery) that are involved in a process or operation. Because of the
negative consequences that can occur if these hazards are not controlled, the hazard
identification step is key in developing an understanding of the contributors to the risk of
operating a particular system or process. Once these hazards are identified and the potential
undesirable events involving these hazards are described, risk assessment techniques can allow
personnel to identify the safeguards, or risk-reducing measures, that are currently in place and to
make recommendations for additional safeguards that would further reduce the risk. These
safeguards can either prevent an event from occurring, or reduce (mitigate) the consequences if
an event does occur.
Hazard identification is most effectively applied early in a project's life-cycle. If hazards can be
identified early, they can often be designed out or eliminated completely during the early
design phases. If the hazards are not recognized until design is complete or the system is
operational, they will be more costly to address, and the only feasible way to address the hazards
may be to provide measures to mitigate the hazardous events they may cause.
It is best to integrate hazard identification activities into the project development process to
assure these activities are conducted at optimal times. For instance, high-level Preliminary
Hazard Analyses should be conducted as early as possible in the project life-cycle, while
multiple project options are under consideration. This will enable risk assessments of the various
options and help identify the major hazards which will need to be managed as the project goes
forward. As the development process progresses, more and more detailed hazard analyses can be
conducted. In the offshore oil and gas industry, hazard identification is typically performed on
process systems during conceptual design (when process flow diagrams and layouts are
available) and again at the detailed design phase (when P&IDs and equipment specifications are
available).
10.2. Improving operations
Over the years, standard approaches have been developed for operating oil and gas related
equipment.
Many of these have been documented as industry standards and/or codified into regulation. For
example, regulatory bodies such as the U.S. OSHA and Coast Guard require adherence to basic
standards in the areas of Hearing Conservation, Lock-out/Tag-out, Fall Protection, Electrical
Safety, Fire Protection, Emergency Response, etc. In addition, most operators have developed
internal requirements to address recognized operational hazards.
In efforts to continually improve business performance, successful operators continue to
challenge the established ways of conducting their operations. Opportunities for improved
business performance are continually identified, and must be assessed for risk impact in addition
to financial impact and feasibility. Risk studies can be conducted to assess the relative risks
associated with various modes of operation, including:
Simultaneous Operations (Concurrent Production and/or Drilling and/or Construction
Operations)
Construction Activities (hazard analysis of construction activities, risk impact of major
marine activities at producing locations, etc.)
Automation of Drilling Activities
10.3. Efficient use of resources
Design option comparisons. When significant design decisions are being made, a thorough
comparison of the available options is typically performed. This comparison should include an
evaluation of the risks associated with each option, with the goal of selecting an option which
meets the organization's risk acceptance criteria and provides the best overall value with regard
to other factors, such as economics, political considerations, environmental concerns, legal
issues, reliability, operability and safety. An organization's risk acceptance criteria may define
tolerable risk levels, or may require a demonstration that the risk is As Low As Reasonably
Practicable (ALARP), and hence acceptable, subject to certain maximum limits. UK regulators
hold the operators of offshore facilities accountable to an ALARP criterion.
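Using the definition Risk = Frequency x Consequence introduced in Chapter 1, a screening of design options against a tolerable-risk criterion can be sketched as follows. All option names, frequencies, consequences, costs, and the tolerable limit below are invented for illustration; real criteria would come from the organization's risk acceptance policy:

```python
# Compare design options against a hypothetical risk acceptance criterion.
# Risk = Frequency x Consequence (events/year x fatalities/event), as
# defined earlier; all numbers below are illustrative, not real data.

TOLERABLE_RISK = 1e-3   # assumed maximum tolerable risk (fatalities/year)

options = {
    # name: (frequency [events/yr], consequence [fatalities/event], cost [$M])
    "Option A": (1e-3, 2.0, 10.0),
    "Option B": (1e-4, 5.0, 14.0),
    "Option C": (5e-5, 1.0, 22.0),
}

def risk(freq, consequence):
    """Risk as the product of event frequency and event consequence."""
    return freq * consequence

# Screen out options exceeding the tolerable-risk limit, then pick the
# cheapest of the remainder (a crude stand-in for "best overall value").
acceptable = {name: vals for name, vals in options.items()
              if risk(vals[0], vals[1]) <= TOLERABLE_RISK}
best = min(acceptable, key=lambda name: acceptable[name][2])

for name, (f, c, cost) in options.items():
    status = "acceptable" if name in acceptable else "exceeds limit"
    print(f"{name}: risk={risk(f, c):.1e}/yr, cost=${cost}M ({status})")
print("Selected:", best)
```

Under an ALARP regime the screening would not stop at the tolerable limit: each surviving option would also need to show that further risk reduction is not reasonably practicable given the cost.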
Risk-based regulatory and standards development. Many regulatory bodies and industry groups
now understand the importance of taking a risk-based approach when developing new
regulations and standards. More and more, as industry and regulators work together to draft new
requirements, risk assessments are becoming an integral part of the process. In many cases, new
safety regulations are performance-oriented and leave the operator with the responsibility to
demonstrate the effectiveness of its safety management system (e.g., the U.K. Safety Case). In other
cases, regulators have commissioned risk assessments to be performed as a part of the regulatory
development process, to assure risks are assessed before new regulations are drafted.
For example, following a near-miss collision between a Gulf of Mexico Deepwater Tension Leg
Platform and an 800-foot tankship in 1997, the National Offshore Safety Advisory Committee
(NOSAC), sponsored by the U.S. Coast Guard, appointed a special subcommittee made up of
members from the Coast Guard, MMS, the oil industry and the marine industry to examine the
incident. The subcommittee was asked to use a risk-based approach to identify potential
regulatory and non-regulatory means to reduce the risk of this type of incident recurring.
What this approach lacks is a systematic consideration that begins with operating scenarios and
the identification of hazards in each scenario, and proceeds through to assessing and recommending
effective risk-reduction measures. An improved approach is illustrated in Figure 10.2.
A risk-based framework as shown in Figure 10.2 may be looked upon as a systematic, first-principle
approach to accomplishing what the existing rule- and regulation-based framework
seeks to accomplish. Figure 10.2 may be used as a generic safety framework within which the
existing rules and regulations can be populated. In fact, it could be used to assess the
comprehensiveness of the existing fragmented regimes of rules and regulations: any gaps or
omissions can be identified and addressed with risk-analysis techniques. Figure 10.3
illustrates how this may be conducted.
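The comprehensiveness assessment described above can be sketched as a simple coverage check: map each existing rule onto the framework elements it addresses and flag any element no rule covers. The framework elements and rule names below are hypothetical placeholders, not the actual content of Figure 10.3:

```python
# Sketch of a regulatory gap check: populate existing rules into the
# elements of a risk-based safety framework, then flag uncovered elements.
# Framework elements and rule-to-element mappings are illustrative only.

framework_elements = [
    "hazard identification",
    "risk assessment",
    "prevention measures",
    "detection and control",
    "mitigation and emergency response",
]

# Which framework element(s) each existing rule addresses (hypothetical)
existing_rules = {
    "Fire Protection standard": ["prevention measures", "detection and control"],
    "Emergency Response standard": ["mitigation and emergency response"],
    "Lock-out/Tag-out standard": ["prevention measures"],
}

covered = {elem for elems in existing_rules.values() for elem in elems}
gaps = [e for e in framework_elements if e not in covered]

print("Uncovered framework elements:", gaps)
```

In this toy mapping, no existing rule addresses hazard identification or risk assessment, so those elements would be flagged for treatment with risk-analysis techniques.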
For the past 20 to 30 years the offshore oil industry, including the marine industry which
supports it, has focused its safety efforts on preventing incidents and injuries
to people: basically, preventing slips, trips and falls. This focus can be called occupational
health and safety.
In parallel, there have been efforts to prevent major incidents involving multiple fatalities or
asset-threatening events such as Piper Alpha, Alexander Kielland, Sleipner and Exxon Valdez.
These events occurred during production, drilling, construction and transportation, where
procedures, design, planning and personnel were amongst the root causes. The responses have
included emphasis on all these aspects as well as fundamental changes such as the introduction
of safety cases, which detail how offshore installations will be managed safely.
However, some incidents within the offshore oil industry have shown that safety efforts should
not only be focused on occupational health and safety but also need to continue to emphasize
preventing major incidents.
There are distinct differences between these two ways of viewing safety, but there are common
hazards which can lead both to occupational health and safety incidents and to major incidents,
and there are similar control measures which can prevent, detect, control and mitigate both
types of incident.
In the chemical processing industry, process safety generally refers to the prevention of
unintentional releases of chemicals, energy, or other potentially dangerous materials during the
course of chemical processes, releases that can have a serious effect on the plant and the
environment. The goal of process safety is to protect major assets, the environment or a large
group of people from the effects of a low-probability but catastrophic or severe incident. A
parallel can be drawn with the marine industry: in marine terms, such major incidents can be
asset-threatening to a vessel, and/or cause multiple fatalities amongst its crew.
In the marine industry, some examples of major hazards are shown in the table below.
Shipboard fires and explosions: Fires and explosions in machinery spaces may possibly affect
many people in the space, including loss of ship systems within or passing through the space.
Fires and explosions on the bridge may possibly affect many people in the space, including loss
of ship systems within or passing through the space and loss of the command and control centre.
Fire or explosion in the accommodation may possibly affect many people in the accommodation,
including loss of ship systems within or passing through the space. Disintegration of rotating
equipment may possibly affect many people in the space, including loss of ship systems within or
passing through the space. Failure of a pressure vessel may possibly affect many people in the
space, including loss of ship systems within or passing through the space.
Release of dangerous substances: Release of certain dangerous substances in quantities
sufficient to cause death or serious injury to several personnel. May possibly affect many
people on deck.
Helicopter crash on vessel: Crashing of a helicopter onto the helideck, with a potential
resultant fire. May possibly affect many people on board.
Vessel collision/impact: Collision between a vessel and another vessel or offshore structure
which causes impact damage, with a potential effect on watertight integrity and stability. May
possibly affect many people on board. In the event any of these incidents occur, emergency
response arrangements may be affected.
Structural failure: Failure of the hull or superstructure, with a potential effect on watertight
integrity and stability. May possibly affect many people on board. In the event any of these
incidents occur, emergency response arrangements may be affected. Failure of superstructure
components, cranes, derrick, etc., with a potential effect on watertight integrity and
stability. May possibly affect many people on deck.
Loss of position: Loss of station-keeping capabilities (dynamic positioning (DP) or anchors,
etc.) from: failure of power generation systems; failure of DP reference systems; failure of
thrusters; operator error; mooring failure (e.g. from environmental forces). May possibly affect
many people on board. In the event any of these incidents occur, emergency response arrangements
may be affected.
Loss of stability: Failure of ballast or bilge systems imparting excessive list (pitch or roll)
to a vessel. May possibly affect many people on board. In the event any of these incidents
occur, emergency response arrangements may be affected.
Extreme weather: Vessel experiences weather conditions up to and exceeding the design weather
criteria of the vessel, or the capabilities of the vessel when systems are downgraded.
Vessel grounding: Vessel impact on the seabed or seabed protrusions, causing damage to the
vessel superstructure and/or limiting the vessel's ability to manoeuvre. May possibly affect
many people on board. In the event any of these incidents occur, emergency response arrangements
may be affected.
Dropped objects: Impact of a dropped object on the deck, superstructure or equipment, with the
potential for significant damage and/or loss of systems. May possibly affect many people on
board. In the event any of these incidents occur, emergency response arrangements may be
affected.
Diving operations: Loss of the diving bell from the vessel, with or without loss of services.
May possibly affect several people on board. Loss of chamber pressure in a saturation diving
system. May possibly affect several people on board.
Subsea well or hydrocarbon release: Loss of well or hydrocarbon control subsea through the loss
of, or incorrect operation of, well or hydrocarbon barriers, causing hydrocarbons and/or gases
to rise to the surface with the potential for ignition, ingestion into intakes and/or loss of
stability. May possibly affect many people on board. In the event any of these incidents occur,
emergency response arrangements may be affected.
Topside well or hydrocarbon release: Loss of well or hydrocarbon control topside through the
loss of, or incorrect operation of, well and/or hydrocarbon barriers, causing hydrocarbons
and/or gases to escape onto the deck with the potential for ignition or ingestion into intakes.
May possibly affect many people on deck. In the event any of these incidents occur, emergency
response arrangements may be affected.
Process fire and explosion: See topside well releases and subsea well or hydrocarbon releases.
Premature detonation of explosive substances on deck. May possibly affect people on deck.
Other incidents: Incidents with the potential to cause multiple deaths or multiple serious
injuries.
Human factors: Behavioural factors/fatigue may affect crew members, which could have the
potential to lead to the major incident hazards identified above.
Table 11.1. Major incident hazards which are asset-threatening to vessels or can cause multiple
crew fatalities
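One common way to prioritize hazard categories such as those in Table 11.1 is a qualitative risk matrix, scoring each hazard on frequency and consequence categories and ranking by the product. The category scores below are invented for illustration and do not represent assessed values for any vessel or the table above:

```python
# Rank a few major incident hazard categories with a simple 5x5 risk
# matrix (frequency category x consequence category). Scores are invented
# for illustration only.

hazards = {
    # name: (frequency category 1-5, consequence category 1-5)
    "Shipboard fires and explosions": (3, 5),
    "Helicopter crash on vessel": (1, 4),
    "Vessel collision/impact": (2, 4),
    "Loss of position": (3, 3),
}

def risk_score(freq_cat, cons_cat):
    """Qualitative risk score as the product of the two category scores."""
    return freq_cat * cons_cat

# Sort hazards from highest to lowest risk score
ranked = sorted(hazards, key=lambda h: risk_score(*hazards[h]), reverse=True)
for h in ranked:
    print(f"{risk_score(*hazards[h]):>2}  {h}")
```

A ranking like this helps decide where safety-management effort should be concentrated first, consistent with the risk definition used throughout this book.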