Abstract
Within the Nordtest project 1551-01 an evaluation has been carried out concerning the
similarities and dissimilarities of a number of safety-related software standards. The
evaluation and comparison of the safety-related standards have been carried out by two
Nordic parties, DELTA (Denmark) and SP (Sweden).
Six different standards have been selected within the scope of the project:
• EN 954 – Safety of machinery - Safety-related parts of control systems
• IEC 61508 – Functional safety of electrical / electronic / programmable electronic
safety-related systems
• IEC 61713 – Software dependability through the software lifecycle processes
• RTCA/DO-178B – Software Considerations in Airborne Systems and Equipment
Certification
• IEC 60601-1-4 – Medical Electrical Equipment – Part 1: General Requirements
for Safety – 4. Collateral Standard: Programmable Electrical Medical Systems
• EN 50128 – Railway Applications: Software for Railway Control and Protection
Systems
These standards are well known and generally accepted. For comparing standards a
methodology has been defined that considers a standard from different views. These are
used for evaluating and categorising the standards one at a time. For processes the SPICE
(ISO/IEC TR 15504) definition has been used. An aspect that is especially considered is
testing. For testing, the related information of the standards has been analysed and
compared separately.
The result of this project is twofold: first, the analysis of the selected standards
and, second, the specification of an extensible methodology that can also be applied to
other standards. The general conclusion of this work is that standards vary widely
(apart from different application areas) in scope, details, language, strictness, guidance,
and generality. But we can also see that several standards have many aspects in common
and that the application specific parts of standards do not rule out the possibility of using
standards in other application areas.
Key words: Safety, Safety-related Control Systems, Validation, Test, SPICE, Safety-
related software standards, Standard comparison, Standard evaluation, Safety of
machinery, NORDTEST
Contents
Abstract 2
Contents 3
Preface 5
Summary 7
1 Introduction 9
2 Definitions 10
3 Scope 11
3.1 Standards for evaluation 11
3.2 Interpreting standards 12
3.3 View of standards 13
3.4 Evaluation 14
8 Conclusions 64
8.1 Methodology 64
8.2 Feasibility of SPICE 64
8.3 The test perspective 65
8.4 Could standards be compared? 65
8.5 How to use the results 66
9 References 67
10 Annex 1: Project 69
10.1 Overall objectives 69
10.2 Work organisation 69
10.3 Presentation of results 69
10.4 Data of participating partners 69
Preface
This work was made possible by financial support from NORDTEST (Nordtest project
1551-01). The work has been carried out by Claus Trebbien-Nielsen DELTA (Denmark)
and Lars Strandén SP (Sweden).
The project and the preliminary results have been presented at the RTiS (Real Time in
Sweden) conference in Halmstad on August 22, 2001, and at the international testing
conference EuroSTAR in Stockholm on November 21, 2001.
This work will support Nordic industry, especially SMEs (Small and Medium sized
Enterprises), in finding new markets where they are qualified to work. Based on their
current experience, the survey will help them to find similarities between the standards
they are used to working with and standards for other products. The result of the project will
also support Nordic industry in its conformity assessment.
In Annexes 2 – 10 extracted versions of the evaluated standards are given. These have
been produced by starting from the original text and then removing examples, guidelines,
details and motivations and further by focusing on aspects related to processes, artefacts
and referenced standards. The ratio of the extracted information to the original is 1:10 or
higher. Thus, it is very important to emphasize that the extracted version cannot be used
as a substitute for the original standard. Instead the original standards can be bought:
• from IEC, start at www.iec.ch
• from ISO, start at www.iso.ch
• or from national standardisation organisations (in Sweden SIS, start at
www.sis.se)
or downloaded for free where applicable.
For SPICE (ISO/IEC TR 15504) summaries of processes are given. Again these are not
enough and cannot be used as a substitute for the original.
The chapter Definitions briefly explains some words and expressions that are used in this
report, apart from those defined in the standards themselves.
The chapter Scope defines the standards to be evaluated and the principal methodology
for evaluation.
The chapter Standard view contents describes in detail the methodology used including
definition of SPICE (ISO/IEC TR 15504) processes.
The chapter Single standard evaluation gives the results of the work for each evaluated
standard.
The chapter The testing perspective gives the evaluation results concerning testing, i.e. in
verification and validation.
The chapter Summarized and referenced standards lists a number of significant safety-
related standards with summaries but not evaluated in this work.
The chapter Conclusions summarises and evaluates the results of this work.
Annexes 2-10 give the extracted versions of each evaluated standard. These have been
produced by starting from the original text and then removing examples, guidelines,
details and motivations and further by focusing on aspects related to processes, artefacts
and referenced standards.
Different readers are interested in different aspects and guidance is given below. In any
case the chapters Introduction, Definitions, Scope and Conclusions should be read.
• If you are interested in the methodology of how to compare standards, study
Standard view contents and at least one standard of Single standard evaluation
and of Annexes 2-10.
• If you are interested in understanding a specific standard, study the corresponding
sub chapter in Single standard evaluation, The testing perspective and the
corresponding annex within Annexes 2-10.
• If you are interested in testing, study The testing perspective.
• If you are interested in the contents of a specific standard (related to processes,
artefacts and referenced standards) study the corresponding annex within Annexes
2-10.
The results of this work will significantly ease studying the original standard as a whole.
Summary
The overall purpose of the project was to ease the understanding of standards for safety-
related applications and to convey a judgement of the standards to the reader without
him/her having to study them in detail. This will be a help especially for SMEs, where
limited resources are available for standard surveys. The report was written jointly by SP
and DELTA and exists in two forms: a Word document (this one) and an HTML version
available on the Internet.
Six standards have been selected, covering different application areas, degrees of
strictness, levels of detail, and scopes. The standards are:
• EN 954 – Safety of machinery - Safety-related parts of control systems
• IEC 61508 – Functional safety of electrical / electronic / programmable electronic
safety-related systems
• IEC 61713 – Software dependability through the software lifecycle processes
• RTCA/DO-178B – Software Considerations in Airborne Systems and Equipment
Certification
• IEC 60601-1-4 – Medical Electrical Equipment – Part 1: General Requirements
for Safety – 4. Collateral Standard: Programmable Electrical Medical Systems
• EN 50128 – Railway Applications: Software for Railway Control and Protection
Systems
There is no intention of being complete. Instead, the standards are chosen to reflect
different types that share one common basic issue: safety-related software.
Generally we can say that analysing and comparing text masses written by humans for
humans is not a strict science and thus not easy. As a consequence, we cannot have a single
way of handling the text. Instead it is necessary to use both a more formal approach, i.e.
based on some kind of objective measure, and an informal approach, i.e. some kind of
evaluation and/or judgement. Thus a methodology has to be defined, and this is done in this
work by defining a number of views, i.e. different ways of looking at standards:
For mapping to processes SPICE is used. SPICE defines a superset of the processes
defined in ISO/IEC 12207 and can be included under ISO 9000. The SPICE processes are
general and relatively complete, and they are used as a reference for comparing and
evaluating the process requirements of the evaluated standards.
The extracted versions are available in the annexes. They have been created by starting
from the original standard texts and removing motivations, details, examples, guidelines
etc., retaining only the information relevant to processes, artefacts, and referenced
standards. The original chapter numbering has been kept so that the extracted version can
easily be compared with the original text.
For comparing different standards the views listed above are used, but in this work the
standards are also evaluated according to a test perspective, i.e. to what extent and with
what quality verification and validation are handled in the standards. Since testing
is a single aspect, these considerations have been gathered in one chapter, The testing
perspective.
1 Introduction
There are many standards and EC directives that specify safety requirements and system
behaviour at fault. Also there are many standards for safety-related applications in use
today that specify how development of software and systems should be carried out in order to
adhere to the requirements in specific industrial areas. For example, railway, avionics and
medical systems are covered by a number of standards to ensure their reliability and
safety. However, the requirements in these standards are not coordinated, and
differences apply for different sectors of industry. For example, the standards differ in
scope and level of detail, and thus it is difficult to interpret and identify similarities and
differences across the standards, e.g. in connection with testing. Further, since products are
continuously getting more complex, more than one standard may be applicable, and so the
requirements of different standards can be identical to, complement, exclude or even
contradict each other.
Small and Medium sized Enterprises (SME) may have difficulty finding the
requirements applicable to their product. It would therefore be valuable to be able to
evaluate a standard on its own and in relation to other standards. However, it is not trivial
to define a procedure for this work. In its most generic sense the comparison work is to
evaluate and compare two text masses containing partly different information, with
different strictness and seen from different views. The written text in itself (i.e. the
implementation) contributes with complicating aspects. Some examples are listed below.
• The text is written for humans and cannot be formalized into a version leaving no
space for interpretations.
• The use of must, shall, should etc makes it difficult to isolate and compare
requirements. Further, within a standard both should and shall can exist for the
same item.
• The standards have different principal views, e.g. whether handling of documentation
is a separate process or a sub process of several processes.
• The meaning of process, activity, procedure and when they can be applied.
• The standards have different scopes e.g. one can concern the contents of a
product and another the way to produce it.
From the above we can generally say that we cannot have a single way of analysing and
comparing standards. Instead, it is necessary to use both a more formal approach, i.e.
based on some kind of objective measure, and an informal approach, i.e. some kind of
evaluation and/or judgement. This will be further described below.
For the evaluated standards we can summarise the most important content aspects as:
• development processes and procedures, qualities and the contents of them
• safety functions and the qualities of them
• functional classification schemes e.g. regarding criticality
• references to other standards that shall be fulfilled
• guidelines of how to use the standard itself
• rationale concerning the standard itself, including relations to other standards
Apart from the selected standards, the process standard SPICE is used as a means for
evaluation. SPICE is a superset of ISO 12207 and can be included under ISO 9000. The
SPICE processes are used as reference to compare and evaluate the process requirements
of the different standards. However the SPICE capability levels are not used since they
are generally not significant when analysing the evaluated standards of this work.
2 Definitions
Most of the standards evaluated have their own terminology and definitions. Since these
can differ, it is necessary, if in doubt, to check the standard under consideration. Here only
definitions that are specific to this report are given.
HW Hardware
SIL Safety Integrity Level. The definition depends on the standard under
consideration.
SPICE The standard ISO/IEC TR 15504 defining a set of about 40 processes and
also capability levels for how to apply them.
SW Software
3 Scope
3.1 Standards for evaluation
The six specially selected standards are listed below with a short description. However,
there exist many other standards specifying the development of software and systems.
The six standards have been chosen because they belong to the most widely used
standards for safety-related programmable electronic systems. The standards also cover
different application areas and differ also by different levels of details, strictness and
focus (e.g. concerning processes, functional contents etc). Note that there is no attempt at
completeness, i.e. the set of analysed and referenced standards may be representative but
others exist as well.
3.1.1 EN 954
Safety of machinery – Safety-related parts of control systems
Part 1: General principles for design
Part 2: Validation
3.1.3 IEC 61713
Software dependability through the software lifecycle processes
The international standard IEC 61713 “Software dependability through the software
lifecycle processes – Application guide” was published in June 2000, and has not yet
been extensively adopted in industry. The acquisition process, the supply process, the
development process, the operation process and the maintenance process are described
with respect to dependability. No division into levels of risk or criticality is made.
3.1.4 RTCA/DO-178B
Software Considerations in Airborne Systems and Equipment Certification
3.1.5 IEC 60601-1-4
Medical Electrical Equipment – Part 1: General Requirements for Safety – 4. Collateral
Standard: Programmable Electrical Medical Systems
Medical systems are subject to the standard IEC 60601-1-4 “Medical Electrical Equipment
– Part 1: General Requirements for Safety – 4. Collateral Standard: Programmable
Electrical Medical Systems”. The risk management process is connected to the software
development process.
3.1.6 EN 50128
Railway Applications: Software for Railway Control and Protection Systems
The European standard EN 50128 “Railway Applications: Software for Railway Control
and Protection Systems” is used in the railway industry all over Europe. Five software
integrity levels (SIL) are specified.
RTCA/DO-178B explicitly states that “must” and “shall” have been removed in order to
allow alternative methods. Also included in many standards are examples, suggestions
and guidelines. In IEC 61508 there exist parts of the standard (however not included in
this work) with hundreds of pages describing guidelines. In EN 50128, classification is
made as Recommended and Highly Recommended; how rigorous is this? Even if
processes and documents are recommended, how should their quality be related to
the standard?
Apart from difficulties with word interpretation and clarity, there are differences with
respect to contents that make comparisons difficult. For example, EN 954-1 is mainly
focused on functional contents and not processes while e.g. RTCA/DO-178B is a process
oriented standard.
We can thus list a number of aspects that make standard comparison troublesome.
• Interpretation of words. Intentionally or unintentionally vague. Different
interpretation by different standards and persons.
• Inconsistent use of words and sentences at different places in the standard.
• Standards have different scopes.
• Different levels of detail, e.g. EN 954-1 only mentions a verification plan while
RTCA/DO-178B describes many details of it.
[Figure: a standard is evaluated through the views Extract, Summary, Mappings,
Judgement and Metrics, each seen from a General perspective and a Test perspective]
As shown in the figure, an orthogonal aspect is the area of interest (or perspective), in the
figure the General and Test perspectives, which can be applied to each view. However, the
significance varies. Since the Test perspective is a subset of the General perspective,
General is performed first (for all views) and the Test perspective can then be obtained by
extracting the relevant information.
3.3.1 Summary
A standard can be summarized in plain English, expressing the reviewer's opinion of
what is important. If the reviewer is experienced the summary will give the essence and
the character of the standard.
3.3.3 Mappings
In this case certain aspects of the analysed standard are mapped to specific sets. Three
sets are defined in this work: Processes, Artefacts and Referenced standards. The sets will
be defined and discussed in detail below. The quality of the mappings gives a
characterisation of the analysed standard. For example, the process set is defined using
SPICE processes and the analysed standard normally specifies a subset of the processes
defined in SPICE.
3.3.4 Metrics
Metrics can be defined and used for comparisons; however, simple values are very coarse
and it is important to be very careful when drawing conclusions from them. There might
be other metrics e.g. related to the sentences/words used but in this work metrics will be
related to mappings as will be described below.
3.3.5 Judgement
A judgement can be given in plain English expressing the opinion of the evaluator(s).
Typical aspects can be applicability, strengths, weaknesses, how easy it is to use,
consequences of using it and general quality of the standard.
3.4 Evaluation
The evaluation takes place by first studying each of the six standards and creating the
views, as defined above, for each evaluated standard. The extracted version is especially
important since it gives the basis for some of the others. The status after this phase is:
• each of the six standards has been evaluated separately
• all views are complete seen from a General perspective
• the mapping sets Processes, Artefacts and Referenced standards are complete
The next phase is comparing standards to study similarities. Comparisons can be
made by comparing the different views described above. For the Mappings view (when not
including qualities) and the Metrics view, comparisons can be made by simply comparing
subset contents and metric values. For the other views it is necessary to compare
concentrated information in plain English.
By extracting information related to test from the two phases it is possible to get an
evaluation seen from a Test perspective.
4.2 Mappings
4.2.1 Framework
Mappings are used for making information more explicit and preparing for standard
comparisons. As described above three mapping sets are defined: Processes, Artefacts
and Referenced standards. Since SPICE (see [1]) is defined as an overall standard for
software development, it is the starting point for the Processes mapping set. If necessary it
is extended with more processes during the evaluation of the standards. For the other two
mapping sets the situation is different. Initially these sets are empty, and they are then built
up successively with each analysed standard. For all three mapping sets it can be
assumed (which of course has to be verified) that the sets, after evaluating the six
standards, are relatively stable and complete. Thus only few modifications should
occur when more standards are analysed.
[Figure: the standard to evaluate is mapped, with a quality attached to each relation, into
the mapping sets Referenced standards, Processes (each process with its products) and
Artefacts]
Since an arrow only shows that a relation exists, i.e. that there is an item of the evaluated
standard that can be mapped to a mapping set, further information is needed. As
described above, a quality is therefore attached to the arrow, containing the corresponding
information for each mapped item. The mapping may include different properties as
shown below.
• Extent: how much can be mapped.
• Properties: what the properties of the item in the standard are compared to what is
defined in the mapping set.
• Conditions: whether the mapping is always valid or whether it varies.
• Strictness: whether “shall”, “should” or “may” is used.
Thus, qualities are generally expressed in plain English and have to be compared
informally. If just looking at the mappings without qualities it is possible to get a more
simplified view. This will be discussed under Metrics below. However, we can note some
consequences directly. Without the quality aspects it can be difficult to see:
• the safety-related aspects such as models for criticality, safety functions etc
• the application type
• the functional information
From the Referenced standards mapping set we see the recursive nature of this analysis. If
necessary a referenced standard can be included for evaluation if it can be seen as part of
the total standard scope.
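The mapping-with-quality structure described above can be sketched as a small data record. All class and field names below are illustrative assumptions introduced for this sketch, not terms taken from the report:

```python
from dataclasses import dataclass

@dataclass
class MappingQuality:
    """Quality attached to one mapping relation (field names are illustrative)."""
    extent: str      # how much of the item can be mapped
    properties: str  # properties compared to the definition in the mapping set
    conditions: str  # whether the mapping is always valid or varies
    strictness: str  # "shall", "should" or "may"

@dataclass
class Mapping:
    """One relation from an evaluated standard into a mapping set."""
    standard_item: str  # clause or item in the evaluated standard
    mapping_set: str    # "Processes", "Artefacts" or "Referenced standards"
    target: str         # e.g. a SPICE process id (assumed, for illustration)
    quality: MappingQuality

# Hypothetical example: a validation clause mapped to a SPICE process
m = Mapping(
    standard_item="EN 954-2 validation",
    mapping_set="Processes",
    target="SUP.5",  # assumed SPICE process id, for illustration only
    quality=MappingQuality(
        extent="partial",
        properties="validation of safety-related parts only",
        conditions="always",
        strictness="shall",
    ),
)
```

Stripping the `quality` field from such records gives exactly the simplified, countable view discussed under Metrics.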
4.2.3 SPICE
SPICE defines a software development reference model and the list below shows all
defined processes according to SPICE. Note the existence of support processes that act
independently of the others. In SPICE a capability reference model is also defined, but it
is not used in this evaluation (it reflects the maturity of a company, is used more for
assessments, and is not directly connected to software development).
Below is a list of processes for each group. The definitions and descriptions are
quotations from reference [1]. The information is not enough for applied work and cannot
be used as a substitute for the original.
establish a requirements baseline that serves as the basis for defining the needed
software work products.
• CUS.4 Operation process: The purpose of the Operation process is to operate the
software product in its intended environment and to provide support to the
customers of the software product.
o CUS.4.1 Operational use process: The purpose of the Operational use
process is to ensure the correct and efficient operation of the software
product for the duration of its intended usage and in its installed
environment.
o CUS.4.2 Customer support process: The purpose of the Customer
support process is to establish and maintain an acceptable level of service
to the customer to support effective use of the software product.
Assistance and consultation to the customer is provided as requested to
support the operation of the software product.
4.3.1 Processes
For evaluating a single standard with respect to processes the following information is
extracted:
• strictness (shall or should)
• description of processes and activities (sub processes) but no details; only as much
information is given as is needed to understand the meaning
• if and in what way processes and/or activities co-operate
4.3.2 Artefacts
For evaluating a single standard with respect to artefacts the following information is
extracted:
• strictness (shall or should)
• description but no details; only as much information is given as is needed to
understand the meaning
• description of safety critical contents (functions) but no details
• if and in what way artefacts co-operate
• if and to what extent artefacts and their contents are affected by classifications
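The strictness information (shall/should/may) extracted for both processes and artefacts can be approximated mechanically. The keyword rules below are a simplifying assumption; as the Introduction notes, real standard text mixes modal verbs in ways that still require human judgement:

```python
import re

# Order matters: check the strongest modal first.
STRICTNESS = [
    ("shall", "mandatory"),
    ("must", "mandatory"),
    ("should", "recommended"),
    ("may", "optional"),
]

def classify_strictness(requirement: str) -> str:
    """Return a coarse strictness label for one requirement sentence."""
    text = requirement.lower()
    for keyword, label in STRICTNESS:
        if re.search(rf"\b{keyword}\b", text):
            return label
    return "informative"

print(classify_strictness("The supplier shall produce a test plan."))
print(classify_strictness("Test records should be archived."))
```

Such a filter can only pre-sort sentences for the reviewer; it cannot resolve cases where both "should" and "shall" occur for the same item within one standard.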
4.4 Metrics
Metric values used in this report are represented by a Kiviat diagram. Three axes exist
where A denotes the number of artefacts, S is the number of referenced (normative)
standards and P is the number of processes. Each axis is normalized according to the
maximum number counted for the six evaluated standards. The example below shows a
standard that is extensive and does not reference many others.
[Figure: Kiviat diagram with axes Artefacts (A), Referenced standards (S) and
Processes (P)]
However, it is important to keep in mind that simple values cannot represent complex
descriptions, but metrics can still be valuable for giving hints.
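The per-axis normalization behind the Kiviat diagram can be sketched as follows. The counts used in the example are hypothetical, not values measured in this work:

```python
def normalize_axes(counts, all_counts):
    """Normalize one standard's (A, S, P) counts by the per-axis maxima
    observed over all evaluated standards, yielding values in [0, 1]."""
    maxima = [max(values) for values in zip(*all_counts)]
    return tuple(c / m if m else 0.0 for c, m in zip(counts, maxima))

# Hypothetical (artefacts, referenced standards, processes) counts
# for six evaluated standards:
counts_per_standard = [(6, 12, 2), (25, 8, 16), (10, 3, 5),
                       (22, 2, 10), (9, 14, 6), (18, 7, 12)]

# A standard that is extensive but references few others:
print(normalize_axes((25, 2, 16), counts_per_standard))
```

A point at 1.0 on an axis simply means that standard had the highest count among the six; it says nothing about the quality of the underlying requirements, which is why the metrics view only gives hints.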
4.5 Judgement
Judgement is a kind of conclusion and concerns e.g. applicability, strengths, weaknesses,
how easy the standard is to use, consequences of using it and general qualities of the
standard. The input to this view is the other views and aspects directly from the standard.
5.1.1.1 EN 954-1
This part of the standard considers safety functions that are defined as functions that
enable the system to achieve a safe state at fault. A safety function could be a separate
function added for increasing safety. The system behaviour at fault is classified in 5
categories. These are based on resistance to faults and do not consider probability values.
The choice of category depends on the application and also the part of system under
consideration. The selected category can be used for assessment. The standard is general
and covers e.g. electrical, hydraulic, pneumatic and mechanical systems. The standard
refers to several other (normative) standards.
When using the standard it is necessary to check to what extent the risk can be reduced
for each applied safety-related part e.g. for an emergency stop.
A list of safety functions with extra requirements and relation to standards is given
containing e.g. Stop function, Emergency stop function, Manual reset, Start and restart,
Local control function, Muting, Manual suspension of safety functions, Unexpected start-
up, Indications and alarms, Response time, Safety-related parameters, Man-machine
interface, Fluctuations, loss and restoration of power sources. For all, references and
additional requirements are listed (where applicable).
Categories, as mentioned above, are defined according to fault resistance. At the lowest
level the safety function can be lost at fault, at the highest level accumulated faults could
be handled. Each category is described and guidance and examples are given.
5.1.1.2 EN 954-2
This part of the standard is exclusively defined for validation, but it is not sufficient for
programmable systems. It complements part 1 by giving the conditions for
validation. The purpose is to validate the safety-related parts and their associated
category. Requirements on documentation are given according to category.
5.1.3 Mappings
The explicitly defined artefacts are listed below.
• Validation plan
• Validation report
• User information
• Test plan
• Test records
• Fault list
The explicitly defined processes are mapped to SPICE processes according to the table
below.
• EN 842
• EN 981
• EN 982
• EN 983
• prEN 999
• EN 1037
• EN 1050
• EN 60204-1
• EN 60447
• EN 60529
• EN 60721-3-0
• IEC 50 (191):1990
5.1.4 Metrics
The Kiviat diagram is shown below. Three axes exist where A denotes the number of
artefacts, S is the number of referenced (normative) standards and P is the number of
processes. Each axis is normalized according to the maximum number counted for the six
evaluated standards.
5.1.5 Judgement
This standard starts from a functional requirement point of view and from the EU
machinery directive perspective. The safety-related aspects come from requirements
concerning avoidance of human harm (also e.g. ergonomic aspects are included) and
damage to machinery. The standard lists many safety functions that could be considered
for development and assessment and thus the standard serves as a reference. The
classification of categories makes it possible to classify and assess the system without
considering the implementation. On the other hand, since e.g. probabilities are not
defined the development cannot directly be controlled by category and so further
requirements are necessary. The standard requirements are complemented with many
guidelines, and the annexes contain e.g. lists of safety principles and fault lists.
Different types of techniques are described, e.g. pneumatic and hydraulic implementations,
and thus programmable systems are only one of several ways to implement safety
functions. Thus there is not much information concerning software development. For
example, Annex C describes hardware and mechanical fault types but not software faults.
From the mappings we can see, as expected, that mapping to SPICE is not natural. Just a
few processes are mentioned and their artefacts are expressed in general terms.
Many standards are referenced, and this is one sign that the standard has to be
complemented with other standards. For assessments the standard as such is more
suitable, e.g. when verifying claimed categories for an application.
The figure below shows the relation between the analysed parts.
[Figure: relation between the analysed parts – Part 1: Overall system; Part 2:
Programmable system and Hardware; Part 3: Software]
This part handles the overall system issues and in principle only programmable systems;
however, it is pointed out that safety could rely on other kinds of systems as well, e.g.
mechanical. The main purpose is to handle systems where safety functions are carried out
by programmable electronic systems. For this type of systems an overall safety life cycle
is defined.
Four Safety Integrity Levels (SIL 1-4), based on risk, are defined according to probability
of failure. Two different groups of values exist depending on the amount of demands (low
and high/continuous demand). For low demand the SIL limits range from 10^-1 to 10^-5 per
demand, and for high/continuous demand they range from 10^-5 to 10^-9 faults per hour.
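The two groups of limits quoted above span the per-level bands commonly tabulated for IEC 61508. The sketch below encodes those bands; the exact boundary values are recalled from the standard as commonly cited and should be verified against the original text before any applied use:

```python
# Average probability of failure on demand (low demand) and probability of
# dangerous failure per hour (high/continuous demand), as
# (lower_bound, upper_bound) per SIL. Values reflect the commonly
# tabulated IEC 61508 bands; verify against the original standard.
LOW_DEMAND_PFD = {1: (1e-2, 1e-1), 2: (1e-3, 1e-2),
                  3: (1e-4, 1e-3), 4: (1e-5, 1e-4)}
HIGH_DEMAND_PFH = {1: (1e-6, 1e-5), 2: (1e-7, 1e-6),
                   3: (1e-8, 1e-7), 4: (1e-9, 1e-8)}

def sil_for(value, table):
    """Return the SIL whose band contains the given failure measure,
    or None if the value falls outside all bands."""
    for sil, (lo, hi) in sorted(table.items(), reverse=True):
        if lo <= value < hi:
            return sil
    return None

print(sil_for(5e-4, LOW_DEMAND_PFD))   # low-demand band containing 5e-4
print(sil_for(2e-9, HIGH_DEMAND_PFH))  # high-demand band containing 2e-9
```

Note that the two tables use different measures (per demand vs. per hour), so a value must always be looked up in the table matching the demand mode of the safety function.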
The overall safety life cycle is the basis of the standard. The life cycle consists of 16
phases and describes the flow of actions; however, it is stressed that iterations between
the phases are normal.
The following phases are defined: Concept, Overall scope definition, Hazard and risk
analysis, Overall safety requirements, Safety requirements allocation, Overall operation
and maintenance planning, Overall safety validation planning, Overall installation and
commissioning planning, Programmable system realisation, Other technology realisation
(outside the scope of this standard), External risk reduction facilities (outside the scope of
this standard), Overall installation and commissioning, Overall safety validation, Overall
operation, maintenance and repair, Overall modification and retrofit, and
Decommissioning or disposal. Programmable system realisation is further specified by
including hardware and software aspects.
For each phase a description is given in short form (table) and the following is specified:
• objectives
• scope
• input
• output
Verification, Management of functional safety, and Functional safety assessment are not
listed in short form but in expanded form (separate chapters).
An important phase is the Hazard and risk analysis. The analysis shall include fault
conditions and foreseeable misuse. This includes also human factors and abnormal or
infrequent modes of operation. Since Safety Integrity Levels are based on probabilities
the likelihood of hazardous events has to be calculated.
The phase Safety requirements allocation is important and the reasons are listed below.
• Safety functions are allocated to the system.
• Safety Integrity Level is specified for each safety function. Note that this is made
from a system/requirement point of view without considering software and
hardware aspects.
Guidelines are given for how to handle SIL and combinations of safety functions.
Verification takes place at all phases and the means are reviews, analyses and tests.
Functional safety assessments also take place at all phases and are made in order to verify
the achieved safety level. Personnel competence and independence (for person,
department or organization) are described; for independence, HR (Highly Recommended)
and NR (Not Recommended) ratings are used, related to Safety Integrity Level and to
consequences.
This part is a refinement of Part 1 and applicable to systems that contain at least one
programmable subsystem. Thus it is necessary to study Part 1 first (there are also direct
references to Part 1). The system should be based on a decomposition into subsystems.
The programmable system safety life cycle (Programmable system realisation phase)
consists of six sub-phases: Safety requirement specification (concerning functions and
integrity levels), Safety validation planning, Design and development, Integration,
Installation, commissioning, operation and maintenance, and Safety validation.
For each phase of the safety lifecycle described here a description is given in short form
(table) and the following is specified:
• objectives
• scope
• input
• output
The life cycle also includes Modification, Verification, and Functional safety assessment
aspects (the same as in Part 1).
More detailed requirements are stated concerning e.g. performance, modes of operation,
hardware/software interaction, probability of critical hardware faults, fault avoidance,
fault handling, and independence of safety and non-safety functions.
Safety Integrity Level (SIL) is split up for hardware and software. For hardware, tables
show the highest possible SIL according to the chosen fault tolerance and the safe failure
fraction, i.e. the fraction of faults that lead to a non-dangerous situation. Further, the
tables are specified for two categories, A and B, defined according to the information
available concerning the constituent components. Guidelines are given for how to decide
the maximum hardware SIL depending e.g. on hardware architecture.
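The safe failure fraction can be expressed as a simple ratio; the sketch below uses the formula as commonly stated for IEC 61508 Part 2, with invented failure rates.

```python
# Sketch: safe failure fraction (SFF), the fraction of failures that are
# either safe or dangerous but detected by diagnostics. All rates are
# failure rates per hour; the example values are invented.
def safe_failure_fraction(lambda_safe: float, lambda_dd: float,
                          lambda_du: float) -> float:
    """SFF = (safe + dangerous detected) / total failure rate."""
    total = lambda_safe + lambda_dd + lambda_du
    return (lambda_safe + lambda_dd) / total

sff = safe_failure_fraction(lambda_safe=6e-7, lambda_dd=3e-7, lambda_du=1e-7)
print(f"SFF = {sff:.0%}")  # -> SFF = 90%
```

A higher SFF, together with the hardware fault tolerance, allows a higher maximum SIL to be claimed via the category A/B tables.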
Guidelines are given for how to compute probabilities of random hardware faults.
Aspects included are e.g. the diagnostic test interval and the hardware architecture. Also
described are fault avoidance, control of systematic faults (e.g. software and operator
faults), and fault detection.
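As one concrete instance of such a probability computation, a commonly used first-order approximation for a single-channel (1oo1) architecture is sketched below; detailed formulas are given in the guideline part of the standard (Part 6), and the numbers here are only illustrative.

```python
# Sketch: first-order average probability of failure on demand (PFDavg)
# for a single (1oo1) channel. lambda_du is the dangerous undetected
# failure rate (per hour); the proof test interval is in hours.
def pfd_avg_1oo1(lambda_du: float, proof_test_interval_h: float) -> float:
    """PFDavg ~= lambda_DU * T_proof / 2 (valid when the product is small)."""
    return lambda_du * proof_test_interval_h / 2.0

# e.g. lambda_DU = 2e-7 per hour and a yearly proof test (8760 h):
print(pfd_avg_1oo1(2e-7, 8760))  # approx. 8.76e-4, in the SIL 3 low-demand band
```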
A number of implementation aspects are listed, e.g. failure rates, diagnostic coverage,
hardware fault tolerance, safe failure fraction, highest possible SIL, and proven-in-use
status. Data communication is handled separately.
Within this part, references are given to Part 3 concerning software, but programmable
systems and hardware aspects are handled in this part. This part also includes hardware
architecture, sensors and actuators.
This part is a refinement of Parts 1 and 2 and focuses on software for a programmable
system. Thus it is necessary to study Parts 1 and 2 first (there are also direct references
to them). The software should be partitioned into modules.
The software safety life cycle (Programmable system realisation phase) consists of six
sub-phases: Software safety requirement specification (concerning functions and integrity
levels), Software safety validation planning, Software design and development,
Programmable electronics integration, Software operation and modification, and Software
safety validation.
Software design and development is further divided into: Architecture, Support tools and
programming languages, Detailed design and development, Detailed code
implementation, Software module testing, and Software integration testing.
For each phase of the software safety lifecycle described here a description is given in
short form (table) and the following is specified:
• objectives
• scope
• input
• output
The lifecycle also includes Software modification, Software verification, and Software
functional safety assessment aspects (covering the same aspects as in Part 1). Direct
reference is made to the V-model for development; however, iterations are necessary
when changes occur.
5.2.3 Mappings
The explicitly defined artefacts are listed below.
• Description (overall concept)
• Description (overall scope definition)
• Description (hazard and risk analysis)
• Specification (overall safety requirements)
• Description (safety requirements allocation)
• Plan (overall operation and maintenance)
• Plan (overall safety validation)
• Plan (overall installation)
• Plan (overall commissioning)
• Realisation of E/E/PE safety-related systems
• Report (overall installation)
• Report (overall commissioning)
• Report (overall safety validation)
• Log (overall operation and maintenance)
• Request (overall modification)
• Report (overall modification and retrofit impact analysis)
• Log (overall modification and retrofit)
• Report (overall decommissioning or disposal impact analysis)
• Plan (overall decommissioning or disposal)
• Log (overall decommissioning or disposal)
• Plan (safety)
• Plan (verification)
• Report (verification)
• Plan (functional safety assessment)
• Report (functional safety assessment)
• Specification (E/E/PES safety requirements)
• Plan (E/E/PES safety validation)
• Description (E/E/PES architecture design)
• Specification (programmable electronic integration tests)
• Specification (integration tests of programmable electronic and non-programmable
electronic hardware)
• Description (hardware architecture design)
• Specification (hardware architecture integration tests)
• Specification (hardware modules design)
• Specifications (hardware modules test)
• Hardware modules
• Report (hardware modules test)
• Report (programmable electronic and software integration test)
• Report (programmable electronic and other hardware integration test)
• Instruction (user)
• Instruction (operation and maintenance)
• Report (E/E/PES safety validation)
• Instruction (E/E/PES modification procedures)
• Request (E/E/PES modification)
• Report (E/E/PES modification impact analysis)
• Log (E/E/PES modification)
• Plan (E/E/PES safety)
• Plan (E/E/PES verification)
• Report (E/E/PES verification)
• Plan (E/E/PES functional safety assessment)
• Report (E/E/PES functional safety assessment)
• Specification (software safety requirements)
• Plan (software safety validation)
• Description (software architecture design)
• Specification (software architecture integration tests)
• Specification (programmable electronic and software integration tests)
• Instruction (development tools and coding manual)
• Description (software system design)
• Specification (software system integration tests)
• Specification (software module design)
• Specification (software module tests)
• List (source code)
• Report (software module test)
• Report (code review)
• Report (software module integration test)
• Report (software system integration test)
• Report (software architecture integration test)
• Report (programmable electronic and software integration test)
• Instruction (user)
• Instruction (operation and maintenance)
• Report (software safety validation)
• Instruction (software modification procedures)
• Request (software modification)
• Report (software modification impact analysis)
• Log (software modification)
• Plan (software safety)
• Plan (software verification)
• Report (software verification)
• Plan (software functional safety assessment)
• Report (software functional safety assessment)
The explicitly defined processes are mapped to SPICE processes according to the table
below.
5.2.4 Metrics
The Kiviat diagram is shown below. There are three axes, where A denotes the number
of artefacts, S the number of referenced (normative) standards, and P the number of
processes. Each axis is normalised to the maximum count among the six evaluated
standards.
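The normalisation used for these diagrams can be sketched as below; the counts are invented placeholders, not the report's actual data.

```python
# Sketch of the Kiviat-axis normalisation: each count (A = artefacts,
# S = referenced standards, P = processes) is divided by the per-axis
# maximum over all six evaluated standards. Counts here are invented.
def normalise_axes(counts: dict, maxima: dict) -> dict:
    """Scale each axis value into [0, 1] by its per-axis maximum."""
    return {axis: counts[axis] / maxima[axis] for axis in counts}

maxima = {"A": 92, "S": 12, "P": 20}      # hypothetical per-axis maxima
standard = {"A": 92, "S": 10, "P": 20}    # hypothetical counts for one standard
print(normalise_axes(standard, maxima))   # every value lies in [0, 1]
```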
5.2.5 Judgement
This is a huge application independent standard (part 1-3 180 pages, 4-7 exist also) for
system and software development and defined to be a general (generic) standard that
could be used for creating more specific standards. One example is EN 50128 and there
are others. This standard is generally accepted and widely used.
The reason it is so big is that many details and a lot of guidance are included. The size
as such could be a problem, but the clear structure between the first three parts improves
the situation, i.e. a clear top-down procedure is defined. The structure is also similar at
the top level (including other means than programmable systems), at the programmable
system level (including hardware) and at the software level. Also important is the
functional safety assessment that occurs at each level. At all three levels the top-down
approach is visible, e.g. when allocating safety requirements, decomposing into
subsystems and software modules, etc.
For each lifecycle description (one in each part) there is a figure (with boxes for the
phases) and an associated table with lifecycle phase, objectives, scope, requirement sub
clause, inputs and outputs. This makes the scope easier to grasp and connections between
the parts are more obvious. Each box of the figure is described thoroughly in separate
chapters.
The SIL levels are defined in Part 1 as probabilities decided in advance by looking at the
role of the developed application. Thus there is a quantity that is mapped to, and guides,
the implementation. Look-up tables (showing e.g. techniques, measures, and
documentation) are easy to use once SIL levels have been decided.
Parameterised software is not specifically addressed, nor are subcontractor roles and
COTS.
The standard lists many processes and many artefacts (the maximum for both among the
six evaluated standards). The mapping to SPICE is not completely obvious. The number
of referenced standards (not so few) does not reflect the fact that this is a “stand-alone”
standard.
The language is direct and clear (it tells you what to do) and the content is not difficult to
grasp. For the extracted version it was relatively easy to separate the guidance from the
rest.
5.3 IEC 61713
5.3.1 Summary
Here dependability means availability performance, and the relevant factors for it are:
reliability performance, maintainability performance, and maintenance support
performance. Thus safety-critical aspects are not the major concern, and no division into
levels of risk or criticality is made. This standard is a support to IEC 60300-3-6. The
basis of this standard is the software life cycle processes defined in ISO/IEC 12207, and
thus the definition is close to SPICE.
Primary software lifecycle processes (i.e. the acquisition, supply, development,
operation, and maintenance processes) are mainly considered and described. Support and
organization processes are only briefly commented on. The following roles are used:
Acquirer, Supplier, Developer, Operator, and Maintainer.
For primary software life-cycle processes, activities that influence dependability are
listed and guidelines are given.
• For acquisition process: Specification of dependability requirements, Selection of
supplier, Preparation of contracts, Supplier monitoring, Acceptance and
completion.
• For supply process: Initiation, Preparation of response, Contract, Planning,
Execution and control, Review and evaluation, Delivery and completion.
• For development process: Process implementation, System requirement analysis,
System architectural design, Software requirements analysis, Software
architectural design, Software detailed design, Software coding and testing,
Software integration, Software qualification testing, System integration, System
qualification testing, Software installation, and Software acceptance support.
• For operation process: Process implementation, Operational testing, System
operation, User support.
• For maintenance process: Process implementation, Problem and modification
analysis, Modification implementation, Maintenance review/acceptance,
Migration, Software retirement.
5.3.3 Mappings
The explicitly defined artefacts are listed below.
• Contract
• Project management plan
• System engineering management plan-SEMP
• Software development plan-SDP
• Work breakdown structure (WBS)
• Software QA plan
• Reliability program plan
• Software FMECA
The explicitly defined processes are mapped to SPICE processes according to the table
below.
5.3.4 Metrics
The Kiviat diagram is shown below. There are three axes, where A denotes the number
of artefacts, S the number of referenced (normative) standards, and P the number of
processes. Each axis is normalised to the maximum count among the six evaluated
standards.
5.3.5 Judgement
This application-independent standard does not directly relate to safety, but by extending
the meaning of dependability to include safety the difference from the other standards is
decreased. The procedure when developing this standard was to look at each activity and
give guidelines on what could be improved with regard to dependability.
As seen above this standard maps directly to SPICE processes (as expected).
A significant aspect of this standard is the discussion of roles and these concern
acquisition, supply, development, operation, and maintenance.
This standard supports IEC 60300-3-6 by adding dependability aspects for all life cycle
activities. Thus this standard is not “stand-alone”, even if it does not reference many
other standards.
Since safety-related aspects are not considered directly, there is no criticality
classification.
Background information is clearly separated from the guidance. The language is clear
and the content is not difficult to grasp.
5.4 RTCA/DO-178B
5.4.1 Summary
The aviation standard RTCA/DO-178B ”Software Considerations in Airborne Systems
and Equipment Certification” is the most important standard for certification of software
used in commercial aircraft in the US and in the EC. In this standard, the Annexes are
normative and the Appendices informative.
The purpose is to give guidelines for software development for airborne systems. Thus
there is no “shall” or “must” in the text. The focus is on software life cycle processes:
definition of objectives, how to achieve them, and how to document that the objectives
have been fulfilled. The relation between system life cycle processes and software life
cycle processes is also specified.
Five software levels (A, B, C, D and E) are specified, based on consequences ranging
from catastrophic failure to no effect on safety. The levels are initially set by the system
safety assessment process. Guidelines are given for handling safety aspects using
software levels. No software failure rates are considered. Aspects of user-modifiable
software, option-selectable software, COTS, and field-loadable software are included.
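The correspondence between software levels and failure-condition severities can be sketched as a simple lookup; the severity names follow the usual DO-178B categories.

```python
# Sketch: the five DO-178B software levels and the worst failure-condition
# severity a software anomaly at that level could contribute to.
SOFTWARE_LEVELS = {
    "A": "catastrophic",
    "B": "hazardous/severe-major",
    "C": "major",
    "D": "minor",
    "E": "no effect",
}

def level_for_condition(condition: str) -> str:
    """Return the software level associated with a failure condition."""
    for level, severity in SOFTWARE_LEVELS.items():
        if severity == condition:
            return level
    raise ValueError(f"unknown failure condition: {condition!r}")

print(level_for_condition("major"))  # -> C
```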
Three types of software life cycle processes are defined: the software planning process,
the software development processes (requirement, design, coding, and integration
processes) and the integral processes (verification, configuration management, quality
assurance and certification liaison processes). The integral processes are performed
concurrently with the software development processes; for example, a development
process can require that certain integral processes are performed. The processes may be
iterative. Guidance on how to handle processes is given. For each process the objectives,
the activities and sometimes further aspects, e.g. documents, are defined. The only
specifically airborne-related process is the certification liaison process.
Requirements are split into three types: high-level, low-level and derived. The latter are
requirements that are not directly traceable to high-level requirements, e.g.
implementation requirements.
A relatively large part of the standard concerns verification (analysis, tests etc).
Documentation (called software life cycle data) is gathered in a separate chapter. For each
document a list of included aspects is given. Some specific airborne related documents
exist.
See Annex 8.
5.4.3 Mappings
The explicitly defined artefacts are listed below.
• Control data (such as minutes, memoranda)
• Plan for Software Aspects of Certification
• Software Development Plan
• Software Verification Plan
• Software Configuration Management Plan.
• Software Quality Assurance Plan
• Software Requirements Standards
• Software Design Standards
• Software Code Standards
• Software Requirements Data
• Design Description
• Source Code
• Executable Object Code
• Software Verification Cases and Procedures
• Software Verification Results
• Software Life Cycle Environment Configuration Index (SECI)
• Software Configuration Index (SCI)
• Problem Reports
• Software Configuration Management Records
• Software Quality Assurance Records.
• Software Accomplishment Summary
• Tool Qualification Plan
• Tool Operational Requirements
• Tool Accomplishment Summary
• User installation guides
• User manuals.
• Product service history
The explicitly defined processes are mapped to SPICE processes according to the table
below.
5.4.4 Metrics
The Kiviat diagram is shown below. There are three axes, where A denotes the number
of artefacts, S the number of referenced (normative) standards, and P the number of
processes. Each axis is normalised to the maximum count among the six evaluated
standards.
5.4.5 Judgement
This application-specific standard is directed towards avionics, but this is noticeable only
in a few separate chapters. For example, certification and probably also the emphasis on
user-modifiable software are application specific. Thus this standard can actually be used
quite generally. Even though the standard points out that it contains only guidelines
(must and shall are removed from the text, lists are not necessarily complete, examples
are just examples, etc.), its impact seems to be stronger. One reason is that the standard
is widely used and generally accepted.
One peculiar thing is that there are no normative referenced standards. Even though the
purpose is to be “stand-alone”, not everything is stated in the standard; for example, it
covers most aspects of software development, but not organization aspects.
The layout of the standard is according to processes but since all documents are listed in a
separate chapter one could alternatively start from this aspect instead.
The standard is not that easy to grasp directly. One reason is its reference-like character:
in a way, a reader has to know it before reading it, i.e. in practice it has to be read
several times. Another reason is that it does not clearly point out what to do; instead it
describes different qualities (as a guideline would do). This also made it the most
difficult of the six evaluated standards to generate the extracted version from.
Since categories are based on software criticality, more facts are needed for guiding
development; for example, software that is highly critical but extremely unlikely to fail
must be handled in some other way.
That the standard is safety-related is shown by the addressed application type, i.e.
aircraft, and by the definition of software levels.
5.5 IEC 60601-1-4
5.5.1 Summary
Medical systems are subject to standard IEC 60601-1-4 “Medical Electrical Equipment –
Part 1: General Requirements for Safety – 4. Collateral Standard: Programmable
Electrical Medical Systems”. This is a collateral standard to IEC 60601-1. This standard
assumes that a process is followed.
The standard specifies what to do but not how to do it. Thus it does not consider software
or hardware aspects i.e. implementation issues. The standard is focused on risk handling.
The list below shows the considered aspects.
• Documentation.
• Risk management plan.
• Development lifecycle. It shall be selected but there is no guidance on which to
choose.
• Risk management process. It includes risk analysis and risk control.
• Qualification of personnel.
• Requirement specification.
• Architecture.
• Design and implementation.
• Verification.
• Validation. Independence aspects are addressed.
• Modification.
• Assessment.
Iterations are expected.
A fundamental aspect is the Risk management file, where all safety-related information is
placed. Its contents are shown in a figure, together with a more detailed view of the Risk
management summary (which is included in the Risk management file).
5.5.3 Mappings
The explicitly defined artefacts are listed below.
• Instructions for use
• Risk management file
• Quality records
• Risk management summary
• Risk management plan
• Verification plan
• Validation plan
• PEMS requirement specification
• Subsystem (e.g. PESS) requirement specification
• PEMS architecture specification
• PESS architecture specification
• Subsystem design specification
• Subsystem test specification
• Assessment report
The explicitly defined processes are mapped to SPICE processes according to the table
below.
5.5.4 Metrics
The Kiviat diagram is shown below. There are three axes, where A denotes the number
of artefacts, S the number of referenced (normative) standards, and P the number of
processes. Each axis is normalised to the maximum count among the six evaluated
standards.
5.5.5 Judgement
This is a collateral standard. It is completely focused on what to do, not on how it shall
be done. Some (but not much) guidance is given in annexes. The layout is close to a
checklist and very easy to read. It is actually very close to the extracted version, i.e.
focused on processes, artefacts and referenced standards, and was the easiest of the six
evaluated standards to extract.
From the mappings we can see that this is a “small” standard that relies on others
concerning processes and artefacts. It does not map well to SPICE since processes are
not its main issue; instead it is a specific standard for risk handling within medical
applications.
5.6 EN 50128
5.6.1 Summary
The European standard EN 50128 “Railway Applications: Software for Railway Control
and Protection Systems” is used in the railway industry all over Europe. This standard is
one of a group of railway standards which together are intended to be complete within
their scope (for railway applications, EN 50126 and EN 50129 should also be studied).
Even if not referenced, this standard is based on IEC 61508. It is primarily focused on
new software development.
Five software integrity levels (SIL), 0-4, are specified (4 is the most critical and 0 is
non-safety-related software), but the choice of SIL according to risk is not considered.
The system level is decided before applying this standard and documented in the System
safety requirements specification, together with the safety functions allocated to
software. Here the software SIL is handled and guidelines are included.
This standard is focused on methods for achieving safe systems and considers only
aspects directly related to software. The corresponding functional steps are: Definition of
requirements, Design, development and testing, Hardware integration, Validation, and
Maintenance. Iterations are possible. The activities Verification, Assessment, and Quality
assurance run in parallel with the development.
The following functional steps are defined. For each step, input and output documents are
specified.
• Software requirement specification
• Software architecture
• Software design and implementation
• Software/hardware integration
• Software validation
• Software maintenance
The following tasks run in parallel with the development. For each task, input and output
documents are also specified.
• Software verification and testing
• Software assessment
• Software quality assurance
A document cross reference table is given showing the relation between document
(created/used), phase and associated clause in this standard.
If techniques and measures rated HR (Highly Recommended) for the chosen SIL are not
used, this must be justified.
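The rule above can be pictured as a simple check that flags which HR techniques still need a documented rationale; the technique table in the sketch is an invented placeholder, not the standard's actual annex tables.

```python
# Sketch: flag Highly Recommended (HR) techniques that were not applied
# for a given SIL and therefore need a documented justification. The
# mapping below is a hypothetical excerpt; the real HR ratings are in the
# standard's annex tables.
HR_TECHNIQUES = {
    3: {"boundary value analysis", "equivalence partitioning", "static analysis"},
    4: {"boundary value analysis", "equivalence partitioning", "static analysis",
        "formal proof"},
}

def deviations_needing_justification(sil: int, chosen: set) -> set:
    """Return HR techniques for this SIL that were not chosen."""
    return HR_TECHNIQUES.get(sil, set()) - chosen

missing = deviations_needing_justification(4, {"static analysis", "formal proof"})
print(sorted(missing))  # -> ['boundary value analysis', 'equivalence partitioning']
```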
5.6.3 Mappings
The explicitly defined artefacts are listed below.
• System Requirements Specification
• System Safety Requirements Specification
• System Architecture Description
• System Safety Plan
• SW Quality Assurance Plan
• SW Configuration Management Plan
• SW Verification Plan
• SW Integration Test Plan
• SW/HW Integration Test Plan
• SW Validation Plan
• SW Maintenance Plan
• Data Preparation Plan
• Data Test Plan
• SW Requirements Specification
• Application Requirements Specification
• SW Requirements Test Specification
• SW Requirements Verification
• SW Architecture Specification
• SW Design Specification
• SW Arch. and Design Verification Report
• SW Module Design Specification
• SW Module Test Specification
• SW Module Verification Report
• SW Source Code
• SW Source Code Verification Report
• SW Module Test Report
• SW Integration Test Report
• Data Test Report
• SW/HW Integration Test Report
• SW Validation Report
• SW Assessment Report
• SW Change Records
• SW Maintenance Records
• SW coding standards
• SW Requirements Verification Report
The explicitly defined processes are mapped to SPICE processes according to the table
below.
5.6.4 Metrics
The Kiviat diagram is shown below. There are three axes, where A denotes the number
of artefacts, S the number of referenced (normative) standards, and P the number of
processes. Each axis is normalised to the maximum count among the six evaluated
standards.
5.6.5 Judgement
For a complete overview EN 50126 and EN 50129 should also be considered. The focus
for this standard is on new development of software.
The clauses of this standard focus in some cases on what to produce, e.g. one chapter is
“Software requirements specification”, and in other cases on processes, e.g. “Software
verification and testing”. “Software requirements specification” contains both functional
and test specifications. As a consequence it is not clear what the difference is between
processes, activities, procedures, tasks, phases, etc.; in some places an area of interest is
described instead. Some chapters contain different aspects that would be better handled
separately, e.g. the chapter “Lifecycle issues and documentation” should have been split
in two. No lifecycle model is specified, but since two examples are given, these are
assumed to be starting points. Taken together it is a little confusing, and this can actually
be seen in the mapping to SPICE processes.
Otherwise the language is clear and the content is easy to grasp. Each phase is described
by objectives, input, output and requirements, which makes orientation easy. There is a
document cross-reference table that is useful for an overview and connects phases,
documents, clauses, and the use and definition phase of documents.
Parameterisation and COTS are included. All phases concerning parameterisation are
gathered in a separate chapter, i.e. as a parallel development line, which could be
practical since it is a specific feature.
Different roles are included, e.g. assessor, validator, designer, etc., and in many cases
the standard's requirements start from a specific role.
The large number of listed techniques, even if not completely updated, gives a valuable
start for finding suitable methods in actual development.
Not much is railway specific and thus in principle the standard could be used for any
application.
Note that when a number of aspects are listed below, the list is not necessarily complete.
Instead, the most important aspects are shown, together with aspects of special software
interest.
6.2 EN 954
6.2.1.1 EN 954-1
Validation is made for the specified safety functions, their realisation, and the categories.
As a result, redesign may have to be made, i.e. an iterative procedure is assumed.
Validation is made using tests and analyses according to the validation plan (containing
the validation strategy). Only validation of safety-related parts is considered in the
standard. Activities and decisions shall be documented, and test procedures and their
results shall be documented in a validation report.
Validation analysis can be made using e.g. fault lists, fault tree analysis, failure mode and
effect analysis, criticality analysis, checklists etc.
Validation testing includes functional testing of the safety functions. It is important that
both normal and foreseeable abnormal cases are considered. Validation testing shall also
verify the categories. Suggested methods are: analysis based on circuit diagrams,
practical tests on circuits, and simulation. Tests related to environmental parameters, e.g.
temperature and vibration, shall also be made.
6.2.1.2 EN 954-2
Analysis should be started as early as possible, in parallel with the design process. Here
mechanical, pneumatic, hydraulic and electrical technologies are included. The standard
points out that, in many cases, its validation of programmable electronic systems is not
complete.
An overview of the validation process is given. Details of the validation plan are listed
and a list of documents for validation is given. The emphasis is on hardware, but general
software aspects are also included. Documentation requirements relative to the categories
are listed.
Guidance is given for analysis. If analysis is not sufficient, testing is necessary. For
tests, a test plan shall be made with accompanying test records. Guidance is given for
how to carry out testing.
Fault lists shall be made defining the faults to be considered (some might be excluded).
For each category, guidance is presented for validation according to the definition of
categories. Also guidelines for validation of environmental and maintenance requirements
are stated.
6.3 IEC 61508
The objective of the Overall safety validation planning is to create an overall validation
plan. This standard specifies a number of aspects that shall be included such as: who will
validate and when, list of relevant modes of operation, specification of the safety-related
system, validation strategy, confirmation means including means for safety function and
safety integrity level confirmation, environment specification, policies and procedures for
evaluation.
The objective of Overall safety validation is to validate that the system fulfils overall
safety function requirements and overall safety integrity levels requirements. The
validation is made according to the overall validation plan taking into account safety
requirements allocation. Results shall be documented including safety functions being
validated (by test or by analysis), tools and equipment, results with judgement, and
configuration identification. The resulting action (e.g. a change request) shall also be
documented.
Verification is used for verifying (by review, analysis and/or test) that outputs of any
lifecycle phase fulfil objectives and requirements for that phase. For each such phase
there shall be a corresponding plan that shall be used for verification. The plan shall
contain criteria, techniques, and tools to be used. Results with judgement shall be
documented.
The objective of the programmable system safety validation planning is to plan the
validation. The planning shall define the steps for making it possible to show that the
system fulfils safety requirements. The following aspects should be considered: safety
requirements, the steps to be applied at validation, required environment, evaluation
procedures and policies.
The objective of programmable system safety validation is to validate that the system
fulfils safety function requirements and safety integrity levels requirements. The
validation is made according to the validation plan and each safety function shall be
validated by test and/or analysis. For each safety function the following shall be
documented: tools, equipment and results with judgement. The resulting action (e.g. a
change request) shall also be documented. Techniques and measures related to SIL for
fault avoidance during validation are given in Annex B.
Verification is used for verifying that outputs of a phase are correct. For each such phase
there shall be a corresponding plan that shall be used for verification. The plan shall
contain criteria, techniques, and tools to be used. The plan shall consider: strategies and
techniques, test equipment, relevant activities, and evaluation of results. In each design
and development phase, safety function and integrity level requirements shall be met. Test
cases and results with judgements shall be documented. It shall also be verified that safety
requirements are consistent with those defined at the overall level (see Part 1) and with
tests and documentation. The same consistency checks shall be made for design and
development verification.
The V-model is shown defining the role of software validation and testing. The annexes
give recommendations for software module testing and integration, HW/SW integration,
software safety validation, and software verification. For some of these more detailed
information is given as well.
The objective of the software safety validation planning is to develop a plan for software
safety validation. The planning shall define the steps for making it possible to show that
the software fulfils safety requirements. The following aspects should be considered: who
will validate and when, list of relevant modes of operation, specification of the safety-
related software, validation strategy, confirmation means including means for safety
function and safety integrity level confirmation, safety requirements, required
environment, evaluation procedures and policies. An assessor shall review the validation
planning scope and contents (according to SIL).
Each software module shall be tested during design and results shall be documented.
Software integration testing shall be specified during design and performed at integration.
The specification shall contain: tests to be performed, test environment, test criteria, and
procedures for corrective actions. Results with judgements shall be documented. When
changes are made, an impact analysis shall be performed in order to determine the
necessary amount of reverification.
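The impact analysis mentioned above can be sketched as a reverse-dependency reachability computation; the module names and the dependency map below are hypothetical examples, not taken from the standard:

```python
# Sketch of a change impact analysis: given which modules changed and which
# modules depend on which, compute the set that may need reverification.
# A real analysis would also consider data and control flow in detail.

def reverification_set(changed, depends_on):
    """Return all modules reachable from the changed ones via reverse
    dependencies, i.e. every module whose behaviour may be affected."""
    # Build the reverse map: module -> modules that use it.
    used_by = {}
    for mod, deps in depends_on.items():
        for d in deps:
            used_by.setdefault(d, set()).add(mod)

    affected = set(changed)
    stack = list(changed)
    while stack:
        mod = stack.pop()
        for user in used_by.get(mod, ()):
            if user not in affected:
                affected.add(user)
                stack.append(user)
    return affected

# Hypothetical module dependency graph.
deps = {
    "control_loop": {"sensor_io", "limits"},
    "sensor_io": set(),
    "limits": {"config"},
    "config": set(),
    "logger": set(),
}
# config changed -> limits uses config -> control_loop uses limits
print(sorted(reverification_set({"config"}, deps)))
# prints ['config', 'control_loop', 'limits']
```

The unchanged module `logger` falls outside the affected set, which is exactly the saving that motivates doing the impact analysis instead of reverifying everything.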
The objective of software safety validation is to validate that the system fulfils software
safety requirements according to SIL. The validation is made according to the software
validation plan. For each safety function the following shall be documented: tools,
equipment, and results with judgement. The resulting action (e.g. a change request) shall
also be documented. Testing is the main validation method. Software tools used for
validation shall be qualified.
Verification is used for verifying that outputs of a software phase are correct. For each
such phase there shall be a corresponding plan that shall be used for verification. The plan
shall contain criteria, techniques, and tools to be used. The plan shall consider: strategies
and techniques, verification tools, evaluation of results, corrective actions. Results with
judgement shall be documented.
Each software safety phase and its output should be verified according to specific criteria.
The list below shows the verification activities that shall be performed.
• Software safety requirements
• Software architecture
• Software system design
• Software module design
• Code
• Data
• Software module testing
• Software integration testing
• Programmable electronics integration testing
• Software safety requirements testing
It shall also be verified that safety requirements are consistent with those defined at the
higher level (see Part 2) and with software validation planning.
For software architecture the following should be considered: if the software architecture
fulfils requirements, if tests are adequate, and if partitioning is adequate. The same
aspects apply to software system design and software module design.
For data, the details of what shall be verified are given, e.g. data structures.
For each case guidelines and motivations are given. A general description of verification
is given at the end of the standard.
6.5 RTCA/DO-178B
Software verification is an integral process, i.e. it may be applied at several places. Apart
from recognizing the interface between system and software, system verification aspects
are not included in the standard.
A software verification plan shall be made defining how verification shall be made. Test
planning and verification activities should be defined in the software planning process. A
significant part of the standard is devoted to verification, underlining its importance.
Verification includes review, analysis and test. Five tables in the annex show objectives
and outputs related to software level (A-D) for verification. The outputs from verification
are stored in Software Verification Cases and Procedures and Software Verification
Results.
The following items are verified by reviews and analysis (for each a number of objectives
is listed).
• High-level requirements
• Low-level requirements
• Software architecture
• Source code
• Outputs of integration process, i.e. verifying linking and loading data and the
memory map.
• Test cases, procedures and results, i.e. that the development and execution of
tests were accurate and complete.
For testing, a diagram shows the testing process. Three types are defined:
Hardware/software integration testing, Software integration testing (integrating software
components), and Low-level testing (implementation). Requirement-based testing is
emphasized and two categories exist: normal range tests and robustness (abnormal) range
tests. Guidelines are given for requirement-based testing methods concerning
Hardware/software integration testing, Software integration testing, and Low-level
testing.
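The distinction between normal range tests and robustness tests can be illustrated with a small sketch; the requirement, the limits, and the function name are invented for the example and are not taken from the standard:

```python
# Illustration of requirement-based testing with normal range and robustness
# (abnormal range) test cases. The requirement "demand shall be limited to
# 0..100 and invalid input shall be rejected" is a hypothetical example.

def clamp_demand(demand):
    """Limit a numeric demand value to the range 0..100."""
    # Reject non-numeric input (bool is excluded on purpose).
    if not isinstance(demand, (int, float)) or isinstance(demand, bool):
        raise ValueError("demand must be numeric")
    return max(0.0, min(100.0, float(demand)))

# Normal range tests: valid inputs, including the boundaries.
assert clamp_demand(50) == 50.0
assert clamp_demand(0) == 0.0
assert clamp_demand(100) == 100.0

# Robustness tests: abnormal inputs that must still be handled safely.
assert clamp_demand(-1) == 0.0        # below range is clamped
assert clamp_demand(1e9) == 100.0     # far above range is clamped
try:
    clamp_demand("full")              # wrong type must be rejected
except ValueError:
    pass
else:
    raise AssertionError("non-numeric input was accepted")
```

The normal range cases confirm the requirement under expected conditions; the robustness cases exercise inputs that should never occur in operation but must not lead to unsafe behaviour if they do.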
The following documents (life cycle data) are related to verification (for each there is a
number of aspects that should be included):
• software verification plan
• software verification cases and procedures
• software verification results
Guidelines are given for handling previously developed code, change of application or
development environment, and software verification tool qualification.
In Annex DDD the V-model is shown with validation and verification activities.
6.7 EN 50128
Two development life cycles are shown (one is the V-model) where verification and
validation activities are explicitly shown. The annexes give recommendations in relation
to SIL for documentation, verification and testing, Software validation, and standard
clauses to be assessed. For some of these more detailed information is given as well.
From the document cross-reference table the following documents related to testing shall
be made: SW verification plan, SW integration test plan, SW/HW integration test plan,
SW validation plan, Data test plan, SW requirement test specification, SW requirement
verification report, SW architecture and design verification report, SW module test
specification, SW module verification report, SW source code verification report, SW
module test report, SW integration test report, Data test report, SW/HW integration test
report, and SW validation report.
The SW requirement test specification contains: test cases with inputs, outputs and
criteria for acceptance.
The SW module test specification defines the degree of test coverage and testing
procedures. The SW module test report contains test results (including test coverage) and
judgements.
The SW verification plan contains criteria, techniques, and tools to be used, and a
description of activities. The plan shall address: test equipment, what shall be
documented, evaluation of results (including reliability and test coverage), and personnel.
The SW integration test plan contains: test cases, test environment, and test criteria.
Verification independence issues shall be handled according to SIL. After each verification
activity a report shall be produced containing items not fulfilled and detected errors.
The SW integration test report contains results with judgements. If there is a failure, the
reasons for it shall be recorded.
The SW/HW integration test plan contains: test cases, test environment, and test criteria.
The SW/HW integration test report contains: test cases, results and judgement, and
opinion of verifier.
For validation, analysing and testing are the main activities. An independent party shall
evaluate the results of validation. The SW validation plan contains: strategy,
identification of steps for demonstrating adequacy of requirements, architecture, design,
and module design specifications.
The SW validation report contains: results (including test coverage) and judgements, and
identity of included items.
The UK Ministry of Defence has published this standard for programmable electronic
defence equipment. Two parts exist: Requirements and Guidance. Def Stan 00-55 is a
sector-specific standard for IEC 61508 Safety Integrity Level 4 software. Reference to
Def Stan 00-56 is included. The text below is a quotation from the standard. SRS means
Safety Related Software and SCS is Safety Critical Software.
“0.2 ….This standard places particular emphasis on describing the procedures necessary
for specification, design, coding, production and in-service maintenance and modification
of SCS (safety integrity level S4). It also details the less rigorous approaches that are
required to produce software of lower safety integrity levels (S1 to S3).”
“1.1 This Standard specifies the requirements for all SRS used in Defence Equipment. It
relates only to software and does not deal with safety of the whole system. Evidence of
the safety principles applied during the development of the SRS contributes to the overall
system safety case.
1.2 This Standard contains requirements for the tools and support software used to
develop, test, certify and maintain SRS through all phases of the project life cycles.”
The UK Ministry of Defence has published this standard for programmable electronic
defence equipment. Two parts exist: Requirements and Guidance. This standard defines a
systematic process for the safety analysis of defence equipment. The text below is a
quotation from the standard.
“0.2 This standard provides uniform requirements for implementing a system safety
programme in order to identify hazards and to impose design techniques and management
controls to identify, evaluate and reduce their associated risks to a tolerable level.”
“1.2 The purpose of this Part of the Standard is to define the safety programme
management procedures, the analysis techniques and the safety verification activities
which are applicable during the project lifecycle. This is in order to minimize the
possibility of a system entering service with unacceptable safety characteristics.”
The three railway standards EN 50126, EN 50128 (evaluated in this report) and EN
50129 belong together and are extensions of IEC 61508. They describe processes to be
followed for safety of railway applications. A specific notion is the safety case, a set of
arguments intended to convince a third party, e.g. a safety authority, that the system is
safe and can be used.
EN 50126 is the top-level document that defines the overall process. It defines Safety
Integrity Levels and the scope for EN 50128 and EN 50129.
This standard is focused on risk analysis and the purpose is to “provide guidelines for
selecting and implementing risk analysis techniques”. The standard does not focus on risk
management and risk control. A process is defined containing the following steps: scope
definition, hazard identification and consequence evaluation, risk estimation, verification,
documentation, and analysis update.
The purpose of this standard is to give guidance for production, operation, and
maintenance of “highly reliable software required for computers to be used in the safety
systems of nuclear power plants for safety functions” and “This includes safety actuation
systems, safety system support features, and the protection systems.” The standard is
widely used and describes the whole software lifecycle. Seven lifecycle phases are
defined: Software requirements analysis and specification, Development, Verification,
Hardware software integration, System validation, Operations, and Maintenance and
modification.
This standard is an application of the generic standard IEC 61508 and used for the
process industry.
Part 1 - General framework, definitions, system software and hardware requirements
This part is equivalent in scope with Parts 1, 2, 3 and 4 of IEC 61508. This is the
normative part.
Preliminary versions exist and the three parts of the standard may be published in 2002.
This standard is an application of the basic safety publication IEC 61508 to the machinery
sector. Programmable electronic systems are handled more extensively in this standard
than in the European standard EN 954-1 for safety-related parts of machine control
systems. The work is in progress, and the standard should not be expected to be published
before 2003. Below is a quotation of the scope of the standard:
“This International Standard specifies requirements and makes recommendations for the
design, integration and validation of safety-related electrical, electronic and
programmable electronic control systems (SRECS) for machines (see note 1). It is
applicable to control systems used, either singly or in combination, to carry out safety
functions on machines which are not portable by hand while working, including a group
of machines working together in a coordinated manner.”
ISO 12207 is a generally accepted standard for software lifecycle processes. This
standard gives a general framework that has to be specified further. The standard defines
and groups processes to be used in a software life cycle.
• Primary Processes: Acquisition, Supply, Development, Operation, and
Maintenance.
• Supporting Processes: Documentation, Configuration Management, Quality
Assurance, Verification, Validation, Joint Review, Audit, and Problem
Resolution.
• Organization Processes: Management, Infrastructure, Improvement, and
Training.
The following roles are used: Acquirer, Supplier, Developer, Operator, Maintainer.
This is the international equivalent of the European standard EN 954-1. The contents are
in principle the same; however, the references naturally differ.
7.12 MIL-STD-882D
Mishap Risk Management (System Safety)
“1.1 Scope. This document outlines a standard practice for conducting system safety.
The system safety practice as defined herein conforms to the acquisition procedures in
DoD Regulation 5000.2-R and provides a consistent means of evaluating identified
risks. Mishap risk must be identified, evaluated, and mitigated to a level acceptable (as
defined by the system user or customer) to the appropriate authority and compliant with
federal (and state where applicable) laws and regulations, Executive Orders, treaties,
and agreements. Program trade studies associated with mitigating mishap risk must
consider total life cycle cost in any decision. When requiring MIL-STD-882 in a
solicitation or contract and no specific paragraphs of this standard are identified, then
apply only those requirements presented in section 4.”
8 Conclusions
8.1 Methodology
The different views are a practical way to approach a standard in a shorter time than
would otherwise be possible. In this evaluation much of the vocabulary is borrowed from
the considered standard. An extra help would therefore be to study the terminology and
definitions often listed at the beginning of a standard. Many general words have a
commonly shared meaning, e.g. independence and safety integrity level (although
definitions vary). For others, e.g. dependability, fault, and failure, the definition
unfortunately varies with the standard.
The summary and judgement are plain English texts that describe the essence of the
standard. The mappings are formal but give a direct overview of processes and artefacts
(by name) and referenced standards. A further improvement would be to include qualities
with the mappings, as discussed under methodology earlier. Some examples are shown
below.
The methodology could also be expanded in other ways. One way is to introduce a new
mapping set, e.g. for techniques and measures. Another is to take a new perspective,
other than general and test, e.g. management or organization. Adding a new standard for
analysis within the current scope could be done directly and without problems.
Metrics can be improved in several ways. One could define clarity metrics such as
counting long words, words per sentence, etc. Other metrics could be the number of
"shall" vs. "should", or the number of forward references within the standard (many
would indicate that "a person has to know the standard before reading it"). It is very
important to point out that such metrics must be relied on with great care; it is impossible
to capture a quality perceived by humans in one or more numbers. In this perspective the
metric used in this evaluation might be sufficient for its purpose.
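A crude form of such metrics could be computed along the following lines; the threshold for "long words" and the sample sentence are arbitrary choices made for illustration:

```python
# Sketch of simple clarity metrics for a standard's text: counts of "shall"
# vs "should" and the share of long words. The long-word threshold of 10
# characters is an arbitrary illustrative choice.
import re

def clarity_metrics(text, long_word_len=10):
    words = re.findall(r"[A-Za-z]+", text.lower())
    return {
        "shall": words.count("shall"),
        "should": words.count("should"),
        "long_word_ratio": (
            sum(len(w) >= long_word_len for w in words) / len(words)
            if words else 0.0
        ),
    }

sample = ("The documentation shall contain sufficient information. "
          "Traceability should be maintained throughout the lifecycle.")
m = clarity_metrics(sample)
print(m["shall"], m["should"])  # prints: 1 1
```

As the surrounding text cautions, such numbers can at best flag candidate problem passages; they cannot by themselves establish whether a standard is clear.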
Finally it should be noted that the driving force behind the mappings and judgement is
the extracted standard versions. These are in some cases difficult to produce but give a
significant indication of the clarity of the standard.
SPICE also contains processes for organisation and management, which are addressed
only briefly or not at all in the evaluated standards. But the benefits of SPICE are that it is
generally accepted, that it takes an overall view of all processes, and that both safety-
related and non safety-related development can be mapped. Note the close resemblance
with ISO/IEC 12207, which is widely used today.
However, for all standards verification and validation are important processes. Plans are
produced for carrying them out, and results are documented. Validation concerns both
safety functions and SIL. Most of the standards recognise that verification and validation
are integral/support processes, i.e. they are applied at several (or all) phases of the
software lifecycle.
Whether processes are iterative, follow the V-model, etc. is of secondary concern and
could, at least to a significant extent, be considered independent. Thus there are more
software development standards around than necessary.
Note the strong influence of IEC 61508 as a generic standard useful for developing more
specific standards. With such a big standard there is a risk that e.g. a developer follows
the standard exactly and nothing more, without reflecting on whether anything is missing,
that is, something specific for his/her application development.
So, which of the evaluated standards is the best? Since the standards differ in scope we
have to look at the set of evaluated standards for different roles. Some conclusions
concerning roles are presented below.
• For a software developer, do not consider EN 954. IEC 61508 is the overall
answer but might be too much to start with if more limited aspects are of interest,
e.g. at a first survey. IEC 61713 is too general since it only gives guidance and
focuses on dependability aspects (according to its definition). RTCA/DO-178B
and EN 50128 resemble each other even though they are used for different
application areas. One of these is probably a good compromise. A specific aspect
of EN 50128 is its extensive list of techniques and measures (69 altogether) for
increasing safety. If EN 50128 is chosen, EN 50126 and EN 50129 should also be
considered. IEC 60601-1-4 is too specialised if general software development is
considered.
• For a standard developer, start with IEC 61508 if it concerns development and
EN 954 if it concerns requirements for machinery.
• For assessing safety integrity level (or corresponding name) EN 954 is suitable
for principal behaviour (resistance to faults) and IEC 61508 for quantitative
values and requirements on techniques, measures and documentation.
• For process improvement and assessment use SPICE as the overall reference
(perhaps also its capability model).
• For specific medical equipment, and if IEC 60601-1 is used, IEC 60601-1-4 is
the choice.
• If parameterised software is essential, RTCA/DO-178B and EN 50128 are
primary candidates since they explicitly mention this aspect. They are also
primary candidates if verification is the main issue or if activities related to SIL
are of primary interest.
Persons using a specific standard might find it insufficient and would like to use another
established standard instead, adding a minimum of company-specific requirements. A
natural way could then be to study IEC 61508 and SPICE. By conforming to these, much
information concerning practical devices and guidance could be retrieved directly, e.g.
via the Internet.
In a company, if more than one standard is used, this gives rise to extra costs, e.g.
concerning education of personnel, documentation, etc. By comparing the standards
internally, or by introducing a new, more general standard, it is possible to reach a
common view and achieve synergy effects.
If an enterprise wants to change its business goals to include new types of applications, a
comparison with the corresponding standards is necessary to see the influence on the
company organisation and methods.
Persons in a company may realise that they have to follow an established standard but
not know which one to choose. For example, could the same one be used for safety-
related and non safety-related applications?
9 References
[1] ISO/IEC TR 15504-1 to -9:1998 (SPICE)
Information technology – Software process assessment
ISBN 0-8186-8008-3
10 Annex 1: Project
10.1 Overall objectives
The overall purpose of the project is to ease the understanding of standards for safety-
related applications and to transfer a judgement of the standards to the reader without
him/her having to study them in detail. This overall purpose can be split into sub-
objectives according to the following list:
1. To thoroughly analyse and compare six specially selected standards for safety-
related applications and to reference and briefly describe other standards for
safety-related applications.
2. To define a methodology for comparing standards and to show that it is
applicable for additional standards.
3. To evaluate standards for safety-related applications from a test perspective.
Since DELTA and SP have somewhat different scopes the work has been approached
from a process side and from an artefact side. This was of importance when describing
the informal aspects of standards. For example, if opinions differed significantly then it
was possible to conclude that a standard leaves room for interpretations and vice versa.
The project has been active from March 2001 until December 2001.
Preliminary results have been presented at RTiS (Real Time in Sweden) conference in
Halmstad on August 22, 2001 and at the international testing conference EuroSTAR in
Stockholm on November 21, 2001.
DELTA has the following division in Norway: DELTA Electronic Testing, Norway.
DELTA has the following division in Sweden: DELTA Development Technology AB.
The safety-engineering group at DELTA is part of the Electronic Testing division. The
safety-engineering group works with consultancy and participates in customers'
development projects. The activities of the safety-engineering group are mainly
concerned with the following directives:
• Machinery Directive
• Low Voltage Directive
• Atex Directive
• Medical Device Directive
• R&TTE Directive.
The work of the safety-engineering group is often undertaken in collaboration with other
groups, specialised in testing of e.g. Fire Alarm and security equipment and EMC testing.
SP, Sweden
SP, the Swedish National Testing and Research Institute, is the national institute for
technical evaluation, testing, certification, metrology and research.
Research and development, which is a major activity, consists of two closely linked parts.
One concentrates on fundamental research in the fields of testing and metrology. The
other is concerned with applied work in technological and industrial sectors. Safety of
machinery is a sector to which much testing and research resource is devoted.
In organisations such as Eurolab, and various EC, EFTA and UN bodies, the ongoing
work is aimed at harmonising requirements, test methods and applications in order to
achieve as technically relevant, efficient and reliable services as possible. In other bodies,
such as Nordtest and Euromet, work is devoted to the technical development of methods
of testing and calibration, carried out by experts from the various countries working
closely together.
2 Normative references
• EN 292-1:1991,
Safety of machinery — Basic concepts, general principles for design — Part 1:
Basic terminology, methodology.
• EN 292-2:1991/A1:1995,
Safety of machinery — Basic concepts, general principles for design — Part 2:
Technical principles and specifications.
• EN 418,
Safety of machinery — Emergency stop equipment, functional aspects —
Principles for design.
• EN 457,
Safety of machinery — Auditory danger signals — General requirements, design
and testing (ISO 7731:1986 modified)
• EN 614-1,
Safety of machinery — Ergonomic design principles — Part 1: Terminology and
general principles.
• EN 842,
Safety of machinery — Visual danger signals — General requirements, design
and testing.
• EN 981,
Safety of machinery — System of auditory and visual danger and information
signals.
• EN 982,
Safety of machinery — Safety requirements for fluid power systems and their
components — Hydraulics.
• EN 983,
Safety of machinery — Safety requirements for fluid power systems and their
components — Pneumatics.
• prEN 999:1995,
Safety of machinery — The positioning of protective equipment in respect of
approach speeds of parts of the human body.
• EN 1037,
Safety of machinery — Prevention of unexpected start-up.
• EN 1050:1996,
Safety of machinery — Principles for risk assessment.
• EN 60204-1:1992,
Safety of machinery — Electrical equipment of machines — Part 1: General
requirements (IEC 204-1:1992, modified)
• EN 60447:1993,
Man-machine interface (MMI) — Actuating principles (IEC 447:1993)
• EN 60529,
Degrees of protection provided by enclosures (IP Code) (IEC 529:1989)
• EN 60721-3-0,
4 General considerations
4.1 Safety objectives in design
Designed and constructed so that the principles of EN 1050 are fully taken into account in
different modes.
6 Categories
The safety-related parts of control systems shall (should is used in 6.2) be in accordance
with the requirements of one or more of the categories: B, 1, 2, 3, 4.
7 Fault consideration
If additional faults should be considered, they should be listed and the method of
validation should also be clearly described. The designer shall declare, justify and list all
fault exclusions.
8 Validation
8.1 General
The validation of the safety-related parts of control systems should contain management
and execution of validation activities (test specifications, testing procedures, analysis
procedures) and documentation (auditable reports of all validation activities and
decisions).
9 Maintenance
The provisions for the maintainability of the safety-related part(s) of a control system
shall follow the principles of EN 292-2:1991, 6.2.1 and EN 292-2:1991/A1:1995, annex
A, 1.6. All information for maintenance shall comply with EN 292-2:1991, 5.5.1 e).
2 Normative references
• EN 292-1: 1991,
Safety of machinery – Basic concepts, general principles for design – Part 1:
Basic terminology, methodology
• EN 954-1:1996,
Safety of machinery – Safety-related parts of control systems – Part 1: General
principles for design
3 Validation process
The validation shall demonstrate that each safety-related part meets the requirements of
EN 954-1 in accordance with the validation plan.
4 Validation by analysis
5 Validation by testing
5.1 General
A test plan shall be produced. Test records shall be produced.
6 Fault lists
6.2 Specific fault lists
A specific fault list shall be generated as a reference document.
8 Validation of categories
2 Normative references
• ISO/IEC Guide 51:1990,
Guidelines for the inclusion of safety aspects in standards
• IEC Guide 104:1997,
Guide to the drafting of safety standards, and the role of Committees with safety
pilot functions and safety group functions
• IEC 61508-2,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 2: Requirements for electrical/electronic/programmable
electronic safety-related systems 1)
• IEC 61508-3:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 3: Software requirements
• IEC 61508-4:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 4: Definitions and abbreviations
• IEC 61508-5:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 5: Examples of methods for the determination of safety
integrity levels
• IEC 61508-6,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 6: Guidelines on the application of parts 2 and 3
• IEC 61508-7,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 7: Overview of techniques and measures
5 Documentation
5.2 Requirements
5.2.1 The documentation shall contain sufficient information, for each phase of the
overall, E/E/PES and software safety lifecycles completed.
5.2.4 Unless justified in the functional safety planning or specified in the application
sector standard, the information to be documented shall be as stated in the various clauses
of this standard.
5.2.9 The documents or set of information shall have a revision index (version numbers).
5.2.11 All relevant documents shall be revised, amended, reviewed, approved and be
under the control of an appropriate document control scheme.
(see 7.4), functional safety assessment (see clause 8), verification activities (see 7.18),
validation activities (see 7.8 and 7.14), configuration management (see 6.2.1 o), 7.16 and
IEC 61508-2 and IEC 61508-3).
Training of staff, and retraining at periodic intervals, shall be carried out.
6.2.2 The activities specified as a result of 6.2.1 shall be implemented and progress
monitored.
6.2.3 The requirements developed as a result of 6.2.1 shall be formally reviewed by the
organizations concerned, and agreement reached.
6.2.5 Suppliers providing products or services to an organization having overall
responsibility shall have an appropriate quality management system.
Concept
Overall scope definition
Hazard and risk analysis
Overall safety requirements
Safety requirements allocation
Overall operation and maintenance planning
Overall safety validation planning
Overall installation and commissioning planning
E/E/PE safety-related systems: realisation
Other technology safety-related systems: realisation
External risk reduction facilities: realisation
Overall installation and commissioning
Overall safety validation
Overall operation, maintenance and repair
Overall modification and retrofit
Decommissioning or disposal
Table 1 – Overall safety lifecycle: overview, only column for safety lifecycle phase is
shown here.
7.1.4 Requirements
7.1.4.1 If another overall safety lifecycle than figure 2 is used, it shall be specified during
the functional safety planning.
7.1.4.2 The requirements for the management of functional safety (see clause 6) shall run
in parallel with the overall safety lifecycle phases.
7.1.4.4 Each phase of the overall safety lifecycle shall be divided into elementary
activities with the scope, inputs and outputs specified for each phase.
7.2 Concept
The information and results concerning concept requirements shall be documented.
7.9.2.1 A plan for the installation of the E/E/PE safety-related systems shall be developed.
7.9.2.2 A plan for the commissioning of the E/E/PE safety-related systems shall be
developed.
7.9.2.3 The overall installation and commissioning planning shall be documented.
– the carrying out, periodically, of functional safety audits (see 6.2.1 k))
– the documenting of modifications that have been made to the E/E/PE safety-related
systems
7.15.2.3 Chronological documentation of operation, repair and maintenance of the
E/E/PE safety-related systems shall be maintained.
7.15.2.4 The exact requirements for chronological documentation will be dependent on
the specific application and shall, where relevant, be detailed in application sector
standards.
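The chronological record called for in 7.15.2.3 can be kept in many forms; as an illustration only (all field names, component identifiers and events below are invented, and 7.15.2.4 notes the real requirements are application-sector specific), a minimal sketch:

```python
# Sketch: a chronological operation/maintenance log for an E/E/PE
# safety-related system, in the spirit of 7.15.2.3. All fields and
# example entries are invented for illustration.
from datetime import datetime, timezone

log = []

def record(event, component, action):
    """Append a timestamped entry; list order preserves chronology."""
    log.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event": event,
        "component": component,
        "action": action,
    })

record("proof test", "pressure transmitter PT-101", "tested, no fault found")
record("repair", "logic solver LS-1", "I/O card replaced")
print(len(log))
```

Keeping each entry timestamped and append-only is what makes the record usable as evidence during a later functional safety audit.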
7.18 Verification
7.18.2 Requirements
7.18.2.1 For each phase of the overall, E/E/PES and software safety lifecycles, a plan for
the verification shall be established concurrently with the development for the phase.
7.18.2.3 The verification shall be carried out according to the verification plan.
7.18.2.4 Information on the verification activities shall be collected and documented as
evidence that the phase being verified has, in all respects, been satisfactorily completed.
8.2.5 If tools are used as part of design or assessment for any overall, E/E/PES or
software safety lifecycle activity, they should themselves be subject to the functional
safety assessment.
8.2.7 The functional safety assessment activities for the different phases of the overall,
E/E/PES and software safety lifecycles shall be consistent and planned
Annex A
(Informative)
Example documentation structure
Table A.1 – Example documentation structure for information related to the overall safety
lifecycle
Hardware modules;
Report (hardware modules test)
Table A.2 – Example documentation structure for information related to the E/E/PES
safety lifecycle
Table A.3 – Example documentation structure for information related to the software
safety lifecycle
2 Normative references
• IEC 60050(371):1984,
International Electrotechnical Vocabulary – Chapter 371: Telecontrol
• IEC 60300-3-2:1993,
Dependability management – Part 3: Application guide – Section 2: Collection of
dependability data from the field
• IEC 61000-1-1:1992,
Electromagnetic compatibility (EMC) – Part 1: General – Section 1: Application
and interpretation of fundamental definitions and terms
• IEC 61000-2-5:1995,
Electromagnetic compatibility (EMC) – Part 2: Environment – Section 5:
Classification of electromagnetic environments – Basic EMC publication
• IEC 61508-1:1998,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 1: General requirements
• IEC 61508-3:1998,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 3: Software requirements
• IEC 61508-4:1998,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 4: Definitions and abbreviations
• IEC 61508-5:1998,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 5: Examples of methods for the determination of safety integrity
levels
• IEC 61508-6,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 6: Guidelines on the application of parts 2 and 3
• IEC 61508-7:2000,
Functional safety of electrical/electronic/programmable electronic safety-related
systems – Part 7: Overview of techniques and measures
• IEC Guide 104:1997,
The preparation of safety publications and the use of basic safety publications
and group safety publications
• ISO/IEC Guide 51:1990,
Guidelines for the inclusion of safety aspects in standards
• IEEE 352:1987,
IEEE guide for general principles of reliability analysis of nuclear power
generating station safety systems
7.1.3.2 The procedures for management of functional safety (see clause 6 of IEC 61508-
1) shall run in parallel with the E/E/PES safety lifecycle phases.
7.1.3.3 Each phase of the E/E/PES safety lifecycle shall be divided into elementary
activities, with the scope, inputs and outputs specified for each phase (see table 1).
7.1.3.4 Unless justified during functional safety planning, the outputs of each phase of the
E/E/PES safety lifecycle shall be documented (see clause 5 of IEC 61508-1).
Table 1 – Overview – Realisation phase of the E/E/PES safety lifecycle, only column for
safety lifecycle phase or activity is shown here.
7.4.4.3 Maintenance requirements, to ensure the safety integrity of the E/E/PE safety-
related systems is kept at the required level, shall be formalised at the design stage.
7.4.4.5 During the design, E/E/PES integration tests shall be planned and documented.
7.4.5 Requirements for the control of systematic faults
7.4.6 Requirements for system behaviour on detection of a fault
7.4.7 Requirements for E/E/PES implementation
7.4.7.3 The following information shall be available for each safety-related subsystem:
a) a functional specification of those functions and interfaces of the subsystem which can
be used by safety functions;
o) documentary evidence that the subsystem has been validated.
7.4.7.6 A previously developed subsystem shall only be regarded as proven in use when
there is adequate documentary evidence which is based on the previous use of a specific
configuration of the subsystem (during which time all failures have been formally
recorded, see 7.4.7.10), and which takes into account any additional analysis or testing.
7.4.7.10 Only previous operation where all failures of the subsystem have been
effectively detected and reported (for example, when failure data has been collected in
accordance with the recommendations of IEC 60300-3-2) shall be taken into account
when determining whether the above requirements (7.4.7.6 to 7.4.7.9) have been met.
7.4.8 Requirements for data communications
7.7.2.1 The validation of the E/E/PES safety shall be carried out in accordance with a
prepared plan (see also 7.7 of IEC 61508-3).
7.7.2.3 Each safety function specified in the requirements for E/E/PES safety (see 7.2),
and all the E/E/PES operation and maintenance procedures shall be validated by test
and/or analysis.
7.7.2.4 Appropriate documentation of the E/E/PES safety validation testing shall be
produced.
7.7.2.5 When discrepancies occur (i.e. the actual results deviate from the expected results
by more than the stated tolerances), the results of the E/E/PES safety validation testing
shall be documented.
Annex A
(normative)
Techniques and measures for E/E/PE safety-related systems: control of
failures during operation
Procedural and organisational techniques and measures are necessary throughout the
E/E/PES safety lifecycle to avoid introducing faults.
Annex B
(normative)
Techniques and measures for E/E/PE safety-related systems: avoidance
of systematic failures during the different phases of the lifecycle
Annex C
(normative)
Diagnostic coverage and safe failure fraction
Carry out a failure mode and effect analysis.
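Annex C uses the failure-rate categories from such an analysis to derive diagnostic coverage and safe failure fraction. As an illustrative sketch only (the failure rates below are invented; the ratios follow the commonly used IEC 61508-2 definitions):

```python
# Sketch: diagnostic coverage (DC) and safe failure fraction (SFF)
# computed from FMEA failure-rate totals, using the commonly cited
# IEC 61508-2 Annex C ratios. All rates (failures/hour) are invented.

def diagnostic_coverage(lam_dd, lam_du):
    """DC = detected dangerous rate / total dangerous rate."""
    return lam_dd / (lam_dd + lam_du)

def safe_failure_fraction(lam_s, lam_dd, lam_du):
    """SFF = (safe + detected dangerous) / total failure rate."""
    return (lam_s + lam_dd) / (lam_s + lam_dd + lam_du)

lam_s, lam_dd, lam_du = 4e-7, 5e-7, 1e-7   # example FMEA category totals
dc = diagnostic_coverage(lam_dd, lam_du)
sff = safe_failure_fraction(lam_s, lam_dd, lam_du)
print(f"DC = {dc:.2f}, SFF = {sff:.2f}")
```

The point of the FMEA is precisely to supply these per-category rate totals; without it, neither ratio can be substantiated.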
2 Normative references
• IEC 61508-1:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 1: General requirements
• IEC 61508-2,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 2: Requirements for electrical/electronic/programmable
electronic safety-related systems
• IEC 61508-4:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 4: Definitions and abbreviations of terms
• IEC 61508-5:1998,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 5: Examples of methods for the determination of safety
integrity levels
• IEC 61508-6,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 6: Guidelines on the application of parts 2 and 3
• IEC 61508-7,
Functional safety of electrical/electronic/programmable electronic safety-
related systems – Part 7: Overview of techniques and measures
• ISO/IEC Guide 51:1990,
Guidelines for the inclusion of safety aspects in standards
• IEC Guide 104:1997,
Guide to the drafting of safety standards, and the role of Committees with safety
pilot functions and safety group functions
7.1.2 Requirements
7.1.2.1 A safety lifecycle for the development of software shall be selected and specified
during safety planning in accordance with clause 6 of IEC 61508-1.
7.1.2.4 A V-model is used.
7.1.2.8 If at any stage of the software safety lifecycle, a change is required pertaining to
an earlier lifecycle phase, then that earlier safety lifecycle phase and the following phases
shall be repeated.
8.2 Unless otherwise stated in application sector international standards, the minimum
level of independence of those carrying out the functional safety assessment shall be as
specified in 8.2.12 of IEC 61508-1.
Annex A
(normative)
Guide to the selection of techniques and measures
The ranking of the techniques and measures is linked to the concept of effectiveness used
in IEC 61508-2. For a particular application, the appropriate combination of techniques or
measures is to be stated during safety planning.
1 Scope
This guide is intended to be used to support IEC 60300-3-6 and the overall software life
cycle as defined in ISO/IEC 12207.
2 Normative references
• IEC 60050(191),
International Electrotechnical Vocabulary (IEV) – Chapter 191: Dependability
and quality of service
• IEC 60300-2:1995,
Dependability management – Part 2: Dependability programme elements and
tasks
• IEC 60300-3-6:1997,
Dependability management – Part 3: Application guide – Section 6: Software
aspects of dependability
• IEC 61160,
Formal design review
• ISO/IEC 12207,
Information technology – Software life cycle processes
• ISO 8402,
Quality management and quality assurance – Vocabulary.
b) The procedure for software product test and validation should be documented and
should include a corrective action procedure.
c) There should be a fully documented, coordinated procedure for reporting software
defects and tracking their subsequent correction.
5.3.8 Software integration
The integration test results should be presented in an auditable form. The procedure for
software integration, system test and installation should be fully documented.
5.3.9 Software qualification testing
a) The developer should conduct qualification testing in accordance with any specific
dependability qualification requirements.
b) The developer should evaluate test coverage and conformance with any dependability
requirements.
c) There should be a full programme of technical reviews, internal audits and change
control reviews for any specific qualification tests.
5.3.10 System integration
a) The procedure for system integration and test should be fully documented.
b) The system integration test results should be subject to a full programme of technical
reviews, internal and external audits.
5.3.11 System qualification testing
The qualification test results should be audited and a qualification report prepared.
5.3.12 Software installation
a) The developer should install the software product or system according to the
installation documentation and verify that it is installed and operating as required.
b) Conformance with any specified installation related dependability requirements should
be verified and documented.
5.3.13 Software acceptance support
Acceptance support is provided by the developer.
The validation process determines whether the software has been made according to the
specified requirements and should be carried out as the software progresses through its
life cycle via the primary and supporting processes. A final verification and validation
should form part of the product acceptance plan.
a) The management process contains the activities and tasks used to carry out product
management, project management and task management of the primary and supporting
processes.
b) The infrastructure includes hardware, software, tools, techniques, and standards for
development, operation or maintenance of the software.
c) The improvement process is a process for establishing, assessing, measuring,
controlling and improving a software life-cycle process. The process activities establish a
suite of organizational processes required to implement the primary and supporting
processes, set up an assessment mechanism to ensure their continuing effectiveness and
implement improvements considered necessary as a result of the assessment.
d) The training process provides and maintains trained personnel for the acquisition,
supply, development, operation and maintenance processes. The acquirer should confirm
that the supplier of the software has planned and implemented a training programme
including training documentation. The training plan should be reviewed for compliance
with requirements.
Annex B
(informative)
Interaction of users with primary software life-cycle processes
1.0 INTRODUCTION
Words such as "shall" and "must" are avoided. Generally system processes are not
considered.
previous outputs of the software development and software verification processes are still
valid.
Requirements Standards from the software planning process. The output is the software
high-level requirements, the Software Requirements Data (subsection 11.9). Derived
high-level requirements are indicated to the system safety assessment process. Incorrect
inputs should be reported as feedback to the input source processes for clarification or
correction.
5.5 Traceability
Software traceability concerns both directions of the link system requirement <=> high-
level requirement. Traceability between system requirements and software requirements,
traceability between the low-level and high-level requirements, and traceability between
Source Code and low-level requirements should be provided.
11.13) and Software Verification Results (subsection 11.14). This includes traceability
between the software requirements and the test cases and between the code structure and
the test cases. Further aspects on traceability and quality aspects on verification are
described. Deficiencies and errors should be reported to the software development
processes for clarification and correction.
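Bidirectional traceability of this kind lends itself to mechanical checking. A minimal sketch, with all requirement identifiers and trace data invented for illustration (DO-178B prescribes the traceability itself, not any particular tooling):

```python
# Sketch: checking bidirectional traceability between requirement levels,
# in the spirit of DO-178B section 5.5. All identifiers are invented.

# Each child item maps to the parent requirement(s) it traces to.
low_to_high = {"LLR-1": ["HLR-1"], "LLR-2": ["HLR-2"], "LLR-3": []}
code_to_low = {"foo.c:foo": ["LLR-1"], "foo.c:bar": ["LLR-2"]}

def untraced(trace):
    """Child items that do not trace to at least one parent."""
    return sorted(item for item, parents in trace.items() if not parents)

def unimplemented(trace, all_parents):
    """Parents never referenced by any child (the reverse direction)."""
    referenced = {p for parents in trace.values() for p in parents}
    return sorted(set(all_parents) - referenced)

print(untraced(low_to_high))                     # dangling low-level reqs
print(unimplemented(code_to_low, low_to_high))   # low-level reqs with no code
```

Running both directions is what distinguishes traceability from a one-way cross-reference: the first check finds derived or orphaned requirements, the second finds requirements with no implementation or test coverage.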
9.3 Minimum Software Life Cycle Data That Is Submitted to Certification Authority
Plan for Software Aspects of Certification, Software Configuration Index, Software
Accomplishment Summary.
Identify and record software product anomalous behavior, process non-compliance with
software plans and standards, and deficiencies in software life cycle data.
b. Assurance that changes to the software life cycle processes are stated in the software
plans.
rationale that confirms equivalent software verification coverage. The tool qualification
process may be modified.
12.3.4 Software Reliability Models
Rationale for the model should be included in the Plan for Software Aspects of
Certification, and agreed with by the certification authority.
12.3.5 Product Service History
If equivalent safety for the software can be demonstrated by the use of the software's
product service history, some certification credit may be granted. The data described
should be specified in the Plan for Software Aspects of Certification.
Annex A
(Normative)
PROCESS OBJECTIVES AND OUTPUTS BY SOFTWARE LEVEL
Guidelines are given in tables A-1 to A-10.
52.204.3.2.5 The estimated RISK shall be recorded against each HAZARD in the RISK
MANAGEMENT SUMMARY. Compliance is checked by inspection of the RISK
MANAGEMENT FILE.
52.204.4 RISK control
52.204.4.4 Adequate USER information on the RESIDUAL RISK.
52.204.4.5 The requirement(s) to control the RISK shall be documented in the RISK
MANAGEMENT SUMMARY (directly or as a cross reference).
52.204.4.6 An evaluation of the effectiveness of the RISK controls shall be recorded in
the RISK MANAGEMENT SUMMARY. Compliance is checked by inspection of the
RISK MANAGEMENT FILE.
52.207 Architecture
52.207.5 The architecture specification shall be made
52.209 VERIFICATION
52.209.2 A VERIFICATION plan shall be produced to show how the SAFETY
requirements for each DEVELOPMENT LIFE-CYCLE phase will be verified.
52.209.3 The VERIFICATION shall be performed according to the VERIFICATION
plan. The results of the VERIFICATION activities shall be documented, analyzed and
assessed.
52.209.4 A reference to the methods, techniques and results of the VERIFICATION shall
be included in the RISK MANAGEMENT SUMMARY.
52.210 VALIDATION
52.210.2 A VALIDATION plan shall be produced to show that correct SAFETY
requirements have been implemented.
52.210.3 The VALIDATION shall be performed according to the VALIDATION plan.
The results of VALIDATION activities shall be documented, analyzed and assessed.
52.210.5 All professional relationships of the members of the VALIDATION team with
members of the design team shall be documented in the RISK MANAGEMENT FILE.
52.210.7 A reference to the methods and results of the VALIDATION shall be included
in the RISK MANAGEMENT FILE. Compliance is checked by inspection of the RISK
MANAGEMENT FILE.
52.211 Modification
52.211.1 If any or all of a design results from a modification of an earlier design then
either all of this standard applies as if it were a new design or the continued validity of
52.212 Assessment
52.212.1 Assessment shall be carried out to ensure that the PEMS has been developed in
accordance with the requirements of this standard and recorded in the RISK
MANAGEMENT FILE. Compliance is checked by inspection of the RISK
MANAGEMENT FILE.
Annex BBB
(informative)
Rationale
Iteration of portions of the process is expected, but no requirements have been given
because the need to repeat processes is unique to a particular project. Iterations also arise
from the more detailed understanding that emerges during the design process.
Annex DDD
(informative)
Development life-cycle
Figure DDD.1 – A DEVELOPMENT LIFE-CYCLE model for PEMS
DOCUMENT LIST
Identified hazards and their initiating causes
Estimated RISK
Requirements to control RISK
RISK management plan
Development life-cycle
PEMS requirement specification
Verification plan
Validation plan
Subsystem (e.g. PESS) requirement specification
PEMS architecture specification
PESS architecture specification
Subsystem design specification
Subsystem test specification
Verification methods and results
Validation methods and results
Evaluation of effectiveness of the RISK controls
Residual RISK
Assessment report
RISK management summary
2 Normative references
• EN 50126
Railway applications - The specification and demonstration of Reliability,
Availability, Maintainability and Safety (RAMS)
• EN 50129 (at draft stage)
Railway applications - Safety related electronic systems for signalling
• EN 50159-1
Railway applications - Communication, signalling and processing systems Part 1:
Safety-related communication in closed transmission systems
• EN 50159-2
Railway applications - Communication, signalling and processing systems Part 2:
Safety-related communication in open transmission systems
• EN ISO 9001
Quality systems - Model for quality assurance in design/development,
production, installation and servicing
• EN ISO 9000-3
Quality management and quality assurance standards – Part 3: Guidelines for the
application of ISO 9001:1994 to the development, supply, installation and
maintenance of computer software
5.2.6 The software safety integrity level shall be specified in the Software Requirements
Specification (clause 8). If different software components have different software safety
integrity levels, these shall be specified in the Software Architecture Specification (clause
9).
8.4 Requirements
8.4.1 Express the required properties of the software. These properties are all (except
safety) defined in ISO/IEC 9126. The software safety integrity level shall be derived as
defined in clause 5 and recorded in the Software Requirements Specification.
8.4.2 To the extent required by the software safety integrity level the Software
Requirements Specification shall be expressed and structured in such a way that it is
traceable back to all documents mentioned under 8.2.
8.4.13 A Software Requirements Test Specification shall be developed from the Software
Requirements Specification. This test specification shall be used for verification of all the
requirements as described in the Software Requirements Specification.
8.4.15 Traceability to requirements and means shall be provided to allow this to be
demonstrated throughout all phases of the lifecycle.
9 Software architecture
9.2 Input documents
Software Requirements Specification, System Safety Requirements Specification, System
Architecture Description, Software Quality Assurance Plan
9.4 Requirements
9.4.1 The proposed software architecture shall be established by the software supplier
and/or developer and detailed in the Software Architecture Specification.
9.4.2 The Software Architecture Specification shall consider the feasibility of achieving
the Software Requirements Specification at the required software safety integrity level.
9.4.3 The Software Architecture Specification shall identify, evaluate and detail the
significance of all hardware/software interactions. As required by EN 50126 and EN
50129, the preliminary studies concerning the interactions between hardware and
software shall have been recorded in the System Safety Requirements Specification.
9.4.5 If COTS software is to be used at software safety integrity levels 1 or 2, it shall be
included in the software validation process. For levels 3 or 4, COTS software shall be
included in the validation testing. Error logs shall exist and shall be evaluated.
9.4.6 If previously developed software is to be used then it shall be clearly documented.
There shall be evidence that interface specifications to other modules which are not being
re-verified, re-validated and re-assessed are being followed.
10.4 Requirements
10.4.1 The Software Requirements Specification and the Software Architecture
Specification shall be available, although not necessarily finalised, prior to the start of the
design process.
10.4.3 The Software Design Specification shall describe the software design based on a
decomposition into modules with each module having a Software Module Design
Specification and a Software Module Test Specification.
10.4.9 To the extent required by the software safety integrity level, the programming
language selected shall have a translator/compiler which has one of the following:
i) a "Certificate of Validation" to a recognised National/International standard;
ii) an assessment report which details its fitness for purpose;
iii) a redundant signature control based process that provides detection of the translation
errors.
10.4.11 For any alternative language, a justification detailing its fitness for purpose shall
be recorded in the Software Architecture Specification or Software Quality Assurance Plan.
10.4.12 Coding standards shall be developed and used for the development of all
software. These shall be referenced in the Software Quality Assurance Plan (see 15.4.5).
10.4.14 Software Module Testing: Each module shall have a Software Module Test
Specification against which the module shall be tested. A Software Module Test Report
shall be produced, including the test cases and their results, which shall be recorded in a
machine-readable form for subsequent analysis.
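Several clauses (10.4.14, and similarly 11.4.15 iii and 12.4.7) ask for test cases and results in a machine-readable form. The standard does not prescribe a format; as a hypothetical illustration, a flat CSV record that can be re-read for later analysis:

```python
# Sketch: recording module test cases and results in a machine-readable
# form (CSV here, one possible choice) for subsequent analysis.
# Test-case identifiers, module names and verdicts are invented.
import csv
import io

results = [
    {"test_case": "TC-001", "module": "brake_ctrl", "verdict": "pass"},
    {"test_case": "TC-002", "module": "brake_ctrl", "verdict": "fail"},
]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=list(results[0]))
writer.writeheader()
writer.writerows(results)
report = buf.getvalue()

# Because the report is machine readable, later analysis (e.g. counting
# failures for the Software Module Test Report) is a simple re-read.
failures = [r for r in csv.DictReader(io.StringIO(report))
            if r["verdict"] == "fail"]
print(len(failures))
```

Any structured format (CSV, XML, a database) would satisfy the intent; the requirement is only that results are not locked in prose.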
10.4.18 Traceability of requirements to the design or other objects which fulfil them.
Traceability of design objects to the implementation objects which instantiate them. The
output of the traceability process shall be the subject of formal configuration
management.
11.4 Requirements
11.4.1 A Software Verification Plan shall be created.
11.4.3 The Software Verification Plan shall describe the activities to be performed to
ensure correctness and consistency with respect to the products and standards provided as
input to that phase.
11.4.9 The results of each verification shall be retained in a form defined or referenced in
the Software Verification Plan such that it is auditable.
11.4.10 After each verification activity a verification report shall be produced. The
verification reports shall address the following:
i) items which do not conform to the Software Requirements Specification, Software
Design Specification or Software Module Design Specifications;
ii) items which do not conform to the Software Quality Assurance Plan;
11.4.11 Software Requirements Verification: Once the Software Requirements
Specification has been established, verification shall address:
i) the adequacy of the Software Requirements Specification in fulfilling the requirements
set out in the System Requirements Specification, the System Safety Requirements
Specification and the Software Quality Assurance Plan;
ii) the adequacy of the Software Requirements Test Specification as a test of the Software
Requirements Specification;
iii) the internal consistency of the Software Requirements Specification.
The results shall be recorded in a Software Requirements Verification Report.
11.4.12 Software Architecture and Design Verification: After the Software Architecture
Specification and the Software Design Specification have been established, verification
shall address:
i) the adequacy of the Software Architecture Specification and the Software Design
Specification in fulfilling the Software Requirements Specification;
ii) the adequacy of the Software Design Specification for the Software Requirements
Specification.
iii) the adequacy of the Software Integration Test Plan as a set of test cases for the
Software Architecture Specification and the Software Design Specification;
iv) the internal consistency of the Software Architecture and Design Specifications.
The results shall be recorded in a Software Architecture and Design Verification Report.
11.4.13 Software Module Verification: After each Software Module Design Specification
has been established, verification shall address:
i) the adequacy of the Software Module Design Specification in fulfilling the Software
Design Specification;
ii) the adequacy of the Software Module Test Specification as a set of test cases for the
Software Module Design Specification;
iii) the decomposition of the Software Design Specification into software modules and
the Software Module Design Specifications.
iv) the adequacy of the Software Module Test Reports as a record of the tests carried out
in accordance with the Software Module Test Specification.
The results shall be recorded in a Software Module Verification Report.
11.4.14 Software Source Code Verification: To the extent demanded by the software
safety integrity level the Software Source Code shall be verified to ensure conformance to
the Software Module Design Specification and the Software Quality Assurance Plan. This
shall include a check to determine whether the coding standards have been applied
correctly. The results shall be recorded in a Software Source Code Verification Report.
11.4.15 A Software Integration Test Report shall be produced as follows:
i) a Software Integration Test Report shall be produced stating the test results and
whether the objectives and criteria of the Software Integration Test Plan have been met.
ii) the Software Integration Test Report shall be in a form that is auditable;
iii) test cases and their results shall be recorded, preferably in machine readable form for
subsequent analysis;
v) the identity and configuration of the items verified.
11.4.16 For software/hardware integration, see 12.4.8.
12 Software/hardware integration
12.2 Input documents
System Requirements Specification, System Safety Requirements Specification, System
Architecture Description, Software Requirements Specification, Software Requirements
Test Specification, Software Architecture Specification, Software Design Specification,
Software Module Design Specification, Software Module Test Specification, Software
Source code and supporting documentation, Hardware documentation.
12.4 Requirements
12.4.1 For software safety integrity levels greater than zero, a Software/Hardware
Integration Test Plan will be created.
12.4.7 Test cases and their results shall be recorded, preferably in machine readable form
for subsequent analysis.
12.4.8 A Software/Hardware Integration Test Report shall be produced as follows:
i) Software/Hardware Integration Test Report shall state the test results and whether the
objectives and criteria of the Software/Hardware Integration Test Plan have been met.
ii) the Software/Hardware Integration Test Report shall be in a form that is auditable;
iii) test cases and their results shall be recorded, preferably in a machine-readable form
for subsequent analysis.
13 Software validation
13.2 Input documents
Software Requirements Specification, All Hardware and Software Documentation,
System Safety Requirements Specification
13.4 Requirements
13.4.3 A Software Validation Plan shall be established and detailed in suitable
documentation.
13.4.7 The Software Validation Plan shall identify the steps necessary in fulfilling the
safety requirements set out in the System Safety Requirements Specification. The
Validator shall check that the verification process is complete.
13.4.10 The results of the validation shall be documented in the Software Validation
Report in an auditable form.
13.4.11 Once hardware/software integration is finished, a Software Validation Report
shall be produced as follows:
i) it shall state whether the objectives and criteria of the Software Validation Plan have
been met.
ii) it shall state the test results and whether the whole software on its target machine
fulfils the requirements set out in the Software Requirements Specification;
iii) an evaluation of the test coverage on the requirements of the Software Requirements
Specification shall be provided;
13.4.13 Any discrepancies found, including detected errors, shall be clearly identified in a
separate section of the Software Validation Report and included in any release note that
accompanies the delivered software.
13.4.14 The software shall be tested against the Software Requirements Test
Specification. These tests shall show that all of the requirements in the Software
Requirements Specification are correctly performed. The results shall be recorded in a
Software Validation Report.
14 Software assessment
14.2 Input documents
System Safety Requirements Specification, All Hardware and Software Documentation
14.4 Requirements
14.4.2 Software with a Software Assessment Report from another Assessor need not be
subject to an entirely new assessment.
14.4.5 The Assessor shall assess that the software of the system is fit for its intended
purpose and responds correctly to safety issues derived from the System Safety
Requirements Specification.
14.4.9 The Assessor shall produce a report for each review that shall detail his assessment
results.
15.4 Requirements
15.4.1 The supplier and/or developer shall have and use as a minimum a Quality
Assurance System compliant with EN ISO 9000 series, to support the requirements of
this European Standard. EN ISO 9001 accreditation is highly recommended.
15.4.2 As a minimum, the supplier and/or developer and the customer shall implement for
the software development the relevant parts of EN ISO 9001, in accordance with the
guidelines contained in EN ISO 9000-3.
15.4.3 The supplier and/or developer shall prepare and document, on a project by project
basis, a Software Quality Assurance Plan to implement the requirements of 15.4.1 and
15.4.2 of this European Standard.
15.4.5 All activities, actions, documents, etc. required by all the sections of EN ISO
9000-3 and of this European Standard (annex A included) shall be specified or referenced
in the Software Quality Assurance Plan. None of the lists in EN ISO 9000-3 shall be
presumed to be exhaustive.
15.4.6 As a minimum, configuration management shall be carried out in accordance with
the guidelines contained in EN ISO 9000-3.
15.4.7 The adequacy and results of Software Verification Plans shall be examined.
15.4.8 The supplier and/or developer shall establish, document and maintain procedures
for External Supplier Control. New software shall be developed and maintained in
conformity with the Software Quality Assurance Plan of the Supplier or with a specific
Software Quality Assurance Plan prepared by the external supplier in accordance with the
Software Quality Assurance Plan of the Supplier.
15.4.9 The supplier and/or developer shall establish, document and maintain procedures
for Problem Reporting and Corrective Actions. These procedures shall implement the
relevant parts of EN ISO 9001, covering re-test, re-verification, re-validation and re-
assessment. As a minimum, problem reporting and corrective action management shall be
applied in the software lifecycle starting immediately after Software Integration and
before the starting of formal Software Validation, also covering the whole phase of
Software Maintenance.
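The problem-reporting requirement of 15.4.9 ties each corrective action to possible re-test, re-verification, re-validation and re-assessment activities. A minimal sketch of such a workflow is shown below; the class and field names are our own illustration and are not prescribed by EN 50128:

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class ReAction(Enum):
    """Follow-up activities a corrective action may trigger (cf. 15.4.9)."""
    RE_TEST = auto()
    RE_VERIFICATION = auto()
    RE_VALIDATION = auto()
    RE_ASSESSMENT = auto()


@dataclass
class ProblemReport:
    """Minimal problem report: identifier, description, corrective action,
    and the follow-up activities that action requires."""
    report_id: str
    description: str
    corrective_action: str = ""
    required_reactions: set = field(default_factory=set)
    closed: bool = False

    def close(self) -> None:
        # A report may only be closed once a corrective action is recorded
        # and no follow-up activity remains outstanding.
        if not self.corrective_action:
            raise ValueError("corrective action missing")
        if self.required_reactions:
            raise ValueError("follow-up activities still outstanding")
        self.closed = True
```

The sketch only enforces the ordering implied by the clause (corrective action first, follow-up activities completed before closure); the standard itself leaves the procedure's form to the supplier.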
16 Software maintenance
16.2 Input documents
All documents.
16.4 Requirements
16.4.1 As a minimum, maintenance shall be carried out in accordance with the guidelines
contained in EN ISO 9000-3.
16.4.2 Maintainability shall be designed into the software system, in particular by
following the requirements of clause 10 of this European Standard. ISO/IEC 9126 should
also be considered.
16.4.3 Procedures for the maintenance of software shall be established and recorded in
the Software Maintenance Plan.
16.4.4 The maintenance activities shall be audited against the Software Maintenance
Plan, at intervals defined in the Software Quality Assurance Plan.
16.4.7 External supplier control, problem reporting and corrective actions shall be
managed with the same criteria specified in the relevant paragraphs of the Software
Quality Assurance clause.
16.4.8 A Software Maintenance Record shall be established for each Software Item
before its first release, and it shall be maintained, in addition to the "Maintenance
Records and Reports" required by EN ISO 9000-3.
16.4.9 A Software Change Record shall be established for each maintenance activity.
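The relation between the per-item Software Maintenance Record (16.4.8) and the per-activity Software Change Record (16.4.9) can be illustrated with a minimal sketch; the field names are our assumptions, not the standard's required content:

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class SoftwareChangeRecord:
    """One entry per maintenance activity (cf. 16.4.9)."""
    change_id: str
    request_ref: str   # reference to the triggering change request
    description: str
    performed_on: date


@dataclass
class SoftwareMaintenanceRecord:
    """Per-Software-Item record, established before first release
    and maintained thereafter (cf. 16.4.8)."""
    software_item: str
    release: str
    changes: list = field(default_factory=list)

    def log_change(self, change: SoftwareChangeRecord) -> None:
        # Every maintenance activity appends one change record.
        self.changes.append(change)
```

The point of the structure is the one-to-many relation: one maintenance record per Software Item, accumulating one change record for each maintenance activity carried out on it.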
17.4 Requirements
17.4.1 Data Preparation Lifecycle
17.4.1.1 Application Requirements Specification
This shall include standards that the application must comply with.
17.4.1.2 Overall Installation Design
17.4.1.3 Data Preparation
The data preparation process shall include the production of specific information (e.g.
control tables), production of the data source code and its compilation, checking and other
verification activities, and testing of the application data.
17.4.1.4 Integration and Acceptance
The system shall be commissioned as a fully operational system, and a final acceptance
process shall be carried out on the complete installation.
17.4.1.5 Validation and Assessment
Validation and assessment activities shall audit the performance of each stage of the life-
cycle.
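The staged lifecycle of 17.4.1 can be sketched as an ordered sequence. Treating validation and assessment as the final stage is a simplification, since 17.4.1.5 states that these activities audit every stage; the names below follow the clause headings:

```python
from enum import Enum
from typing import Optional


class DataPrepStage(Enum):
    """Stages of the data preparation lifecycle (cf. 17.4.1), in order."""
    APPLICATION_REQUIREMENTS = 1
    OVERALL_INSTALLATION_DESIGN = 2
    DATA_PREPARATION = 3
    INTEGRATION_AND_ACCEPTANCE = 4
    VALIDATION_AND_ASSESSMENT = 5


def next_stage(current: DataPrepStage) -> Optional[DataPrepStage]:
    """Return the stage that follows `current`, or None after the last."""
    members = list(DataPrepStage)
    idx = members.index(current)
    return members[idx + 1] if idx + 1 < len(members) else None
```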
17.4.2 Data Preparation Procedures and Tools
Specific data preparation procedures and tools shall be developed to allow the data
preparation lifecycle specified in 17.4.1 to be followed.
17.4.2.1 At the Software Design phase for the system configured by application data, a
Data Preparation Plan shall be produced.
17.4.2.2 The Data Preparation Plan shall allocate a safety integrity level to any hardware
or software tools used in the data preparation lifecycle.
17.4.2.3 Where new notations are introduced, the necessary user documentation and
training shall be provided.
17.4.2.4 The verification, test, validation and assessment reports required to demonstrate
that the data preparation has been carried out in accordance with the plan shall be
specified in the Data Test Plan, and the results shall be recorded in the Data Test
Report.
17.4.2.5 All data and associated documentation shall be subject to the configuration
management requirements of section 15 of this standard. Configuration management
records shall be created.
17.4.3 Software Development
17.4.3.1 The system safety integrity level will determine the standards to be applied.
Annex A
(normative)
Criteria for the Selection of Techniques and Measures
If a Highly Recommended technique or measure is not used then the rationale behind not
using it should be detailed in the Software Quality Assurance Plan or in another
document referenced by the Software Quality Assurance Plan;
If a Not Recommended technique or measure is used then the rationale behind using it
should be detailed in the Software Quality Assurance Plan or in another document
referenced by the Software Quality Assurance Plan.
Annex B
(informative)
Bibliography of techniques
69 techniques are described.