
What is DO-178B/ED-12B?

The purpose of this document is to provide guidelines for the production of software for airborne
systems and equipment that performs its intended function with a level of confidence in safety that
complies with airworthiness requirements.
The guidelines are in the form of:

Objectives of software life cycle processes

Description of activities and design considerations for achieving these objectives

Description of the evidence that indicates that the objectives have been satisfied

The document discusses those aspects of airworthiness certification that pertain to the production of
software for airborne systems and equipment used on aircraft or engines.

DO-178B SAFETY LEVELS


The levels are defined in terms of the potential consequence of an undetected error in the software
certified at this level. Here are the consequences for each defined level:

Level A: Catastrophic: prevents continued safe flight or landing, many fatal injuries

Level B: Hazardous/Severe: potential fatal injuries to a small number of occupants

Level C: Major: impairs crew efficiency, discomfort or possible injuries to occupants

Level D: Minor: reduced aircraft safety margins, but well within crew capabilities

Level E: No Effect: does not affect the safety of the aircraft at all

These relate to the criticality of the airborne system. Flight control, navigation, and all fly-by-wire
systems are flight critical and require DO-178B Level A certification. Entertainment systems fall at the
other end of the criticality spectrum and would be Level E systems (except for the crew's ability to
override them when making public announcements (PA)).
SW Level / Failure Condition Category / Failure Condition Description / Objectives (of which, with independence):

Level A: Catastrophic. Conditions which would prevent continued safe flight and landing. 66 objectives (25 with independence).

Level B: Hazardous/Severe-Major. Conditions which would reduce aircraft safety margins/functional capabilities, produce a higher workload for the flight crew, or have serious adverse effects on occupants. 65 objectives (14 with independence).

Level C: Major. Conditions which would significantly reduce aircraft safety, crew ability to work under adverse operation, or produce discomfort to occupants. 57 objectives (2 with independence).

Level D: Minor. Conditions which would not significantly reduce aircraft safety, slight increase in crew workload, or produce some inconvenience to occupants. 28 objectives (2 with independence).

Level E: No Effect. Conditions which do not affect the aircraft operation or crew workload. No objectives apply.
DO-178B establishes processes that are intended to support the objectives, according to the software
level. In general, there are Integral and Development processes, as shown in Figure 1.

The processes of a software life cycle may be iterative as represented by the dotted lines in the
Software Development Processes in Figure 1.

The objective for each process and its respective Software Life Cycle Data are further explained in
Table 2.
Table 2. Software Life Cycle Process / Objective / Software Life Cycle Data (process outputs):

Planning
Objective: Produce the software plans and standards that direct the software development processes and the integral processes.
Outputs: PSAC (Plan for SW Aspects of Certification), SDP (SW Development Plan), SVP (SW Verification Plan), SCMP (SW Configuration Management Plan), SQAP (SW Quality Assurance Plan), SRS (SW Requirements Standards), SDS (SW Design Standards), SCS (SW Code Standards)

Requirements
Objective: Develop the high-level requirements.
Outputs: SRD (SW Requirements Data)

Design
Objective: Develop the software architecture and the low-level requirements from the high-level requirements.
Outputs: SDD (SW Design Description)

Coding
Objective: Implement the source code from the software architecture and the low-level requirements.
Outputs: SRC (Source Code)

Integration
Objective: Build the executable object code and load it into the target hardware.
Outputs: EOC (Executable Object Code)

Verification
Objective: Detect and report errors/bugs that may have been introduced during the software development processes by creating test cases, test procedures and analyses.
Outputs: SVCP (SW Verification Cases & Procedures), SVR (SW Verification Results), SCA (Structural Coverage Analysis), Review Records

Configuration Management
Objective: Provide a defined and controlled configuration of the software throughout the software life cycle and provide the ability to consistently replicate the Executable Object Code.
Outputs: SCM Records, Problem Reports, SCI (SW Configuration Index), SECI (SW Environment Configuration Index)

Quality Assurance
Objective: Provide confidence that the software life cycle processes produce software that conforms to its requirements by assuring that these processes are performed in compliance with the approved software plans and standards.
Outputs: SQA Records, SW Conformity Review

Certification Liaison
Objective: Establish communication and understanding between the applicant and the certification authority throughout the software life cycle to assist the certification process.
Outputs: PSAC, SAS (SW Accomplishment Summary), SCI

7 Keys to Successful DO-178B Audits


If your company develops or verifies software for the avionics industry, you are probably familiar with
DO-178B, "Software Considerations in Airborne Systems and Equipment Certification."
In brief, DO-178B is a document published by RTCA and used by the Federal Aviation Administration (FAA) as
the main guidance to determine whether software will perform safely and reliably in an airborne
environment. DO-178B lists a series of objectives that need to be accomplished in order to produce
reliable software, scaled according to the safety impact of a failure condition of that software.
The FAA has established a software audit process that ensures compliance with the DO-178B objectives
and other applicable software policy, guidance, and issue papers.
This audit process includes four Stages of Involvement (SOI) throughout the software life cycle of a
project:
1. Planning Review
2. Development Review
3. Verification Review
4. Final Review
The intent of this article is not to explain each SOI, but to assist you in being successful when the FAA's
Designated Engineering Representative (DER) or delegate performs an audit of your software project.
For more information on the software audit process, you may read FAA Order 8110.49, Software
Approval Guidelines, available on the FAA website.
Companies are often exposed to audits, which can be stressful, but they do not have to be. The
following guidelines will help you handle FAA auditors, avoid common mistakes, and reduce or
even eliminate stress.
1. Plan the Audit
Preparation is the key to having a successful audit.
It is important to schedule a meeting with the DER well before the audit to discuss the topics that will
be addressed during the visit. Ask the DER questions like:
What is the scope of the audit?

What documentation and materials will be needed and how soon?

When will the audit occur?

How long will it take to complete the audit?

What support and considerations will be required (e.g., workspace, Internet access, and
projector)?

What to do if you are not ready:

It is extremely important that you are ready before the DER's visit. If you are not ready, consider
postponing the visit to a date when you will be ready. Why? Because neither you nor the DER will be
very happy afterward when you both conclude that the audit has to be rescheduled, wasting the DER's
time and your company's money. If you haven't worked with the DER before, you may want to provide
documentation in advance to confirm that you are ready, just to be safe. Before you start, you should
know what Level of FAA Involvement (LOFI) the DER or designee will maintain with your facility. You can
estimate/calculate their LOFI using a checklist that is available in FAA Order 8110.49, Software Approval
Guidelines.
The checklist provides a result that basically tells the DER what degree of thoroughness should be
maintained during the software audit. There are three levels of involvement: Low, Medium and High.
For example, for a Low level, the DER may want to check artifacts via e-mail or through a web-based
system. At the other extreme, for a High level, the DER will do on-site audits for every step of the
project. This checklist is normally completed in advance by the DER, but it is important that you are
familiar with it and prepared to provide the information that the DER may request to complete it. Your
initiative may help build the DER's level of comfort by showing that you have been through the
process before.
2. Build Your Team
Having the right people available to support the software audit is critical. A team that includes
representatives from Project Management, Development, Verification and Quality Assurance will be
more than adequate. People who either have the answers at hand or can quickly investigate issues
and report back to the DER can help move a software audit along faster.
Before the software audit, it is important to designate a main speaker for the meeting so that he/she
can decide who should answer which questions from the DER. Typically the Project Manager should be
the main speaker of the meeting, but it could be anyone with sufficient knowledge of the project, the
process, and the team members' capabilities. Not all the team members have to be present at all times;
however, the Quality Assurance and Project Management team members should be present.
3. Conduct a Self Assessment
Ask your Quality Assurance team member to lead a company self-assessment well in advance of the
actual audit. The checklists that the DER will most likely use to perform the audit are contained in the
Conducting Software Reviews Job Aid document available for download at the FAA website. Your team
should select the checklists and checklist items that are most appropriate depending on the software
level and the SOI that is being reviewed.
Evidence should be provided for each response of a checklist item. Do not resort to just giving Yes or
No statements. Answers should be justified with objective evidence (e.g., document references,
record references, justifications).
Some DERs will request more than one piece of evidence for each checklist item, so be sure to discuss
this with the DER before you start the self-assessment. If you are not sure how to address a specific
checklist item, don't hesitate to ask the DER; you do not want to be at the audit without knowing what
is expected of you.

Frequently Asked Questions

1. What industries do the DO-178B guidelines apply to?


The DO-178B guidelines apply to civil aviation for aircraft, helicopters and engines as mandated by
the Federal Aviation Regulations (FARs). The guidelines also apply to the systems and equipment
that utilize software or airborne electronic hardware used in aviation.
2. When is DO-178B compliance required?
For systems and equipment using software to fulfill a safety-related aircraft function. FAA Advisory
Circular 20-115B cites RTCA/DO-178B as a means of compliance to the Federal Aviation Regulations
Part 21, 23, 25, 27, 29 and 33.
DO-178B is used for all new software development as well as for software changes to legacy
systems containing software. The FAA defines DO-178B as a means, but not the only means of
compliance to the Federal Aviation Regulations. It is an extremely rare exception that an
alternative means of compliance is used for software in avionics applications.
3. Does the amount of testing vary depending on Software Design Assurance Level (i.e. Level A, B,
C etc.)?
Yes, and in two distinct ways:
Requirements that need test coverage:
- Level A, B & C require that the test cases provide normal range and robustness coverage of all
software high level and low level requirements.
- Level D only requires test cases that provide normal range and robustness coverage of all
software high level requirements.
The amount of testing needed to achieve the structural coverage objectives.
Coverage metrics are collected during the execution of the software test procedures. The test
cases needed for achieving modified condition decision coverage will be more comprehensive
than those needed for decision or statement coverage.
Level A software requires modified condition/decision coverage, decision coverage and statement coverage;
Level B software requires decision coverage and statement coverage; and
Level C software requires only statement coverage.
Level D software does not require coverage metrics. In general, the test cases do not need to be
as comprehensive as those for Level A, B or C.
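To make the normal-range/robustness distinction concrete, here is a minimal sketch in C; the requirement ID (HLR-17), the limiting function, and the 0 to 450 knot range are invented for illustration and are not from DO-178B.

    #include <assert.h>

    /* Hypothetical unit under test, limiting a computed airspeed to the
     * range stated in an invented high-level requirement "HLR-17". */
    static int limit_airspeed_knots(int knots)
    {
        if (knots < 0)   return 0;
        if (knots > 450) return 450;
        return knots;
    }

    int main(void)
    {
        /* Normal-range test case: input inside the specified range (HLR-17). */
        assert(limit_airspeed_knots(200) == 200);

        /* Robustness test cases: abnormal inputs outside the range (HLR-17). */
        assert(limit_airspeed_knots(-50)   == 0);
        assert(limit_airspeed_knots(30000) == 450);

        return 0;
    }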
4. What is the relationship between DO-178B compliance and the FAA?
The Federal Aviation Administration (FAA) has the authority to determine whether compliance to
the Federal Aviation Regulations has been achieved.
For software, compliance to the Federal Aviation Regulations is defined in the FAA Advisory Circular
20-115B. The advisory circular states that software can use compliance to RTCA/DO-178B to show
compliance to the Federal Aviation Regulations.
Since the FAA determines compliance to Federal Aviation Regulations, then for software the FAA
determines compliance to RTCA/DO-178B.

RTCA/DO-178B was created by an RTCA committee. The FAA had participants in the committee.
5. Could an aircraft manufacturer achieve DO-178B compliance and approval using only in-house
resources?
Yes, an aircraft manufacturer can do the software development and verification and have company
DERs (Designated Engineering Representatives) perform the compliance findings for the software.
The resources would need to be appropriately qualified and skilled for the tasks. The company DER
would need the appropriate delegation and authorization.
6. Could an avionics software producer achieve DO-178B compliance and approval using only in-house
resources?
Yes. In fact, many TSO (Technical Standard Order) equipment manufacturers do the software
development and use in-house company DERs (Designated Engineering Representatives) to perform
the compliance findings for the software.
Some equipment manufacturers have in-house DERs and, with suitable arrangements with the
aircraft manufacturer, use their own DER to approve the software. Note that any system with
software also requires an installation approval to use the system and software on the aircraft.
There are no generic stand-alone approvals for software. Some portions of the approval may be
reused or credited under previously developed software.
7. What is the role and responsibility of the DER before, during and after DO-178B compliance and
approval?
Before: assist with planning and liaison with the FAA.
During: review data, answer questions, and mainly audit for compliance in accordance with
Chapter 2 of FAA Order 8110.49 using the Software Job Aid.
After: assist with compliance findings and approvals for software updates due to in-service
problems or when features are added.
8. Are there guidelines or strict requirements to achieve completeness for DO-178B tests?
The applicant and software developer would need to define the criteria by which test cases are
selected. It is necessary to provide full test coverage (normal range and robustness) of software
high level requirements for Levels A/B/C/D and software low level requirements for Levels A/B/C.
Another measure of completeness is structural coverage:
Level A requires MC/DC, decision and statement coverage;
Level B requires decision and statement coverage;
Level C requires statement coverage.
The test cases need to be complete enough to result in fulfillment of the coverage metrics. Tests
for Levels A/B/C also need to provide coverage of the data coupling and control coupling.
DO-178B Section 6.4.2 provides some guidance on test case selection. In general, well-defined
test case selection criteria and comprehensive test cases will provide the requisite test coverage of
the requirements and yield thorough structural coverage metrics. The test case selection criteria
should be defined in a Software Verification Plan as described in RTCA/DO-178B Section 11.3 c.3.
9. Are there only guidelines or strict requirements to prove correctness of the target software
during DO-178B testing?
RTCA/DO-178B defines a set of objectives for test coverage of the software requirements. If the
requirements are adequately defined and behaviorally oriented, then the verification tests will
demonstrate that the software meets its requirements. Meeting the requirements should also serve
to demonstrate correct behavior of the software.
RTCA/DO-178B objectives for software testing also include showing that the actual object code,
executing on the target flight hardware, meets its requirements. Testing the software in the target
environment ensures that the software is tested the same way it will be used.


10. Are there only guidelines or strict requirements to prove independence during the variety of
tasks of DO-178B verification?
The glossary on page 82 of RTCA/DO-178B provides a definition of independence.
During the software planning process, consideration should be given to which activities need to be
independently performed and how the independence will be documented. An effective way to
demonstrate independence is to keep records of the author of the requirements, design and code
documentation and to record the name of the reviewer on the review form or checklist.
Independence of verification activities is one of the questions in the Software Job Aid for
consideration during a Stage of Involvement (SOI) audit.
11. During DO-178B verification, are there implications with other standards (ISO 9000 and others),
and if so, why?
There are no formal ties between RTCA/DO-178B and ISO standards.

12. Does DO-178B mandate the fix of problem reports (PRs) in a timely manner or not?
RTCA/DO-178B configuration management objectives require that problem reporting be established
and that changes to baselines are performed with formal change control. The Software
Configuration Index should include a list of problem reports resolved by a software release. The
Software Accomplishment Summary must include a list of all known open problem reports at the
time of the release and a rationale for leaving them unresolved. Most projects will not permit a
large quantity of open problem reports in the final release.
In general, problem reports should be routinely assessed as part of configuration status accounting.
Open problem reports should be scheduled for resolution and incorporation in upcoming baselines
to minimize the rework they can entail. The sooner a problem is resolved, the less downstream
effect the problem will have on subsequent activities. Project-specific FAA Issue Papers, EASA
Certification Review Items or customer contracts may require a specific timeframe to resolve all
outstanding problem reports.
13. Does DO-178B mandate a plan with milestones and timelines to address all problem reports
(PRs)?
No, this specific topic is not covered in RTCA/DO-178B.
Problem reports and their ultimate resolution are of utmost importance to aircraft manufacturers.
They will often have program or contractual requirements for management of open problem
reports. FAA Notice N8110.110 has devoted Chapter 2 to the topic of problem reports. The Notice
requires routine assessment of problem reports and active management of their resolution. The FAA
will assess the quantity of unresolved problem reports and their potential impact on the aircraft or
systems at the time of the approval of the software. Project-specific FAA Issue Papers, EASA
Certification Review Items or customer contracts may require a specific timeframe to resolve all
outstanding problem reports.
14. Does DO-178B mandate fixing all problem reports (PRs) to achieve certification?
No, not specifically.
RTCA/DO-178B requires that all open problem reports be recorded in the Software Accomplishment
Summary. The Software Accomplishment Summary is submitted to the FAA. The FAA must agree
with the number and type of open problem reports and their potential to affect aircraft safety. FAA
Notice N8110.110 Chapter 2 also provides additional guidance on problem reporting. Project
specific FAA Issue Papers, EASA Certification Review Item or customer contracts may specify more
strict guidance for problem reports.
15. What is the role of the DER during the planning phase of the DO-178B certification?
The DER, when so delegated, will perform the Stage of Involvement (SOI-1) audit during, or near
the completion of, the planning phase. The DER can also assist with interpretation of guidance
materials, Issue Papers and provide any necessary liaison with the FAA. The DER can also facilitate
a specialist meeting with the FAA to review the applicant's proposed lifecycle and obtain any
feedback to ensure that the PSAC (Plan for Software Aspects of Certification) will ultimately be
approved. The DER will prepare an 8110-3 form to document the approval of the PSAC and submit
the form to the FAA.

16. What are the criteria for determining the need for qualification of the tools used during DO-178B certification?
All tools used for software development and software verification should be assessed for
qualification.
The assessment should be documented in the Plan for Software Aspects of Certification. Tools
that can introduce an error into the flight software should be assessed as development tools. If the
output of these tools is not verified by review, analysis or test, then it should be qualified.
Generally, compilers, linkers and cross-assemblers are not qualified as development tools since their
outputs are subject to verification by test. Source code generation tools require qualification if
they are used to reduce or eliminate code reviews for Levels A-C. Tools that can fail to detect an error in the
flight software should be assessed as verification tools. If the output of the tool is not verified by
review, analysis or test then it should be qualified. Typically, structural coverage analysis tools and
tools that automatically check test results are qualified as verification tools.
17. How is qualification of tools used during DO-178B verification achieved?
Qualification for tools is defined in RTCA/DO-178B Section 12.2 and FAA Order 8110.49 Chapter 9.
For a verification tool, the use of the tool and the proposal for qualifying the tool should be
defined in the additional considerations section of the Plan for Software Aspects of Certification.
The tool operational requirements need to be defined and documented. Test cases are created to
show that the verification tool meets its operational requirements under normal operating
conditions. The results of the testing should be recorded and documented.
All data for the verification process should be under configuration control as CC2 (Change Category
2) data. The results of the qualification activities should be summarized in the Software
Accomplishment Summary. The qualification activities should be performed with independent
verification of the tool data if the DO-178B objective requires independent verification.
18. Would Excel need to be "qualified" to be used for DO-178B verification purposes?
If Excel is used in a manner that would fail to detect an error in the flight code, it would need to
be qualified. If Excel is used to calculate expected results for test cases, and the expected results
are not reviewed, then a qualification should be performed. If Excel is used to populate test
harnesses with test inputs and the inputs are not independently confirmed, then a qualification
should be performed. If Excel is used to compare expected results with actual results for test
procedures, and the results are not reviewed, then a qualification should be performed.
19. Does all of the software running on a processor need to be at the same Design Assurance Level?
No, not necessarily. Partitioning may be used to run different functions of different Design
Assurance Levels on the same processor.
This is most commonly done with a partitioned real-time operating system (RTOS). Many times,
equipment designers will have provisions for having separate executables for software of different
Design Assurance Levels. Example: the Level A flight code only executes during flight, the
Level E data loader only executes during specific maintenance procedures on the ground or at
a repair facility. In general, the analysis and verification to prove time, data, and memory partitioning
for two or more functions performed simultaneously on a processor are not trivial.
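As a rough illustration of the time element of partitioning, the sketch below shows a fixed-schedule table in C; the partition names, levels, and window durations are invented, and a real system would rely on a certified partitioned RTOS rather than a hand-rolled table like this.

    /* Hypothetical major frame: each partition owns a fixed time window,
     * so an overrun in the Level E partition cannot consume time budgeted
     * to the Level A partition. Illustrative only. */
    typedef enum { DAL_A, DAL_B, DAL_C, DAL_D, DAL_E } dal_t;

    typedef struct {
        const char *partition;  /* hosted function             */
        dal_t       level;      /* its Design Assurance Level  */
        unsigned    window_ms;  /* guaranteed execution window */
    } schedule_slot_t;

    static const schedule_slot_t major_frame[] = {
        { "flight_control",     DAL_A, 20 },
        { "display_manager",    DAL_C, 15 },
        { "maintenance_loader", DAL_E,  5 },
    };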

20. Does all of the software running on a circuit card assembly need to be at the same Design
Assurance Level?
This would depend on the architecture of the circuit card. Oftentimes, separate processors are put
on a circuit card to have different functions and different Design Assurance Levels. If the circuit
card has the necessary isolation, then software of different Design Assurance Levels could be
hosted. The partitioning for two or more functions performed simultaneously on a single processor
is not trivial and would need to be proved.
21. How can existing software be upgraded to comply with DO-178B?
This depends on the quantity and quality of the documentation and lifecycle data produced in the
original development. In general, the FAA eschews reverse engineering to produce compliance data
for software. If this approach is proposed, then it is recommended that the applicant seeks FAA
agreement with the approach before proceeding with the effort.
The biggest problem with existing software is producing the functional requirements needed for use
in the verification testing. Requirements written after the fact tend to be based on knowledge
and/or description of the specific design or implementation, not the intended function. When this
happens, the verification testing proves that the design or implementation is met and not that the
implementation meets its functional requirements. If the software was originally developed and
approved at a lower Design Assurance Level and needs to be upgraded to a higher Design Assurance
Level, then service history assessment, additional testing and activities required by the higher
Design Assurance Level may be performed.
22. Where do traditional Preliminary and Critical Design Reviews (PDR/CDR) fit in the design life
cycle timeline?
The Preliminary Design Review (PDR) is usually performed when the software or hardware
requirements trace to and fully implement the system or parent requirements baseline. The
Critical Design review (CDR) is usually performed when the software or hardware design traces to
and fully implements its requirements. A software CDR is performed prior to the formal coding
phase. A hardware CDR is typically used as an approval gate to manufacture hardware to be used
in formal tests.
23. When should the V&V team get involved on a project?
V&V should be involved right from the planning phase. The team should write their own
verification plan and assess the project needs for tools, equipment and the overall approach for
verification and testing.
V&V can review plans and standards when independent review is required.
V&V can also perform an overview of the system, hardware and software to look for overlaps in the
testing. This may allow integrated tests to cover system, software and/or electronic hardware
requirements all in one test case. Such an approach requires a disciplined effort in requirements
capture. It also requires test equipment that can capture analog, digital waveforms and software
data simultaneously during a test scenario.

V&V can be highly useful and effective in requirements reviews. They can provide early
feedback on whether the requirements, as written, can be verified by analysis and/or test.

V&V can start writing their test cases as soon as the requirements are available and ideally,
reviewed and released.

24. What is validation and when should it occur? How much design can be completed before
validation?
In RTCA/DO-254, validation is the review/analysis and/or test of derived requirements to ensure
that they are correct and complete in the context of the parent requirements allocated to the
hardware item (i.e. a PLD). Validation of the derived requirements, for a programmable logic
device (PLD) is typically performed during the review of the requirements document. All
requirements denoted as "derived" are evaluated against the applicable criteria for
completeness/correctness. Since the design is created to fulfill and trace to the requirements,
including the derived requirements, any design work done before the derived requirements have
been validated would be done at risk.
That is, if the validation of the derived requirements causes changes, then there may be a
downstream effect on the respective part of the design. Aerospace Recommended Practice
ARP-4754 defines methods for validation of system requirements. Requirements specified at a
system level and then allocated for implementation in software or airborne electronic hardware do
not need to be validated again at the software or airborne electronic hardware level.
25. How is the Implementation phase similar between hardware and software? How is it different?
For hardware the implementation phase produces test articles using manufacturing processes and
techniques identical or similar to the production environment. Once the hardware has been tested
and any final changes made, the data from the implementation phase is used in the production of
the hardware. Software does not have an implementation phase defined in DO-178B. The
integration activity is similar to the implementation defined in DO-254.
For software, once testing is complete, the executable object code and data necessary for load
control of the software is provided to production. Note that the hardware would need to be
produced in an implementation phase before the software testing can be formally concluded. That
is, software needs to be tested on flight hardware. The flight hardware needs to be representative
of the final configuration.
26. Where does DO-178B fit into the FAA TSO certification?
FAA Advisory Circular 20-115B for software allows the use of DO-178B for TSO compliance. If
equipment is developed under TSO rules, the applicant can show compliance to DO-178B for
software and DO-254 for airborne electronic hardware. Current policy allows for DERs to make
compliance findings for DO-178B for TSOs. This policy is currently being reassessed by the FAA.
27. Are there any other top-tiered certifications that DO-178B can be applied to?
Currently, DO-178B and DO-254 are aerospace specifications for commercial aircraft. They are a
collection of industry best practices.
28. Will DO-178C replace DO-178B or will either be accepted as part of the FAA certification
process?
This is up to the FAA to determine. An FAA Advisory Circular will be issued when the FAA
considers DO-178C an acceptable means of compliance to the Federal Aviation Regulations.
When DO-178B was published, a transition period was allowed for software developed to DO-178A.
A transition process may also be defined; it is too early to tell.

29. What are the most common mistakes a company makes while trying to get their product or
system through certification?
Underestimating the value of effective planning.
Underestimating the value of well-documented and well-reasoned standards.
Underestimating the value of well-crafted requirements.
Underestimating the scope of verification.
Emphasizing delivery of software or hardware over producing well-crafted requirements
and design documentation.
Overestimating the capabilities of outsourced or contracted work.
Lack of proper oversight of outsourced or contracted activities.
Not incorporating project-specific requirements, such as FAA Issue Papers. The assumption
that what was acceptable for the last development program will work on the current
program is often incorrect.
Not considering electronic hardware aspects and software design tools in common cause
analysis.
Looking for ways to avoid the real work that compliance entails.
30. I have software that is already developed for my product and now I have to certify it to
conform to DO-178B standards. Is this possible and if so, how do I do it?
It may be possible and needs to be evaluated on a case by case basis. The success depends on the
quantity and quality of the documentation and lifecycle data produced in the original
development.
In general, the FAA eschews reverse engineering to produce compliance data for software. If this
approach is proposed, then it is recommended that the applicant seeks FAA agreement with the
approach before proceeding with the effort. The biggest problem with existing software is
producing the functional requirements needed for use in the verification testing. Requirements
written after the fact tend to be based on knowledge and/or description of the specific design or
implementation, not the intended function. When this happens, the verification testing proves
that the design or implementation is met and not that the implementation meets its functional
requirements.
If the software was originally developed and approved at a lower Design Assurance Level and needs
to be upgraded to a higher Design Assurance Level, then service history assessment, additional
testing and activities required by the higher Design Assurance Level may be performed.

31. How many stages of involvement are there between the 'authority' (FAA) and the company that is
trying to obtain certification, and what documents are due at each SOI audit?
The Stage of Involvement (SOI) audits are defined in the FAA Orders for software (8110.49) and
airborne electronic hardware (8110.105). All four audits are performed for each program. The
amount of direct FAA involvement is determined by a set of criteria called the level of FAA
involvement (LOFI). SOI activity is also determined by the amount of delegation to a DER, program
requirements and FAA Issue Papers on supplier oversight.
FAA Order 8110.49 defines four Stage of Involvement (SOI) audits for software:
Software Planning Review (SOI #1)
Software Development Review (SOI #2)
Software Verification Review (SOI #3)
Final Certification Software Review (SOI #4)
The lifecycle data inspected at each SOI audit is defined in the respective FAA Order for software
(8110.49) and airborne electronic hardware (8110.105).
32. How many objectives are there in DO-178B that must be addressed?
There are 66 objectives defined in DO-178B.
All 66 apply to Level A software.
65 apply to Level B software.
57 apply to Level C software.
28 apply to Level D software.
33. What is a DER? What does the DER have to do with DO-178B?
A Designated Engineering Representative (DER) is a representative of the FAA Administrator
authorized by law to examine, test, and/or make inspections necessary to issue aircraft
certificates. A DER is not an employee of the U.S. Government, and is not federally protected for
the work they perform or their decisions.
For a given project, a DER may be delegated to make compliance findings to the applicable Federal
Aviation Regulations on behalf of the FAA. For airborne electronic hardware, compliance to the
Federal Aviation Regulations may be established by finding compliance to the objectives in DO-254
(as permitted by Advisory Circular 20-152). For software, compliance to the Federal Aviation
Regulations may be established by finding compliance to the objectives in DO-178B (as permitted
by Advisory Circular 20-115B). The DER then assesses compliance to DO-178B or DO-254 as a means
of compliance to the Federal Aviation Regulations. The compliance findings are documented on FAA
form 8110-3.
DERs can fulfill their role as a company DER or a consultant DER. A company DER has delegation for
the products their employer produces. A consultant can get delegation for projects from different
suppliers or airframe manufacturers. For companies with an organizational delegation, a DER can
perform their duties as an Authorized Representative (AR) in a Delegation Option Authorization
(DOA) or as a Unit Member (UM) in an Organization Designation Authorization (ODA).
34. What are the DO-178B Criticality Levels? And what does that mean for testing at the different
levels?
DO-178B has five Design Assurance Levels (DAL) A/B/C/D/E.
The design assurance level is assigned commensurate with the hazard the software can cause or
contribute to.
Levels A-C all require test coverage of high level and low level software requirements. Level D only
requires test coverage of high level software requirements. Level E is exempt from DO-178B
compliance; the testing is determined by the developer or contractual arrangements.
Structural coverage and test coverage of data and control coupling are required for Levels A-C;
Level D does not require them. The coverage criteria are most rigorous for Level A (modified
condition/decision coverage), while Level B requires decision and statement coverage. Level C
needs to achieve statement coverage only.
35. Is DO-178B used by the Military?
Military projects can elect to use DO-178B. Compliance to DO-178B is determined by the
project, not the FAA. Military projects often cite these guidelines since suppliers often already
have exposure to them. The FAA has a Military Certification Office (MCO) in Wichita, KS for military aircraft
derived from commercial products.
FAA certification can be performed for an already certified civilian aircraft adapted for military
use. There would however be no way to certify a fighter jet since it does not have a civilian
application.

DO-178B Design Assurance Level (DAL)


For systems and equipment using software to fulfill a safety-related aircraft function, the FAA Advisory
Circular 20-115B cites RTCA/DO-178B as a means of compliance to the Federal Aviation Regulations
(FARs) Part 21, 23, 25, 27, 29 and 33. The FAA defines RTCA/DO-178B as a means, but not the only
means, of compliance to the FARs. It is an extremely rare exception that an alternative means of
compliance is used for software in avionics applications.
In order to certify safety-critical airborne software using the RTCA/DO-178B guidelines, the system
safety assessment process will identify the applicable DAL according to the five failure condition
categories identified below.

Level A (Catastrophic): Software that would cause or contribute to a failure of the system function
resulting in conditions that would prevent continued safe flight and landing.

Level B (Hazardous/Severe-Major): Software that would cause or contribute to a failure of the system
function resulting in reducing the capability of the aircraft or the ability of the crew to cope with
adverse operating conditions, such that there would be a large reduction in safety margins or
functional capabilities.

Level C (Major): Software that would cause or contribute to a failure of the system function resulting
in reducing the capability of the aircraft or the ability of the crew to cope with adverse operating
conditions, such that there would be a significant reduction in safety margins or functional
capabilities, a significant increase in crew workload, possibly including injuries to occupants.

Level D (Minor): Software that would cause or contribute to a failure of the system function which
would involve crew actions that are well within their capabilities, causing slight reductions in safety
margins or functional capabilities and a slight increase in crew workload.

Level E (No Effect; DO-178B objectives do not apply): Software that would cause or contribute to a
failure of the system function which has no effect on the operational capability of the aircraft and
does not increase crew workload.

The DO-178B Software Development Lifecycle is made up of six main phases:


Project Planning Phase,
Validation & Verification Phase,
Requirements Phase,
Design & Architecture Phase,
Implementation & Integration Phase, and
Delivery Phase.
Each phase in the software development lifecycle consists of guidelines and activities for achieving
compliance with the certification objectives that must be satisfied for phase completion.
In CERTON's model of the DO-178B Software Development Lifecycle, the Validation & Verification
Phase encompasses activities during all of the development phases once the plans have been approved
by a Certification Authority (FAA, EASA, ANAC, Transport Canada, etc.). This is the key to successfully
managing the risk on a DO-178B project and the V&V team members should contribute to the
development of these plans that will affect the entire program as it evolves.
Schedule and budget overruns on software development projects can almost always be attributed to
not having a trained and experienced V&V team in place early and actively involved during all phases
of the software lifecycle.
DO-178B projects are requirements-based, so expertise and experience are needed to contribute
valuable input related to System Requirements, High Level Requirements, and Low Level Requirements
Design and Architecture that will support streamlined implementation and integration, V&V, and Delivery.
Errors in the Requirements, Design and Architecture have to be identified and resolved as they are
created by the Development team early in the project. Otherwise, the inevitable consequence of
detection by V&V after implementation and integration is complete rework from the top all the way
down. These avoidable errors become very costly to a program with milestone deadlines, such as First
Flight (SOF), Type Inspection Authorization (TIA), and Certification.

Project Planning Phase


At the start of the DO-178B Software Development Lifecycle, the objectives of the Project Planning
Phase must be addressed for compliance (DO-178B Table A-1). This Software Planning Process involves
creating plans and standards to govern the development, verification, configuration management,
quality assurance, and delivery of software in compliance with DO-178B guidelines. In this phase, the
Software Development Plan (SDP), Plan for Software Aspects of Certification (PSAC), Software Quality
Assurance Plan (SQAP), Software Configuration Management Plan (SCMP), Software Verification Plan
(SVP), and Software Requirements, Design & Coding Standards (SRDCS) documents are written and
reviewed in preparation for the first Stage of Involvement (SOI) Audit with the Certification Authority.
These documents are explained in further detail below.
CERTON has the experience and expertise to help you develop the plans that will be approved by the
Certification Authority and govern your successful DO-178B project.
Plan for Software Aspects of Certification (PSAC)

The purpose of the PSAC is to provide the primary means used by the Certification Authority for
determining whether an applicant is proposing a software lifecycle that is commensurate with the rigor
required for the DAL of software being developed (DO-178B Section 11.1). The PSAC should include
the following, at a minimum:

System Overview

Software Overview

Certification Considerations

Software Lifecycle

Software Lifecycle Data

Schedule

Additional Considerations

Software Development Plan (SDP)

The purpose of the SDP is to identify the objectives, standards, and software lifecycle(s) to be used in
the software development processes, ref. DO-178B Section 11.2. It can be included in the PSAC and
should contain the following, at a minimum:

Standards

Software Lifecycle

Software Development Environment

Software Verification Plan (SVP)


The SVP is a description of the verification procedures to satisfy the software verification process
objectives, ref. DO-178B Section 11.3. The procedures vary by software DAL as defined in the Tables
of DO-178B Annex A. The SVP should contain the following, at a minimum:

Organization

Independence

Verification Methods

Verification Environment

Transition Criteria
Partitioning Considerations
Compiler Assumptions
Re-verification Guidelines
Previously Developed Software
Multiple-Version Dissimilar Software

Software Configuration Management Plan (SCMP)


The Software Configuration Management Plan establishes the methods to be used to achieve the
objectives of the software configuration management (SCM) process throughout the software lifecycle,
ref. DO-178B Section 11.4. The SCMP should contain the following, at a minimum:

Environment
Activities
Configuration Identification
Baselines and Traceability
Problem Reporting
Change Control
Change Review
Configuration Status Accounting
Archive, Retrieval, and Release
Software Load Control
Software Lifecycle Environment Controls
Software Lifecycle Data Controls
Transition Criteria
SCM Data
Supplier Control

Software Quality Assurance Plan (SQAP)


The purpose of the SQAP is to establish the methods to be used to achieve the objectives of the
software quality assurance (SQA) process, ref. DO-178B Section 11.5. The SQAP should contain the
following, at a minimum:

Environment
Authority
Activities
Transition Criteria
Timing

SQA Records
Supplier Control

Software Requirements, Design & Coding Standards (SRDCS)


The purpose of the Software Requirements, Design, and Coding Standards is to establish a set of methods,
rules, and tools that will be used in the development of the software item to promote consistency
among processes, outputs, and artifacts. These standards provide the constraints necessary to
enforce clarity and consistency between developers, and they streamline the activities associated with
requirements, design, and code development, validation, and verification throughout the software
lifecycle, preventing errors that could cause safety issues, schedule impact, and budget overruns.

Requirements Phase
The Requirements Phase uses the outputs of the System Lifecycle Process to develop the High Level
Requirements. These High Level Requirements include functional, performance, interface, and
safety-related requirements, ref. DO-178B Section 5.1.
The inputs to the Requirements Phase are as follows:

System Requirements Data (SRD) allocated to software

Hardware Interfaces and System Architecture

Software Development Plan

Software Requirements, Design, and Coding Standards (SRDCS)

The outputs of the Requirements Phase are as follows:

Software High Level Requirements Document (SHLRD)

Software High Level Signal Dictionary (SHLSD)

Software High Level Requirements Document (SHLRD)

The Software High Level Requirements Document partitions and lists the high level requirements of the
software item using functional decomposition to differentiate the functionality of the subcomponents.
This includes operational, behavioral, and functional requirements. Derived high level requirements
should be provided to the system safety assessment process. The V&V phase activities should
commence as soon as a baseline of the SHLRD can be established in CM, including reviews for clarity,
consistency, and, most importantly, testability.
Software High Level Signal Dictionary (SHLSD)

The High level signal dictionary is used to capture a consolidated and universal list of all Input and
Output signals to the software from a board level perspective in order to generate requirements that

are testable from "end to end". Ideally, this signal dictionary would be directly traceable to system
level signals and schematics with a standardized naming convention for all signals.

Design & Architecture Phase


The Design & Architecture Phase uses the outputs of the Requirements Phase to develop the Low Level
Requirements. These Low Level Requirements include details on the design and architecture that can
be used to implement Source Code, ref. DO-178B Section 5.2.
The inputs to the Design & Architecture Phase are as follows:

Software High Level Requirements Document (SHLRD)

Software High Level Signal Dictionary (SHLSD)

Software Development Plan

Software Requirements, Design, and Coding Standards (SRDCS)

The outputs of the Design & Architecture Phase are as follows:


Software Low Level Requirements Document (SLLRD)
Software Low Level Data Dictionary (SLLDD)
Software Low Level Requirements Document (SLLRD)

The Software Low Level Requirements Document identifies how the requirements are partitioned and
lists the low level requirements of the software item. Low level requirements may contain
implementation-specific details of how the software will implement the functionality and behavior
described in the high level requirements. Derived low level requirements should be provided to the
system safety assessment process. The V&V phase activities should commence as soon as a baseline of
the SLLRD can be established in CM, including reviews for clarity, consistency, and, most importantly,
testability.
Software Low Level Data Dictionary (SLLDD)

The low level data dictionary contains a list of the input and output signals used in the SLLRD. This
document also describes the attributes of the signals. This includes: the signal names, signal data
types, operational ranges (min/max), memory map addresses, units, etc. Ideally, this data dictionary
would be directly traceable to the Software High Level Signal Dictionary (SHLSD) with a standardized
naming convention for all data.
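As an illustration only, a single SLLDD entry could be represented as a record like the following C structure; the field set mirrors the attributes listed above, while the type name, field names, and the example signal are invented.

    /* Sketch of one low level data dictionary entry (hypothetical). */
    typedef struct {
        const char   *name;       /* signal name, traceable to the SHLSD */
        const char   *data_type;  /* e.g., "int16", "float32"            */
        double        min;        /* operational range minimum           */
        double        max;        /* operational range maximum           */
        unsigned long address;    /* memory map address                  */
        const char   *units;      /* engineering units                   */
    } slldd_entry_t;

    static const slldd_entry_t slldd_example = {
        "ADC1_AIRSPEED", "int16", 0.0, 450.0, 0x4000A010UL, "knots"
    };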

Validation & Verification Phase


The most important phase in the development of DO-178B software is the Validation & Verification
Phase. The V&V phase provides assurance that the RTCA/DO-178B objectives identified in
Tables A-2 through A-9 have been satisfied according to the objectives in Table A-1. The majority of
effort involved in a DO-178B project is associated with this phase of the software development
lifecycle, beginning with the Requirements Phase and ending with the Delivery Phase.
Validation Process
The Validation process in this phase provides assurance that the correct software requirements and
code have been developed according to the intended functions described in the System Requirements.
The High Level Requirements are aligned with and validated against the System Requirements, the
Design (Low Level) Requirements are aligned with and validated against the High Level Requirements,
and the Code Implementation is aligned with and validated against the Design (Low Level)
Requirements.
Verification Process
The Verification process in this phase provides assurance that the executable object code performs the
intended functions described by the validated High Level and Low Level Requirements while executing
in the target operational environment. Test Cases are developed and reviewed for complete coverage
(normal range and robustness) of the High Level Requirements (DAL A, B, C and D) and Low Level
Requirements (DAL A, B, and C). Test Procedures are developed and reviewed to correctly and
completely implement the Test Cases in a test environment according to the approved Software
Verification Plan (SVP). The Software Verification Cases & Procedures document (SVCP) details how to
reproduce the test setups and execution, along with a trace matrix for complete requirements based
test coverage.
The Verification process checks that the expected results derived from the written requirements are
equal to the actual results produced from the Black Box or actual flight code. CERTON's CERTTEST and
CERTBENCH tool sets support Test Case development and fully automated Test Procedure execution
against Black Box LRUs, circuit cards, or SDK boards with the target processor.
Structural Coverage Analysis
Structural Coverage Analysis at the source code level is broken down into three categories:

Statement Coverage: during execution of requirements-based testing, every statement in the
program should be invoked at least one time.

Decision Coverage: during execution of requirements-based testing, every point of entry
and exit in the program should be invoked at least once, and every decision in the program
should take on all possible outcomes at least once.

Modified Condition/Decision Coverage (MC/DC): during requirements-based testing, every
point of entry and exit in the program should be invoked at least once, every decision should
take all possible outcomes at least once, every condition in a decision should take all possible
outcomes at least once, and each condition in a decision should be shown to independently
affect the outcome of that decision.
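The difference between the three criteria can be seen on a single decision with two conditions; the function and its test inputs below are a hypothetical sketch, not an example from DO-178B.

    /* One decision, two conditions (illustrative). */
    int engage_autopilot(int weight_on_wheels, int pilot_cmd)
    {
        if (!weight_on_wheels && pilot_cmd) {
            return 1;   /* engage */
        }
        return 0;       /* remain disengaged */
    }

    /* Statement coverage: both return statements must execute, e.g.
     *   (wow=0, cmd=1) and (wow=1, cmd=0)            -> 2 tests.
     * Decision coverage: the decision must evaluate both true and false;
     *   the same 2 tests suffice here.
     * MC/DC: each condition must be shown to independently flip the
     * decision while the other is held fixed, e.g.
     *   (wow=0, cmd=1) -> true
     *   (wow=1, cmd=1) -> false  (only wow changed)
     *   (wow=0, cmd=0) -> false  (only cmd changed)  -> 3 tests.
     */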
Code structures identified during Structural Coverage Analysis that are not exercised during
requirements based testing may be the result of one or more of the following, ref. DO-178B Section
6.4.4.3:

Shortcomings in the requirements based test cases and/or procedures

Missing or inadequate software requirements

Dead Code

Deactivated Code
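A short, invented C fragment may help distinguish the last two findings: dead code can never execute for any input and is normally removed, while deactivated code is intentionally disabled in a given configuration and its deactivation must be verified.

    #define HAS_ICE_DETECT 0   /* feature disabled in this configuration */

    int condition_sensor_value(int raw)
    {
        if (raw < 0) {
            raw = 0;
        } else if (raw < 0) {  /* dead code: unreachable for any input */
            raw = -1;
        }
    #if HAS_ICE_DETECT
        raw = adjust_for_ice(raw);  /* deactivated code: compiled out here,
                                       but part of other configurations */
    #endif
        return raw;
    }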

The Structural Coverage Analysis activity should also confirm the data coupling and control coupling
between the code components.
Note: Requirements based test coverage for data and control coupling should be identified as part of
the separate data and control coupling analysis activities.
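To illustrate what the coupling analyses look for, the fragment below (all names hypothetical) shows one shared variable (data coupling) and one component deciding that another executes (control coupling); requirements-based tests must exercise both interfaces.

    #define LOW_FUEL_LBS 500

    int  read_fuel_sensor(void);      /* hypothetical externals */
    void raise_low_fuel_alert(void);
    void show_fuel(int lbs);

    static int fuel_qty_lbs;          /* data coupling: shared between components */

    void sensor_task(void)
    {
        fuel_qty_lbs = read_fuel_sensor();   /* writer side of the data coupling */
        if (fuel_qty_lbs < LOW_FUEL_LBS) {
            raise_low_fuel_alert();          /* control coupling: this component
                                                causes another one to execute */
        }
    }

    void display_task(void)
    {
        show_fuel(fuel_qty_lbs);             /* reader side of the data coupling */
    }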
Also, reference DO-248B, Section 3.43:
Sections 6.4.4.2 and 6.4.4.3 of DO-178B/ED-12B define the purpose of structural coverage analysis and
the possible resolution for code structure that was not exercised during requirements-based testing.
The purpose of structural coverage analysis with the associated structural coverage analysis resolution
is to complement requirements-based testing as follows:

Provide evidence that the code structure was verified to the degree required for the applicable
software level;

Provide a means to support demonstration of absence of unintended functions;

Establish the thoroughness of requirements-based testing.

Source to Object Code Trace Analysis (Level A Only)

The compiler converts the Source Code into object code that is compatible with the target computer.
For DAL A software, it is required to provide assurance that all object code generated by the compiler
is traceable to the source code. Any object code that is not directly traceable to the source code
requires additional verification in order to provide assurance of safety and of the absence of
unintended behavior.
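A common illustrative case is a switch statement: a compiler may lower the sketch below to a bounds check plus an indirect jump through a table, object code that has no one-to-one line of source behind it and therefore falls under this additional verification. The function and values are invented.

    /* A compiler may implement this switch as a jump table with a
     * generated range check; that generated code is not directly
     * traceable to any single source line. */
    int select_gain(int mode)
    {
        switch (mode) {
        case 0:  return 10;
        case 1:  return 25;
        case 2:  return 40;
        case 3:  return 55;
        default: return 0;
        }
    }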

Implementation & Integration Phase


The Implementation & Integration Phase uses the outputs of the Design & Architecture Phase to
develop the Source Code, ref. DO-178B Sections 5.3 and 5.4. This Source Code will be compiled and loaded onto
the target computer for Verification.
The inputs to the Implementation & Integration Phase are as follows:

Software Low Level Requirements Document (SLLRD)

Software Low Level Data Dictionary (SLLDD)

Software Development Plan

Software Requirements, Design, and Coding Standards (SRDCS)

The outputs of the Implementation & Integration Phase are as follows:

Source Code

Executable Object Code

Source Code

This is the Source Code implementation developed from the software architecture and low level
requirements using the approved programming language specified in the SDP. It will be traceable to
the low level requirements and reviewed in order to validate that the correct Source Code has been
developed.
Executable Object Code

This is the object code generated by the compiler from the Source Code; it will be loaded
onto the target computer as part of the integration process and verified using the requirements-based
test cases and procedures, including HW/SW integration testing. For DAL A software, the Executable
Object Code will need to be shown to be directly traceable to the Source Code from which it was
generated by the compiler.
