
An Overview of Quantitative Risk Assessment

Methods

Fayssal Safie/MSFC

August 1, 2000

Shuttle Quantitative Risk Assessment - Technical Interchange Meeting 1


An Overview of Quantitative Risk Assessment Methods

• Definitions
• Qualitative and Quantitative FMEA – FMECA
• Qualitative and Quantitative Fault Tree Analysis (FTA)
• Probabilistic Risk Assessment (PRA)
• Reliability Allocation
• Reliability Prediction
• Reliability Demonstration
• Trend Analysis
• Probabilistic Structural Analysis
• Design of Experiments (DOE)
• Statistical Process Control (SPC)
• Manufacturing Process Capability
2
Definitions

• Probability: The chance or the likelihood of occurrence of an event.
• Risk: The chance of occurrence of an undesired event and
the severity of the resulting consequences.
• Risk Assessment: The process of qualitative risk
categorization or quantitative risk estimation.
• Risk Management: The process of risk identification, risk
assessment, risk disposition, and risk tracking and control.

3
Definitions

• Reliability: The probability that an item will perform its intended function for a specified mission profile.
• Safety: Freedom from injury, damage, or loss of resources.
• Hazard: The condition that can result in or contribute to a
mishap.
• Mishap: An unintended event that can cause injuries,
damage, or loss of resources.

4
Failure Modes and Effects Analysis (FMEA)

• FMEA is an inductive (bottom-up) engineering analysis method.


• It is intended to analyze system hardware, processes, or
functions for failure modes, causes, and effects.
• Its primary objective is to identify critical and catastrophic
failure modes and to assure that potential failures do not result
in an adverse effect on safety and system operation.
• It is an integral part of the design process.
• It is performed in a timely manner to facilitate prompt action by the design organization and project management.

5
Failure Modes and Effects Analysis (FMEA)

• Items in a typical FMEA sheet for the Shuttle program:


• Nomenclature and function
• Failure mode and cause
• Failure effect on subsystem
• Failure effect on element
• Failure effect on mission/crew and reaction time
• Failure detection
• Redundancy screens
• Correcting action/timeframe/remarks
• Criticality
6
FAILURE MODE EFFECTS ANALYSIS

REVISION: Basic   DATE: March 15, 1988   PAGE: A-141   SUPERCEDES: ______
ANALYST: C. Barnes   APPROVED: G. Perry
THRUST VECTOR CONTROL SUBSYSTEM
Mission phases: A FINAL COUNTDOWN, B BOOST, C SEPARATION, D DESCENT, E RETRIEVAL

20-01-44, FM Code A01

NOMENCLATURE AND FUNCTION:
Turbine Exhaust Duct Assembly, P/N: 10206-0002-102, Ref. Des.: None, 2 Required.
Vents HPU turbine exhaust gas to atmosphere outside of the aft skirt.
Exhaust Duct Assembly includes:
  Upper Exhaust Assembly (three bellows) 10206-0003-101
  Middle Exhaust Assembly 10206-0007-101 (Alt. 10206-0031-851, Alt. 10206-0044-851, Alt. 10206-0045-851)
  Lower Exhaust Assembly 10206-0010-101

FAILURE MODE AND CAUSE:
External leakage of hot exhaust gas (System A and/or B) caused by:
  • Bellows fracture/fatigue
  • Flange/duct fracture
  • Seal failure
  • Seal surface defect
  • Improper torque
  • Contamination during assembly
  • Improperly lockwired

Phases A, B:
  FAILURE EFFECT ON SUBSYSTEM: Actual loss of containment of hot exhaust gases.
  FAILURE EFFECT ON SRB: Probable loss. Fire and explosion.
  FAILURE EFFECT ON MISSION/CREW AND REACTION TIME: Probable loss. Fire and explosion will lead to loss of the mission, vehicle, and crew. Reaction time: seconds.
  a. FAILURE DETECTION: None   b. REDUNDANCY SCREENS: N/A
  CORRECTING ACTION: None   TIMEFRAME: N/A
  CRIT CAT: 1

Phases C, D, E:
  No effect. Failure mode not applicable to these phases.
  a. FAILURE DETECTION: N/A   b. REDUNDANCY SCREENS: N/A
  CRIT CAT: 3

7
Failure Modes and Effects Analysis (FMEA)

Benefits:
• The FMEA provides a systematic evaluation and
documentation of failure modes, causes and their effects.
• It categorizes the severity (criticality category) of the
potential effects from each failure mode/failure cause.
• It provides input to the CIL (Critical Items List).
• It identifies all single point failures.
• The FMEA findings constitute a major consideration in
design and management reviews.
• Results from the FMEA provide data for other types of
analysis, such as design improvements, testing, operations
and maintenance, and analysis of mission risk.
8
Failure Modes, Effects, and Criticality Analysis (FMECA)

• A FMECA is similar to a FMEA; however, a FMECA provides information to quantify, prioritize, and rank failure modes.
• It is an analysis procedure which identifies all possible failure modes,
determines the effect of each failure on the system, and ranks each failure
according to a severity classification of failure effect.
• MIL-STD-1629A, Procedures for Performing a FMECA, discusses the
FMECA as a two-step process:
• Failure Modes and Effects Analysis (FMEA).
• Criticality Analysis (CA).
• Criticality analysis can be done quantitatively using failure rates or
qualitatively using a Risk Priority Number (RPN).
• CA using failure rates requires an extensive amount of information and
failure data.
• An RPN is a relatively simple measure that combines relative weights
for the severity, frequency, and detectability of a failure. It is used for
ranking high-risk items (a short sketch follows).
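To make the RPN arithmetic concrete, here is a minimal Python sketch. The scale values and entries are hypothetical illustrations (drawn loosely from the duct-assembly example on the next slide), not program data; real programs define their own severity, frequency, and detectability scales.

    # Minimal RPN ranking sketch; severity/frequency/detectability values are
    # hypothetical illustrations on assumed 1-10 scales.
    failure_causes = [
        # (cause, severity, frequency, detectability)
        ("Bellows fracture/fatigue",      10, 4, 8),
        ("Improper torque",               10, 3, 5),
        ("Contamination during assembly", 10, 2, 6),
    ]

    ranked = sorted(failure_causes, key=lambda c: c[1] * c[2] * c[3], reverse=True)
    for cause, sev, freq, det in ranked:
        print(f"{cause:30s} RPN = {sev * freq * det}")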
9
Failure Modes, Effects, and Criticality Analysis (FMECA)
Example

Part name / Part number: Turbine Exhaust Duct Assembly, P/N 10206-0002-102
Potential failure mode: External leakage of hot exhaust gas (System A and/or B)

Causes (failure mechanism)          Effects              Sev  Freq  Det  RPN   Recommended Improvement   Sev  Freq  Det  RPN
1. Bellows fracture/fatigue         Fire and explosion
2. Flange/duct fracture             Fire and explosion
3. Seal failure                     Fire and explosion
4. Seal surface defect              Fire and explosion
5. Improper torque                  Fire and explosion
6. Contamination during assembly    Fire and explosion
7. Improperly lockwired             Fire and explosion

10
Qualitative Fault Tree Analysis (FTA)

• A FTA is a deductive (top-down) approach that graphically and logically represents events at a lower level which can lead to a top undesirable event.
• It is a tool that can systematically answer the question of what can go wrong by identifying failure scenarios.
• It is an excellent tool for analyzing complex systems.
• Qualitative FTA is predominantly a Safety tool.

11
Qualitative Fault Tree Analysis (FTA)
X-34 Hydraulic System Example
This is a portion of a schematic of a system which incorporates three hydraulic pump packages. The system can still
function properly if two of the pumps operate. The fault tree example on the next slide covers only a tiny portion of one
pump package from the full hydraulic system fault tree on which this example is based.

[Schematic: a charging connector / external power source and a flight computer feed three parallel pump packages. Each package consists of a battery, a pump latching relay, a pump motor controller, an 18 HP variable-speed pump, and a pressure transducer (PT), with a pump cooling plate; the pump packages feed the FWD manifold.]

12
Qualitative Fault Tree Analysis (FTA)
X-34 Hydraulic System Example

[Fault tree, shown as an indented outline:

Top event: Inadequate Power to Pump Package 1 Motor (MTR-1-PWR)
  Pump Package 1 Motor Controller Off / Low (MTR-CTRL-1-OFF)
    Pump Package 1 Motor Controller Fails Off / Low (Component Failure) (MTR-CTRL-1-FOF)
    Pump Package 1 Motor Controller Commanded Off / Low (Software / Pressure Transducer Error) (MTR-1-CTRL-CMD-OFF)
  Inadequate / No Power to Pump Package 1 Motor Controller (MTR-CTRL-1-PWR)
    Pump Package 1 Battery Failure (Loss of Charge / Inadequate Charge) (PMP-PKG-1-BAT-F)
    Pump Package Relay Fails / Commanded Off (PMP-PKG-1-REL-OFF)
      Pump Package 1 Relay Fails Off (PKG-1-REL-FOF)
      Pump Package 1 Relay Commanded to "Off" Position (PMP-PKG-1-CMD-OFF)]

13
Qualitative Fault Tree Analysis (FTA)
Benefits:
• Provides a format for quantitative and qualitative evaluation.
• Provides a visual description of system functions that lead to
undesired outcomes.
• Identifies failure potentials which may otherwise be overlooked.
• Identifies design features that preclude occurrence of a top level fault
event.
• Identifies manufacturing and processing faults.
• Determines where to place emphasis for further testing and analysis.
• Directs the analyst deductively to accident-related events.
• Useful in investigating accidents or problems resulting from use of a
complex system.

14
Qualitative Fault Tree Analysis (FTA)
Benefits: (cont’d)
• Can identify the impact of operator/personnel interaction with a system.
• Can help identify design, procedural, and external conditions which can cause
problems under normal operations.
• Often identifies common faults or inter-related events which were previously
unrecognized as being related.
• Excellent for ensuring interfaces are analyzed as to their contribution to the top
undesired event.
• Can easily include design flaws, human and procedural errors which are sometimes
difficult to quantify (and therefore, often ground-ruled out of quantitative analysis).
• Qualitative FTA requires cutset analysis to attain full benefits of the analysis.
(Cutsets: Any group of non-redundant contributing elements which, if all occur, will
cause the top event to occur)

15
Qualitative Fault Tree Analysis (FTA)

Considerations:
• FTA addresses only one undesirable condition or event at a time.
Many FTAs might be needed for a particular system.
• Both Quantitative and Qualitative FTAs are time/resource
intensive.
• In general, design-oriented FTAs require much more time than
failure investigation FTAs. Management is mostly acquainted
with failure investigation FTAs, and such FTA efforts can give a
false sense of how quickly a design FTA can be developed.

16
Quantitative Fault Tree Analysis (FTA)

• Quantitative FTA is used as a Reliability and a Safety tool.


• It diverges from Qualitative FTA in that failure rates or probabilities are
input into the tree and the probability of occurrence is computed for the
cutsets and the top undesirable event.
• Tends to be strictly “hardware failure” oriented as opposed to Qualitative
FTA (which includes hardware and other less quantifiable faults).
• Is excellent in comparing different configurations of a system (even if the
failure rate data uncertainty is fairly high).
• Can be used to calculate the probability of occurrence of different cutsets
and the top undesirable event for reliability predictions.

17
Quantitative Fault Tree Analysis (FTA)
X-33 Methane Ground Storage and Loading Example

System Description:
• Methane loading system - The methane is stored in a
tank in liquid form and then vaporized and loaded as
a gas. This example terminates at the valve-failure level.

18
Quantitative Fault Tree Analysis (FTA)
X-33 Methane Ground Storage and Loading Example
[Fault tree, shown as an indented outline with basic-event probabilities:

Top event: Inability to Load Methane (CH4) (NO-LOAD-CH4)
  CH4 Not Supplied Through Manual Valve V-1537 (VIA-VLV-1537)
    Valve V-1557 Fails Open (VLV-1557-OP)  3.90E-04
    Valve V-1537 Fails Closed (VLV-1537-CL)  3.90E-04
  Loss / Blockage of CH4 in Loading Line (Post V-1537) (LOAD-LINE)
    CH4 Vented Through Load Line (CH4-LOAD-VNT)
      Solenoid Operated Valve SOV-1549 Mech. Fails Open (SOV-1549-MECH-OP)  6.50E-06
      Solenoid Operated Valve SOV-1549 Solenoid Fails Open (SOV-1549-SOL-OP)  3.90E-04
      Relief Valve RV-1552 Open (RV-1552-OP)  3.90E-05
    CH4 Transfer Blocked Through Load Line (CH4-LOAD-BLK)
      Solenoid Operated Valve SOV-1561 Fails Closed (SOV-1561-MECH-CL)
        Solenoid Operated Valve SOV-1561 Mech. Fails Closed (SOV-1561-MECH-OP)  6.50E-06
        Solenoid Operated Valve SOV-1561 Solenoid Fails Closed (SOV-1561-SOL-OP)  3.90E-04
      Check Valve CV-1548 Fails Closed (CV-1548-CL)  2.86E-08]
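As a hedged illustration of how basic-event probabilities roll up through the tree, the Python sketch below evaluates one branch using standard gate math. Only the basic-event probabilities come from the slide; the OR-gate structure for this branch is assumed from the figure layout.

    # Sketch of quantitative FTA gate arithmetic. OR gates combine as
    # 1 - product(1 - p); AND gates combine as product(p).
    import math

    def or_gate(probs):
        return 1.0 - math.prod(1.0 - p for p in probs)

    def and_gate(probs):
        return math.prod(probs)

    # Basic-event probabilities from the slide (per loading attempt).
    sov_1549_mech_op = 6.50e-06   # SOV-1549 mechanically fails open
    sov_1549_sol_op  = 3.90e-04   # SOV-1549 solenoid fails open
    rv_1552_op       = 3.90e-05   # relief valve RV-1552 open

    # "CH4 vented through load line" modeled here as an OR of its three contributors.
    ch4_load_vnt = or_gate([sov_1549_mech_op, sov_1549_sol_op, rv_1552_op])
    print(f"P(CH4 vented through load line) ~ {ch4_load_vnt:.2e}")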


19
Quantitative Fault Tree Analysis (FTA)

Considerations:
• The probabilities derived from a Quantitative FTA should be
viewed with the uncertainty fully understood.
• It is often difficult to obtain valid reliability data for
experimental / non-production related systems. In such cases:
• Too few items are available for a proper statistical sample
• Data from “Like” systems and operating environments must
be used
• Quantitative FTA has little or no place in failure investigations.

20
Probabilistic Risk Assessment (PRA)

• PRA is a process that follows a quantitative approach to determine the risk of a top undesirable event and the associated uncertainty arising from inherent causes.
• It provides a systematic way of answering the following questions:
• What can go wrong?
• How likely is it to happen?
• What are the consequences?
• How certain are we about the answer? (uncertainty or state of
knowledge)
• The main tools used in PRA processes are fault trees, event sequence
diagrams, and event trees.
• Other tools such as reliability block diagrams can be used to support a
PRA study.

21
Probabilistic Risk Assessment (PRA)

A typical PRA process involves:


• Identification of end state(s) to be assessed.
• Identification of Initiating Events (IE) leading to the end states.
• Development of the Event Sequence Diagrams (ESD) for the initiating event. An
ESD shows the sequence of events from IE to end states.
• Quantification of ESDs (event tree).
• Aggregation of risk for each system end state.
• Risk analysis which might include: risk ranking, risk reduction, sensitivity
analysis, etc.

22
Probabilistic Risk Assessment (PRA)
A PRA Process Example
[PRA process flow for the turbine blade porosity example:

• A Master Logic Diagram (MLD) identifies all significant basic/initiating events that could lead to loss of vehicle.
• Event probability distributions for the initiating and pivotal events are developed from flight/test data, probabilistic structural models, similarity analysis, and engineering judgment.
• An Event Sequence Diagram (ESD) lays out the scenario from initiating event to end states: porosity present in blade, blade inspection not effective, porosity present in critical location, porosity in critical location leads to crack in <4300 sec, ending in Loss of Vehicle (LOV); the other branches end in Mission Success (MS).
• The ESD is quantified as an event tree (scenario 1 = LOV, scenarios 2-5 = MS), producing an uncertainty distribution for LOV due to turbine blade porosity.
• Risk is aggregated across basic events for each system end state.
• Products: 1. System risk, 2. Element risk, 3. Subsystem risk, 4. Risk ranking, 5. Sensitivity analysis, etc.]
23
Probabilistic Risk Assessment (PRA)

Benefits:
• Imposes logic structure on risk assessment.
• Evaluates risk at various system levels including system interactions.
• Handles multiple failures and common causes.
• Provides more insight into the various system failure modes and the
effects of human/process interaction.
• Provides a tool to combine both qualitative and quantitative risk
analysis.
Limitations:
• Could be very expensive.
• Could be misapplied and misused due to the incorporation of
qualitative data.

24
Probabilistic Risk Assessment (PRA)
Event Tree Example – A Coolant System

[Diagram: a coolant system in which flow detector D and two electrically driven pumps, P1 and P2, supply emergency coolant when the normal coolant line fails.]

• P1 and P2 are electrically driven pumps, D is a flow detector, and EP (not shown) is the electric power

• Initiating event is a break in the normal coolant pipe

• Full system success (S) requires both pumps operating, the detection system, and the electrical power operating

• One pump operating results in partial success (P)

• Two pumps failing or failure of electrical power (EP) results in system failure (F) 25
Probabilistic Risk Assessment (PRA)
Event Tree Example – A Coolant System

[Event tree for the initiating event "normal coolant pipe failure", branching on electric power (EP), flow detector (D), and pumps P1 and P2:
  1-S  EP, D, P1, and P2 all succeed
  2-P  P2 fails
  3-P  P1 fails
  4-F  both P1 and P2 fail
  5-F  D fails
  6-F  EP fails]

P(.) - Probability of Component Success
Q(.) - Probability of Component Failure
S - Full System Success
P - Partial System Success
F - System Failure

26
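A minimal quantification sketch of this event tree follows. The component success probabilities are made-up placeholders (the slide defines only the logic), and the electric power branch is assumed to be evaluated first, as drawn.

    # Event tree quantification sketch with assumed component success probabilities.
    P_EP, P_D, P_P1, P_P2 = 0.999, 0.995, 0.99, 0.99   # hypothetical values

    def Q(p):
        return 1.0 - p                                  # failure probability

    sequences = {
        "1-S  both pumps operate":    P_EP * P_D * P_P1 * P_P2,
        "2-P  P2 fails":              P_EP * P_D * P_P1 * Q(P_P2),
        "3-P  P1 fails":              P_EP * P_D * Q(P_P1) * P_P2,
        "4-F  both pumps fail":       P_EP * P_D * Q(P_P1) * Q(P_P2),
        "5-F  detector fails":        P_EP * Q(P_D),
        "6-F  electric power fails":  Q(P_EP),
    }
    for name, prob in sequences.items():
        print(f"{name:28s} {prob:.6f}")
    print("total =", sum(sequences.values()))   # sequences are exhaustive, so ~1.0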
Probabilistic Risk Assessment (PRA)
Reliability Block Diagram
SRB Range Safety System (RSS) Example

[Reliability block diagram: two identical strings, each NSD (0.9998843) -> S&A (0.9965403) -> CDF1 (0.9996991) -> CDF2 (0.9996991), in parallel with each other and in series with the LSC (0.9971161).]

NSD - NASA Standard Detonator
S&A - Safe and Arm
CDF - Confined Detonating Fuse
LSC - Linear Shaped Charge

RSYS = [1 - (1 - NSD*S&A*CDF1*CDF2)^2] * LSC

27
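The block diagram math can be checked in a few lines of Python; the numbers are the block reliabilities printed on the slide.

    # Evaluate the RSS reliability block diagram: two identical series strings
    # (NSD -> S&A -> CDF1 -> CDF2) in parallel, followed in series by the LSC.
    NSD, SA, CDF1, CDF2, LSC = 0.9998843, 0.9965403, 0.9996991, 0.9996991, 0.9971161

    string   = NSD * SA * CDF1 * CDF2        # series reliability of one string
    parallel = 1.0 - (1.0 - string) ** 2     # either of the two strings suffices
    r_sys    = parallel * LSC                # series with the linear shaped charge
    print(f"R_SYS = {r_sys:.7f}")            # roughly 0.9971, dominated by the LSC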


Reliability Allocation

• Reliability allocation is the top-down process of subdividing a system reliability requirement into subsystem and component requirements.
• Reliability allocation is performed in order to translate the
system reliability requirement into more manageable,
lower level requirements.

28
Reliability Allocation
Example

SSME Reliability: 0.999
  HPFTP: 0.99975
    Turbine Ass'y: 0.99987
      Housing Ass'y: 0.999961
      Rotor Ass'y: 0.999909
        Blades: 0.999945
        Retainers: 0.999964
    Pump Ass'y: 0.99987
  HPOTP: 0.99975
  Chamber: 0.99980
  Nozzle: 0.99985
  Controls & Externals: 0.99985

29
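As a quick consistency check, the sketch below multiplies the top-level allocations back together; for series elements the allocated reliabilities should roll up to approximately the 0.999 system requirement.

    # Roll the SSME allocation back up to the system level (series elements multiply).
    import math

    allocation = {
        "HPFTP": 0.99975,
        "HPOTP": 0.99975,
        "Chamber": 0.99980,
        "Nozzle": 0.99985,
        "Controls & Externals": 0.99985,
    }
    r_system = math.prod(allocation.values())
    print(f"Allocated system reliability = {r_system:.5f}")   # ~0.999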
Reliability Allocation

Benefits:
• Reliability allocation allows design trade-off studies to be
performed in order to achieve the optimum combination of
subsystems which meets the system reliability
requirement.

30
Reliability Prediction

• Reliability prediction is the process of quantitatively estimating the reliability of a system.
• Reliability prediction is performed to the lowest level for
which data is available. The sub-level reliabilities are then
combined to derive the system level prediction.
• Reliability prediction during design is used as a benchmark
for subsequent reliability assessments.
• Predictions provide managers and designers a rational
basis for design decisions.

31
Reliability Prediction

• Reliability prediction techniques are dependent on the degree of the design definition and the availability of historical data.
• Similarity analysis techniques: Reliability of a new design is
predicted using reliability of similar parts.
• Probabilistic design techniques: Reliability is predicted using
engineering failure models.
• Techniques that utilize generic failure rates such as MIL-
HDBK 217, Reliability Prediction of Electronic Equipment.

32
Reliability Prediction
Similarity Analysis Example
Fuel Turbo Pump

• Assume a Fuel Turbo Pump (FTP) has a historical failure rate of:
50 per 100k firings
• Assume also the failure mode break down is:
Cracked/Fractured Blades 35%
Turbine bearing Failure 25%
Pump bearing Failure 20%
Impeller Failure 10%
Turbine Seal Failure 10%
100%

• Then the Cracked/Fractured failure rate is: .35 X 50 = 17.5/100k firings

33
Reliability Prediction
Similarity Analysis Example
Fuel Turbo Pump

• If the failure causes for Cracked/Fractured blades are determined (the cause breakdown totals 100%, with Thermal Stress accounting for 57% of the failure mode):

• Then the Thermal Stress failure rate is:

0.57 X 17.5 = 10/100k firings

34
Reliability Prediction
Similarity Analysis Example
Fuel Turbo Pump

• Failure rate adjustments are established through:
  • Test results
  • Preliminary analyses
  • Integrated Product Team (IPT) input
• Address "high hitters" - using the Thermal Stress failure rate of 10.0/100k firings
  • Design changes to improve reliability

                                              Percent            Cum Failure Rate
  Improvement                                 Improvement        Reduction
  Lower Operating Temperatures (Test)         20%                2.00
  Hollow Blades (Analysis, Expert Opinion)    30% (additional)   4.40
  Material Change (Analysis)                  20% (additional)   5.52

35
Reliability Prediction
Similarity Analysis Example
Fuel Turbo Pump

If no other changes are made, the predicted FTP failure rate is then:

50 - 5.52 = 44.48 failures / 100k firings
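The arithmetic behind the 5.52 and 44.48 figures is sketched below; each improvement is assumed to remove its stated percentage of whatever Thermal Stress failure rate remains after the previous improvements.

    # Cumulative failure-rate reduction for the Thermal Stress cause (10.0/100k firings).
    rate = 10.0                                   # failures per 100k firings
    improvements = [
        ("Lower operating temperatures", 0.20),
        ("Hollow blades",                0.30),
        ("Material change",              0.20),
    ]
    cum_reduction = 0.0
    for name, fraction in improvements:
        reduction = rate * fraction               # percentage of the remaining rate
        rate -= reduction
        cum_reduction += reduction
        print(f"{name:30s} cumulative reduction = {cum_reduction:.2f}")

    # Apply the reduction to the original FTP rate of 50/100k firings.
    print(f"Predicted FTP failure rate = {50.0 - cum_reduction:.2f} per 100k firings")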

36
Reliability Prediction

Benefits:

• Provides an early quantitative evaluation of the design


• Identifies problem areas
• Identifies parts and components with highest potential
reliability improvements
• Makes full use of lessons learned

37
Reliability Demonstration

• Reliability demonstration is a reliability estimation method that primarily uses test data (objective data) and statistical formulas
to calculate demonstrated reliability or to demonstrate a numerical
reliability goal with some statistical confidence.
• Models and techniques used in reliability demonstration include
the Binomial, Exponential, and Weibull models. Reliability growth
techniques, such as the U.S. Army Materiel Systems Analysis
Activity (AMSAA) and Duane models, can also be used to
calculate demonstrated reliability.
• Historically, some military and space programs employed this
method to demonstrate reliability goals. For example, a
reliability goal of .99 at a 95% confidence level is demonstrated
by conducting 298 successful tests (see the sketch below).
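A minimal sketch of the zero-failure (success-run) binomial demonstration formula is shown below; it reproduces the slide's 298-test figure for 0.99 reliability at 95% confidence to within the usual rounding convention.

    # Success-run demonstration: R**n <= 1 - C, so n = ln(1 - C) / ln(R),
    # assuming every test is a success (zero failures allowed).
    import math

    def tests_required(reliability, confidence):
        return math.ceil(math.log(1.0 - confidence) / math.log(reliability))

    print(tests_required(0.99, 0.95))   # ~298-299 successful tests
    print(tests_required(0.99, 0.90))   # fewer tests at lower confidence (~230)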

38
Reliability Demonstration
Reliability Calculation through Demonstrated Tests
By Using Binomial Statistical Formula
[Chart: demonstrated reliability / mean time between failures (with reliability levels .990, .996, and .998 marked) versus the number of successful tests needed (0 to 1000), for curves at 90% and 95% statistical confidence. Typical case: to demonstrate .99 reliability with 95% confidence, it takes 298 successful tests.]


39
Reliability Demonstration
Benefits:
• It provides a way to validate a numerical reliability requirement.
• It provides a way to calculate the reliability that has been
demonstrated so far by the item under consideration.
• It eliminates the subjectivity that is usually embedded in other
reliability estimation methods.
• Through a rigorous reliability demonstration test program, design
weaknesses and failures can be revealed and corrective actions can
be taken to significantly improve reliability.

Limitations:
• It is very expensive and time-consuming to run through a
reliability demonstration program.
• Data quantity sensitive.

40
Trend Analysis

• Problem/performance trending is a statistical characterization of problem/performance data using graphical/descriptive techniques.
• Performance trending is done using control-type charts.
• The simplest and most powerful trending tool is the Pareto Chart for
problem trending.
• In general, problem trending involves:
• Extracting related problem data from a historical problem
database.
• Normalizing raw problem counts into a problem rate of occurrence
based on a prime exposure parameter (starts, seconds of run time); a short sketch of these steps follows this list.
• Plotting normalized data to establish a frequency chart.
• Fitting a trend curve to the frequency plot.
• Analyzing the fitted curve for trends.
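The sketch below illustrates the normalization and ranking steps with made-up counts and exposure values; it is not the SSME UCR data shown on the following chart.

    # Normalize raw problem counts by an exposure parameter, then rank Pareto-style.
    problems = {   # subsystem: (problem count, seconds of hot-fire exposure) - illustrative
        "Turbomachinery": (180, 1.2e6),
        "Combustion":     (150, 1.2e6),
        "Plumbing":       (60,  1.2e6),
    }
    rates = {name: count / seconds * 1.0e5        # problems per 100k seconds
             for name, (count, seconds) in problems.items()}
    for name, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{name:15s} {rate:5.1f} problems per 100k seconds")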
41
Example Pareto Chart
Problem Trending
SSME UCRs Reported From 01/01/1990 - 12/31/1999

[Pareto chart: UCR count (0 to 2000) by subsystem, in descending order: Turbomachinery, Combustion, Instrumentation, Plumbing, Engine Harnesses, Propellant Vlvs, Igniters, Hydraulics, Interconnects, Controller, Pneumatics, GSE, Software.]

42
Trend Analysis
Benefits:
• Performance trending
• Helps in identifying potential problems with a performance parameter
before it occurs.
• Problem trending
• Identifies major problem areas for optimum allocation of resources.
• Evaluates effectiveness of past recurrence control actions.
• Predicts future failure rates in a given area.
• Points to desirable and undesirable effects of hardware processing
changes.
• Communicates in simple, logical, visual, and easily understandable
presentation.
Limitations:
• Significant engineering evaluation may be required to isolate an
appropriate set of problems.
• Rationale for frequency changes may not be obvious. 43
Probabilistic Structural Analysis

• It is a tool to probabilistically characterize the design and analyze its reliability using engineering failure models.
• It is a tool to evaluate the expected reliability of a part
given the structural capability and the expected operating
environment.
• It is used when failure data is not available and the design
is characterized by complex geometry or is sensitive to
loads, material properties, and environments.

44
Probabilistic Structural Analysis
Turbo-Pump Bearing Example

• During rig testing, the AT/HPFTP bearing experienced several cracked races.
• Summary of 440C race fractures / tests: 3 of 4 fractured.

[Figure: bearing race cross-section with the fracture location indicated.]

45
Probabilistic Structural Analysis
Turbo-Pump Bearing Example

OBJECTIVE: Predict probability of inner race over-stress, under the conditions experienced in the test rig, and estimate the effect of manufacturing stresses on the fracture probability.

[Figure: overlapping probability distributions of stress and allowable load; the region where stress exceeds the allowable load is the failure region.]
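A minimal Monte Carlo sketch of this stress versus allowable-load interference idea follows. The distributions and parameter values are illustrative assumptions, not the actual HPFTP bearing model described on the next slides.

    # Stress-strength interference by Monte Carlo; distributions are assumed for illustration.
    import random

    random.seed(1)
    N = 100_000
    failures = 0
    for _ in range(N):
        stress    = random.gauss(80.0, 8.0)    # applied stress (assumed values)
        allowable = random.gauss(110.0, 10.0)  # allowable load / capability (assumed)
        if stress > allowable:
            failures += 1
    print(f"Estimated failure probability ~ {failures / N:.4f}")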
46
Probabilistic Structural Analysis
Turbo-Pump Bearing Example

Conditions
• Using rig fits and clearances
• Crack size data from actual cut-ups
• Stresses associated with manufacturing (ideal)
• Materials properties and their variations
• Failure mode being analyzed is over-stress

47
Probabilistic Structural Analysis
Turbo-Pump Bearing Example
HPFTP Roller Bearing Inner Race - Model Flow
[Model flow:
1. Randomly select values for the inner race material properties and for the shaft and sleeve material properties (variation in fracture toughness, yield strength, number of cracks, crack depth, and crack length).
2. Apply the tolerance fits of the rig test bearing.
3. Compute the inner race hoop stress contribution and the shaft and sleeve hoop stress contribution at the given conditions; sum to the total hoop stress and add the stress due to manufacturing.
4. Compute the allowable load for each crack and for the worst crack.
5. A failure is counted when stress > allowable load.
6. Iterate and compute the failure probability.]

48
Probabilistic Structural Analysis
Turbo-Pump Bearing Example
RESULTS - FAILURE RATES

At Test                        Race Configuration                              Probabilistic Structural Analysis
3 of 4 failed                  440C w/ actual manufacturing stresses           68,000 fail/100k firings
                               (i.e., ideal + abusive grinding)
---                            440C w/ no manufacturing stresses               1,500 fail/100k firings
---                            440C w/ ideal manufacturing stresses            27,000 fail/100k firings
In 15+ tests, never had a      9310 w/ ideal manufacturing stresses            10 fail/100k firings
through-ring fracture

It is estimated that 50% of the through-ring fractures would result in an engine shutdown. The shutdown 9310 HPFTP Roller Bearing Inner Race failure rate is then: 0.50 X 10/100k = 5 fail/100k firings.

49
Probabilistic Structural Analysis

Benefits:
• Used to understand the uncertainty of the design and
identify high risk areas.
• Used to perform sensitivity analysis and trade studies for
reliability optimization.
• Used in identifying areas for further testing.

50
Design of Experiments (DOE)

• DOE is a systematic and scientific approach which allows design, manufacturing, and test engineers to better understand the variability of a design or a process and how the input variables affect the response.
• It is used as a tool to optimize product design by identifying the
critical design parameters that affect the reliability of the design.
• It is used as a tool to understand manufacturing variability and to
identify the critical process variables that affect the quality and the
reliability of the product.

51
Design of Experiments (DOE)
ET Variable Polarity Plasma Arc (VPPA) Weld Process Example

Initial Weld Process Sensitivities


0.320” Oscillation Sensitivity 2195 Vertical VPPA Welding
Goal: Determine if the weld process is sensitive to cover pass oscillation
parameters.

Factors examined included width, dwell and speed, each with three levels:
Width - how far does it oscillate : 0.03, 0.10, 0.17 inches
Dwell - how long do you pause at the ends of the oscillation : 0.35, 0.52, 0.70 sec
Speed - how fast do you oscillate : 10.0, 27.5, 45.0 inches per minute

Responses : Room Temperature and Cryo Tensile strengths


Model : Response Surface Model (Box-Behnken) generated and analyzed using
ECHIP Software
Total number of tests : 16

52
Design of Experiments (DOE)
ET Variable Polarity Plasma Arc (VPPA) Weld Process Example
0.320” Cover Pass Oscillation Results
- Width and Speed were the most significant factors
- Oscillation parameters can affect weld properties
- Ultimate Tensile Strength UTS (ksi) R2 = 0.895, Cryo UTS R2 = 0.913
[Response surface plots (ECHIP): room-temperature UTS and cryo UTS (ksi) versus oscillation width and speed, at dwell = 0.]
53
Design of Experiments (DOE)
Jet Engine Diffuser Case Example

• Use information from past manufacturing problems on the diffuser case to design the first fully cast jet engine diffuser case.
• Variables that lead to quality of casting:
• Metal Feed Technique
• Gating Scheme
• Core Pack Technique
• Stucco Application
• Mold Preheat
• Pour Temperature
• Burn Out Temperature
• Mold Insulation
• Hip temperature
• Heat Treat
• Homogenize
• Anneal
12 variables, each at a high and low level.

54
Design of Experiments (DOE)
Jet Engine Diffuser Case Example

• If we test all combinations of all 12 variables, we need
to run 2^12 = 4096 tests with no replication.
• Using the DOE technique, only 43 of the possible
points were tested. The resulting tests yielded the
process levels necessary to optimize the quality and
blueprint conformance of manufacturing the diffuser
case (the run-count contrast is sketched below).
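The run-count contrast can be illustrated in a few lines of Python. The half fraction below is only a size illustration under an assumed generator relation; it is not the actual 43-run design used for the diffuser case.

    # Full 2-level factorial in 12 factors versus a simple half fraction.
    from itertools import product

    factors = 12
    full = list(product([-1, +1], repeat=factors))
    print(len(full))                               # 2**12 = 4096 runs

    # Each independent defining relation (here x1*x2*x3 = +1) halves the design.
    half = [run for run in full if run[0] * run[1] * run[2] == +1]
    print(len(half))                               # 2048 runs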

55
Design of Experiments (DOE)

Benefits:
• Provides a tool to understand variability in design and
manufacturing.
• Reduces time to establish mature design and
manufacturing processes.
• Saves time and money by optimizing the experiment
input and output.
• Reduces potential of nonconformances.

56
Statistical Process Control (SPC)

• Statistical Process Control (SPC) is a statistical technique that measures and analyzes stability and variability of a process using control charts.
• Most commonly used SPC charts are the X-bar chart and
R-chart.
• End product reliability is highly dependent on
manufacturing process stability and variability. SPC
provides an effective tool to ensure manufacturing quality.

57
Statistical Process Control (SPC)
Fastener Example

X-bar Chart for Fastener

[X-bar chart: subgroup averages plotted over 20 subgroups, with centerline = 33.32, UCL = 36.6654, and LCL = 29.9746.]
58
Statistical Process Control (SPC)
Fastener Example

Range Chart for Fastener

[Range chart: subgroup ranges plotted over 20 subgroups, with centerline = 5.8, UCL = 12.2633, and LCL = 0.0.]
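The control limits printed on these two charts can be reproduced with the standard X-bar / R constants, assuming subgroups of size 5 (A2 = 0.577, D3 = 0, D4 = 2.114); the sketch below shows the arithmetic.

    # X-bar / R control-limit arithmetic for subgroups of size 5 (assumed).
    A2, D3, D4 = 0.577, 0.0, 2.114        # standard constants for n = 5

    xbarbar, rbar = 33.32, 5.8            # grand average and average range from the charts
    ucl_x = xbarbar + A2 * rbar           # ~36.67 (chart shows 36.6654)
    lcl_x = xbarbar - A2 * rbar           # ~29.97 (chart shows 29.9746)
    ucl_r = D4 * rbar                     # ~12.26 (chart shows 12.2633)
    lcl_r = D3 * rbar                     # 0.0
    print(ucl_x, lcl_x, ucl_r, lcl_r)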
59
Statistical Process Control (SPC)
RSRM Phenolic Tag End Example

RSRM Production
• Material acceptance data ensure that constituents are in the family of
previously used components, and statistical trends can identify
potential subtle changes in vendor processes.
• One (of many) nozzle phenolic insulator parameters trended is
residual volatiles remaining after phenolic sample is heated.
• SPC evaluation showed changes in residual volatile levels of silica
cloth phenolic.
• Additional investigation revealed unanticipated change in silica
vendor furnace brick (resulting in slightly different oven heat
environment during silica processing).
• Corrective action implemented at vendor prior to continued silica
production - subsequent data verifies return of parameters to
within statistical expectations.
60
Statistical Process Control (SPC)
RSRM Phenolic Tag End Example
[Control chart: percent residual volatiles (0.00 to 3.00) versus sample number (1 to 70), showing the lower and upper spec limits, the lower and upper control limits, and X-bar. Annotations mark where the vendor change was made and where it was corrected.]

61
Statistical Process Control (SPC)

Benefits:
• Statistical process control provides a vehicle to ensure
manufacturing process stability and end product
reliability.
• Process anomalies can be discovered earlier and be
resolved without any reliability impact on end product.
Limitations:
• SPC data and controlled features may not be directly
related to reliability concerns.
• The SPC technique may not be effective when applied to
small-run manufacturing processes (where only a few parts
are made in total).
62
Manufacturing Process Capability

• In simple terms, manufacturing process capability is defined as the
ratio of the engineering specification width to the process width (3-sigma
for one-sided, 6-sigma for two-sided). This ratio is called the
process capability index (Cpk); see the sketch after this list.
• As a rule of thumb:
  • Cpk > 1.33 Capable
  • Cpk = 1.00-1.33 Capable with tight control
  • Cpk < 1.00 Incapable
• Manufacturing process capability is essential to evaluate the suitability
of the process to meet the spec.
• Manufacturing process capability data are one of the essential data sources
to support design feasibility and reliability trade studies.
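A minimal Cpk sketch follows, using the lox post ID numbers from the example a few slides ahead (mean, sigma, and a +/- 0.0005-inch spec about nominal); small rounding differences explain the slide's 2.14 versus the value computed here.

    # Two-sided process capability index: Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
    def cpk(mean, sigma, lsl, usl):
        return min(usl - mean, mean - lsl) / (3.0 * sigma)

    # Lox post ID deviation from nominal (inches), from the later example slide.
    print(f"Cpk = {cpk(mean=-0.0000095, sigma=0.000076, lsl=-0.0005, usl=0.0005):.2f}")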

63
Manufacturing Process Capability
Application Example

Injector Lox Post Tolerance Requirement

[Figure: injector lox post, with the OD and ID indicated.]

Background: Lox post OD and ID dimensions have a significant effect on the lox and fuel mixture properties. Uneven mixture of the propellants and localized overheating impact engine performance and reliability.

Analysis Support: OD and ID tolerance boundaries need to be established with sound engineering rationale and be backed up by manufacturing process capability.
64
Manufacturing Process Capability
Application Example

Injector Lox Post Tolerance Requirement

Analysis Approach and Result

• Performance impact is correlated with OD and ID dimensions.

• Localized overheating is assessed by OD and ID process variability.

• Tolerance boundaries were established as +/- .0005” for both OD and ID.

• Results indicate the process capability is sufficient to support the design and reliability requirements.

65
Manufacturing Process Capability
Example: Main Injector Lox Post ID Dimension

[Histogram of post ID deviation from nominal (x 0.0001", from -5 to +5), with LSL, nominal, and USL marked and the -3s/+3s spread shown. Mean = -.0000095", sigma = .000076", Cpk = 2.14.]

66
Manufacturing Process Capability
Benefits:
• Manufacturing process capability data are vital to support design
feasibility.
• Manufacturing process capability is a good tool to judge the
suitability of the process to build a specific design.

Limitations:
• Process capability data represent a dynamic manufacturing environment and can be easily misused.
• Maintaining a manufacturing process capability data bank is a very
intensive effort.

67
Conclusions/Recommendations

• QRA is a well-established technology that involves methods and techniques beyond conducting classical PRA studies.
• QRA is essential to understanding uncertainty and controlling
our critical processes.
• Implementation and use of QRA could be enhanced if
• QRA is incorporated as part of the system management process
• QRA methods and techniques are viewed as part of the system
engineering effectiveness tools
• QRA is extremely important for the Space Shuttle Program to
understand and control risk. QRA techniques are well-established;
however, the application of the techniques on a
larger scale will require careful planning, extensive training, and
strong commitment by Shuttle Program management to pursue
long term plans.

68
