
Defect Classification and Analysis
Risk Identification

Presented by

Talha Malik 6864
Akash Younis 6889
Mohsin Sardar 6917
Valeed Farooq 6799

1


Defect Analysis
Goals:
General defect analyses:
  Questions: what/where/when/how/why?
  Distribution/trend/causal analyses.
Analyses of classified defect data:
  Prior step: defect classification.
  Use of historical baselines.
  Attribute focusing in 1-way and 2-way analyses.
  Tree-based defect analysis.

2
Defects in Quality Data/Models

Defect data as quality measurement data:
  As part of direct quality data.
  Extracted from defect tracking tools.
  Additional (defect classification) data may be available.
Defect data in quality models:
  As results in generalized models (GMs).
  As response variable (R.V.) in product-specific models (PSMs):
    semi-customized models: GMs;
    observation-based: R.V. in SRGMs (software reliability growth models);
    predictive: R.V. in TBDMs (tree-based defect models).

3
General Defect Analysis

General defect analyses ask:
  What? Where? When? How/Why?
Types of general defect analyses:
  Distribution by type or area.
  Trend over time.
  Causal analysis.
  Other analyses for classified data.

4
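The distribution analysis above can be sketched as a simple Pareto-style tabulation: rank defect counts by area and accumulate their share of the total, so the few areas contributing most defects stand out. This is a minimal illustration with hypothetical area names and counts, not data from any real project.

```python
from collections import Counter

# Hypothetical defect-fix counts classified by area (illustrative data only).
defects_by_area = Counter({
    "parser": 42, "ui": 8, "network": 27, "storage": 5, "auth": 3,
})

total = sum(defects_by_area.values())
cumulative = 0
pareto = []  # (area, count, cumulative share), largest contributors first
for area, count in defects_by_area.most_common():
    cumulative += count
    pareto.append((area, count, round(cumulative / total, 2)))

for area, count, share in pareto:
    print(f"{area:8s} {count:3d}  cumulative {share:.0%}")
```

Here the top two areas account for roughly 80% of all defect fixes, which is the usual motivation for focusing remediation effort on a few areas first.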
Defect Analysis: Data Treatment
Variations of defect data:
  Error/fault/failure perspective.
  Pre-release vs. post-release.
  Unique defects only?
Focus here: defect fixes.
Why defect fixes (DFs)?
  Capture propagation information.
  Close ties to (defect-fixing) effort.
  Pre-release: more meaningful than raw defect counts.
  (Post-release: each failure occurrence matters.)

5
Types of Defect Analysis

Defect trend analysis:
  Trend as a continuous function:
    defect dynamics model.
Defect causal analysis: types
  Causal relations identified.
  Techniques: statistical or logical.
  Root cause analysis (logical).
  Statistical causal analysis.

6
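A crude numeric stand-in for the trend analysis above: fit a least-squares slope to weekly defect-fix counts, treating the trend as a function of time. The weekly counts are hypothetical; real defect dynamics models fit richer continuous curves than a straight line.

```python
# Hypothetical weekly defect-fix counts (illustrative data only).
weekly_defects = [30, 28, 25, 22, 18, 15, 11, 8]

# Least-squares slope of counts against week index: a minimal stand-in for
# modeling the defect trend as a continuous function of time.
n = len(weekly_defects)
weeks = range(n)
mean_x = sum(weeks) / n
mean_y = sum(weekly_defects) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(weeks, weekly_defects)) \
        / sum((x - mean_x) ** 2 for x in weeks)

print(f"trend slope: {slope:.2f} defects/week")  # negative => declining trend
```

A negative slope, as here, is the usual sign of a maturing product late in testing; a flat or rising slope would prompt causal analysis.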
ODC: Orthogonal Defect Classification

A new analytical method used for software development and test process analysis.
Key elements of ODC:
  Aim: defect tracking/analysis/improvement.
  Approach: classification and analysis of key attributes of defects.
  Views: both failure and fault.
  Applicability: inspection and testing.
  Analysis: attribute focusing.
  Need for historical data.

7
ODC IDEAS
Cause-effect relation by type:
Different types of faults.
Causing different failures.
Need defect classification.
Multiple attributes for defects.

Good measurement:
Orthogonality (independent views).
Consistency across phases.
Uniformity across products.
ODC process/implementation:
Human classification.
Analysis method and tools.
Feedback results (and follow-up).
8
ODC Attributes: Failure-View

Defect trigger:
  Associated with verification process:
    similar to test case measurement,
    collected by testers.
  Trigger classes:
    product specific,
    black box in nature,
    pre-/post-release triggers.
Defect type:
  Associated with development process.
  Missing or incorrect.
  Collected by developers.
  May be adapted for other products.

9
ODC Attributes: Cause/Error-View

Key attributes:
Defect source: vendor/base/new code.
Where injected.
When injected.
Characteristics:
  Associated with additional causal analysis.
  (May not always be performed.)
  Much subjective judgment involved.
  (Evolution of the ODC philosophy.)

10
ODC Process and Implementation

ODC process:
  Human classification:
    defect type: developers,
    defect trigger and effect: testers,
    other information: coordinator/others.
  Tie to inspection/testing processes.
  Analysis: attribute focusing.
  Feedback of results: graphical.
Implementation and deployment:
  Training of participants.
  Data capturing tools.
  Centralized analysis.
  Usage of analysis results.

11
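One-way attribute focusing, as used in the ODC analysis step, can be sketched as: compare the current distribution of one defect attribute against a historical baseline and focus attention on the category with the largest deviation. The defect-type categories, baseline shares, and counts below are hypothetical.

```python
# One-way attribute focusing: compare the current distribution of an ODC
# attribute (defect type here) against a historical baseline and flag the
# category deviating most from it. All numbers are illustrative.
baseline = {"function": 0.20, "assignment": 0.30, "interface": 0.25, "checking": 0.25}
current_counts = {"function": 18, "assignment": 12, "interface": 35, "checking": 10}

total = sum(current_counts.values())
deviation = {t: current_counts[t] / total - baseline[t] for t in baseline}
focus = max(deviation, key=lambda t: abs(deviation[t]))

print(f"focus attention on: {focus} (deviation {deviation[focus]:+.2f})")
```

Two-way analysis extends the same idea to pairs of attributes (e.g., defect type by trigger), cross-tabulating counts instead of using a single distribution.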
Risk Identification

Why?
Where?
How?

Describes and compares risk identification techniques.

12
BASIC IDEAS AND CONCEPTS

First, we need to establish a predictive relationship between project metrics and actual product defects based on historical data.
Then, this established predictive relation is used to predict potential defects for a new project or product release once its project metrics data become available, but before actual defects are observed.
In this prediction, the focus is on the high-risk, potentially high-defect modules or components.
13
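The two-step idea above can be sketched with hypothetical data: fit a simple linear relation between a project metric (module size here) and observed defects on historical modules, then apply it to rank new modules by predicted defects before any are observed. The module names and numbers are invented for illustration; real studies use many metrics and richer models.

```python
# Step 1: establish a predictive relation from historical (KLOC, defects) pairs
# via least-squares linear regression. Data are hypothetical.
history = [(2, 5), (4, 9), (6, 14), (8, 17), (10, 22)]

n = len(history)
mx = sum(x for x, _ in history) / n
my = sum(y for _, y in history) / n
b = sum((x - mx) * (y - my) for x, y in history) / sum((x - mx) ** 2 for x, _ in history)
a = my - b * mx

# Step 2: predict defects for new modules whose metrics are known but whose
# defects have not yet been observed, and single out the highest-risk one.
new_modules = {"mod_a": 3, "mod_b": 12, "mod_c": 7}  # KLOC
predicted = {m: a + b * kloc for m, kloc in new_modules.items()}
high_risk = max(predicted, key=predicted.get)
print(f"highest predicted risk: {high_risk}")
```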
TRADITIONAL STATISTICAL ANALYSIS TECHNIQUES

Correlation analysis
Linear regression models
Other models and general observations

14
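The correlation-analysis item above amounts to checking how strongly a metric tracks defect counts. A minimal Pearson correlation sketch, with hypothetical complexity and defect values:

```python
import math

# Pearson correlation between a module metric and defect counts, as a minimal
# example of the correlation-analysis step. Data are hypothetical.
complexity = [3, 7, 2, 9, 5]
defects    = [4, 11, 2, 15, 7]

n = len(complexity)
mx, my = sum(complexity) / n, sum(defects) / n
cov = sum((x - mx) * (y - my) for x, y in zip(complexity, defects))
r = cov / math.sqrt(sum((x - mx) ** 2 for x in complexity)
                    * sum((y - my) ** 2 for y in defects))
print(f"r = {r:.3f}")  # close to +1 => strong positive linear association
```

A strong correlation justifies moving on to regression models; a weak one suggests the metric carries little predictive information for this data set.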
NEW TECHNIQUES FOR RISK IDENTIFICATION

Principal component and discriminant analyses


Artificial neural networks and learning algorithms
Data partitions and tree-based modeling
Pattern matching and optimal set reduction

15
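The data-partitioning idea behind tree-based modeling can be sketched as a single variance-minimizing split: find the metric cut point that best separates low-defect from high-defect modules. Real tree-based defect models recurse on each partition and consider many metrics; the data here are hypothetical.

```python
# One binary split of a module metric, chosen to minimize total within-group
# variance of defect counts: the core step of tree-based partitioning.
data = [(1, 2), (2, 3), (3, 2), (8, 14), (9, 16), (10, 15)]  # (metric, defects)

def variance(ys):
    """Sum of squared deviations from the group mean (0 for an empty group)."""
    if not ys:
        return 0.0
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys)

best = None  # (cut point, within-group variance score)
for cut in sorted({x for x, _ in data})[:-1]:
    left = [y for x, y in data if x <= cut]
    right = [y for x, y in data if x > cut]
    score = variance(left) + variance(right)
    if best is None or score < best[1]:
        best = (cut, score)

print(f"best split: metric <= {best[0]}")
```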
COMPARISONS AND INTEGRATION

Accuracy of analysis results can be measured by the difference (error) between predicted and actual results.
Early availability and stability.
Constructive information and guidance for quality improvement.

16
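The accuracy criterion above, error between predicted and actual results, can be computed directly; mean absolute error is one common summary. The predicted and actual counts below are hypothetical.

```python
# Measuring accuracy as the difference (error) between predicted and actual
# defect counts per module. Values are illustrative only.
predicted = [7, 26, 15, 4]
actual    = [9, 22, 16, 3]

errors = [p - a for p, a in zip(predicted, actual)]
mae = sum(abs(e) for e in errors) / len(errors)
print(f"mean absolute error: {mae:.2f}")
```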
RISK IDENTIFICATION FOR CLASSIFIED DEFECT DATA

Defect impact analysis using TBM (tree-based modeling):
  from passive tracking and occasional correction
  to active identification and control of product quality.

17
CONCLUDING REMARKS

Because of the highly uneven distribution of defects in software systems, there is a great need for effective risk identification techniques so that high-defect modules or software components can be identified and characterized for effective defect removal and quality improvement. The survey of different risk identification techniques presented in this presentation brings together information from diverse sources to offer a common starting point for software quality professionals and software engineering students. The comparison of techniques can help them choose appropriate techniques for their individual applications.

18
19
