
ISYS6264 – Testing and System Implementation

Bug Management
Learning Outcomes

• LO 2: Design the testing management plan for a software product

• LO 3: Design the testing implementation plan for a software product
References

• Black, Rex. (2009). Managing the Testing Process: Practical Tools and Techniques for Managing Hardware and Software Testing. 3rd ed. Wiley, Indianapolis. ISBN: 9780470404157.

• Burnstein, Ilene. (2003). Practical Software Testing. Springer, New York. ISBN: 0-387-95131-8.

• Homès, Bernard. (2012). Fundamentals of Software Testing. ISTE–Wiley, London/Hoboken. ISBN: 978-1-84821-324-1.
Sub Topics

• Defect Management
• Benefits of Defect Analysis
• Bugs and Root Causes
• Levels of Importance to Bugs
• Bug Life Cycle
• Bug Report
• Steps for Better Bug Reports
• Managing Bug Tracking
Defect Management
Introduction to Defects

• Besides supplying information on the level of quality of software and systems, one of the main objectives of testing is to identify defects so that they are processed – corrected or evaluated – before the software or system is delivered to customers. This implies identifying and processing defects, as well as checking their correction.

• Defects can be classified according to several criteria: the impact on users (often called criticality), the necessary effort to correct the defect, the component where the defect is initially present, etc. Defect classification enables us to order defects according to various criteria and to take measures to avoid repeated reporting of already discovered defects.
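
To make these classification criteria concrete, the sketch below models a defect classification record in Python. The enum values, field names, and example data are invented for illustration; they are not a standard taxonomy from the references.

```python
# A minimal sketch of the classification criteria named above: impact on
# users (criticality), effort to correct, and originating component.
# All names and values here are illustrative, not a standard taxonomy.
from dataclasses import dataclass
from enum import Enum

class Criticality(Enum):
    COSMETIC = 1
    MINOR = 2
    MAJOR = 3
    BLOCKING = 4

@dataclass
class DefectClassification:
    criticality: Criticality   # impact on users
    fix_effort_hours: float    # necessary effort to correct the defect
    component: str             # component where the defect is initially present

# Hypothetical example record
d = DefectClassification(Criticality.MAJOR, fix_effort_hours=6.0,
                         component="report-generator")
print(d.criticality.name)  # MAJOR
```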
Defect Identification

1. Comparison to the requirements and specifications provided for the software design. Requirements and specifications should be testable and comprise sufficient information to determine the level of quality of the actual results. This method does not always enable accurate testing, because the options chosen by developers in case of ambiguities or imprecision can differ from those chosen by the tester. Defect reports will then be created that are dismissed as “works as designed” or “a feature, not a bug”, without any added value or information;

2. Comparison with a previous version of the program, or even with other competitor software, etc.
Defect Report Components

1. Structure: tests deliberately carried out while taking notes enable us to find the first signs of a defect more easily;

2. Reproduction: the reproducibility of an anomaly is a necessary attribute to ensure its correction;

3. Isolation: once the defect is reproduced, it is important to isolate the phases leading to its occurrence. Where possible, limit the number of steps necessary to reproduce the defect. A defect report requiring seven or more steps is generally difficult to read, and its correction will thus be delayed;

4. Generalizing: once the defect is isolated, we need to determine whether it can be generalized. This includes the detection of other defects of a similar structure in other modules of the software;
Defect Report Components (cont.)
5. Comparing: determining whether the defect has occurred for the
first time in this version of the application or whether it was
already present (but undetected) in previous versions;

6. Summarizing: the title of the defect (or its summary) is critical and must show how it can affect customers;

7. Condensing: reducing the size of the defect report so as not to bore readers, and reducing the number of acronyms so that it remains readable;

8. Ambiguity: avoiding any ambiguity is important, in order not to be vague or subject to misinterpretation;

9. Neutrality: the anomaly report must be neutral and not be perceived as an attack against developers;

10. Review: once the defect has been written, it should be reread by
another tester.
Benefits of Defect Analysis
Arguments for Defect Analysis
• Defect analysis/prevention processes help to reduce the costs of developing and maintaining software by reducing the number of defects that require our attention in both review and execution-based testing activities.

• Defect analysis/prevention processes help to improve software quality. If we identify the cause of a class of defects and change our process so that it does not recur, our software should be less defective with respect to that class of defects and more able to meet the customer’s requirements.

• If our software contains fewer defects, this reduces the total number of problems we must look for; the sheer volume of problems we need to address may be significantly smaller.
Arguments for Defect Analysis (cont.)
• Defect analysis/prevention processes provide a framework for overall process improvement activities. When we know the cause of a defect, we identify a specific area of our process that needs work. Improvements made in this area usually produce readily visible benefits. Defect analysis/prevention activities not only help to fine-tune an organization’s current process and practices, but also support the identification and implementation of new methods and tools so that the current process continues to evolve and comes closer to being optimized.

• Defect analysis/prevention activities encourage interaction among a diverse set of staff members, for example, project managers, developers, testers, and SQA staff. The close interrelationships between specialized group activities and the quality of internal and external deliverables become more apparent.
Benefits of Defect Analysis and Prevention Processes

Source: Burnstein (2003, pg. 443)


Bugs and Root Causes
Bugs and Their Root Causes

Source: Black (2009, pg. 170)


Bugs and Their Root Causes (cont.)
• An anomaly occurs when a tester observes an unexpected
behavior. If the test environment and the tester’s actions were
correct, this anomaly indicates either a system failure or a test
failure. The failure arises from a bug in either the system or the
test. The bug comes from an error committed by a software or
hardware engineer (while creating the system under test) or a test
engineer (while creating the test system). That error is the root
cause.

• Usually, the aim of performing a root cause analysis isn’t to determine the exact error and how it happened. Other than flogging some hapless engineer, you can’t do much with such information. Instead, root cause analysis categorizes bugs into a taxonomy.
Levels of Importance to Bugs
Mechanisms to Assign Levels of Importance to Bugs

• Severity
• Priority
• Risk Priority Number (RPN)
Severity

• Severity means the impact, immediate or delayed, of a bug on the system under test, regardless of the likelihood of occurrence under end-user conditions or the effect such a bug would have on users. You can use the same scale used for failure mode and effect analysis (FMEA):
1. Loss of data, hardware damage, or a safety issue
2. Loss of functionality with no workaround
3. Loss of functionality with a workaround
4. Partial loss of functionality
5. Cosmetic or trivial
Priority

• You use priority to capture the elements of importance not considered in severity, such as the likelihood of occurrence in actual customer use and the subsequent impact on the target customer. When determining priority, you can also consider whether this kind of bug is prohibited by regulation or agreement, what kinds of customers are affected, and the cost to the company if the affected customers take their business elsewhere because of the bug. Again, you can use a scale like the priority scale used in the FMEA:
1. Complete loss of system value
2. Unacceptable loss of system value
3. Possibly acceptable reduction in system value
4. Acceptable reduction in system value
5. Negligible reduction in system value
Risk Priority Number (RPN) for the Bug
• You can multiply severity by priority to calculate a risk priority
number (RPN) for the bug. Using this approach, the RPN can
range from 1 (an extremely dangerous bug) to 25 (a completely
trivial bug).
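
As a worked example, the RPN is simply the product of the two FMEA-style scales above. The following is a minimal Python sketch; the function name and the range check are mine, not from Black (2009).

```python
# A minimal sketch of the RPN calculation described above: severity times
# priority, both on the FMEA-style 1-5 scales from the previous slides
# (1 = worst). The function name and validation are illustrative.

def risk_priority_number(severity: int, priority: int) -> int:
    """Return severity * priority: 1 (extremely dangerous) to 25 (trivial)."""
    if not (1 <= severity <= 5 and 1 <= priority <= 5):
        raise ValueError("severity and priority must be on a 1-5 scale")
    return severity * priority

# Example: loss of functionality with no workaround (severity 2) causing
# an unacceptable loss of system value (priority 2) yields RPN 4.
print(risk_priority_number(2, 2))  # 4
```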
Bug Life Cycle
Bug Report Life Cycle

Source: Black (2009, pg. 160)


States in Managing Bug Life Cycles
• Review. When a tester enters a new bug report in the bug-tracking database, the bug-tracking database holds it for review before it becomes visible outside the test team. If non-testers can report bugs directly into the system, then the managers of those non-testers should determine the review process for those non-tester bug reports.

• Rejected. If the reviewer decides that a report needs significant rework (either more research and information or improved wording), the reviewer rejects the report. This effectively sends the report back to the tester, who can then submit a revised report for another review. The appropriate project team members can also reject a bug report after approval by the reviewer.
States in Managing Bug Life Cycles (cont.)
• The figure on the previous slide shows the states in the bug life cycle and the flows between them. Terminal states (in other words, states in which a bug report’s life cycle might terminate and the bug report becomes legitimately inactive, with no further action required or expected) are shown with heavy lines.

• Typical flows are shown with solid lines and atypical flows with dotted lines. Any time a bug report traverses an atypical flow, some degree of inefficiency has occurred, which you as a test manager can and should measure.

• Open. If the tester has fully characterized and isolated the problem, the reviewer opens the report, making it visible to the world as a known bug.
States in Managing Bug Life Cycles (cont.)
• Assigned. The appropriate project team members assign it to the
appropriate development manager, who in turn assigns the bug to
a particular developer for repair.

• Test. Once development provides a fix for the problem, it enters a test state. The bug fix comes to the test organization for confirmation testing (which ensures that the proposed fix completely resolves the bug as reported) and regression testing (which addresses the question of whether the fix has introduced new problems as a side effect).

• Reopened. If the fix fails confirmation testing, the tester reopens the bug report. If the fix passes confirmation testing but fails regression testing, the tester opens a new bug report.
States in Managing Bug Life Cycles (cont.)
• Closed. If the fix passes confirmation testing, the tester closes
the bug report.

• Deferred. If appropriate project team members decide that the problem is real but choose either to assign a low priority to the bug or to schedule the fix for a subsequent release, the bug report is deferred. Note that the project team can defer a bug at any point in its life cycle.

• Cancelled. If appropriate project team members decide that the problem is not real, but rather is a false positive, the bug report is cancelled. Note that the project team can cancel a bug at any point in its life cycle. A minimal state-machine sketch of these states and flows follows below.
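
To make the flows concrete, here is a minimal sketch of the life cycle just described as a transition table. It follows the typical flows in the text; the names and the code itself are illustrative, not code from Black (2009).

```python
# A minimal sketch of the bug report life cycle as a transition table.
# Typical flows follow the state descriptions above; Deferred and
# Cancelled are reachable from any active state, and terminal states
# allow no further transitions.

TERMINAL_STATES = {"closed", "deferred", "cancelled"}

TYPICAL_FLOWS = {
    "review":   {"open", "rejected"},
    "rejected": {"review"},              # tester revises and resubmits
    "open":     {"assigned"},
    "assigned": {"test"},
    "test":     {"closed", "reopened"},  # confirmation testing passes or fails
    "reopened": {"assigned"},
}

def next_states(state: str) -> set:
    """Return the legal successor states for a bug report."""
    if state in TERMINAL_STATES:
        return set()  # legitimately inactive; no further action expected
    # The project team can defer or cancel at any point in the life cycle.
    return TYPICAL_FLOWS.get(state, set()) | {"deferred", "cancelled"}

print(sorted(next_states("test")))
# ['cancelled', 'closed', 'deferred', 'reopened']
```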
Bug Report
Example of Good Bug Report

Source: Black (2009, pg. 149)


Example of Good Bug Report (cont.)
• The previous bug report contains three basic sections: summary,
steps to reproduce, and isolation.

• The summary is a one- or two-sentence description of the bug, emphasizing its impact on the customer or the system user. The summary tells managers, developers, and other readers why they should care about the problem.
– The sentence, ‘‘I had trouble with screen resolutions’’ is a lousy summary; the
sentence, ‘‘Setting screen resolution to 800 by 1024 renders the screen
unreadable’’ is much better. A succinct, hard-hitting summary hooks the
reader and puts a label on the report. Consider it your one chance to make a
first impression.
Example of Good Bug Report (cont.)
• The steps to reproduce provide a precise description of how to
repeat the failure. For most bugs, you can write down a sequence
of steps that re-create the problem. Be concise yet complete,
unambiguous, and accurate. This information is critical for
developers, who use your report as a guide to duplicate the
problem as a first step to debugging it. As a test manager and as
a consultant, I have heard many teams of programmers complain
bitterly about the poor job the test team was doing in terms of bug
reporting. In most cases, their complaints centered around the
poor quality of the steps to reproduce.
Example of Good Bug Report (cont.)
• Isolation refers to the results and information the tester gathered
to confirm that the bug is a real problem and to identify those
factors that affect the bug’s manifestation. What variations or
permutations did the tester try in order to influence the behavior?
For example, if the problem involves reading the CD-ROM drive on DataRocket, what happens when the CD-ROM is on a different SCSI ID? Did the tester check the SCSI termination? If SpeedyWriter can’t print to a laser printer, can it print to an inkjet?
Good isolation draws a bounding box around a bug. Documenting
the isolation steps performed will assure the programmers and
the project team that the tester isn’t simply tossing an anomaly
over the wall, but is instead reporting a well-characterized
problem.
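
Pulling the three sections together, a bug report can be modeled as a simple record. The sketch below is a hypothetical structure based on the sections described above; the field names and the example steps are invented, not taken from Black's report format.

```python
# A minimal sketch of the three-section bug report described above:
# summary, steps to reproduce, and isolation. Field names and example
# data are invented for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class BugReport:
    summary: str                    # one or two sentences, customer impact first
    steps_to_reproduce: List[str]   # concise, complete, unambiguous steps
    isolation: List[str] = field(default_factory=list)  # variations tried

report = BugReport(
    summary="Setting screen resolution to 800 by 1024 renders the screen unreadable",
    steps_to_reproduce=[
        "Open the display settings dialog",          # hypothetical steps
        "Select the 800 by 1024 resolution",
        "Apply the change and observe the screen",
    ],
    isolation=[
        "Reproduced three times in a row",
        "Does not occur at other resolutions tried",
    ],
)
print(report.summary)
```

Keeping the steps and isolation as explicit lists makes it easy to honor the condensing and isolation advice earlier in this deck: every entry should earn its place.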
Example of Incomplete Bug Report

Source: Black (2009, pg. 152)


Example of Confusing Bug Report

Source: Black (2009, pg. 152)


Design for a Basic Bug-Tracking Database

Source: Black (2009, pg. 155)
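
The figure itself is not reproduced here. As a stand-in, the sketch below shows what a basic bug-tracking table might contain, assembled from the fields discussed in this deck; the column set is an assumption, not a transcription of Black's design.

```python
# A minimal sketch of a basic bug-tracking table using SQLite. The columns
# are assumptions drawn from the fields discussed in this deck (summary,
# steps, isolation, severity, priority, life-cycle state); they are not
# a transcription of the design in Black (2009).
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bug_reports (
        bug_id      INTEGER PRIMARY KEY,
        summary     TEXT NOT NULL,
        steps       TEXT NOT NULL,
        isolation   TEXT,
        severity    INTEGER CHECK (severity BETWEEN 1 AND 5),
        priority    INTEGER CHECK (priority BETWEEN 1 AND 5),
        state       TEXT NOT NULL DEFAULT 'review',  -- life-cycle state
        assigned_to TEXT,                            -- developer, once assigned
        opened_on   TEXT DEFAULT CURRENT_TIMESTAMP
    )
""")
conn.execute(
    "INSERT INTO bug_reports (summary, steps, severity, priority) VALUES (?, ?, ?, ?)",
    ("Screen unreadable at 800 by 1024", "1. Set resolution to 800 by 1024", 3, 2),
)
print(conn.execute(
    "SELECT bug_id, state, severity * priority AS rpn FROM bug_reports"
).fetchall())  # [(1, 'review', 6)]
```

A single flat table like this is enough to support the life-cycle states and the RPN calculation shown earlier; a real tracker would normalize developers, products, and configurations into separate tables.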


A Bug Detail Report

Source: Black (2009, pg. 156)


A Bug Detail Report with Dynamic Information

Source: Black (2009, pg. 166)


Steps for Better Bug Reports
Environment in Dealing with Bug Reports
• Some number of bug reports will always be irreproducible or
contested. Some bugs exhibit symptoms only intermittently, under
obscure or extreme conditions. In some cases, such as system
crashes and database corruption, the symptoms of the bug often
destroy the information needed to track down the bug.
Inconsistencies between test environments and the programmers’
systems sometimes lead programmers to respond, ‘‘works fine
on my system’’.

• On some projects without clear requirements, there can be reasonable differences of opinion over what is correct behavior under certain test conditions. Sometimes testers misinterpret test results and report bugs when the real problem is bad test procedures, bad test data, or incorrect test cases.
Ten Steps for Better Bug Reports
1. Structure: Test thoughtfully and carefully, whether you’re using
reactive techniques, following scripted manual tests, or running
automated tests.

2. Reproduce: My usual rule of thumb is to try to reproduce the failure three times. If the problem is intermittent, report the rate of occurrence; for example, one in three tries, two in three tries, and so forth.

3. Isolate: See if you can identify variables (for example, configuration changes, workflow, data sets) that might change the symptoms of the bug.
Ten Steps for Better Bug Reports (cont.)
4. Generalize: Look for places that the bug’s symptoms might
occur in other parts of the system, using different data, and so
forth, especially where more severe symptoms might exist.

5. Compare: Review the results of running similar tests, especially if you’re repeating a test run previously.

6. Summarize: Write a short sentence that relates the symptom observed to the customers’ or users’ experiences of quality, keeping in mind that in many bug review or triage meetings, the summary is the only part of the bug report that is read.

7. Condense: Trim any unnecessary information, especially extraneous test steps.
Ten Steps for Better Bug Reports (cont.)
8. Be clear: Use clear words, especially avoiding words that have multiple distinct or contradictory meanings; for example, ‘‘The ship had a bow on its bow,’’ and ‘‘Proper oversight prevents oversights.’’

9. Neutralize: Express yourself impartially, making statements of fact about the bug and its symptoms and avoiding hyperbole, humor, or sarcasm. Remember, you never know who’ll end up reading your bug report.

10. Review: Have at least one peer, ideally an experienced test engineer or the test manager, read the bug report before you submit it.
Managing Bug Tracking
Politics and Misuse of Bug Data
• Here, however, we should briefly examine political issues that are
specifically related to bug data. From the most adversarial point
of view, for example, you can see every bug report as an attack
on a developer. You probably don’t — and certainly shouldn’t —
intend to offend, but it helps to remember that bug data is
potentially embarrassing and subject to misuse. Candor and
honesty are critical in gathering clean bug data, but developers
might distort the facts if they think you might use the data to slam
them with the bug reports. Think of the detailed bug information
your database captures as a loaded gun: an effective tool in the
right hands and used with caution, but a dangerous implement of
mayhem if it’s treated carelessly.
Don’t Fail to Build Trust

• Some situations are irretrievable. Developers who are convinced that a written bug report is one step removed from a written warning in their personnel files probably will never trust you. Most developers, though, approach testing with an open mind. They understand that testing can provide a useful service to them in helping them fix bugs and deliver a better product. How do you keep the trust and support of these developers?
– Don’t take bugs personally, and don’t become emotional about them.
– Submit only quality bug reports: a succinct summary, clear steps to
reproduce, evidence of significant isolation work, accuracy in classification
information, and a conservative estimate in terms of priority and severity. Also
try to avoid cheap shot bug reports that can seem like carping.
– Be willing to discuss bug reports with an open mind.
– If developers want you to change something in your bug-reporting process,
be open to their suggestions.
Don’t Be a Backseat Driver

• The test manager needs to ensure that testers identify, reproduce, and isolate bugs. It’s also part of the job to track the bugs to conclusion and to deliver crisp bug status summaries to senior and executive management. These roles differ, though, from managing bug fixes.

• If you, as an outsider, make it your job to nag developers about when a specific bug will be fixed or to pester the development manager about how slow the bug fix process is, you are setting yourself up for a highly antagonistic situation. Reporting, tracking, re-testing, and summarizing bugs are your worries. Whether any particular bug gets fixed, how it gets fixed, and when it gets fixed are someone else’s concerns.
Don’t Make Individuals Look Bad
• It is a bad idea to create and distribute reports that make
individuals look bad. There’s probably no faster way to guarantee
that you will have trouble getting estimated fix dates out of people
than to produce a report that points out every failure to meet such
estimated dates. Creating reports that show how many bug fixes
resulted in reopened rather than closed bugs, grouped and
totaled by developer, is another express lane to bad relationships.
Again, managing the developers is the development manager’s
job, not yours. No matter how useful a particular report seems,
make sure that it doesn’t bash individuals.
Sticky Wickets

• Challenging bugs crop up on nearly every project. The most vexing are those that involve questions about correct behavior, prairie dog bugs that pop up only when they feel like it, and bugs that cause a tug-of-war over priority.
Bug or Feature?

• Although a perfect development project provides you with clear and unambiguous information about correct system behavior in the form of requirements and specifications, you will seldom have such good fortune. Many projects have only informal specifications, and the requirements can be scattered around in emails, product road maps, and sales materials. In such cases, disagreements can arise between development and test over whether a particular bug is in fact correct system behavior.

• How should you settle these differences? Begin by discussing the situation with the developers, their manager, and your testers. Most of these disagreements arise from miscommunication. Before making a major issue out of it, confirm that all the parties are clear on what the alleged bug is and why your team is concerned.
Irreproducible Bug

• The challenge with irreproducible bugs comes in two flavors.
– First, some bugs simply refuse to reproduce their symptoms consistently. This
is especially the case in system testing, in which complex combinations of
conditions are required to re-create problems. Sometimes these types of
failures occur in clusters. If you see a bug three times in one day and then
don’t see it for a week, has it disappeared, or is it just hiding? Tempting as it
is to dismiss this problem, be sure to write up these bugs. Random, intermittent failures (especially ones that result in system crashes or any other data loss) can have a significant effect on customers.
– The second category of irreproducible bugs involves problems that seem to
disappear with new revisions of the system, although no specific fix was
made for them. I refer to these as ‘‘bugs fixed by accident.’’ You will find that
more bugs are fixed by accident than you expect, but that fewer are fixed by
accident than some project Pollyannas suggest. If the bug is an elusive one,
you might want to keep the bug report active until you’re convinced it’s
actually gone.
Deferring Trivia or Creating Test Escapes?
• While bug severity is easy to quantify, priority is not. Developing
consensus on priority is often difficult. What do you do when bugs
are assigned a low priority? Bugs that will not be fixed should be
deferred. If you don’t keep the active bug list short, people will
start to ignore it. However, there’s a real risk that some deferred
bugs will come back to haunt you. What if a deferred bug pops up
in the field as a critical issue? Is that a test escape? Not if my
team found it and then deferred it on the advice or insistence of
the project manager.

• After you institute a bug-tracking system, including the database and metrics discussed here, you will find yourself the keeper of key indicators of project status. Fairness and accuracy should be your watchwords in this role.
Thank You
