
Introduction to Software Testing

Contents
Why is Testing Necessary
  Software Systems Context
  Causes of Software Defects
  When Do Defects Arise?
  Role of Testing in Software Development, Maintenance and Operations
  Testing and Quality
  How Much Testing is Enough?
What is Testing?
  Testing as a Process
  Different Testing Objectives
  Dynamic and Static Testing
  Testing and Debugging
Seven Testing Principles
The Psychology of Testing
  Mindsets of Developers and Testers
  Balance of Self-Testing and Independence of Testing
  Clear Objectives
  Communication Aspects of Testing
Software Development Models
  Waterfall Model
  V-model (Sequential Development Model)
  Iterative-Incremental Development Models
Testing within a Life Cycle Model
  Testing in Sequential Lifecycle Models
  Testing in Iterative-Incremental Lifecycle Models
  Alignment in V-Model
  Characteristics of Good Testing Regardless of Lifecycle Model
Metrics & Measurement
Code of Ethics
Questions

Why is Testing Necessary


Learning Objectives:
- Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company
- Distinguish between the root cause of a defect and its effects
- Give reasons why testing is necessary by giving examples
- Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality
- Explain and compare the terms error, defect, fault, failure, and the corresponding terms mistake and bug, using examples
Software Systems Context
Software systems are an integral part of life, from business applications (e.g., banking) to consumer products (e.g., cars). However, most people have had an experience with software that did not work as expected: an error on a bill, a delay while waiting for a credit card payment to process, or a website that failed to load are all common problems that may be caused by software defects.
Some of the problems we encounter when using software are quite trivial, but others
can be costly and damaging. Software that does not work correctly can lead to many
problems, including loss of money, time or business reputation, and could even cause
injury or death. Incorrect software can harm:
- people (e.g. by causing an aircraft crash in which people die, or by causing a hospital life support system to fail)
- companies (e.g. by causing incorrect billing, which results in the company losing money)
- the environment (e.g. by releasing chemicals or radiation into the atmosphere)
The same software problem can have different effects in different systems, depending
on the context.
Some well-known examples of software failures:
- The first launch of the European Space Agency's Ariane 5 rocket in June 1996 failed after 37 seconds: a software error caused the rocket to deviate from its vertical ascent, and the self-destruct mechanism was activated before the now-unpredictable flight path could result in a bigger problem
- In November 2005, information on the UK's top 10 wanted criminals was displayed on a website. The publication of this information was described in newspapers and on morning radio and television and, as a result, the site was hit more than 350,000 times. The performance of the website proved inadequate under this load and the website had to be taken offline. The publicity created performance peaks beyond the capacity of the website
- A software bug in the alarm system at a control room of the FirstEnergy Corporation, located in Ohio, caused a widespread power outage throughout parts of the Northeastern and Midwestern United States and the Canadian province of Ontario on Thursday, August 14, 2003 (the so-called Northeast blackout of 2003)
Causes of Software Defects
People make mistakes because they are fallible, but there are also many pressures that make mistakes more likely. Our fallibility is compounded when we lack experience, don't have the right information, misunderstand, or are careless, tired or under time pressure. Pressures such as deadlines, complexity of systems and organizations, changing technologies, and/or many system interactions all bear down on the designers of systems and increase the likelihood of errors in specifications, in designs and in software code. This is because our brains can only deal with a reasonable amount of complexity or change; when asked to deal with more, they may not process the information correctly.
An error (mistake) during the design of software can produce a defect (fault, bug) in the program code, or in a document. If a defect in code is executed, the system may fail to do what it should do (or do something it shouldn't), causing a failure.

Defects in software specifications, program code, systems or documents may result in failures, but not all defects do so; some defects stay dormant in the code, and we may never notice them. While failure is not always guaranteed, it is likely that errors in specifications will lead to faulty components and that faulty components will cause system failure.
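
To make the chain concrete, here is a minimal sketch in Python (the function, rule and values are invented for illustration): the programmer's error introduces a defect that stays dormant until a particular input executes the faulty line, at which point a failure is observed.

    def discount(price, quantity):
        """Intended rule: 10% discount on orders of 100 items or more."""
        if quantity > 100:        # defect: the error, should have been >= 100
            return price * 0.9
        return price

    print(discount(200.0, 150))   # 180.0 as intended: the defect is not executed
    print(discount(200.0, 100))   # 200.0 instead of 180.0: the dormant defect causes a failure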
There are other reasons why systems fail. Failures can be caused by environmental
conditions as well, such as the presence of radiation, magnetism, electronic fields, and
pollution. These factors can affect the operation of hardware and firmware and lead to
system failure.
Failures may also arise because of human error in interacting with the software, perhaps a wrong input value being entered or an output being misinterpreted. Also, failures may be caused by someone deliberately trying to cause a failure in a system: malicious damage.
When Do Defects Arise?
Consider the following figure. We can see when defects may arise in different cases. Requirement 1 is implemented correctly. We understood the customer's requirement, designed correctly to meet that requirement, built correctly to meet the design, and delivered the requirement with the right attributes. Functionally, it does what it is supposed to do, and it also has the right non-functional attributes, so it is fast enough, easy to understand and so on.

With the other requirements, errors have been made at different stages.
Requirement 2 is fine until the software is coded, when we make some mistakes and introduce defects. These are probably easily spotted and corrected during testing, because we can see that the product does not meet its design specification.
The defects introduced in Requirement 3 are harder to deal with. We built exactly what we were told to, but unfortunately the designer made some mistakes, so there are defects in the design. Unless we check against the requirements definition, we will not spot those defects during testing. When we do notice them, they will be hard to fix, because design changes will be required.
The defects in Requirement 4 were introduced during the definition of the requirements; the product has been designed and built to meet that flawed requirements definition. If we test that the product meets its requirements and design, it will pass its tests, but it may be rejected by the user or customer. Defects reported by the customer can be very costly. Requirements and design defects are not rare (cases 3 and 4): defects introduced during requirements and design make up close to half of the total number of defects.
The cost of finding and fixing defects rises considerably across the life cycle. If an error is made and the consequent defect is detected in the requirements at the specification stage, then it is relatively cheap to find and fix: the specification can be corrected and re-issued. Similarly, if an error is made and the consequent defect is detected in the design at the design stage, then the design can be corrected and re-issued with relatively little expense. The same applies to construction.
If, however, a defect is introduced in the requirement specification and it is not detected
until the customer notices it, or even once the system has been implemented, then it will
be much more expensive to fix, because rework will be needed in the specification and
design before changes can be made in construction.
Role of Testing in Software Development, Maintenance and Operations
To avoid failure, we must either avoid errors and faults or find them and rectify them.
Testing can contribute to both avoidance and rectification.
Rigorous testing of systems and documentation can help to reduce the risk of problems occurring during operation and contribute to the quality of the software system, if the defects found are corrected before the system is released for operational use. To catch errors with testing, we need to begin testing as soon as we begin making errors (right at the beginning of the development process), and we need to continue testing until we are confident that there will be no serious system failures (right at the end of the development process).
Software testing may also be required to meet contractual or legal requirements, or
industry-specific standards. These standards may specify what type of techniques we
must use, or the percentage of the software code that must be exercised. The higher the
potential failure cost associated with the industry using the software, the more likely it
is that a standard for testing will exist. The avionics, motor, medical and pharmaceutical
industries all have standards covering the testing of software.
Software testing is neither complex nor difficult to implement, yet it is a discipline that
is seldom applied with anything approaching the necessary rigor to provide confidence
in delivered software.
Testing and Quality
Quality is hard to define. One definition is that if a system meets its users' requirements, then it is of high quality. For example, in the top 10 criminals case mentioned above, the system was swamped by requests for access (a non-functional failure), and therefore was not able to deliver its services to its users.
Testing helps to measure the quality of software in terms of defects found, the tests run,
and the system covered by the tests, for both functional and non-functional software
requirements and characteristics (such as reliability, usability, efficiency,
maintainability and portability, to be discussed in the following lectures). Testing
ensures that key requirements are examined before the system enters service and any
defects are reported to the development team for rectification.
Testing can give confidence in the quality of the software if it finds few or no defects.
Of course, a poor test may uncover few defects and leave us with a false sense of security. A well-designed test will uncover defects if they are present and so, if such a test passes, we will rightly be more confident in the software and be able to assert that the overall level of risk of using the system has been reduced.
Testing cannot directly remove defects, nor can it directly enhance quality. By reporting
defects it makes their removal possible and so contributes to the enhanced quality of the
system.
Testing is one component in the overall quality assurance activity that seeks to ensure
that systems enter service without defects that can lead to serious failures. Testing
should be integrated alongside development standards, training and defect analysis as
one of the quality assurance activities.
How Much Testing is Enough?
A risk is something that has not happened yet and it may never happen; it is a potential
problem. Risk is inherent in all software development. For instance, the system may not
work or the project may not be completed on time. These uncertainties become more
significant as the system complexity and the implications of failure increase.
Not all software systems carry the same level of risk and not all problems have the same impact when they occur. E.g., we would expect to test an automatic flight control system more than we would test a video game system, because the risk (the potential impact of failure) is greater in the former case.
Every system is subject to risk of one kind or another, and there is a level of quality that
is acceptable for a given system. These two factors can be used to decide how much
testing to do.
Deciding how much testing is enough should take account of the level of risk (including
technical, safety, and business risks), and project constraints such as time and budget.
The most important aspect of achieving an acceptable result from a finite and limited
amount of testing is prioritization. Do the most important tests (those that test the most
important functional and non-functional aspects of the system as defined by the users)
first so that at any time you can be certain that the tests that have been done are more
important than the ones still to be done.
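
As a rough illustration of such prioritization, the sketch below ranks tests by a risk score computed as likelihood times impact (the expression of risk used in the glossary at the end of this section). All test names and scores here are invented.

    # Hypothetical tests scored by likelihood and impact of failure (1 = low, 5 = high).
    tests = [
        ("payment processing", 4, 5),
        ("report layout",      2, 1),
        ("login",              3, 4),
    ]

    # Risk score = likelihood x impact. Run the highest-risk tests first, so the
    # tests already executed are always more important than those still pending.
    for name, likelihood, impact in sorted(tests, key=lambda t: t[1] * t[2], reverse=True):
        print(f"{name}: risk score {likelihood * impact}")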
The next most important aspect is setting criteria, usually known as completion criteria,
that give an objective estimate of whether it is safe to stop testing, so that time and all
the other pressures do not confuse the outcome.
Testing should provide sufficient information to stakeholders to make informed
decisions about the release of the software or system being tested, for the next
development step or handover to customers.
Glossary:
Defect (bug, fault): A flaw in a component or system that can cause the component or
system to fail to perform its required function, e.g. an incorrect statement or data
definition. A defect, if encountered during execution, may cause a failure of the
component or system.
Error (mistake): A human action that produces an incorrect result.

Failure: Deviation of the component or system from its expected delivery, service or
result.
Quality: The degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.
Risk: Factor that could result in future negative consequences; usually expressed as
impact and likelihood.
Software: Computer programs, procedures, and possibly associated documentation and
data pertaining to the operation of a computer system.

What is Testing?
Learning Objectives:
- Recall the common objectives of testing
- Provide examples for the objective of testing in different phases of the software life cycle
- Differentiate testing from debugging
Testing as a Process
A common perception of testing is that it only consists of running tests, i.e., executing the software. This is part of testing, but it is not the only testing activity.
Test activities exist before and after test execution. Before test execution there is some
preparatory work to do to design the tests and set them up. After test execution there is
some work needed to record the results and check whether the tests are complete. Even
more important is deciding what we are trying to achieve with the testing and setting
clear objectives for each test.
In general, testing activities include planning and control, choosing test conditions,
designing and executing test cases, checking results, evaluating exit criteria, reporting
on the testing process and system under test, and finalizing or completing closure
activities after a test phase has been completed. Testing also includes reviewing
documents (including source code) and conducting static analysis.
Different Testing Objectives
Common testing objectives include:
- Finding defects. It helps us understand the risks associated with putting the software into operational use, and fixing the defects improves the quality of the products. Identifying defects has another benefit: by analyzing their causes, we can improve the development processes and make fewer mistakes in future work
- Gaining confidence about the level of quality
- Providing information for decision-making
- Preventing defects
Different viewpoints in testing take different objectives into account:
- In development testing (e.g., component, integration and system testing), the main objective may be to cause as many failures as possible so that defects in the software are identified and can be fixed
- In acceptance testing, the main objective may be to confirm that the system works as expected, to gain confidence that it has met the requirements
- In some cases the main objective of testing may be to assess the quality of the software (with no intention of fixing defects), to give information to stakeholders about the risk of releasing the system at a given time
- Maintenance testing often includes testing that no new defects have been introduced during development of the changes
- During operational testing, the main objective may be to assess system characteristics such as reliability or availability
Dynamic and Static Testing
Static testing is the term used for testing where the code is not exercised. Failures often begin with a human mistake in a software specification. Testing such documents is very important because errors found there are much cheaper to fix than defects or failures found later. The thought process and activities involved in designing tests early on can help to prevent defects from being introduced into code. Static testing involves techniques such as reviews, which can be effective in preventing defects, e.g. by removing ambiguities and errors from specification documents.
Dynamic testing is the kind that exercises the program under test with some test data,
so we speak of test execution in this context.
Both dynamic testing and static testing can be used as a means for achieving similar
objectives, and will provide information that can be used to improve both the system
being tested and the development and testing processes.
Testing and Debugging
Debugging and testing are different kinds of activity. Debugging is the process that
developers go through to identify, analyze and remove the cause of bugs or defects in
code. Testing, on the other hand, is a systematic exploration of a component or system
with the main aim of finding and reporting defects.
Debugging does not give confidence that the component or system meets its
requirements completely. Testing makes a rigorous examination of the behavior of a
component or system and reports all defects found for the development team to correct.
Subsequent re-testing by a tester ensures that any changes and corrections in the code
are checked for their effect on other parts of the component or system.
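
A small illustration of this split in responsibilities, using an invented component and Python's unittest module: the test systematically checks behavior against the specification and reports the failure; debugging is the separate developer activity of locating and removing its cause.

    import unittest

    def word_count(text):
        """Invented component under test."""
        return len(text.split(","))   # defect: splits on commas instead of whitespace

    class WordCountTest(unittest.TestCase):
        def test_two_words(self):
            # Testing: compare actual behavior to expected behavior, report defects.
            self.assertEqual(word_count("hello world"), 2)   # fails and is reported

    # Debugging (a developer activity) would now reproduce this failure, trace it
    # to the split(",") call and correct it; the tester then re-tests the fix and
    # regression tests the surrounding code.

    if __name__ == "__main__":
        unittest.main()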
Glossary:
Acceptance testing: Formal testing with respect to user needs, requirements, and
business processes conducted to determine whether or not a system satisfies the
acceptance criteria and to enable the user, customers or other authorized entity to
determine whether or not to accept the system.
Code: Computer instructions and data definitions expressed in a programming language
or in a form output by an assembler, compiler or other translator.
Debugging: The process of finding, analyzing and removing the causes of failures in
software.
Development testing: Formal or informal testing conducted during the implementation of a component or system, usually in the development environment by developers.
Dynamic testing: Testing that involves the execution of the software of a component or
system.
Maintenance testing: Testing the changes to an operational system or the impact of a
changed environment to an operational system.
Operational testing: Testing conducted to evaluate a component or system in its operational environment, i.e., hardware and software products installed at users' or customers' sites where the component or system under test will be used.
Requirement: A condition or capability needed by a user to solve a problem or achieve
an objective that must be met or possessed by a system or system component to satisfy
a contract, standard, specification, or other formally imposed document.
Review: An evaluation of a product or project status to ascertain discrepancies from
planned results and to recommend improvements. Examples include management
review, informal review, technical review, inspection, and walkthrough.
Static testing: Testing of a software development artifact, e.g., requirements, design or
code, without execution of these artifacts, e.g., reviews or static analysis.
Test case: A set of input values, execution preconditions, expected results and execution
postconditions, developed for a particular objective or test condition, such as to exercise
a particular program path or to verify compliance with a specific requirement.
Testing: The process consisting of all lifecycle activities, both static and dynamic,
concerned with planning, preparation and evaluation of software products and related
work products to determine that they satisfy specified requirements, to demonstrate that
they are fit for purpose and to detect defects.
Test objective: A reason or purpose for designing and executing a test.


Seven Testing Principles


Learning Objectives:
Explain the seven principles in testing
Testing is a very complex activity, and can be difficult to do well. A number of testing
principles have been suggested over the past 40 years and offer general guidelines
common for all testing.
Principle 1 Testing shows the presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness

Regardless of how many white swans we see, we cannot say "all swans are white". However, as soon as we see one black swan we can say "not all swans are white". In the same way, regardless of how many tests we execute without finding a bug, we have not shown "there are no bugs". As soon as we find a bug, we have shown "this code is not bug-free".
Although there may be other objectives, usually the main purpose of testing is to find
defects. Therefore, tests should be designed to find as many defects as possible.
Principle 2 Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except
for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be
used to focus testing efforts
Let's look at how much testing we'd need to do to be able to test exhaustively. How many tests would you need to do to completely test a one-digit numeric field? There are 10 possible valid numeric values, and there are invalid values: 26 uppercase alpha characters, 26 lowercase, at least 6 special and punctuation characters, as well as a blank value. So there would be at least 68 tests for this example of a one-digit field.
In practice, systems have more than one input field, with the fields being of varying sizes. If we take an example where one screen has 15 input fields, each having 5 possible values, then to test all of the valid input value combinations you would need 30,517,578,125 (5^15) tests, which is impossible to carry out within a project timescale.
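
The arithmetic behind both examples can be checked in a few lines of Python:

    # One-digit field: the valid digits plus the invalid values listed above.
    valid = 10              # digits 0-9
    invalid = 26 + 26 + 6   # upper case, lower case, at least 6 special characters
    print(valid + invalid)  # 68; the blank value pushes it higher still, hence "at least"

    # 15 input fields with 5 possible valid values each: every combination
    print(5 ** 15)          # 30517578125 tests of the valid inputs alone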
Principle 3 Early testing
To find defects early, testing activities shall be started as early as possible in the
software or system development life cycle, and shall be focused on defined objectives
As a proposed deployment date approaches, time pressure can increase dramatically. There is a real danger that testing will be squeezed, and this is bad news if the only testing we are doing comes after all the development has been completed. The earlier the testing activity is started, the longer the elapsed time available. Testers do not have to wait until software is available to test. As soon as work products (requirements, code, documents etc.) are ready, we can test them. E.g., requirement documents are the basis for acceptance testing, so the creation of acceptance tests can begin as soon as requirement documents are available.
Carrying out testing as early as possible leads to finding and fixing defects more cheaply
and preventing defects from appearing at later stages of the project. Studies have shown
what is known as the cost escalation model presented below in a simplified way.

Principle 4 Defect clustering
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures
One phenomenon that many testers have observed is that defects tend to cluster. In a large application, it is often a small number of modules that exhibit the majority of the problems. This can be for a variety of reasons, some of which are:
- System complexity
- Volatile code
- The effects of change upon change
- Development staff experience
- Development staff inexperience
Testers will often use this information when making their risk assessment for planning
the tests, and will focus on known hot spots. However, it must be remembered that
testing should not concentrate exclusively on these parts. There may be fewer defects
in the remaining code, but testers still need to search diligently for them.
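
As a sketch of how past defect data might feed that risk assessment, the snippet below ranks modules by defect density (defects per thousand lines of code); the module names and counts are invented.

    # Invented defect counts and module sizes (KLOC = thousand lines of code).
    defects = {"billing": 42, "auth": 7, "reporting": 3, "ui": 12}
    kloc    = {"billing": 8.0, "auth": 5.0, "reporting": 6.0, "ui": 10.0}

    # Defect density per module, highest first: the likely "hot spots".
    density = {m: defects[m] / kloc[m] for m in defects}
    for module, d in sorted(density.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{module}: {d:.1f} defects/KLOC")
    # Focus extra effort at the top of the list, but keep testing the rest:
    # a lower density does not mean defect-free.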
Principle 5 Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects
Running the same set of tests continually will not continue to find new defects.
Developers will soon know that the test team always tests the boundaries of conditions,
for example, so they will test these conditions before the software is delivered. This
does not make defects elsewhere in the code less likely, so continuing to use the same
test set will result in decreasing effectiveness of the tests. Using other techniques will
find different defects.
Principle 6 Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site
Different testing is necessary in different circumstances. A website where information
can merely be viewed will be tested in a different way to an e-commerce site, where
goods can be bought using credit/debit cards. We need to test an air traffic control
system with more rigor than an application for calculating the length of a mortgage.
Risk can be a large factor in determining the type of testing that is needed. The higher
the possibility of losses, the more we need to invest in testing the software before it is
implemented.
Principle 7 Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations
The fact that no defects are outstanding is not a good reason to ship the software. The customers for software (the people and organizations who buy and use it to aid in their day-to-day tasks) are not interested in defects or numbers of defects, except when they are directly affected by the instability of the software. The people using software are more interested in the software supporting them in completing tasks efficiently and effectively.
Glossary:
Exhaustive testing: A test approach in which the test suite comprises all combinations
of input values and preconditions.


The Psychology of Testing


Learning Objectives:
- Recall the psychological factors that influence the success of testing
- Contrast the mindset of a tester and of a developer
Mindsets of Developers and Testers
The mindset to be used while testing and reviewing is different from that used while developing software. By this we mean that, if we are building something, we are working positively to solve problems in the design and to realize a product that meets some need. However, when we test or review a product, we are looking for defects in the product and thus are critical of it.
Looking for failures in a system requires curiosity, professional pessimism, a critical
eye, attention to detail, good communication with development peers, and experience
on which to base error guessing.
Balance of Self-Testing and Independence of Testing
Testing can be more effective if it is not undertaken by the individual who wrote the
code. The reason is that the creator of anything has a special relationship with the
created object: flaws in the created object are rendered invisible to the creator.
With the right mindset developers are able to test their own code, but the testing done
by them cannot be assumed to be complete. Separation of this responsibility to a tester
is typically done to help focus effort and provide additional benefits, such as an
independent view by trained and professional testing resources. This approach is called
independence of testing.
Several levels of independence can be defined, shown here from low to high:
- Tests designed by the person(s) who wrote the software under test (low level of independence)
- Tests designed by another person(s) (e.g., from the development team)
- Tests designed by a person(s) from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists)
- Tests designed by a person(s) from a different organization or company (i.e., outsourcing or certification by an external body)
A certain degree of independence (avoiding the author bias) often makes the tester more
effective at finding defects and failures. Independence is not, however, a replacement
for familiarity, and developers can efficiently find many defects in their own code.


Clear Objectives
Each organization and each project will have its own goals and objectives. Different
stakeholders, such as the customers, the development team and the managers of the
organization, will have different viewpoints about quality and have their own
objectives. Because people and projects are driven by objectives, the stakeholder with
the strongest views or the greatest influence over a group will define, consciously or
subconsciously, what those objectives are.
People tend to align their plans with these objectives. E.g., depending on the objective,
a tester might focus either on finding defects or on confirming that software works. But
if one stakeholder is less influential during the project but more influential at delivery,
there may be a clash of views about whether the testing has met its objectives. One
manager may want the confirmation that the software works and that it is "good enough", if this is seen as a way of delivering as fast as possible. Another manager may want the testing to find as many defects as possible before the software is released, which will take longer to do and will require time for fixing, re-testing and regression testing. If there are no clearly stated objectives and exit criteria for testing which all the stakeholders have agreed, arguments might arise, during the testing or after release, about whether enough testing has been done.
Communication Aspects of Testing
Identifying failures during testing may be perceived as criticism against the product and
against the author. Many of us find it challenging to actually enjoy criticism of our work.
We usually believe that we have done our best to produce work which is correct and
complete. We all make mistakes and we sometimes get annoyed, upset or depressed
when someone points them out.
As a result, testing is often seen as a destructive activity, even though it is very
constructive in the management of product risks. Testers need to use tact and diplomacy
when raising defect reports. Defect reports need to be raised against the software, not
against the individual who made the mistake.
If errors, defects or failures are communicated in a constructive way, bad feelings
between the testers and the analysts, designers and developers can be avoided. This
applies to defects found during reviews as well as in testing. The tester and test leader
need good interpersonal skills to communicate factual information about defects,
progress and risks in a constructive way. For the author of the software or document,
defect information can help them improve their skills. Defects found and fixed during
testing will save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers
of unwanted news about defects. However, there are several ways to improve
communication and relationships between testers and others:
- Start with collaboration rather than battles. The aim is to work together rather than be confrontational. Keep the focus on delivering a quality product. Explain that by knowing about the found defect now, we can work around it or fix it so the delivered system is better for the customer
- Communicate findings on the product in a neutral, non-personal, fact-focused way, without criticizing the person who created it
- Try to understand how the other person feels and why they react as they do
- At the end of discussions, confirm that the other person has understood what you have said and vice versa
Glossary:
Error guessing: A test design technique where the experience of the tester is used to
anticipate what defects might be present in the component or system under test as a
result of errors made, and to design tests specifically to expose them.
Independence of testing: Separation of responsibilities, which encourages the
accomplishment of objective testing.


Software Development Models


Learning Objectives:
- Explain the relationship between development, test activities and work products in the development life cycle, by giving examples using project and product types
- Describe how testing is a part of any software development and maintenance activity
- Recognize the fact that software development models must be adapted to the context of project and product characteristics
In software development, work-products such as code and associated documentation
are generally created in a series of defined stages, from capturing a customer
requirement, to creating the system, to delivering the system. These stages are usually
shown as steps within a software life cycle (software development life cycle). Software life cycle models specify the various stages of the process and the order in which they are carried out.
Testing is not a stand-alone activity. It has its place within a software development life
cycle model and therefore the life cycle applied will largely determine how testing is
organized. Testing processes are related to others such as:
- Requirements engineering & management
- Project management
- Configuration and change management
- Software development
- Software maintenance
- Technical support
- Production of technical documentation
The development process adopted for a project will depend on the project aims and
goals. There are numerous development life cycles that have been developed in order
to achieve different required objectives.
Waterfall Model
A development life cycle for a software product involves capturing the initial
requirements from the customer, expanding on these to provide the detail required for
code production, writing the code and testing the product, ready for release.
A simple development model known traditionally as the waterfall model is presented
below. It has a natural timeline where tasks are executed in a sequential fashion. We
start at the top of the waterfall with a feasibility study and flow down through the various
project tasks finishing with implementation into the live environment.


This type of model is often referred to as a linear or sequential model. Within this
model, each activity is completed before moving on to the next one. Testing is carried
out once the code has been fully developed. Once this is completed, a decision can be
made on whether the product can be released into the live environment.
In the waterfall model, the testing at the end serves only as a quality check. The product
can be accepted or rejected at this point. In software development, however, it is
unlikely that we can simply reject the parts of the system found to be defective, and
release the rest. What is needed is a process that assures quality throughout the
development life cycle. At every stage, a check should be made that the work-product
for that stage meets its objectives. The checks throughout the life cycle include
verification and validation:
- Verification checks that the work-product meets the requirements set out for it. Verification helps to ensure that we are building the product in the right way
- Validation changes the focus of work-product evaluation to evaluation against user needs. This means ensuring that the behavior of the work-product matches the customer needs as defined for the project. Validation helps to ensure that we are building the right product as far as the users are concerned
Two types of development model facilitate early work-product evaluation. We will
discuss them next in turn.
V-model (Sequential Development Model)
The V-model was developed to address some of the problems experienced using the traditional waterfall approach. The V-model provides guidance that testing needs to begin as early as possible in the life cycle. There are a variety of activities that need to be performed before the end of the coding phase. These activities should be carried out in parallel with development activities, and testers need to work with developers and business analysts so they can perform these activities and tasks and produce a set of test deliverables.
Although variants of the V-model exist, a common type of V-model uses four test
levels, corresponding to the four development levels.

The left-hand side of the model focuses on elaborating the initial requirements, providing successively more technical detail as the development progresses. In the model shown, these are:
- User requirements: capturing of user needs
- System requirements: definition of functions required to meet user needs
- Global design: technical design of functions identified in the system requirements
- Detailed design: design of each module or unit to be built to meet required functionality
The middle of the V-model shows that planning for testing should start with each work-product. For instance, using the requirement specification as an example, acceptance testing would be planned for right at the start of the development.
The right-hand side focuses on the testing activities. For each work-product, a testing activity is identified:
- Testing against the program specification takes place at the unit (component) testing stage: searching for defects and verifying the functioning of software components (e.g. modules, programs, objects, classes etc.) that are separately testable
- Testing against the technical specification takes place at the integration testing stage: testing interfaces between components, interactions with different parts of a system such as an operating system, file system and hardware, or interfaces between systems
- Testing against the functional specification takes place at the system testing stage: the main focus is verification against specified requirements
- Testing against the requirement specification takes place at the acceptance testing stage: validation testing with respect to user needs, requirements, and business processes, conducted to determine whether or not to accept the system
This allows testing to be concentrated on the detail provided in each work-product, so that defects can be identified as early as possible in the life cycle, when the work-product has been created.
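
As a minimal, invented illustration of how the focus differs at the two lowest levels: component testing exercises each unit against its own specification, while integration testing exercises the interface between units.

    def tax(amount):
        """Component A: computes 20% tax on an amount (invented specification)."""
        return round(amount * 0.20, 2)

    def invoice_total(net):
        """Component B: builds on component A through its interface."""
        return net + tax(net)

    # Component (unit) testing: each component separately, against its own spec.
    assert tax(100.0) == 20.0

    # Integration testing: the interaction between components A and B.
    assert invoice_total(100.0) == 120.0
    print("component and integration checks passed")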
In practice, a V-model may have more, fewer or different levels of development and testing, depending on the project and the software product. For example, there may be component integration testing after component testing, and system integration testing after system testing. Other test levels can also be defined, such as:
- Hardware-software integration testing
- Feature interaction testing
- Customer product integration testing
Remembering that each stage must be completed before the next one can be started, this
approach to software development pushes validation of the system by the user
representatives right to the end of the life cycle. If the customer needs were not captured
accurately in the requirement specification, or if they change, then these issues may not
be uncovered until the user testing is carried out. This is the main drawback of this
model.
Iterative-Incremental Development Models
Not all life cycles are sequential. In iterative and incremental models, we cycle through
a number of smaller self-contained life cycle phases for the same project. This type of
development is often referred to as cyclical. As with the V-model, there are many
variants of iterative life cycles.
Within these models, the requirements do not need to be fully defined before coding
can start. Instead, a working version of the product is built, in a series of increments, or
builds, with each increment adding new functionality. Each increment encompasses
requirements definition, design, code and test.


The initial increment will contain the infrastructure required to support the initial build
functionality. The increment produced by an iteration may be tested at several levels as
part of its development. Subsequent increments will need testing for the new
functionality, regression testing of the existing functionality, and integration testing of
both new and existing parts. Regression testing is increasingly important on all
iterations after the first one.
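
Because this regression risk recurs on every iteration, regression tests are usually automated so they can be re-run cheaply on each increment. Below is a hedged sketch with invented functions, using Python's unittest module.

    import unittest

    def apply_discount(price):            # functionality delivered in increment 1
        return price * 0.9

    def apply_voucher(price, voucher):    # functionality added in increment 2
        return price - voucher

    class IncrementalSuite(unittest.TestCase):
        def test_discount(self):
            # Regression test: kept from increment 1 and re-run on every build.
            self.assertAlmostEqual(apply_discount(100.0), 90.0)

        def test_voucher(self):
            # New-functionality test added in increment 2.
            self.assertEqual(apply_voucher(90.0, 10.0), 80.0)

    if __name__ == "__main__":
        unittest.main()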
This life cycle can give early market presence with critical functionality, can be simpler to manage because the workload is divided into smaller pieces, and can reduce initial investment, although it may cost more in the long run. Also, validation testing is carried out at each increment, giving early feedback on the business value and fitness-for-use of the product.
A key feature of this type of development is the involvement of user representatives in
the testing. They are empowered to request changes to the software in order to meet
their needs.
Several drawbacks of this model can be pointed out. The lack of formal documentation
makes it difficult to test. In addition, the working environment may be such that
developers make any changes required, without formally recording them. This approach
could mean that changes cannot be traced back to the requirements or to the parts of the
software that have changed. Thus, traceability as the project progresses is reduced. To
mitigate this, a robust process must be put in place at the start of the project to manage
these changes.
Forms of iterative development include prototyping, rapid application development (RAD), and agile development models. The Rational Unified Process (RUP) is a proprietary iterative methodology.
Glossary:
Component (unit) testing: The testing of individual software components.
Incremental development model: A development lifecycle where a project is broken
into a series of increments, each of which delivers a portion of the functionality in the
overall project requirements. The requirements are prioritized and delivered in priority
order in the appropriate increment. In some (but not all) versions of this lifecycle model,
each subproject follows a mini V-model with its own design, coding and testing
phases.
Integration testing: Testing performed to expose defects in the interfaces and in the
interactions between integrated components or systems.
Iterative development model: A development lifecycle where a project is broken into a
usually large number of iterations. An iteration is a complete development loop
resulting in a release (internal or external) of an executable product, a subset of the final
product under development, which grows from iteration to iteration to become the final
product.
Regression testing: Testing of a previously tested program following modification to
ensure that defects have not been introduced or uncovered in unchanged areas of the
software, as a result of the changes made. It is performed when the software or its
environment is changed.

Software lifecycle: The period of time that begins when a software product is conceived
and ends when the software is no longer available for use. The software lifecycle
typically includes a concept phase, requirements phase, design phase, implementation
phase, test phase, installation and checkout phase, operation and maintenance phase,
and sometimes, retirement phase. Note these phases may overlap or be performed
iteratively.
System testing: The process of testing an integrated system to verify that it meets
specified requirements.
Validation: Confirmation by examination and through provision of objective evidence
that the requirements for a specific intended use or application have been fulfilled.
Verification: Confirmation by examination and through provision of objective evidence
that specified requirements have been fulfilled.
V-model: A framework to describe the software development lifecycle activities from
requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development lifecycle.


Testing within a Life Cycle Model


Learning Objectives:
- Recall characteristics of good testing that are applicable to any life cycle model
As we already know, testing is an integral part of the various software development
models such as sequential, iterative, or incremental model. Testing must be
appropriately integrated into the software lifecycle to succeed. That is, proper alignment
between the testing process and other processes in the lifecycle is critical for success.
Testing in Sequential Lifecycle Models
In a sequential lifecycle model, a key assumption is that the project team will define the
requirements early in the project and then manage the changes to those requirements
during the rest of the project. In such a situation, if the team follows a formal
requirements process, an independent test team in charge of the system test level can
follow an analytical requirements-based test strategy.
Using such a strategy in a sequential model, the test team would start planning and
designing tests early in the project, following an analysis of the requirements
specification to identify test conditions. This planning, analysis, and design work might
identify defects in the requirements, making testing a preventive activity. Failure
detection would start much later in the lifecycle, once system test execution began.
The use of sequential lifecycle models creates certain issues for testing that the test manager must manage:
- The first and most infamous issue is schedule compression during testing at the end of the project. A no-win situation occurs where the test manager is subjected to immense pressure to approve the release, followed by howls of condemnation when the release proves bug-ridden in the field
- The second issue is the common problem of development groups, likewise pressured to achieve dates, delivering unstable and often untestable systems to the test team. This problem causes significant portions of the test schedule to be consumed by what is, effectively, retroactive unit testing
- A third issue is the common failure to include all the testing activities described in the model. Very little preparation time is allowed. Testing typically devolves to an ad hoc or, at best, reactive strategy, with no defect prevention, no clear coverage, and limited value
These sequential lifecycle issues are surmountable, but they require careful test
management.
Testing in Iterative-Incremental Lifecycle Models
In the incremental lifecycle model, the test team won't receive a complete set of requirements early in the project. Instead, the test team will receive requirements at the beginning of each iteration. Rather than analyzing requirements at the outset of the project, the best the test team can do is to identify and prioritize key quality risk areas; i.e., they can follow an analytical risk-based test strategy. Specific test design and implementation will occur immediately before test execution, potentially reducing the preventive role of testing. Defect detection starts very early in the project, at the end of the first iteration, and continues in repetitive, short cycles throughout the project. In such a case, testing activities in the fundamental testing process overlap and are concurrent with each other as well as with major activities in the software lifecycle.
The availability of testable systems earlier in the lifecycle would seem to be a benefit
to the test manager, and it can be. At the same time, the iterative lifecycle models create
certain test issues for the test manager:
- The first issue is the need, in each increment after the first one, to be able to regression test all the functions and capabilities provided in the previous increments. Because the most important functions and capabilities are typically provided in the earlier increments, it is very important that these functions and capabilities not be broken. However, given the frequent and large changes to the code base (every increment is likely to introduce as much new and changed code as the previous increment), the risk of regression is high. This risk tends to lead to attempts to automate regression tests, with varying degrees of success
- The second issue is the common failure to plan for bugs and how to handle them. This failure manifests itself when business analysts, designers, and developers are assigned to work full-time on subsequent increments while testers are testing the current increment. In other words, the activities associated with increments are allowed to overlap rather than requiring that each increment complete entirely before the next one starts; this can seem efficient at first. However, once the test team starts to locate bugs, an overbooked situation occurs for the business analysts, designers, and developers who must address them
- The final issue, which is particularly common in the agile world, is the lack of rigor in, and respect for, testing
These are all surmountable issues, but the test manager must manage them carefully, in
conjunction with the project management team.
In both models discussed above, good change management and configuration
management are critical for testing. A lack of proper change management results in an
inability for the test team to keep up with what the system is and what it should do.
Alignment in V-Model
Let us use the V-model as an example to illustrate the concept of alignment between the
testing process and other processes in the lifecycle. Well further assume that we are
talking about the system test level:
- Test planning occurs concurrently with project planning, and test control continues until system test execution and closure are complete. Analysis, design, implementation, execution, evaluation of exit criteria, and test results reporting are carried out according to the plan. Deviations from the plan are managed
- Test analysis starts immediately after, or even concurrently with, test planning. Test analysis and design occur concurrently with requirements specification, system and architectural (high-level) design specification, and component (low-level) design specification
- Test implementation, including test environment implementation, starts during system design and completes just before test execution begins
- Test execution begins when the test entry criteria are all met. More realistically, test execution starts when most entry criteria are met and any outstanding entry criteria are waived. Test execution continues until the system test exit criteria are met
- Evaluation of test exit criteria and reporting of test results occur throughout test execution, generally with greater frequency and urgency as project deadlines approach
- Test closure activities occur after the test exit criteria are met and test execution is declared complete
Such alignment of activities with each other and with the rest of the system lifecycle
will not happen simply by accident. For each test level, and for any selected combination
of software lifecycle and test process, the test manager must perform this alignment
during the test planning and/or project planning.
Characteristics of Good Testing Regardless of Lifecycle Model
In any life cycle model, there are several characteristics of good testing:
- For every development activity there is a corresponding testing activity
- Each test level has test objectives specific to that level
- The analysis and design of tests for a given test level should begin during the corresponding development activity
- Testers should be involved in reviewing documents as soon as drafts are available in the development life cycle


Metrics & Measurement


A variety of metrics (numbers) and measures (trends, graphs, etc.) should be applied
throughout the software development life cycle (e.g. planning, coverage, workload,
etc.). Well-established metrics and measures, aligned with project goals and objectives,
enable us to report and track test and quality results to management in a consistent and
coherent way. A lack of metrics and measurements leads to purely subjective
assessments of quality and testing, and to disputes over the meaning of test results
toward the end of the lifecycle. It also results in a lack of clearly perceived and
communicated value, effectiveness, and efficiency for testing.
To evaluate results, a baseline must be defined, and then progress tracked in relation to this baseline. Without defined baselines, successful testing is usually impossible.
Possible aspects that can be subjected to a metric and tracked through measurement:
1. Planned schedule, coverage, and their evolution over time
2. Requirements, their evolution and their impact in terms of schedule, resources and tasks
3. Workload and resource usage, and their evolution over time
4. Milestones and scope of testing, and their evolution over time
5. Planned and actual costs
6. Risks and mitigation actions, and their evolution over time
7. Defects (total found, total fixed), current backlog, average closure periods
During test planning, we establish expectations, mentioned previously as baselines. As part of test control, we can measure actual outcomes and trends against these expectations and adjust our approach as indicated. As part of test reporting, we can consistently explain to management various important aspects of the process, product, and project, using objective, agreed-upon metrics with realistic, achievable targets.
Let us consider a simple example of using metrics and measurements. As you can
see in the figure below, this project is in a lot of trouble at this point. There are a very
large number of total defects, there is a significant backlog of defects, and the weekly
defect discovery rate remains very high. The only good news, apparent in this graph, is
that the weekly defect fix rate is about the same as the discovery rate. So, while the
discovery rate isn't declining, at least the backlog is not growing.
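The quantities in such a chart can be derived from simple cumulative counts. The following minimal sketch (not from the source) computes the weekly discovery rate, fix rate, and open backlog from hypothetical data mirroring the situation described: a steady discovery rate, a comparable fix rate, and a flat backlog.

    # Minimal sketch: deriving weekly defect trend metrics from cumulative
    # counts. The counts below are hypothetical examples.
    found = [40, 95, 160, 230, 300, 370]   # cumulative defects found per week
    fixed = [10, 60, 130, 200, 270, 340]   # cumulative defects fixed per week

    for week in range(1, len(found)):
        discovery_rate = found[week] - found[week - 1]  # new defects this week
        fix_rate = fixed[week] - fixed[week - 1]        # defects closed this week
        backlog = found[week] - fixed[week]             # still open at week's end
        print(f"Week {week + 1}: found {discovery_rate}, "
              f"fixed {fix_rate}, backlog {backlog}")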


When working with a testing metrics and measurement program, three main areas are
to be taken into account:
- Definition of metrics: a useful, pertinent, and concise set of quality and test metrics should be defined for a project. Once these metrics have been defined, their interpretation must be agreed upon by all stakeholders, in order to avoid future disputes when metric values evolve. Metrics should be defined according to objectives for a process or task, for components or systems, and for individuals or teams.
- Tracking of metrics: reporting and merging of metrics should be as automated as possible to reduce the time spent producing the raw metric values. Variations of data over time for a specific metric may reflect information other than the interpretation agreed upon in the metric definition phase.
- Reporting of metrics: the objective is to provide an immediate understanding of the information for management purposes. Reporting should enlighten management and other stakeholders, not confuse or misdirect them. Good reports based on metrics should be easily understood, not overly complex and certainly not ambiguous. They should also draw the viewer's attention toward what matters most, not toward trivialities. In that way, good testing reports based on metrics and measures will help management guide the project to success. Not all types of graphical displays of metrics are equally useful. A table with a snapshot of data at a moment in time might be the right way to present such information as the coverage planned and achieved against certain critical quality risk areas (a sketch of such a snapshot follows this list). A graph of a trend over time might be a useful way to present other information, such as the total number of defects reported and the total number of defects resolved since the start of testing.
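As a small illustration of the snapshot-table format mentioned in the last point, the sketch below (not from the source) prints planned versus achieved coverage per quality risk area; the risk areas and figures are hypothetical.

    # Minimal sketch: a snapshot report of coverage per quality risk area.
    # Risk areas and counts are hypothetical examples.
    snapshot = [
        # (risk area, tests planned, tests passed)
        ("Payment processing", 120, 96),
        ("User authentication", 80, 80),
        ("Report generation", 60, 33),
    ]

    print(f"{'Risk area':<22}{'Planned':>8}{'Passed':>8}{'Coverage':>10}")
    for area, planned, passed in snapshot:
        print(f"{area:<22}{planned:>8}{passed:>8}{passed / planned:>10.0%}")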
Glossary:
Measurement: The process of assigning a number or category to an entity to describe
an attribute of that entity.
Metric: A measurement scale and the method used for measurement.


Code of Ethics
Testers must adhere to a code of ethics: they are required to act in a professional manner.
Testers can have access to confidential and/or privileged information; they must treat
any such information with care and attention, and act responsibly toward the owner(s)
of this information, their employers, and the wider public interest. A code of ethics is
necessary, among other reasons, to ensure that such information is not put to
inappropriate use.
Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the
following code of ethics:
- PUBLIC: Certified software testers shall act consistently with the public interest.
- CLIENT AND EMPLOYER: Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest.
- PRODUCT: Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible.
- JUDGMENT: Certified software testers shall maintain integrity and independence in their professional judgment.
- MANAGEMENT: Certified software test managers and leaders shall subscribe to and promote an ethical approach to the management of software testing.
- PROFESSION: Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest.
- COLLEAGUES: Certified software testers shall be fair to and supportive of their colleagues, and promote cooperation with software developers.
- SELF: Certified software testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession.


Questions
1. A bug or defect is:
a) a mistake made by a person;
b) a run-time problem experienced by a user;
c) the result of an error or mistake;
d) the result of a failure, which may lead to an error.
2. The effect of testing is to:
a) increase software quality;
b) give an indication of the software quality;
c) enable those responsible for software failures to be identified;
d) show there are no problems remaining.
3. Which of the following is correct?
Debugging is:
a) Testing/checking whether the software performs correctly.
b) Checking that a previously reported defect has been corrected.
c) Identifying the cause of a defect, repairing the code and checking the fix is
correct.
d) Checking that no unintended consequences have occurred as a result of a fix.
4. Which of the following are aids to good communication, and which hinder it?
I. Try to understand how the other person feels.
II. Communicate personal feelings, concentrating upon individuals.
III. Confirm the other person has understood what you have said and vice versa.
IV. Emphasize the common goal of better quality.
V. Each discussion is a battle to be won.
a) I, II and III aid, IV and V hinder.
b) III, IV and V aid, I and II hinder.
c) I, III and IV aid, II and V hinder.
d) II, III and IV aid, I and V hinder.
5. When is testing complete?
a) When time and budget are exhausted.
b) When there is enough information for sponsors to make an informed decision
about release.
c) When there are no remaining high priority defects outstanding.
d) When every data combination has been exercised successfully.
6. Which list of levels of tester independence is in the correct order, starting with the
most independent first?
a) Tests designed by the author; tests designed by another member of the
development team; tests designed by someone from a different company.
b) Tests designed by someone from a different department within the company; tests
designed by the author; tests designed by someone from a different company.
c) Tests designed by someone from a different company; tests designed by someone
from a different department within the company; tests designed by another
member of the development team.
d) Tests designed by someone from a different department within the company; tests
designed by someone from a different company; tests designed by the author.
7. Which statement correctly describes the public and profession aspects of the code of
ethics?
a) Public: Certified software testers shall act in the best interests of their client and
employer (being consistent with the wider public interest). Profession: Certified
software testers shall advance the integrity and reputation of their industry
consistent with the public interest.
b) Public: Certified software testers shall advance the integrity and reputation of the
profession consistent with the public interest. Profession: Certified software
testers shall consider the wider public interest in their actions.
c) Public: Certified software testers shall consider the wider public interest in their
actions. Profession: Certified software testers shall participate in lifelong learning
regarding the practice of their profession and shall promote an ethical approach
to the practice of their profession.
d) Public: Certified software testers shall consider the wider public interest in their
actions. Profession: Certified software testers shall advance the integrity and
reputation of their industry consistent with the public interest.
8. Which of the following is true about the V-model?
a) It has the same steps as the waterfall model for software development.
b) It is referred to as a cyclical model for software development.
c) It enables the production of a working version of the system as early as possible.
d) It enables test planning to start as early as possible.
9. Which of the following is true of iterative development?
a) It uses fully defined specifications from the start.
b) It involves the users in the testing throughout.
c) Changes to the system do not need to be formally recorded.
d) It is not suitable for developing websites.
10. Which of the following statements are true?
I. For every development activity there is a corresponding testing activity.
II. Each test level has the same test objectives.
III. The analysis and design of tests for a given test level should begin after the
corresponding development activity.
IV. Testers should be involved in reviewing documents as soon as drafts are available
in the development life cycle.
a) I and II.
b) III and IV.
c) II and III.
d) I and IV.
11. A risk relates to which of the following?
a) Negative feedback to the tester.
b) Negative consequences that will occur.
c) Negative consequences that could occur.
d) Negative consequences for the test object.
12. An exhaustive test would include:
a) All combinations of input values and preconditions.
b) All combinations of input values and output values.
c) All pairs of input value and preconditions.
d) All states and state transitions.
13. Test objectives vary between projects and so must be stated in the test plan.
Which one of the following test objectives might conflict with the proper tester mindset?
a) Show that the system works before we ship it.
b) Find as many defects as possible.
c) Reduce the overall level of product risk.
d) Prevent defects through early involvement.
14. The cost of fixing a fault:
a) Is not important.
b) Increases as we move the product towards live use.
c) Decreases as we move the product towards live use.
d) Is more expensive if found in requirements than functional design.
e) Can never be determined.
15. When what is visible to end-users is a deviation from the specified or expected
behavior, this is called:
a) An error.
b) A fault.
c) A failure.
d) A defect.
e) A mistake.
16. In prioritizing what to test, the most important objective is to:
a) find as many faults as possible.
b) test high risk areas.
c) obtain good test coverage.
d) test whatever is easiest to test.
17. Which of the following should not normally be an objective for a test?
a) To find faults in the software.
b) To assess whether the software is ready for release.
c) To demonstrate that the software doesn't work.
d) To prove that the software is correct.

18. Which one of the following describes the major benefit of verification early in
the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
19. Which of the following statements best describes one of the seven key principles
of software testing?
a) Automated tests are better than manual tests for avoiding exhaustive testing.
b) Exhaustive testing is, with sufficient effort and tool support, feasible for all
software.
c) It is normally impossible to test all input / output combinations for a software
system.
d) The purpose of testing is to demonstrate the absence of defects.
20. Which of the following statements are true?
A. Software testing may be required to meet legal or contractual requirements.
B. Software testing is mainly needed to improve the quality of the developers work.
C. Rigorous testing and fixing of defects found can help reduce the risk of problems
occurring in an operational environment.
D. Rigorous testing is sometimes used to prove that all failures have been found.
a) B and C are true; A and D are false.
b) A and D are true; B and C are false.
c) A and C are true; B and D are false.
d) C and D are true; A and B are false.
