Contents
Why is Testing Necessary ............................................................................................. 2
Software Systems Context ......................................................................................... 2
Causes of Software Defects ....................................................................................... 3
When Do Defects Arise? ........................................................................................... 3
Role of Testing in Software Development, Maintenance and Operations ................ 5
Testing and Quality ................................................................................................... 5
How Much Testing is Enough? ................................................................................. 6
What is Testing? ............................................................................................................ 8
Testing as a Process ................................................................................................... 8
Different Testing Objectives...................................................................................... 8
Dynamic and Static Testing ....................................................................................... 9
Testing and Debugging .............................................................................................. 9
Seven Testing Principles.............................................................................................. 11
The Psychology of Testing .......................................................................................... 14
Mindsets of Developers and Testers ........................................................................ 14
Balance of Self-Testing and Independence of Testing ............................................ 14
Clear Objectives ...................................................................................................... 15
Communication Aspects of Testing......................................................................... 15
Software Development Models ................................................................................... 17
Waterfall Model ....................................................................................................... 17
V-model (Sequential Development Model)............................................................. 18
Iterative-Incremental Development Models ............................................................ 20
Testing within a Life Cycle Model .............................................................................. 23
Testing in Sequential Lifecycle Models .................................................................. 23
Testing in Iterative-Incremental Lifecycle Models ................................................. 23
Alignment in V-Model ............................................................................................ 24
Characteristic of Good Testing Regardless of Lifecycle Model.............................. 25
Metrics & Measurement .............................................................................................. 26
Code of Ethics ............................................................................................................. 28
Questions ..................................................................................................................... 29
delivered the requirement with the right attributes. Functionally, it does what it is
supposed to do, and it also has the right non-functional attributes, so it is fast enough,
easy to understand and so on.
With the other requirements, errors have been made at different stages.
Requirement 2 is fine until the software is coded, when we make some mistakes and
introduce defects. Probably, these are easily spotted and corrected during testing,
because we can see the product does not meet its design specification.
The defects introduced in Requirement 3 are harder to deal with. We built exactly what
we were told to but unfortunately the designer made some mistakes so there are defects
in the design. Unless we check against the requirements definition, we will not spot
those defects during testing. When we do notice them, they will be hard to fix because
design changes will be required.
The defects in Requirement 4 were introduced during the definition of the requirements;
the product has been designed and built to meet that flawed requirements definition. If
we test that the product meets its requirements and design, it will pass its tests but may be
rejected by the user or customer. Defects reported by the customer can be very costly.
Requirements and design defects (cases 3 and 4) are not rare: defects introduced during
requirements and design make up close to half of the total number of defects.
The cost of finding and fixing defects rises considerably across the life cycle. If an error
is made and the consequent defect is detected in the requirements at the specification
stage, then it is relatively cheap to find and fix. The specification can be corrected and
re-issued.
Similarly, if an error is made, and the consequent defect detected in the design at the
design stage, then the design can be corrected and re-issued with relatively little
expense. The same applies for construction.
If, however, a defect is introduced in the requirement specification and it is not detected
until the customer notices it, or even once the system has been implemented, then it will
be much more expensive to fix, because rework will be needed in the specification and
design before changes can be made in construction.
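The escalation described above can be sketched in code. The multipliers below are assumed round numbers for teaching purposes, not empirical figures: a defect costs roughly an order of magnitude more to fix at each later stage.

```python
# Illustrative cost-escalation sketch. The stage names and multipliers are
# assumptions for teaching, not measured data.
STAGE_COST_MULTIPLIER = {
    "requirements": 1,    # correct and re-issue the specification
    "design": 10,         # rework the design, then re-issue
    "construction": 100,  # rework specification and design, then re-code
    "operation": 1000,    # all of the above, plus redeployment and customer impact
}

def relative_fix_cost(introduced: str, detected: str) -> int:
    """Relative cost of fixing a defect introduced at one stage but
    detected at a (possibly much later) stage."""
    stages = list(STAGE_COST_MULTIPLIER)
    if stages.index(detected) < stages.index(introduced):
        raise ValueError("a defect cannot be detected before it is introduced")
    return STAGE_COST_MULTIPLIER[detected]

# A requirements defect found immediately is far cheaper to fix than the
# same defect found once the system is in operation.
print(relative_fix_cost("requirements", "requirements"))  # -> 1
print(relative_fix_cost("requirements", "operation"))     # -> 1000
```

The point of the sketch is only that the detection stage, not the introduction stage, drives the cost: the later the detection, the more upstream work-products must be reworked.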
Role of Testing in Software Development, Maintenance and Operations
To avoid failure, we must either avoid errors and faults or find them and rectify them.
Testing can contribute to both avoidance and rectification.
Rigorous testing of systems and documentation can help to reduce the risk of problems
occurring during operation and contribute to the quality of the software system, if the
defects found are corrected before the system is released for operational use. To
influence errors with testing, we need to begin testing as soon as we begin making
errors (right at the beginning of the development process), and we need to continue
testing until we are confident that there will be no serious system failures (right at
the end of the development process).
Software testing may also be required to meet contractual or legal requirements, or
industry-specific standards. These standards may specify what type of techniques we
must use, or the percentage of the software code that must be exercised. The higher the
potential failure cost associated with the industry using the software, the more likely it
is that a standard for testing will exist. The avionics, motor, medical and pharmaceutical
industries all have standards covering the testing of software.
Software testing is neither complex nor difficult to implement, yet it is a discipline that
is seldom applied with anything approaching the necessary rigor to provide confidence
in delivered software.
Testing and Quality
Quality is hard to define. One definition is that if a system meets its users'
requirements, then it is of high quality. For example, in the 'top 10 criminals' case
mentioned above, the system was swamped by requests for access (non-functional
failure), and therefore was not able to deliver its services to its users.
Testing helps to measure the quality of software in terms of defects found, the tests run,
and the system covered by the tests, for both functional and non-functional software
requirements and characteristics (such as reliability, usability, efficiency,
maintainability and portability, to be discussed in the following lectures). Testing
ensures that key requirements are examined before the system enters service and any
defects are reported to the development team for rectification.
Testing can give confidence in the quality of the software if it finds few or no defects.
Of course, a poor test may uncover few defects and leave us with a false sense of
security. A well-designed test will uncover defects if they are present and so, if such a
test passes, we will rightly be more confident in the software and be able to assert that
the overall level of risk of using the system has been reduced.
Testing cannot directly remove defects, nor can it directly enhance quality. By reporting
defects it makes their removal possible and so contributes to the enhanced quality of the
system.
Testing is one component in the overall quality assurance activity that seeks to ensure
that systems enter service without defects that can lead to serious failures. Testing
should be integrated alongside development standards, training and defect analysis as
one of the quality assurance activities.
How Much Testing is Enough?
A risk is something that has not happened yet and it may never happen; it is a potential
problem. Risk is inherent in all software development. For instance, the system may not
work or the project may not be completed on time. These uncertainties become more
significant as the system complexity and the implications of failure increase.
Not all software systems carry the same level of risk and not all problems have the same
impact when they occur. E.g., we would expect to test an automatic flight control system
more than we would test a video game system, because the risk (in particular, the
impact of a failure) is greater in the former case.
Every system is subject to risk of one kind or another, and there is a level of quality that
is acceptable for a given system. These two factors can be used to decide how much
testing to do.
Deciding how much testing is enough should take account of the level of risk (including
technical, safety, and business risks), and project constraints such as time and budget.
The most important aspect of achieving an acceptable result from a finite and limited
amount of testing is prioritization. Do the most important tests (those that test the most
important functional and non-functional aspects of the system as defined by the users)
first so that at any time you can be certain that the tests that have been done are more
important than the ones still to be done.
The next most important aspect is setting criteria, usually known as completion criteria,
that give an objective estimate of whether it is safe to stop testing, so that time and all
the other pressures do not confuse the outcome.
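The prioritization and completion criteria just described can be sketched as follows; the test names, the 1-5 impact and likelihood scores, and the risk threshold are invented examples:

```python
# A minimal sketch of risk-based test prioritization. Test names and
# scores are invented; risk is expressed as impact x likelihood.
tests = [
    {"name": "login works",             "impact": 5, "likelihood": 4},
    {"name": "report footer format",    "impact": 1, "likelihood": 2},
    {"name": "payment is charged once", "impact": 5, "likelihood": 3},
    {"name": "help page renders",       "impact": 2, "likelihood": 1},
]

for t in tests:
    t["risk"] = t["impact"] * t["likelihood"]

# Run the most important tests first: at any point, everything already
# executed matters more than everything still queued.
run_order = sorted(tests, key=lambda t: t["risk"], reverse=True)
print([t["name"] for t in run_order])

def completion_criteria_met(executed, min_risk_covered=10):
    """Objective stopping rule: stop once every test whose risk reaches
    the threshold has been executed. The threshold is an assumption."""
    return all(t in executed for t in tests if t["risk"] >= min_risk_covered)
```

With these scores, the two high-risk tests (login, payment) run first, and the completion criterion is met as soon as both have been executed, regardless of how many low-risk tests remain.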
Testing should provide sufficient information to stakeholders to make informed
decisions about the release of the software or system being tested, for the next
development step or handover to customers.
Glossary:
Defect (bug, fault): A flaw in a component or system that can cause the component or
system to fail to perform its required function, e.g. an incorrect statement or data
definition. A defect, if encountered during execution, may cause a failure of the
component or system.
Error (mistake): A human action that produces an incorrect result.
Failure: Deviation of the component or system from its expected delivery, service or
result.
Quality: The degree to which a component, system or process meets specified
requirements and/or user/customer needs and expectations.
Risk: Factor that could result in future negative consequences; usually expressed as
impact and likelihood.
Software: Computer programs, procedures, and possibly associated documentation and
data pertaining to the operation of a computer system.
What is Testing?
Learning Objectives:
Recall the common objectives of testing
Provide examples for the objective of testing in different phases of the software
life cycle
Differentiate testing from debugging
Testing as a Process
A common perception of testing is that it only consists of running tests, i.e., executing
the software. This is part of testing, but not all of the testing activities.
Test activities exist before and after test execution. Before test execution there is some
preparatory work to do to design the tests and set them up. After test execution there is
some work needed to record the results and check whether the tests are complete. Even
more important is deciding what we are trying to achieve with the testing and setting
clear objectives for each test.
In general, testing activities include planning and control, choosing test conditions,
designing and executing test cases, checking results, evaluating exit criteria, reporting
on the testing process and system under test, and finalizing or completing closure
activities after a test phase has been completed. Testing also includes reviewing
documents (including source code) and conducting static analysis.
Different Testing Objectives
Common testing objectives include:
Finding defects. It helps us understand the risks associated with putting the
software into operational use, and fixing the defects improves the quality of the
products. Identifying defects has another benefit: by analyzing their causes, we
can improve the development processes and make fewer mistakes in future work
Gaining confidence about the level of quality
Providing information for decision-making
Preventing defects
Different viewpoints in testing take different objectives into account:
In development testing (e.g., component, integration and system testing), the
main objective may be to cause as many failures as possible so that defects in the
software are identified and can be fixed
In acceptance testing, the main objective may be to confirm that the system works
as expected, to gain confidence that it has met the requirements
In some cases the main objective of testing may be to assess the quality of the
software (with no intention of fixing defects), to give information to stakeholders
of the risk of releasing the system at a given time
Maintenance testing often includes testing that no new defects have been
introduced during development of the changes
testing activity is started, the longer the elapsed time available. Testers do not have to
wait until software is available to test. As soon as work products (requirements, code,
documents etc.) are ready, we can test them. E.g., requirement documents are the basis
for acceptance testing, so the creation of acceptance tests can begin as soon as
requirement documents are available.
Carrying out testing as early as possible leads to finding and fixing defects more cheaply
and preventing defects from appearing at later stages of the project. Studies have
shown what is known as the cost escalation model, presented below in simplified form.
need to be regularly reviewed and revised, and new and different tests need to be written
to exercise different parts of the software or system to find potentially more defects.
Running the same set of tests continually will not continue to find new defects.
Developers will soon know that the test team always tests the boundaries of conditions,
for example, so they will test these conditions before the software is delivered. This
does not make defects elsewhere in the code less likely, so continuing to use the same
test set will result in decreasing effectiveness of the tests. Using other techniques will
find different defects.
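As a hypothetical illustration of this effect, consider a function with a deliberate off-by-one defect. A never-changing test set keeps passing and finds nothing new, while a revised set that probes the boundary exposes the defect:

```python
# Hypothetical illustration of the pesticide paradox. The function
# contains a deliberate off-by-one defect: seniors should qualify at 65,
# but the comparison uses > instead of >=.
def senior_discount(age: int) -> bool:
    return age > 65  # defect: should be age >= 65

# The "pesticide" test set, run unchanged on every release. It passed
# once and will keep passing forever without revealing anything new.
old_tests = [(30, False), (70, True)]
assert all(senior_discount(age) == expected for age, expected in old_tests)

# A revised test set that probes the boundary exposes the defect.
new_tests = [(64, False), (65, True), (66, True)]
failures = [(age, expected) for age, expected in new_tests
            if senior_discount(age) != expected]
print(failures)  # the boundary value 65 is misclassified
```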
Principle 6 Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.
Different testing is necessary in different circumstances. A website where information
can merely be viewed will be tested in a different way to an e-commerce site, where
goods can be bought using credit/debit cards. We need to test an air traffic control
system with more rigor than an application for calculating the length of a mortgage.
Risk can be a large factor in determining the type of testing that is needed. The higher
the possibility of losses, the more we need to invest in testing the software before it is
implemented.
Principle 7 Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not
fulfill the users' needs and expectations.
The fact that no defects are outstanding is not a good reason to ship the software. The
customers for software (the people and organizations who buy and use it to aid in their
day-to-day tasks) are not interested in defects or numbers of defects, except when they
are directly affected by the instability of the software. The people using software are
more interested in the software supporting them in completing tasks efficiently and
effectively.
Glossary:
Exhaustive testing: A test approach in which the test suite comprises all combinations
of input values and preconditions.
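To see why exhaustive testing is impractical outside trivial cases, count the input combinations for a small, invented form (the field types and sizes below are assumptions chosen only for arithmetic):

```python
# Counting input combinations for a hypothetical form with three 32-bit
# integer fields and one 10-character, 7-bit ASCII name field.
int_field_values = 2**32
name_field_values = 128**10  # 128 possible codes per character, 10 chars

total = int_field_values**3 * name_field_values
print(f"{total:.3e} combinations")  # astronomically many

# Even at a billion test executions per second, this takes far longer
# than the age of the universe (~4.3e17 seconds).
seconds_needed = total / 1e9
print(seconds_needed > 4.3e17)  # True
```

This is why the syllabus recommends risk analysis and priorities, rather than exhaustive testing, to focus the testing effort.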
Clear Objectives
Each organization and each project will have its own goals and objectives. Different
stakeholders, such as the customers, the development team and the managers of the
organization, will have different viewpoints about quality and have their own
objectives. Because people and projects are driven by objectives, the stakeholder with
the strongest views or the greatest influence over a group will define, consciously or
subconsciously, what those objectives are.
People tend to align their plans with these objectives. E.g., depending on the objective,
a tester might focus either on finding defects or on confirming that software works. But
if one stakeholder is less influential during the project but more influential at delivery,
there may be a clash of views about whether the testing has met its objectives. One
manager may want the confirmation that the software works and that it is 'good
enough' if this is seen as a way of delivering as fast as possible. Another manager may
want the testing to find as many defects as possible before the software is released,
which will take longer to do and will require time for fixing, re-testing and regression
testing. If there are not clearly stated objectives and exit criteria for testing which all the
stakeholders have agreed, arguments might arise, during the testing or after release,
about whether enough testing has been done.
Communication Aspects of Testing
Identifying failures during testing may be perceived as criticism against the product and
against the author. Few of us genuinely enjoy criticism of our work.
We usually believe that we have done our best to produce work which is correct and
complete. We all make mistakes and we sometimes get annoyed, upset or depressed
when someone points them out.
As a result, testing is often seen as a destructive activity, even though it is very
constructive in the management of product risks. Testers need to use tact and diplomacy
when raising defect reports. Defect reports need to be raised against the software, not
against the individual who made the mistake.
If errors, defects or failures are communicated in a constructive way, bad feelings
between the testers and the analysts, designers and developers can be avoided. This
applies to defects found during reviews as well as in testing. The tester and test leader
need good interpersonal skills to communicate factual information about defects,
progress and risks in a constructive way. For the author of the software or document,
defect information can help them improve their skills. Defects found and fixed during
testing will save time and money later, and reduce risks.
Communication problems may occur, particularly if testers are seen only as messengers
of unwanted news about defects. However, there are several ways to improve
communication and relationships between testers and others:
Start with collaboration rather than battles. The aim is to work together rather
than be confrontational. Keep the focus on delivering a quality product. Explain
that by knowing about the defect now, we can work around it or fix it, so the
delivered system is better for the customer
This type of model is often referred to as a linear or sequential model. Within this
model, each activity is completed before moving on to the next one. Testing is carried
out once the code has been fully developed. Once this is completed, a decision can be
made on whether the product can be released into the live environment.
In the waterfall model, the testing at the end serves only as a quality check. The product
can be accepted or rejected at this point. In software development, however, it is
unlikely that we can simply reject the parts of the system found to be defective, and
release the rest. What is needed is a process that assures quality throughout the
development life cycle. At every stage, a check should be made that the work-product
for that stage meets its objectives. The checks throughout the life cycle include
verification and validation:
Verification checks that the work-product meets the requirements set out for it.
Verification helps to ensure that we are building the product in the right way
Validation changes the focus of work-product evaluation to evaluation against
user needs. This means ensuring that the behavior of the work-product matches
the customer needs as defined for the project. Validation helps to ensure that we
are building the right product as far as the users are concerned
Two types of development model facilitate early work-product evaluation. We will
discuss them next in turn.
V-model (Sequential Development Model)
The V-model was developed to address some of the problems experienced using the
traditional waterfall approach. The V-model provides guidance that testing needs to
begin as early as possible in the life cycle. There are a variety of activities that need to
be performed before the end of the coding phase. These activities should be carried out
in parallel with development activities, and testers need to work with developers and
business analysts so they can perform these activities and tasks and produce a set of test
deliverables.
Although variants of the V-model exist, a common type of V-model uses four test
levels, corresponding to the four development levels.
The left-hand side of the model focuses on elaborating the initial requirements,
providing successively more technical detail as the development progresses. In the
model shown, these are:
User requirements: capturing of user needs
System requirements: definition of functions required to meet user needs
Global design: technical design of functions identified in the system requirements
Detailed design: design of each module or unit to be built to meet required
functionality
The middle of the V-model shows that planning for testing should start with each
work-product. For instance, acceptance testing against the requirement specification
would be planned for right at the start of the development.
The right-hand side focuses on the testing activities. For each work-product, a testing
activity is identified:
Testing against the program specification takes place at the unit (component)
testing stage. Searching for defects and verifying the functioning of software
components (e.g. modules, programs, objects, classes etc.) that are separately
testable
Testing against the technical specification takes place at the integration testing
stage. Testing interfaces between components, interactions to different parts of a
system such as an operating system, file system and hardware or interfaces
between systems
Testing against the functional specification takes place at the system testing stage.
The main focus is verification against specified requirements
Testing against the requirement specification takes place at the acceptance testing
stage. Validation testing with respect to user needs, requirements, and business
processes conducted to determine whether or not to accept the system
This allows testing to be concentrated on the detail provided in each work-product, so
that defects can be identified as early as possible in the life cycle, when the
work-product has been created.
In practice, a V-model may have more, fewer or different levels of development and
testing, depending on the project and the software product. For example, there may be
component integration testing after component testing, and system integration testing
after system testing. Other test levels can also be defined, such as:
Hardware-software integration testing
Feature interaction testing
Customer Product integration testing
Remembering that each stage must be completed before the next one can be started, this
approach to software development pushes validation of the system by the user
representatives right to the end of the life cycle. If the customer needs were not captured
accurately in the requirement specification, or if they change, then these issues may not
be uncovered until the user testing is carried out. This is the main drawback of this
model.
Iterative-Incremental Development Models
Not all life cycles are sequential. In iterative and incremental models, we cycle through
a number of smaller self-contained life cycle phases for the same project. This type of
development is often referred to as cyclical. As with the V-model, there are many
variants of iterative life cycles.
Within these models, the requirements do not need to be fully defined before coding
can start. Instead, a working version of the product is built, in a series of increments, or
builds, with each increment adding new functionality. Each increment encompasses
requirements definition, design, code and test.
The initial increment will contain the infrastructure required to support the initial build
functionality. The increment produced by an iteration may be tested at several levels as
part of its development. Subsequent increments will need testing for the new
functionality, regression testing of the existing functionality, and integration testing of
both new and existing parts. Regression testing is increasingly important on all
iterations after the first one.
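The cumulative growth of the regression suite across increments can be sketched as follows; the feature names and the "present means working" checks are invented placeholders standing in for real test cases:

```python
# Minimal sketch of cumulative regression testing across increments.
def make_check(name):
    # Placeholder test case: a feature "works" if it is in the system.
    return lambda system: name in system

regression_suite = []  # grows with every increment

def deliver_increment(system, new_features):
    """Add the increment's features, then run the FULL suite: new checks
    for the new functionality plus regression checks for everything old."""
    system.update(new_features)
    regression_suite.extend(make_check(f) for f in new_features)
    return all(check(system) for check in regression_suite)

system = set()
assert deliver_increment(system, {"login", "search"})  # increment 1: 2 checks
assert deliver_increment(system, {"checkout"})         # increment 2: 3 checks
print(len(regression_suite))  # -> 3: the suite keeps growing
```

Because the suite grows with every increment while the time box for each increment does not, this growth is exactly what drives the attempts at regression-test automation mentioned later in this chapter.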
This life cycle can give early market presence with critical functionality, can be simpler
to manage because the workload is divided into smaller pieces, and can reduce initial
investment although it may cost more in the long run. Also early market presence will
mean validation testing is carried out at each increment, thereby giving early feedback
on the business value and fitness-for-use of the product.
A key feature of this type of development is the involvement of user representatives in
the testing. They are empowered to request changes to the software in order to meet
their needs.
Several drawbacks of this model can be pointed out. The lack of formal documentation
makes it difficult to test. In addition, the working environment may be such that
developers make any changes required, without formally recording them. This approach
could mean that changes cannot be traced back to the requirements or to the parts of the
software that have changed. Thus, traceability as the project progresses is reduced. To
mitigate this, a robust process must be put in place at the start of the project to manage
these changes.
Forms of iterative development include prototyping, rapid application development
(RAD), and agile development models. A well-known proprietary methodology is the
Rational Unified Process (RUP).
Glossary:
Component (unit) testing: The testing of individual software components.
Incremental development model: A development lifecycle where a project is broken
into a series of increments, each of which delivers a portion of the functionality in the
overall project requirements. The requirements are prioritized and delivered in priority
order in the appropriate increment. In some (but not all) versions of this lifecycle model,
each subproject follows a mini V-model with its own design, coding and testing
phases.
Integration testing: Testing performed to expose defects in the interfaces and in the
interactions between integrated components or systems.
Iterative development model: A development lifecycle where a project is broken into a
usually large number of iterations. An iteration is a complete development loop
resulting in a release (internal or external) of an executable product, a subset of the final
product under development, which grows from iteration to iteration to become the final
product.
Regression testing: Testing of a previously tested program following modification to
ensure that defects have not been introduced or uncovered in unchanged areas of the
software, as a result of the changes made. It is performed when the software or its
environment is changed.
Software lifecycle: The period of time that begins when a software product is conceived
and ends when the software is no longer available for use. The software lifecycle
typically includes a concept phase, requirements phase, design phase, implementation
phase, test phase, installation and checkout phase, operation and maintenance phase,
and sometimes, retirement phase. Note these phases may overlap or be performed
iteratively.
System testing: The process of testing an integrated system to verify that it meets
specified requirements.
Validation: Confirmation by examination and through provision of objective evidence
that the requirements for a specific intended use or application have been fulfilled.
Verification: Confirmation by examination and through provision of objective evidence
that specified requirements have been fulfilled.
V-model: A framework to describe the software development lifecycle activities from
requirements specification to maintenance. The V-model illustrates how testing
activities can be integrated into each phase of the software development lifecycle.
beginning of each iteration. Rather than analyzing requirements at the outset of the
project, the best the test team can do is to identify and prioritize key quality risk areas.
I.e., they can follow an analytical risk-based test strategy. Specific test designs and
implementation will occur immediately before test execution, potentially reducing the
preventive role of testing. Defect detection starts very early in the project, at the end of
the first iteration, and continues in repetitive, short cycles throughout the project. In
such a case, testing activities in the fundamental testing process overlap and are
concurrent with each other as well as with major activities in the software lifecycle.
The availability of testable systems earlier in the lifecycle would seem to be a benefit
to the test manager, and it can be. At the same time, the iterative lifecycle models create
certain test issues for the test manager:
The first issue is the need, in each increment after the first one, to be able to
regression test all the functions and capabilities provided in the previous
increments. Because the most important functions and capabilities are typically
provided in the earlier increments, it is very important that these functions and
capabilities not be broken. However, given the frequent and large changes to the
code base (every increment being likely to introduce as much new and changed
code as the previous increment), the risk of regression is high. This risk tends to
lead to attempts to automate regression tests, with varying degrees of success
The second issue is the common failure to plan for bugs and how to handle them.
This failure manifests itself when business analysts, designers, and developers
are assigned to work full-time on subsequent increments while testers are testing
the current increment. In other words, you allow the activities associated with
increments to overlap rather than requiring that each increment complete entirely
before the next one starts; this can seem efficient at first. However, once the test
team starts to locate bugs, an overbooked situation occurs for the business
analysts, designers, and developers who must address them
The final issue, particularly common in the agile world, is a lack of rigor
in, and respect for, testing
These are all surmountable issues, but the test manager must manage them carefully, in
conjunction with the project management team.
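The regression automation attempts mentioned above can be as simple as a scripted suite that re-runs checks for earlier increments' features on every build. The following is a minimal sketch using Python's unittest; calculate_discount is a hypothetical stand-in for a feature delivered and accepted in an earlier increment, not an example from the text:

```python
import unittest

# Hypothetical stand-in for a feature delivered in increment 1.
def calculate_discount(total, loyalty_years):
    """Apply a 1% discount per loyalty year, capped at 10%."""
    rate = min(loyalty_years, 10) / 100
    return round(total * (1 - rate), 2)

class RegressionIncrement1(unittest.TestCase):
    """Re-run against every later increment to catch regressions in
    functionality that was already delivered and accepted."""

    def test_no_loyalty_means_no_discount(self):
        self.assertEqual(calculate_discount(100.0, 0), 100.0)

    def test_discount_grows_with_loyalty(self):
        self.assertEqual(calculate_discount(100.0, 5), 95.0)

    def test_discount_is_capped_at_ten_percent(self):
        self.assertEqual(calculate_discount(100.0, 25), 90.0)

# Run the suite programmatically; in practice this would be wired into the
# build so that increment n re-executes the suites of increments 1..n-1.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionIncrement1)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

As increments accumulate, the suites for all previous increments run against each new build, which is what keeps the earlier (and typically most important) functions and capabilities from silently breaking.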
In both models discussed above, good change management and configuration
management are critical for testing. A lack of proper change management results in an
inability for the test team to keep up with what the system is and what it should do.
Alignment in V-Model
Let us use the V-model as an example to illustrate the concept of alignment between the
testing process and other processes in the lifecycle. We'll further assume that we are
talking about the system test level:
Test planning occurs concurrently with project planning, and test control
continues until system test execution and closure are complete. Analysis, design,
implementation, execution, evaluation of exit criteria, and test results reporting
are carried out according to the plan. Deviations from the plan are managed
Test analysis starts immediately after or even concurrently with test planning.
Test analysis and design occur concurrently with requirements specification,
system and architectural (high-level) design specification, and component (low-level) design specification
Test implementation, including test environment implementation, starts during
system design and completes just before test execution begins
Test execution begins when the test entry criteria are all met. More realistically,
test execution starts when most entry criteria are met and any outstanding entry
criteria are waived. Test execution continues until system test exit criteria are met
Evaluation of test exit criteria and reporting of test results occur throughout test
execution, generally with greater frequency and urgency as project deadlines
approach
Test closure activities occur after test exit criteria are met and test execution is
declared complete
Such alignment of activities with each other and with the rest of the system lifecycle
will not happen simply by accident. For each test level, and for any selected combination
of software lifecycle and test process, the test manager must perform this alignment
during the test planning and/or project planning.
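The evaluation of exit criteria described above lends itself to a simple automated check. The following is a minimal sketch; the criteria names and threshold values are hypothetical examples standing in for whatever the actual test plan defines:

```python
# Minimal sketch of a system test exit criteria check.
# The criteria and thresholds below are hypothetical examples;
# real criteria come from the agreed test plan.
def exit_criteria_met(metrics):
    """Return True only when every agreed exit criterion is satisfied."""
    return (
        metrics["requirements_coverage"] >= 1.0     # every requirement covered
        and metrics["tests_passed_ratio"] >= 0.95   # at least 95% of tests pass
        and metrics["open_critical_defects"] == 0   # no critical defect open
    )

current_status = {
    "requirements_coverage": 1.0,
    "tests_passed_ratio": 0.97,
    "open_critical_defects": 1,   # one critical defect still open
}
print(exit_criteria_met(current_status))  # prints: False
```

Expressing the criteria this explicitly also supports the "more realistic" case the text mentions: a waived criterion is a visible, recorded decision rather than a silent omission.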
Characteristics of Good Testing Regardless of Lifecycle Model
In any life cycle model, there are several characteristics of good testing:
For every development activity there is a corresponding testing activity
Each test level has test objectives specific to that level
The analysis and design of tests for a given test level should begin during the
corresponding development activity
Testers should be involved in reviewing documents as soon as drafts are available
in the development life cycle
When working with a testing metrics and measurement program, three main areas are to
be taken into account:
Definition of metrics: a useful, pertinent, and concise set of quality and test
metrics should be defined for a project. Once these metrics have been defined,
their interpretation must be agreed upon by all stakeholders, in order to avoid
future discussions when metric values evolve. Metrics should be defined
according to objectives for a process or task, for components or systems, for
individuals or teams
Tracking of metrics: reporting and merging metrics should be as automated as
possible to reduce the time spent producing the raw metric values. Variations
of data over time for a specific metric may reflect information other than the
interpretation agreed upon in the metric definition phase
Reporting of metrics: the objective is to provide an immediate understanding of
the information for management purpose. Reporting should enlighten
management and other stakeholders, not confuse or misdirect them.
Good reports based on metrics should be easily understood, not overly complex
and certainly not ambiguous. They should also draw the viewer's attention
toward what matters most, not toward trivialities. In that way, good testing
reports based on metrics and measures will help management guide the project
to success. Not all types of graphical displays of metrics are equally useful. A
table with a snapshot of data at a moment in time might be the right way to present
such information as the coverage planned and achieved against certain critical
quality risk areas. A graph of a trend over time might be a useful way to present
other information, such as the total number of defects reported and the total
number of defects resolved since the start of testing
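As an illustration of the trend-over-time reporting just described, cumulative counts of defects reported and resolved can be tabulated in a few lines. The weekly numbers below are invented purely for illustration:

```python
# Sketch of a trend report: cumulative defects reported vs. resolved per week.
# The weekly figures are invented for illustration only.
weekly_reported = [12, 18, 9, 7, 4]
weekly_resolved = [5, 14, 13, 9, 6]

cum_reported, cum_resolved = 0, 0
print(f"{'Week':<6}{'Reported':<10}{'Resolved':<10}{'Open':<6}")
for week, (rep, res) in enumerate(zip(weekly_reported, weekly_resolved), 1):
    cum_reported += rep
    cum_resolved += res
    # "Open" is the gap between the two cumulative curves.
    print(f"{week:<6}{cum_reported:<10}{cum_resolved:<10}"
          f"{cum_reported - cum_resolved:<6}")
```

A narrowing gap between the reported and resolved curves is the kind of signal such a trend report is meant to make immediately visible to management.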
Glossary:
Measurement: The process of assigning a number or category to an entity to describe
an attribute of that entity.
Metric: A measurement scale and the method used for measurement.
Code of Ethics
Testers must adhere to a code of ethics: they are required to act in a professional manner.
Testers can have access to confidential and/or privileged information; they must treat
any such information with care and attention, and act responsibly toward the owner(s)
of this information, their employers, and the wider public interest. A code of ethics is
necessary, among other reasons, to ensure that such information is not put to inappropriate use.
Recognizing the ACM and IEEE code of ethics for engineers, the ISTQB states the
following code of ethics:
PUBLIC Certified software testers shall act consistently with the public
interest
CLIENT AND EMPLOYER Certified software testers shall act in a manner
that is in the best interests of their client and employer, consistent with the public
interest
PRODUCT Certified software testers shall ensure that the deliverables they
provide (on the products and systems they test) meet the highest professional
standards possible
JUDGMENT Certified software testers shall maintain integrity and
independence in their professional judgment
MANAGEMENT Certified software test managers and leaders shall subscribe
to and promote an ethical approach to the management of software testing
PROFESSION Certified software testers shall advance the integrity and
reputation of the profession consistent with the public interest
COLLEAGUES Certified software testers shall be fair to and supportive of
their colleagues, and promote cooperation with software developers
SELF Certified software testers shall participate in lifelong learning regarding
the practice of their profession and shall promote an ethical approach to the
practice of the profession
Questions
1. A bug or defect is:
a) a mistake made by a person;
b) a run-time problem experienced by a user;
c) the result of an error or mistake;
d) the result of a failure, which may lead to an error.
2. The effect of testing is to:
a) increase software quality;
b) give an indication of the software quality;
c) enable those responsible for software failures to be identified;
d) show there are no problems remaining.
3. Which of the following is correct?
Debugging is:
a) Testing/checking whether the software performs correctly.
b) Checking that a previously reported defect has been corrected.
c) Identifying the cause of a defect, repairing the code and checking the fix is
correct.
d) Checking that no unintended consequences have occurred as a result of a fix.
4. Which of the following are aids to good communication, and which hinder it?
I. Try to understand how the other person feels.
II. Communicate personal feelings, concentrating upon individuals.
III. Confirm the other person has understood what you have said and vice versa.
IV. Emphasize the common goal of better quality.
V. Each discussion is a battle to be won.
a) I, II and III aid, IV and V hinder.
b) III, IV and V aid, I and II hinder.
c) I, III and IV aid, II and V hinder.
d) II, III and IV aid, I and V hinder.
5. When is testing complete?
a) When time and budget are exhausted.
b) When there is enough information for sponsors to make an informed decision
about release.
c) When there are no remaining high priority defects outstanding.
d) When every data combination has been exercised successfully.
6. Which list of levels of tester independence is in the correct order, starting with the
most independent first?
a) Tests designed by the author; tests designed by another member of the
development team; tests designed by someone from a different company.
b) Tests designed by someone from a different department within the company; tests
designed by the author; tests designed by someone from a different company.
c) Tests designed by someone from a different company; tests designed by someone
from a different department within the company; tests designed by another
member of the development team.
d) Tests designed by someone from a different department within the company; tests
designed by someone from a different company; tests designed by the author.
7. Which statement correctly describes the public and profession aspects of the code of
ethics?
a) Public: Certified software testers shall act in the best interests of their client and
employer (being consistent with the wider public interest). Profession: Certified
software testers shall advance the integrity and reputation of their industry
consistent with the public interest.
b) Public: Certified software testers shall advance the integrity and reputation of the
profession consistent with the public interest. Profession: Certified software
testers shall consider the wider public interest in their actions.
c) Public: Certified software testers shall consider the wider public interest in their
actions. Profession: Certified software testers shall participate in lifelong learning
regarding the practice of their profession and shall promote an ethical approach
to the practice of their profession.
d) Public: Certified software testers shall consider the wider public interest in their
actions. Profession: Certified software testers shall advance the integrity and
reputation of their industry consistent with the public interest.
8. Which of the following is true about the V-model?
a) It has the same steps as the waterfall model for software development.
b) It is referred to as a cyclical model for software development.
c) It enables the production of a working version of the system as early as possible.
d) It enables test planning to start as early as possible.
9. Which of the following is true of iterative development?
a) It uses fully defined specifications from the start.
b) It involves the users in the testing throughout.
c) Changes to the system do not need to be formally recorded.
d) It is not suitable for developing websites.
10. Which of the following statements are true?
I. For every development activity there is a corresponding testing activity.
II. Each test level has the same test objectives.
III. The analysis and design of tests for a given test level should begin after the
corresponding development activity.
IV. Testers should be involved in reviewing documents as soon as drafts are available
in the development life cycle.
a) I and II.
13. Test objectives vary between projects and so must be stated in the test plan.
Which one of the following test objectives might conflict with the proper tester mindset?
a) Show that the system works before we ship it.
b) Find as many defects as possible.
c) Reduce the overall level of product risk.
d) Prevent defects through early involvement.
15. When what is visible to end-users is a deviation from the specified or expected
behavior, this is called:
a) An error.
b) A fault.
c) A failure.
d) A defect.
e) A mistake.
18. Which one of the following describes the major benefit of verification early in
the life cycle?
a) It allows the identification of changes in user requirements.
b) It facilitates timely set up of the test environment.
c) It reduces defect multiplication.
d) It allows testers to become involved early in the project.
19. Which of the following statements best describes one of the seven key principles
of software testing?
a) Automated tests are better than manual tests for avoiding exhaustive testing.
b) Exhaustive testing is, with sufficient effort and tool support, feasible for all
software.
c) It is normally impossible to test all input / output combinations for a software
system.
d) The purpose of testing is to demonstrate the absence of defects.
20. Which of the following statements are true?
A. Software testing may be required to meet legal or contractual requirements.
B. Software testing is mainly needed to improve the quality of the developers' work.
C. Rigorous testing and fixing of defects found can help reduce the risk of problems
occurring in an operational environment.
D. Rigorous testing is sometimes used to prove that all failures have been found.
a) B and C are true; A and D are false.
b) A and D are true; B and C are false.
c) A and C are true; B and D are false.
d) C and D are true; A and B are false.