
Software Testing Unit 1

Aditi Chikhalikar
Department of Information Technology
Patkar-Varde College
Syllabus
• Unit 1 - Fundamentals of testing:
▫ Necessity of testing
▫ What is it
▫ Testing principles
▫ Fundamental test process
▫ The psychology of testing
• Unit 2 - Testing throughout the software life cycle:
▫ Software development models
▫ Test levels
▫ Test types: the targets of testing
▫ Maintenance testing
Syllabus
• Unit 3 – Static techniques:
▫ Reviews and the test process
▫ Review process
▫ Static analysis by tools
• Unit 4 - Test design techniques:
▫ Identifying test conditions and designing test cases
▫ Categories of test design techniques
▫ Specification-based or black-box techniques
▫ Structure-based or white-box techniques
▫ Experience-based techniques
Syllabus
• Unit 5 - Test management:
▫ Test organization
▫ Test plans
▫ Estimates and strategies
▫ Test progress monitoring and control
▫ Configuration management
▫ Risk and testing
▫ Incident management
• Unit 6 - Tool support for testing:
▫ Types of test tool
▫ Effective use of tools
▫ Potential benefits and risks
▫ Introducing a tool into an organization
Books
• Software Testing Foundations, 2nd Edition by Hans Schaefer, Andreas Spillner, Tilo Linz, Shroff Publishers and Distributors.
• Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black.
Unit 1
• Fundamentals of testing:
▫ Necessity of testing
▫ What is it
▫ Testing principles
▫ Fundamental test process
▫ The psychology of testing
Books
• Software Testing Foundations, 2nd Edition by Hans Schaefer, Andreas Spillner, Tilo Linz, Shroff Publishers and Distributors. (Chapters 1, 2)
• Foundations of Software Testing by Dorothy Graham, Erik van Veenendaal, Isabel Evans, Rex Black. (Chapter 1)
Software Testing
• Testing is the process of evaluating a system or its component(s) with the intent of determining whether it satisfies the specified requirements.
• Testing is executing a system in order to identify any gaps, errors, or missing requirements contrary to the actual requirements.
Software Testing
• Testing is a process aimed at planning, preparing, and evaluating a program and the program's results in order to
• execute a program with the aim of detecting errors, faults, and failures,
• execute a program with the aim of determining its quality,
• execute a program with the aim of increasing trust in the program,
• analyze the program or the documents in order to prevent failures.
Common Terms
• Errors:
▫ Humans make errors in their thoughts, actions, and in the
products that might result from their actions.
▫ A human action that produces an incorrect result.
• Defect (bug, fault):
▫ A flaw in a component or system that can cause the component or system to fail to perform its required function is called a defect, bug, or fault.
Common Terms
• Failure: occurs due to a fault in the software.
▫ A defect, if encountered during execution, may cause a failure of the component or system.
▫ When the software code has been built, it is executed; any defects may then cause the system to fail to do what it should do, which causes a failure.
▫ A failure is a non-fulfillment of, or deviation from, the component's or system's expected result, behavior, service, or delivery.
▫ A fault may be hidden by one or more other faults in different parts of the application (defect masking); the failure then only shows up once those other faults are removed.
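• To make the distinction concrete, here is a minimal, hypothetical Python sketch (not from the slides): the programmer's error introduces a defect in the code, and executing the defective code produces a failure.

# Hypothetical example: an error (mistake) leads to a defect, which causes a failure.

def is_adult(age):
    # Defect: the (assumed) specification says 18 should count as adult,
    # but the programmer's error ('>' instead of '>=') excludes it.
    return age > 18

# Executing the defective code with age = 18 produces a failure:
# the observed result deviates from the expected behavior.
expected = True
actual = is_adult(18)
print("expected:", expected, "actual:", actual)  # prints: expected: True actual: False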
Purpose of Testing
• Executing a program in order to find failures.
• Executing a program in order to measure quality.
• Executing a program in order to provide confidence.
• Analyzing a program or its documentation in order to prevent defects.
Reasons of defects
• The software does not do something that the specification
says it should do.
• The software does something that the specification says it
should not do.
• The software does something that the specification does not
mention.
• The software is difficult to understand, hard to use, slow …
Testing vs. Debugging
• The localization and correction of defects are the job of the software developer and are called debugging.
• Repairing a defect increases the quality of the product, provided no new defects are introduced.
• Debugging is the task of localizing and correcting faults.
• The goal of (more or less systematic) testing is the detection of failures (which indicate the presence of defects).
Causes of Software Failure
• Human factor
▫ It is because human beings develop software.
▫ Human beings are not perfect.
▫ They are prone to make mistakes.
▫ As human beings develop software, it would be foolish to
expect the software to be perfect and without any defects
in it!
• Communication Failure:
▫ Miscommunication, lack of communication, or erroneous communication during software development.
▫ The requirements may be incomplete.
▫ This could lead to a situation where the programmers have to deal with problems that are not clearly understood, thus leading to errors.
▫ Another communication problem may arise when a programmer tries to modify code developed by another programmer.
• Unrealistic Development Timeframe:
▫ More often than not, software is developed under tight release schedules, with limited or insufficient resources and unrealistic project deadlines.
▫ So it is probable that compromises are made in requirements or design to meet delivery schedules.
▫ Sometimes the programmers are not given enough time to design, develop, or test their code before handing it over to the testing team.
▫ Late design changes can require last-minute code changes, which are likely to introduce errors.
• Poor Design Logic:
▫ Sometimes the software is so complicated that it requires some level of R&D and brainstorming to reach a reliable solution.
▫ Lack of patience and an urge to complete it as quickly as possible may lead to errors.
▫ Misapplication of technology (components, products, techniques), the temptation to use the easiest way to implement a solution, and a lack of proper understanding of technical feasibility before designing the architecture can all invite errors.
▫ Such errors may also result from excessive pressure and time constraints.
• Poor Coding Practices:
▫ Sometimes errors slip into the code simply due to bad coding.
▫ Bad coding practices such as inefficient or missing error/exception handling and lack of proper validations (data types, field ranges, memory overflows, etc.) may lead to the introduction of errors in the code.
▫ In addition, some programmers might be working with poor tools, faulty compilers, debuggers, etc., making it easy to introduce errors and very difficult to debug them.
• Lack of Version Control:
▫ Even if a version control system is in place, errors might still slip into the final builds if the programmers fail to make sure that the most recent version of each module is linked when a new version is being built to be tested.
• Buggy Third-party Tools:
▫ Quite often during software development we require many third-party tools, which in turn are software and may contain some bugs in them.
▫ These tools could be tools that aid in the programming, e.g., shared DLLs, compilers, a map navigation API, etc.
▫ A bug in such tools may in turn cause bugs in the software that is being developed.
• Lack of Skilled Testing:
▫ Poor testing does take place across organizations.
▫ There can be shortcomings in the testing process:
 Lack of seriousness about testing,
 Scarcity of skilled testers,
 Testing activity conducted without much importance given to it, etc.
• Last-Minute Changes:
▫ Changes made to requirements, infrastructure, tools, or platform can be dangerous, especially if they are made at the eleventh hour of a project release.
▫ Actions like database migration or making your software compatible across a variety of OSes/browsers can be complex, and if done in a hurry due to a last-minute change in the requirements, they may cause errors in the application.
▫ These kinds of late changes may result in last-minute code changes, which are likely to introduce errors.
When do defects arise?
What is the cost of defects?
Testing Terms
• Test objective or test type: A reason or purpose for designing and executing a test.
• Test technique: named on the basis of the specification or the execution, e.g., boundary value analysis or business-process-based testing (a small boundary value sketch follows this slide).
• Test object: The component or system to be tested (e.g., a GUI-based or DB test).
• Testing level: A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project. Examples of test levels are component test, integration test, system test, and acceptance test.
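• As a hedged illustration of a specification-based technique, the Python sketch below applies boundary value analysis to a hypothetical "valid quantity is 1 to 100" rule; the function, the range, and the use of pytest are assumptions, not part of the slides.

# Minimal boundary value analysis sketch (assumes pytest is installed).
import pytest

def accept_quantity(qty):
    # Hypothetical rule: an order quantity is valid only in the range 1..100.
    return 1 <= qty <= 100

# Boundary value analysis picks values at and just beyond each boundary.
@pytest.mark.parametrize("qty, expected", [
    (0, False),    # just below the lower boundary
    (1, True),     # lower boundary
    (2, True),     # just above the lower boundary
    (99, True),    # just below the upper boundary
    (100, True),   # upper boundary
    (101, False),  # just above the upper boundary
])
def test_quantity_boundaries(qty, expected):
    assert accept_quantity(qty) == expected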
Testing Terms
• Test person: named after the person or subgroup executing the test (developer test, user acceptance test).
• Test case: A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
• Test scenario: Test cases can be combined to create test scenarios.
Testing Terms
• Test Plan:
▫ A document describing the scope and approach of the testing.
▫ It identifies, among other things, the test items, the features to be tested, the testing tasks, who will do each task, the test environment, and the test design and test measurement techniques to be used.
▫ It is a record of the test planning process.
7 Principles of Testing
Fundamental principles in testing/General principles
of testing
1. Testing shows presence of defects
2. Exhaustive testing is impossible
3. Early Testing
4. Defect Clustering
5. Pesticide Paradox
6. Testing is context dependent
7. Absence of errors fallacy
• Principle 1: Testing shows the presence of defects,
not their absence.
• Testing can show that defects are present, but cannot
prove that there are no defects.
• Testing reduces the probability of undiscovered defects
remaining in the software but, even if no defects are
found, it is not a proof of correctness.
• But a failure to find bugs does not mean that there are no defects in the software.
• Tests must be designed to find as many bugs as possible.
• Since it is assumed that every product initially has bugs, a test that identifies bugs is always better than one that finds none.
• Principle 2: Exhaustive testing is not possible.
• An exhaustive test, in which all possible values for all inputs and their combinations are run, combined with taking into account all different preconditions, is impossible.
• Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases.
• Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
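• A rough back-of-the-envelope calculation (hypothetical numbers, not from the slides) shows why: even a function with only two 32-bit integer inputs has far more input combinations than could ever be executed.

# Why exhaustive testing is infeasible: a rough, hypothetical calculation.
values_per_int = 2 ** 32            # possible values of one 32-bit integer input
combinations = values_per_int ** 2  # all combinations of two such inputs

# Even at one million test executions per second, running all of them
# would take hundreds of thousands of years.
seconds = combinations / 1_000_000
years = seconds / (60 * 60 * 24 * 365)
print(f"{combinations:.2e} combinations ~ {years:,.0f} years at 1e6 tests/second")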
• Principle 3: Early Testing
• Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
• Starting early contributes to finding defects early.
• Early testing helps detect errors at an early stage of the development process, which simplifies error correction (and reduces the cost of that work).
• Errors identified later in the process tend to be more expensive to correct.
• Principle 4: Defect Clustering
• Defects tend to cluster together.
• There is no equal distribution of errors within one test object.
• If many defects are detected in one place, there are usually more defects nearby.
• A small number of modules contain most of the defects discovered during pre-release testing, or show the most operational failures.
• During testing, one must react flexibly to this principle.
• Software problems tend to cluster around narrow areas or functions.
• By identifying and focusing on these clusters, testers can test the product efficiently.
• Principle 5: The Pesticide Paradox
• If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs.
• To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
• Use a variety of tests and techniques to expose a range of defects across different areas of the product.
• Avoid using the same set of tests over and over on the same product or application, because this will reduce the range of bugs you will find.
• Principle 6: Testing is context dependent
• The same tests should not be applied across the board, because different software products have varying requirements, functions, and purposes.
• No two systems are the same, and therefore they cannot be tested the same way.
• Testing intensity and the definition of exit criteria must be defined individually for each system, depending on its testing context.
• For example, a website should be tested differently than a company intranet site.
• Likewise, safety-critical software is tested differently from an e-commerce site.
• Testing must be adapted to the risks inherent in the use and environment of the application.
• Principle 7: Absence of errors fallacy
• Fallacy: a mistaken belief.
• This is the fallacy of assuming that no failures means a useful system.
• Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
• Finding failures and repairing defects does not guarantee that the system as a whole meets user expectations and needs.
• Early involvement of users in the development process and the use of prototypes are preventive measures intended to avoid such problems.
Psychology of Testing
• The goal of testing software is uncovering discrepancies between the software and its specifications.
• Testing is an extremely creative and intellectually challenging task.
• Can a developer test his own program?
▫ The weakness is that developers who have to test their own program parts tend to be optimistic.
▫ Developers are often blind to their own errors.
▫ The advantage of developer testing is good knowledge of one's own test object.
• An independent testing team tends to increase the quality and comprehensiveness of the tests (no bias).
• An independent tester does not share the developer's assumptions and possible misunderstandings regarding the product/software.
• A tester may need to acquire knowledge about the system, but brings knowledge of testing.
• A tester reports failures and discrepancies with the required diplomacy and tact.
• A failure that occurred in the test environment may not occur in the developer's environment.
• Documentation of the tester's environment is therefore important.
• Mutual knowledge of each other's tasks encourages cooperation between developer and tester.
Fundamental Test Process
Test Planning and Control
• Execution of such a substantial task as testing must not take place without a plan.
• Planning of the test process starts at the beginning of the software development project.
Planning of the Resources:
• The mission and objectives of testing must be defined and
agreed upon.
• Necessary resources for the test process should be
estimated.
• Which employees are needed, for the execution of which
tasks and when?
• How much time is needed, and which equipment and
utilities must be available?
• These questions and many more must be answered during
planning and the result should be documented in the test
plan.
• Necessary training of the employees should be provided.
• Test control:
▫ Is the monitoring of the test activities,
▫ Comparing what actually happens during the project with the plan,
▫ Reporting the status of deviations from the plan,
▫ Taking any actions needed to meet the mission and objectives in the new situation.
• The test plan must be continuously updated, taking into account the feedback from monitoring and control.
• Progress tracking can be based on appropriate reporting from the employees, as well as on data automatically generated by tools.
Determination of the test strategy
• The main task of planning is to determine the test strategy
• Since an exhaustive test is not possible, priorities must be
set based on risk assessment.
• The test activities must be distributed to the individual
subsystems, depending on the expected risk and the severity
of failure effects.
• Critical subsystems must get greater attention, thus being
tested more intensively.
• If no negative effects are expected in case of a failure, testing
could even be skipped on some parts.
• However, this decision must be made with great care.
• The goal of the test strategy is the optimal distribution of
the tests to the "right" parts of the software system.
• Define test intensity for subsystems and individual
aspects
• Prioritization of the tests
• Tool support
Test planning has the following major tasks, which help us
build a test plan:
• Determine the scope and risks and identify the objectives of testing.
• Determine the test approach (techniques, test items, coverage).
• Implement the test policy and/or the test strategy.
• Determine the required test resources.
• Schedule test analysis and design tasks, test implementation, execution, and evaluation.
• Determine the exit criteria.
Test control has the following major tasks:
• Measure and analyze the results of reviews and testing.
• Monitor and document progress, test coverage, and exit criteria.
• Provide information on testing (also use the information we have to analyze the testing itself).
• Initiate corrective actions.
• Make decisions.
Test analysis and design
• Test analysis and design is the activity where general testing objectives are transformed into tangible test conditions and test designs.
Test analysis and design
Test analysis and design has the following major tasks:
1.Review the test basis (such as the product risk analysis,
requirements, architecture, design specifications, and
interfaces)
2. Identify test conditions based on analysis of test items,
their specifications, and what we know about their behavior
and structure.
3. Design the tests using techniques to help select
representative tests that relate to particular aspects of the
software which carry risks or which are of particular interest,
based on the test conditions and going into more detail.
Test analysis and design
• Logical and concrete test cases
• The specification of the test cases takes place in two steps.
▫ Logical test cases have to be defined first.
▫ After that, the logical test cases can be translated into concrete, physical test cases, meaning the actual inputs are chosen (concrete test cases).
▫ The development of physical test cases, however, is part of the next phase, test implementation.
• The test basis guides the selection of logical test cases with each test technique.
• The test cases can be determined from the test object's specification (black-box test design technique) or be created by analyzing the source code (white-box test design technique).
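• A minimal, hypothetical sketch of the two steps: the logical test case only names the condition to be covered, while the concrete (physical) test case fixes actual input data and expected values. The withdrawal example and its numbers are assumptions for illustration.

# Hypothetical illustration of a logical vs. a concrete (physical) test case.

# Logical test case: only the condition to be covered is named, no actual values yet.
logical_test_case = {
    "id": "TC-L1",
    "condition": "withdrawal amount exceeds the account balance",
    "expected_behavior": "withdrawal is rejected",
}

# Concrete test case: actual input data and expected results are chosen.
concrete_test_case = {
    "id": "TC-C1",
    "derived_from": "TC-L1",
    "precondition": {"balance": 100.00},
    "input": {"withdraw": 150.00},
    "expected_result": "rejected; balance unchanged at 100.00",
}

print(logical_test_case["condition"], "->", concrete_test_case["input"])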
Test analysis and design
• A test oracle is a mechanism for predicting the expected
results. The specification can serve as a test oracle.
• Here are two possibilities:
▫ The tester derives the expected data from the input data
by calculation or analysis, based on the specification of
the test object.
▫ If functions that do the reverse action are available, they
can be run after the test and then the result is verified
against the old input. An example of this scenario is
encryption and decryption.
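• A minimal sketch of the second possibility, using a reverse function as the test oracle (runnable with pytest); base64 encoding/decoding stands in for the encryption/decryption pair and is only an assumed example.

# 'Reverse function' oracle sketch: decode(encode(x)) should give back x.
# base64 is used here only as a stand-in for an encrypt/decrypt pair.
import base64

def test_round_trip_restores_original():
    original = b"confidential message"
    encoded = base64.b64encode(original)   # function under test
    decoded = base64.b64decode(encoded)    # reverse function used as the oracle
    assert decoded == original             # result verified against the old input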
Test analysis and design
• Test cases for expected and unexpected inputs
• Test cases can be differentiated by two criteria:
▫ First are test cases for examining the specified behavior,
output, and reaction. Included here are test cases that
examine the specified handling of exception and error
cases.
▫ But it is often difficult to create the necessary conditions
for the execution of these test cases (e.g., capacity
overload of a network connection).
▫ Next are test cases for examining the reaction of test
objects to invalid and unexpected inputs or conditions,
which have no specified exception handling.
Test analysis and design
4. Evaluate testability of the requirements and system.
• The requirements may be written in a way that allows a
tester to design tests; for example, if the performance of
the software is important, that should be specified in a
testable way.
• If the requirements just say 'the software needs to respond quickly enough', that is not testable, because 'quick enough' may mean different things to different people.
• A more testable requirement would be 'the software needs to respond within 5 seconds with 20 people logged on'.
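• As a hedged sketch, such a testable requirement can be checked directly in an automated test; the handle_request stand-in and the way the 20 users would be simulated are assumptions, not part of the slides.

# Hypothetical sketch: checking "respond within 5 seconds" automatically.
import time

def handle_request():
    # Stand-in for the operation under test (e.g., loading a report page).
    # A real test would call the actual system; the 20 logged-on users would be
    # simulated separately, e.g., by a load-generation tool.
    time.sleep(0.1)

def test_response_time_within_budget():
    start = time.perf_counter()
    handle_request()
    elapsed = time.perf_counter() - start
    assert elapsed <= 5.0, f"response took {elapsed:.2f}s, budget is 5s"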
Test analysis and design
• The testability of the system depends on aspects such as
whether it is possible to set up the system in an
environment that matches the operational environment
and whether all the ways the system can be configured or
used can be understood and tested.
• For example, if we test a website, it may not be possible to
identify and recreate all the configurations of hardware,
operating system, browser, connection, firewall and other
factors that the website might encounter.
5. Design the test environment set-up and identify any
required infrastructure and tools.
Test implementation and execution
• Test implementation and execution are the activities
where:
▫ Test conditions and logical test cases are transformed
into concrete test cases,
▫ All the details of the environment are set up to support
the test execution activity,
▫ The tests are executed and logged.
• When the testing process has advanced and more is known
about the technical implementation, the concrete, physical
test cases will be developed from the logical ones.
• These test cases can then be used without further
modifications or additions for executing the test.
Test implementation and execution
• Test case execution
• In addition to defining test cases one must describe how
the tests will be executed.
• The priority of the test cases , decided during test planning,
must be taken into account.
• Only if the test developer executes the tests himself may an additional, detailed description be unnecessary.
• The test cases should also be grouped into test suites for efficient test execution and an easier overview.
Test implementation and execution
• Test harness
• In many cases specific test harnesses, drivers, simulators,
etc., must be programmed, built, acquired, or set up as part
of the test implementation.
• Because failures may also be caused by faults in the test
harness, the correct functioning of the test environment
must be checked.
• When all preparatory tasks for the test have been
accomplished, test execution can start immediately after
programming and delivery of the subsystems to testing.
• Test execution may be done manually or with tools using
the prepared sequences and scenarios.
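• A minimal, hypothetical sketch of a test harness: a driver calls the test object, and a stub stands in for a dependency that is not yet available (here, an imagined payment service).

# Hypothetical test harness sketch: a stub replaces a real dependency so the
# component under test can be exercised in isolation by a test driver.

class PaymentServiceStub:
    # Stands in for the real payment service, which may not exist yet.
    def charge(self, amount):
        return "OK"  # always succeeds; a richer stub could also simulate failures

def place_order(amount, payment_service):
    # Simplified component under test.
    if amount <= 0:
        return "rejected"
    return "confirmed" if payment_service.charge(amount) == "OK" else "failed"

def test_driver_places_valid_order():
    # The driver sets up the stub, calls the test object, and checks the result.
    assert place_order(25.0, PaymentServiceStub()) == "confirmed"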
Test implementation and execution
• Checking for completeness
• First, the parts to be tested are checked for completeness.
• The test object is installed in the available test environment
and tested for its ability to start and do the main
processing.
• Examination of the main functions
• It is recommended to start test execution with the
examination of the test object's main functionality.
• If failures or deviations from the expected result show up
at this time, it is foolish to continue testing.
• The failures or deviations should be corrected first.
• After passing this test, all additional items are tested.
• Such a sequence should be defined in the test strategy.
Test implementation and execution
• Tests without a protocol are of no value
• The test execution must be exactly and completely logged.
• This includes logging which testing activities have been
carried out, i.e., logging every test case run, logging its
results (success or failure) for later analysis.
• The test execution must be comprehensible to people not
directly involved, for example the customer, on the basis of
these test logs.
• It must be provable that the planned test strategy was
actually executed.
• The test log must document who tested which parts, when,
how intensively, and with which results.
Test implementation and execution
• Reproducibility is important
• Besides the test object, quite a number of documents and
information belong to each test execution: test
environment, input data, test logs, etc.
• The information belonging to a test case or test run must
be maintained in such a way that it is possible to easily
repeat the test later with the same input data and
conditions.
• In some cases it must also be possible to audit the test.
Test implementation and execution
• Failure found?
• If during test execution a difference shows up between
expected and actual results, it must be decided, when
evaluating the test logs, if it really is a failure.
• Nothing is more detrimental to the credibility of a tester
than a reported assumed failure whose cause actually is a
test problem.
• But the fear of this possibility should not result in potential
failures not being reported, i.e., the testers starting to self-
censor their results.
• In addition to reporting discrepancies, test coverage should
be measured and if necessary the use of time should be
logged.
Test implementation and execution
• Correction may lead to new faults
• Invoking incident management: based on the severity of a
failure the priority of fault correction must be decided.
• After the correction, it must be examined whether the fault
has really been corrected and that no new faults have been
introduced.
• New testing activities result from the action taken for each
incident, e.g., reexecution of a test that previously failed in
order to confirm a defect fix, execution of a corrected test,
and/or regression tests.
• If necessary new test cases must be specified to examine
the modified or new source code.
Test implementation and execution
• It would be convenient to correct faults and retest each correction individually, in order to avoid unwanted interactions between the changes.
• In practice, however, several faults are usually corrected as a group, and the program is then resubmitted to testing with a new version state.
Test implementation and execution
• The most important test cases first
• This is called risk-based testing.
• Giving priority has the advantage that important test cases
are executed first, and thus important problems are found
and corrected early.
• An equal distribution of the limited test resources over all test objects of the project is not reasonable.
• With an equal distribution, critical and uncritical program parts are tested with the same intensity.
• Critical parts would then be tested insufficiently, and resources would be wasted on uncritical parts for no reason.
Test implementation and execution
• Implementation:
1. Develop and prioritize our test cases using the chosen techniques, and create test data for those tests.
▫ Write instructions for carrying out the tests (test procedures).
2. Create test suites from the test cases for efficient test execution. A test suite is a logical collection of test cases which naturally work together. Test suites often share data and a common high-level set of objectives (a small sketch follows this list).
3. Implement and verify the environment. We make sure the test environment has been set up correctly, possibly even running specific tests on it.
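• Below is a minimal, hypothetical pytest-style sketch of such a test suite; the login example, the shared data, and the class name are assumptions for illustration.

# Hypothetical sketch: a test suite as a logical collection of related test cases.
# In pytest, a class (or a module) can serve as the grouping; the cases share
# common data and a common high-level objective (here: login behavior).

VALID_USER = {"name": "alice", "password": "s3cret"}  # shared test data

def login(name, password):
    # Stand-in for the function under test.
    return name == VALID_USER["name"] and password == VALID_USER["password"]

class TestLoginSuite:
    def test_valid_credentials_are_accepted(self):
        assert login("alice", "s3cret") is True

    def test_wrong_password_is_rejected(self):
        assert login("alice", "wrong") is False

    def test_unknown_user_is_rejected(self):
        assert login("bob", "s3cret") is False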
Test implementation and execution
• Execution:
1. Execute the test suites and individual test cases, following
our test procedures.
2. Log the outcome of test execution and record the
identities and versions of the software under test, test
tools.
3. Compare actual results (what happened when we ran the
tests) with expected results (what we anticipated would
happen).
4. Where there are differences between actual and expected
results, report discrepancies as incidents.
Test implementation and execution
5. Analyze results to gather further details about the defect,
reporting additional information on the problem, identify
the causes of the defect, and differentiate between
problems in the software and other products under test
and any defects in test data, in test documents, or
mistakes in the way we executed the test. (log the latter in
order to improve the testing itself. )
Test implementation and execution
6. Repeat test activities as a result of action taken for each
discrepancy.
▫ Re-execute tests that previously failed in order to
confirm a fix (confirmation testing or re-testing).
▫ Execute corrected tests and suites if there were defects
in our tests.
▫ Test corrected software again to ensure that the defect
was indeed fixed correctly (confirmation test) and that
the programmers did not introduce defects in
unchanged areas of the software and that fixing a defect
did not uncover other defects (regression testing).
Evaluating exit criteria and reporting
• End of test?
• This is the activity where the test object is assessed against
set objectives, the test exit criteria specified earlier.
• This may result in normal termination of the tests if all
criteria are met, or it may be decided that additional test
cases should be run, or that the criteria had an
unreasonably high level.
• Also called test completion criteria.
• If at least one test exit criterion is not fulfilled after
executing all tests, further tests must be executed.
• Attention should be paid to ensure that the new test cases
lead to fulfilling the respective exit criteria.
Evaluating exit criteria and reporting
• Is further effort justifiable?
• A closer analysis of the problem can also show that the
necessary effort to fulfill the exit criteria is not appropriate.
• In that situation, further tests are then canceled.
• (consider the associated risk.)
Evaluating exit criteria and reporting
• Dead code
• A further case of non-fulfillment of test exit criteria may occur if the specified criterion is impossible to fulfill in the concrete case (for example, 100% statement coverage cannot be reached if the test object contains unreachable "dead" code).
• Even this possibility must be considered in order to avoid further senseless tests in relation to the criterion.
• If further tests are planned, the test process must be
resumed, and it must be decided at which point the test
process will be reentered.
Evaluating exit criteria and reporting
• Further criteria for the determination of the test's
end
• Another possible criterion is the failure rate, or the defect
detection percentage (DDP).
• If the failure rate falls below a given threshold (e.g., less
than one failure per testing hour), it will be assumed that
more testing is not justified and the test can be ended.
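• As a hedged aside, the defect detection percentage is commonly computed as the share of defects found by testing out of all defects known so far (including those found later, e.g., in operation); the numbers below are made up for illustration.

# Defect detection percentage (DDP) with made-up numbers.
# DDP = defects found by testing / (defects found by testing + defects found later)
found_in_testing = 90      # hypothetical count of defects found during testing
found_after_release = 10   # hypothetical count of defects that escaped to operation

ddp = 100.0 * found_in_testing / (found_in_testing + found_after_release)
print(f"DDP = {ddp:.0f}%")  # prints: DDP = 90%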
Evaluating exit criteria and reporting
• Consider several test cycles
• The failures found during the test must be repaired
according to their severity, after which a new test becomes
necessary.
• Cycles often result if further failures occur during the test
of modifications.
• These failures need to be isolated and corrected and a new
test cycle is necessary.
• Not planning such correction cycles, by assuming that no
failures will occur while testing, is unrealistic.
• Because it can be assumed that testing finds failures,
additional faults must be removed and tested in a further
test cycle.
Evaluating exit criteria and reporting
• Exit criteria in practice: Time and costs
• In practice, the end of a test is often defined by factors that
have no direct connection to the test: time and costs.
• If these factors lead to stopping the test activities, it is
because not enough resources were provided in the project
plan or the effort for an adequate test was underestimated.
• Successful testing saves costs
• Even if more resources than planned were used in testing,
it nevertheless results in savings due to elimination of
faults in the software.
• Faults delivered in the product mostly cause higher costs when found during operation rather than during testing.
Evaluating exit criteria and reporting
• Test summary report
• When the test criteria are fulfilled or their nonfulfillment is
clarified, a test summary report should be written for the
stakeholders. (message to the project manager or formal
report).

• Evaluating exit criteria has the following major tasks:
1. Check test logs against the exit criteria specified in test planning.
2. Assess if more tests are needed or if the exit criteria specified should be changed.
3. Write a test summary report for stakeholders.
Test closure activities
• These activities should be executed during this final phase of the test process, but they are often left out.
• The experience gathered during the test work should be
analyzed and made available for further projects.
• Of interest are deviations between planning and execution
for the different activities.
• For example, the following data should be recorded:
▫ When was the software system released? When was the
test finished or terminated?
▫ When was a milestone reached or a maintenance release
completed?
Test closure activities
• Important information for evaluation can be extracted by
asking the following questions:
▫ Which planned results were achieved, if any?
▫ Which unexpected events happened (their causes, and how they were circumvented)?
▫ Are there open change requests? Why were they not
implemented?
▫ How good was user acceptance after deploying the
system?
Test closure activities
• The evaluation of the test process, i.e., a critical evaluation of the executed tasks in the test process, taking into account the spent resources and the achieved results, will probably show possibilities for improvement.
• If these findings are used in subsequent projects, continuous process improvement is achieved.
Test closure activities
• A further finishing activity is the "conservation" of the
testware for the future. (test cases, test logs, test
infrastructure, tools, etc.)
• Software systems are used for a long time.
• During this time, failures not found during testing will
occur.
• Additionally, customers require changes.
• Both of these occurrences lead to new versions of the
program, and the changed program must be tested.
• A major part of this test effort during maintenance can be
saved if the testware is still available.
• The testware should be delivered to the organization
responsible for maintenance.
Test closure activities
• Test closure activities include the following major tasks:
1. Check which planned deliverables we actually delivered and ensure all incident reports have been resolved through defect repair or deferral.
2. For deferred defects, in other words those that remain open, we may request a change in a future release.
3. Document the acceptance or rejection of the software system.
Test closure activities
4. Finalize and archive testware, such as scripts, the test
environment, and any other test infrastructure, for later
reuse. It is important to reuse whatever we can of
testware; we will inevitably carry out maintenance
testing, and it saves time and effort if our testware can be
pulled out from a library of existing tests.
5. Hand over testware to the maintenance organization who
will support the software and make any bug fixes or
maintenance changes, for use in confirmation testing and
regression testing.
6. Evaluate how the testing went and analyze lessons
learned for future releases and projects.
Software Quality
• Testing of software contributes to improvement of software
quality.
• This is done through identifying defects and their
subsequent correction by debugging.
• But testing is also measurement of software quality.
• If the test cases are a reasonable sample of software use,
quality experienced by the user should not be too different
from quality experienced during testing.
• But software quality entails more than just the elimination
of failures that occurred during the testing.
Software Quality
• According to the ISO/IEC standard 9126-1 [ISO 9126], the following characteristics belong to software quality: functionality, reliability, usability, efficiency, maintainability, and portability.
• It should be defined in advance which quality level the test object is supposed to show for each characteristic.
• The achievement of these requirements must then be examined with suitable tests.
Software Quality
• The characteristics and their sub-characteristics are, respectively:
(These should be reproduced as listed; the following slides explain each characteristic.)
▫ functionality, which consists of five sub-characteristics: suitability,
accuracy, security, interoperability and compliance;
▫ reliability, which is defined further into the sub-characteristics
maturity (robustness), fault-tolerance, recoverability and compliance;
▫ usability, which is divided into the sub-characteristics
understandability, learnability, operability, attractiveness and
compliance;
▫ efficiency, which is divided into time behavior (performance), resource
utilization and compliance;
▫ maintainability, which consists of five sub-characteristics:
analyzability, changeability, stability, testability and compliance;
▫ portability, which also consists of five sub-characteristics:
adaptability, installability, co-existence, replaceability and compliance.
Software Quality
• Functionality
• Functionality contains all characteristics which describe the required capabilities of the system.
• The capabilities are usually described by a specific input/output behavior and/or an appropriate reaction to an input.
• The goal of the test is to prove that every single required capability in the system was implemented in the specified way.
Software Quality
• The functionality characteristic contains the sub-characteristics adequacy, interoperability, correctness, and security.
• An appropriate solution is achieved if all required capabilities exist in the system and they work adequately.
• Software systems must interoperate with other systems, or at least with the operating system.
• Interoperability describes the cooperation between the system to be tested and the previously existing systems.
Software Quality
• Security: One area of functionality is the fulfillment of application-specific standards, agreements, or legal requirements and similar regulations.
• Many applications give high importance to the aspects of access security and data security.
• It must be proven that unauthorized access to applications and data, whether accidental or intentional, will be prevented.
Software Quality
• Reliability
• Reliability describes the ability of a system to keep functioning under specific use over a specific period.
• It is split into maturity, fault tolerance, and recoverability.
• Maturity means how often a failure of the software occurs as a result of defects in the software.
Software Quality
• Fault tolerance is the capability of the software product to maintain a specified level of performance in cases of software faults or of infringement of its specified interface.
Software Quality
• Recoverability is the capability of the software product to reestablish a specified level of performance and recover the data directly affected in the case of a failure.
• Following a failure, a software product will sometimes be "down" for a certain period of time, the length of which is assessed by its recoverability.
• The ease of recovery and the work required should also be assessed.
Software Quality
• Usability
• Usability is very important for interactive software
systems.
• Users will not accept a system that is hard to use.
• How significant is the effort required to use the software for the different user groups?
• Understandability, ease of learning, operability, and attractiveness, as well as compliance to standards, conventions, style guides, or user interface regulations, are partial aspects of usability (examined in non-functional tests).
Software Quality
• Efficiency
• The test for efficiency measures the required time and consumption of resources for the fulfillment of tasks.
• Resources may include other software products, the software and hardware configuration of the system, and materials (e.g., print paper, network, and storage).
Software Quality
• Changeability and portability
• Software systems are often used over a long period on varied platforms (operating system and hardware).
• Therefore, the last two quality criteria are very important: maintainability and portability.
• Sub-characteristics of maintainability are analyzability, changeability, stability against side effects, testability, and compliance to standards.
Software Quality
• Maintainability and portability
• Adaptability, ease of installation, conformity, and interchangeability have to be considered for the portability of software systems.
• Many of the aspects of maintainability and portability can only be examined by static analysis.
Software Quality
• Portability
• Five sub-characteristics: adaptability, installability, co-
existence, replaceability and compliance.
• A software system cannot fulfill every quality characteristic
equally well.
• Sometimes it is possible that a fulfillment of one
characteristic results in a conflict with another one.
• For example, a highly efficient software system can become
hard to port, because the developers usually use special
characteristics (or features) of the chosen platform to
improve the efficiency, which in turn affects the portability
in a negative way.
Software Quality
• Prioritize quality characteristics
• Quality characteristics must therefore be prioritized.
• This definition also acts as a guide for the test to determine the examination's intensity for the different quality characteristics.
