
Testing Throughout the Software Life Cycle


Overview
 Software Development Models
 Distinction: testing - checking, verification - validation
 General “V Model”, identification of phase results
 Benefits of early tests; iterative tests
 Testing within the life cycle models

 Test Levels
 Component Testing (unit testing)
 Integration Testing
 System Testing
 Acceptance Testing

 Test Types: the Targets of Testing


 Functional Testing
 Non-functional Testing
 Structural Testing
 Testing changes – Confirmation and Regression Testing

 Maintenance Tests
 Testing once changes have been made

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 2 - 123


Software Development Models
Development and Testing

 Developers create their software and test it as they develop it.

 Testers test the code the developers create, so testing is dependent on Development.

 But testing should begin at the Requirements Stage and not wait until the code is created and passed on to Testing.

 Testing should review documents as they become available.

 Development and Testing should work in parallel – do not wait until the product is coded for Testing to see the product for the first time!

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 4 - 123


Testing Phases in the V Model

Checking

 An activity like measuring or examining one or several characteristics of a unit and comparing them with defined requirements, in order to determine whether conformity is achieved for every characteristic

 Comparison of a measured characteristic with a reference value

 Generic term for all analytic quality assurance measures, independent of method and artifact

 Generic term for testing (dynamic checking) and reviews

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 5 - 123


Testing Phases in the V Model

Activities in Software Development

Software Engineering measures fall into three groups:

 Constructive: Formal Methods, CASE, ...

 Organizational: Project Management, Configuration Management, Requirements Mgmt., Change Management, ...

 Analytic: Software Metrics, Checking (Formal Proof, Reviews, Tests), ...

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 6 - 123


Testing Phases in the V Model

Testing

 Every (sample) execution of the artifact under defined conditions, in order to check the observed results regarding certain desired properties (test evaluation)

 Testing comprises not only testing activities, but also planning, execution, and evaluation of tests (test management).

 Testing is dynamic checking.
 Testing comprises all activities ranging from planning up to evaluation of dynamic checking.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 7 - 123


Testing Phases in the V Model

Verification vs. Validation

Verification
 Comparison of the product of a development phase with its
specification (e.g. component implementation against its
specification).

Validation
 The applicability of a problem solution (a product or a system) is
checked by the user
 Are we developing the right software to solve the problem?

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 8 - 123


Testing Phases in the V Model

Testing: Meaning has Changed Over Time

(Timeline 1960–2000)

Early on: Testing helps the programmer demonstrate that his program works.

Later: Testing is the process that executes a program with the intention of finding failures.

Today: Testing means planning, designing, implementing, executing and maintaining tests and test environments. Testing is supposed to result in legitimate confidence in the software by minimizing the risk of failures (risk-based testing).

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 9 - 123


Testing Phases in the V Model

Prerequisites for Efficient Testing

 Guidance
 planning
 aims
 propagation of success stories

 Support
 method “know-how”
 guidelines
 tools
 resources (time and staff)

 Test Case Properties
 related to the requirements and the design
 reproducibility
 traceability

 Monitoring
 fault information system (defect tracking system)
 develop a fault classification system
 progress and effort monitoring
 reporting

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 10 - 123


Test Levels

 Component Testing (unit testing)

 Integration Testing

 System Testing

 Acceptance Testing

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 11 - 123


Testing Phases in the V Model

V Model:

Analysis Design Coding Testing Operation

Requirements Analysis Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

(logical) order of activities


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 12 - 123
Testing Phases in the V Model

V Model:

Analysis Design Coding Testing Operation

Requirements Analysis Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

Test is based on the specification


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 13 - 123
Testing Phases in the V Model

Synonymous Terms for Phases in the V Model

Term – Synonym

requirements analysis – customer requirements

functional design – functional system design, system requirements

technical design – technical system design

component design – component specification, unit specification

component coding – programming

component test – class test, unit test

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 14 - 123


Testing Phases in the V Model

Other Life Cycle Models (1)

 Besides the general V Model, there are a series of other life cycle
models. Here are two well known representatives:

 Waterfall model
 Problem: Greatly simplifies reality; it assumes a linear sequence of
phases. A modified, more realistic version of the waterfall model allows for
returns from a phase to the preceding phase (feedback loop). Called the
Modified Waterfall Model.
Requirements analysis → System & component design → Implementation and component test → Integration and system test → Installation and maintenance

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 15 - 123


Testing Phases in the V Model

Other Life Cycle Models (2)

 Incremental models
 Incremental models consider that the phases of software development generally are not passed through only once but several times.

 Each pass may produce intermediate results such as prototypes, models, or simulations.

 The actual procedure depends on the system to be built and the scope of the project.
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 16 - 123
Agile Development Models

 These methods are highly iterative.


 Extreme Programming
 SCRUM
 Others…

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 17 - 123


Testing Phases in the V Model

Test Process

 Acceptance test
 System requirements
 Practical suitability (customer’s view)

 System test
 Functionality
 Non-functional requirements (performance, reliability, portability...)

 Integration test
 Putting the components / modules together
 Interfaces

 Component test (unit test)
 Component specification
 Component implementation
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 18 - 123
Testing Phases in the V Model

Testing – Phases and Phase Deliverables

 Planning tests Test plan [TP]

 Designing tests Test design specification [TD]

 Identifying test cases Test case specification [TC]

 Implementing test cases Test scripts [TS]

 Carrying out tests Test protocol [TL]

 Analyzing test results Test report [TR]

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 24 - 123


Testing Phases in the V Model

Testing Activities in the Software Life Cycle (Outline)

Analysis Design Coding Testing Operation

Component Test

TP TD TC TS TL TR
Integration Test

TP TD TC TS TL TR

System Test

TP TD TC TS TL TR

Acceptance Test

TP TD TC TS TL TR
time

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 25 - 123


QA and Development Testing

QA testing

 QA testing means
 testing in accordance with quality assurance requirements
 specification-based testing

 Goal: Ensure that software released for testing complies with the
demands on software quality (i.e. functional as well as non-functional
requirements)

 Starts with defined (reproducible) test cases


 Supplemented by formal reviews
 Result: Test report

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 26 - 123


QA and Development Testing

Development Testing

 Goal: Ensure that software is basically functional

 The developer checks the code selectively during development and before handing the software over to QA testing (typically using a “debugger” tool)

 Generally, there are no defined test cases (this is really debugging)

 tests are not reproducible

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 27 - 123


QA and Development Testing

QA testing (specification-based):

Requirements Specification – Acceptance Test

Functional Design Specification – System Test

Technical Design Specification – Integration Test

Component Design Specification – Component Test

Development testing:

Component Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 28 - 123


Project Planning and Testing

Ideal Case

(Timeline: Design → Development + development test → preparation for QA testing → QA testing → Delivery of the software; the feature freeze marks the release of the software for QA testing.)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 29 - 123
Project Planning and Testing

The Much Encountered Reality

(Timeline as before, but design and development overrun their schedule; the feature freeze and release for QA testing slip, so the time left for preparing and performing QA testing before delivery becomes questionable.)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 30 - 123


Project Planning and Testing

Test-oriented Development Process

 Typically – independent of the maturity level of the development and test processes in use – the following holds:
 Short innovation cycles result in tight deadlines.
 Consequently, the time for systematic testing is insufficient.

 Wanted:
 Efficient means for ensuring the quality of the product,
 without consequences on the scheduled project deadlines
 without trade-offs for the systematic test

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 31 - 123


Project Planning and Testing

Prioritization
(Timeline: Design → Development + development testing → QA testing → Delivery; QA testing starts after the last change of the software.)

Mandatory tests (e.g. because of legal regulations)


 always conduct completely

Code sections burdened with risk (if tests are not mandatory)
 check completely if possible
(if applicable, check incrementally according to impact analysis)

Other tests  perform as many as possible

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 32 - 123


Project Planning and Testing

(Timeline: Design → Development + development testing → QA testing → Delivery.)

The period between the last change of the software and delivery – the time available for QA testing – is often VERY short, often close to zero.

 Give more importance to development testing! (without restrictions during QA testing)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 33 - 123


Test Exit Criteria

Release Conditions

 Test may be exited only if there are no known faults of a certain


severity (based upon the priorities of the requirements)

 Examples:
Exit testing only if
 all known faults of priority level “high” are corrected
 all known faults of priority level “high” and all known faults of priority
level “medium” are corrected
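
To make such a release condition operational, a team could encode it as a small check over the open fault list. The following Python sketch is illustrative only; the fault records and priority names are assumptions, not taken from the course material:

```python
# Hypothetical sketch: priority-based test exit check.
open_faults = [
    {"id": 101, "priority": "low"},
    {"id": 102, "priority": "medium"},
]

def may_exit_testing(faults, blocking_priorities=("high",)):
    """Exit testing only if no open fault has a blocking priority."""
    return not any(f["priority"] in blocking_priorities for f in faults)

print(may_exit_testing(open_faults))                         # True: no "high" fault open
print(may_exit_testing(open_faults, ("high", "medium")))     # False: a "medium" fault is open
```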

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 34 - 123


Test Exit Criteria

Additional Test Exit Criteria


(to be Used if Priority-based Release Conditions are Met)

 Examples
 A predefined test coverage has been reached.
 The number of remaining failures has fallen below a predefined
number (Note: this number can only be estimated!)
 Only a certain number of failures has been found per (test) time unit.
 The delivery date has been reached(!)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 35 - 123


Test Exit Criteria

Test Strategies:
 The Change Request Management System gives hints for the
decision whether further testing makes sense or not
 Prerequisite: (project) global definition of configurations -
which changes are permitted at a certain point in time, and which are
not?

(Chart: number of detected faults over project time for two components, across development, QA testing and change requests. The code freeze marks the start of QA testing; afterwards only permitted changes are allowed. 1. No more known faults of high severity; 2. recommendation: testing ends here.)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 36 - 123


Testing in the V Model

Summary of V Model Goals

 Better understanding of the requirements


... Test cases as requirements definition

 Better communication with the customer and within the team

 Better understanding of the design, better design


( “design for testability”)

 Results in software with higher quality

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 37 - 123


Component / Module Test
Component / Module Test

Placement within the V Model

Requirements Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 39 - 123


Testing Phases in the V Model

Component Test

 Testing aspects
 Functionality of the component / module
 Component structure
 Exceptions
 Performance or other
non-functional requirements

 Fault classes
 Missing paths
 Wrong paths
 Computation faults

 Prerequisites
 Component specification exists
 Detailed specification and source code are available
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 40 - 123
Component / Module Test

Definition: Component

 Software entity
 having a well-defined interface for provided services,
with a specification of the required services. Only explicit
dependencies are allowed.

 The general term “component” comprises components


(structured development), units (e.g. compiling units),
or classes (OO development)

 Functionality is requested only via the defined interface (signature


of the method, not a GUI interface).

 Several instances are possible

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 41 - 123


Component / Module Test

Definition: Component / Module

 Software entity
 having a well-defined interface (typically functional) and
 an internal state (defined by the data of the component)

 Data access is only allowed through the corresponding interface


functions (data encapsulation)

 Implementation details are hidden (unknown) to the user/caller


(information hiding)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 42 - 123


Component / Module Test

Test Drivers, Surrogates, and Test Harnesses

 Test driver (main test program)


- provides the environment for the test of the SuT (“Software under
Test”)

 Surrogates (stubs, simulators, dummies)


- simulate the functionality of not yet implemented parts
(needed for top-down tests)

 Test harness
- contains the necessary test drivers and surrogates
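
As an illustration, a minimal test harness might look like the following Python sketch; compute_total() stands in for the SuT and fetch_price_stub() for a not yet implemented service (all names are hypothetical):

```python
def fetch_price_stub(item_id):
    """Surrogate (stub): returns canned prices instead of calling the real, unfinished service."""
    return {"A": 10.0, "B": 2.5}[item_id]

def compute_total(item_ids, fetch_price):
    """Software under Test (SuT): sums the prices obtained via the injected dependency."""
    return sum(fetch_price(i) for i in item_ids)

def test_driver():
    """Test driver: provides input, calls the SuT and checks the observed result."""
    result = compute_total(["A", "B"], fetch_price_stub)
    assert result == 12.5, f"unexpected total: {result}"
    print("component test passed")

if __name__ == "__main__":
    test_driver()
```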

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 43 - 123


Component / Module Test

Test Harness

(Diagram: in the target environment, the SuT is called by main and calls subroutines #1 and #2. In the test environment, the SuT is called by a test driver, its subroutines are replaced by surrogates, and the run produces test output.)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 44 - 123


Component / Module Test

Test Cases

 Sources for test cases


 Component specification
 Experience of the tester
 Techniques (covered in more detail, later)
 Black box, white box tests
 Boundary value analysis
 Equivalence class partitioning
 Path coverage
 Data for load tests
 Data for performance tests

 See also “Dynamic Testing” (covered later in this course)
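
For example, boundary value analysis and equivalence class partitioning applied to a specification such as "accept ages 18 to 65" could yield test cases like the following sketch (the rule and the values are invented for illustration):

```python
def is_valid_age(age):
    """Component under test: the specification allows ages from 18 to 65 inclusive."""
    return 18 <= age <= 65

# Equivalence classes: below range / in range / above range; boundary values: 17, 18, 65, 66.
test_cases = [
    (17, False),  # just below the lower boundary
    (18, True),   # lower boundary
    (40, True),   # representative of the valid class
    (65, True),   # upper boundary
    (66, False),  # just above the upper boundary
]

for value, expected in test_cases:
    assert is_valid_age(value) == expected, f"failed for input {value}"
print("all boundary / equivalence class test cases passed")
```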

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 45 - 123


Component / Module Test

Testing vs. Debugging

Testing: detects deviations, failures and breakdowns; is a planned activity; is based upon defined conditions; can be done without design knowledge.

Debugging: finds the reasons for deviations and breakdowns; is a deductive activity (the first step indicates the next step…); cannot be automated; assumes design knowledge.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 46 - 123


Component / Module Test

Testing vs. Debugging

 Testing is a planned process


 where a program is executed in order to find faults

 Debugging is an activity
 where the reason for a failure is located,
 its correction should be well thought out,
 the correction is carried out and
 the consequences of the correction are checked.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 47 - 123


Component / Module Test

Negative Testing / Robustness Test

 Negative testing is functional testing with test cases whose input parameters are not permitted ( “negative”) according to the specification of the test object.

 The robustness test checks if the test object reacts to erroneous


input values in a robust way.
 the test object must handle this situation in an orderly way without
interfering with the rest of the system
 It must recognize the invalid values, and
 either report an appropriate status
 or handle exceptions appropriately
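
A negative test for a hypothetical input routine could be sketched as follows; parse_quantity() and its rules are assumptions used only for illustration:

```python
import unittest

def parse_quantity(text):
    """Test object: must reject non-numeric and negative input with an appropriate exception."""
    value = int(text)  # raises ValueError for non-numeric input
    if value < 0:
        raise ValueError("quantity must not be negative")
    return value

class NegativeTests(unittest.TestCase):
    def test_rejects_non_numeric_input(self):
        with self.assertRaises(ValueError):
            parse_quantity("abc")

    def test_rejects_negative_value(self):
        with self.assertRaises(ValueError):
            parse_quantity("-3")

if __name__ == "__main__":
    unittest.main()
```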

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 48 - 123


Component / Module Test

Who should test?

 The Author of the Code


 knows critical parts of the code
 can easily implement a test frame
 can locate and correct faults very efficiently
but
 typically is “blind” to his/her own mistakes (!!)
 typically does not test systematically unless test cases
are given

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 49 - 123


Component / Module Test

Who should test?

 Another software developer


 knows typical pitfalls
 possible changes may make the code less fault-prone (?)
 often tests with low priority (?)

 Testing expert
 systematically plans test strategies and test cases
 tests systematically
 Corrections take a relatively high amount of effort

(see also the section on “Test Management”)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 50 - 123


Extreme Programming

 Write the test code before you implement an interface function!


 Publish the test code
 Execute all tests after changing code and before checking it in! (mandatory)
 If failures occur during a test:
correct the faulty code immediately (or arrange for the correction)

 Amend the set of tests continuously


 when adding/changing interfaces
 if the existing tests overlook faults
 Maintain the tests
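
A test-first sketch in the spirit of these rules (the ShoppingCart example is invented, not part of the course material): the test is written first and fails until the minimal implementation below it is added.

```python
import unittest

class ShoppingCartTest(unittest.TestCase):
    """Written before the implementation; it fails until add() and total() exist."""
    def test_total_of_two_items(self):
        cart = ShoppingCart()
        cart.add("book", 12.0)
        cart.add("pen", 3.0)
        self.assertEqual(cart.total(), 15.0)

class ShoppingCart:
    """Minimal implementation added afterwards – just enough to make the test pass."""
    def __init__(self):
        self._items = []
    def add(self, name, price):
        self._items.append((name, price))
    def total(self):
        return sum(price for _, price in self._items)

if __name__ == "__main__":
    unittest.main()
```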

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 51 - 123


Extreme Programming

 Beware
 Here, an Extreme Programming test means the development test
(unit test), not the QA test !
 There are no explicit requirements and no explicit test specifications, i.e. the quality of the tests depends upon the developer's understanding of reasonable and necessary tests

 Advantages
 Developers consider testability already during design
 Stability of interfaces

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 52 - 123


Requirements for Development

 Testability is important!
 Design for testability

 Do not accept “disposable tests” or “one-developer-only” tests!

 Design for Testability


 Loose coupling of components
 Strong cohesion
 Additional functionality for test
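
Loose coupling can be illustrated with dependency injection: the component under test receives its collaborators from outside, so a simulation can replace real hardware during the test. A minimal sketch with purely hypothetical names:

```python
class TemperatureController:
    """Component under test; the sensor dependency is injected, not hard-wired."""
    def __init__(self, sensor):
        self._sensor = sensor
    def heater_on(self):
        return self._sensor.read_celsius() < 20.0

class FakeSensor:
    """Simulation of the environment, used only for testing."""
    def __init__(self, value):
        self._value = value
    def read_celsius(self):
        return self._value

def test_heater_switches_on_when_cold():
    assert TemperatureController(FakeSensor(15.0)).heater_on()

def test_heater_stays_off_when_warm():
    assert not TemperatureController(FakeSensor(25.0)).heater_on()

if __name__ == "__main__":
    test_heater_switches_on_when_cold()
    test_heater_stays_off_when_warm()
    print("testability checks passed")
```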

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 53 - 123


Requirements for Development

 Examples for measures (continued)

 Test code should be part of the product code (include assertions – see the sketch below)
 Encapsulate hardware-dependent parts (so the system can be ported more easily)
 Simulation (of parts) of the environment

 Include these measures in the development guidelines!

 Create checklists
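
Assertions in the product code are one cheap way to put test code into the product itself; a tiny illustrative sketch (the function and its limits are invented):

```python
def withdraw(balance, amount):
    # Assertions document and check the preconditions directly in the product code.
    assert amount >= 0, "amount must not be negative"
    assert amount <= balance, "withdrawal exceeds balance"
    return balance - amount

print(withdraw(100.0, 30.0))  # 70.0
```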

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 54 - 123


Results

 Improvement of the development testing


 Awareness of the requirements regarding testability
 No disposable tests
 development tests are more beneficial
 Commitment for the developers and users

 Of course: improved development testing only supplements QA


testing; it is not a substitute!

 The interaction between development and test departments


becomes more efficient, if the guidelines and checklists are used.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 55 - 123


Results

High Quality in Acceptable Time

With a standard SW development process, many time-consuming iterations – returns and corrections between test and development – are needed before the QA test approves the product.

With a test-oriented SW development process, fewer “test  development” iterations and corrections are needed, which leaves more time for the QA test while still yielding an approved product.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 56 - 123


Results

More Effective QA Tests at the Same Coverage

(Chart: testing effort over time for component, integration and system test. In the traditional process, most of the testing effort falls into the QA test; in the test-oriented process, a large part of the effort is shifted into the development test, relieving the QA test at the same coverage.)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 57 - 123
Results

… and Efficient QA Tests at the Same Coverage

(Chart analogous to the previous slide: shifting effort into the development test also makes the QA test more efficient at the same coverage.)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 58 - 123
Integration and Integration Test
Integration and Integration Test

Placement within the V Model

Requirements Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 60 - 123


Integration and Integration Test

Properties of the Integration Test


 Testing aspects
 Procedure interface
 Parameter interface
 Message passing interface
 Shared memory interface

 Fault classes
 Incorrect usage of an interface
 Wrong understanding of the interface
 Synchronization problems

 Prerequisites
 Interface descriptions are available
 At least 2 components are implemented

Components that are tested, working properly, and integrated are the prerequisite for the integration test!

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 61 - 123


Testing Phases in the V Model

 Attention:

 Interfaces to external software systems

 Passing the integration test is no guarantee for accuracy


 Caution: There are always two sides of an external Interface. Only one of
those is under control of the development team.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 62 - 123


Integration and Integration Test

Integration vs. Integration Test

Integration: assembly of system parts (done by developers and system integrators).

Integration test: a test that is focused on the interaction of system parts (performed after successful integration).

Integration Strategy: determines the order of integration and integration test steps – decide which parts to integrate first.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 63 - 123


Integration and Integration Test

Test Drivers, Surrogates, and Test Harnesses

 are necessary for integration test, just as for component test

 are needed to simulate components that are not yet implemented or integrated.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 64 - 123


Integration Strategies
Integration Strategies

Order of the Integration Steps (covered in detail over the next pages)

 Bottom-up
 Top-down
 Ad-hoc
 Hardest first
 Function-oriented
 Transaction-oriented
 Non-incremental or “Big-bang”

 Integration strategies for distributed systems

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 66 - 123


Integration Strategies

Bottom-up Integration

(Diagram: the call hierarchy is integrated from the bottom up – the lowest-level components (1) and (2) first, then (3), (4) and (5); main will be tested at the end.)

 Idea:
 “Start at the lowest level and then work upwards”
 Integrate hardware-related or support functions first
 Top-level functions are called by test drivers

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 67 - 123


Integration Strategies

Evaluation of Bottom-up Integration

 Advantages:
 No stubs needed
 Ideal case: very efficient development

 Disadvantages:
 Cannot be presented at an early stage
 No prototype available
 Higher-level components must be simulated by test drivers

 Useful:
 If the requirements are clear, the project duration is short and no
prototype is needed

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 68 - 123


Integration Strategies

Top-down Integration

(Diagram: integration starts with main (1) and proceeds downwards through the call hierarchy: (2), (3), (4).)

 Idea:
 “Start at the highest level and then work downwards”
 The “calling functions” are integrated before the “called functions”
 Called functions are simulated
 Begin with an early prototype

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 69 - 123


Integration Strategies

Evaluation of Top-down Integration

 Advantages:
 Progress can be demonstrated early
 A common understanding of the main functionality is achieved early
 Early validation of system handling
 Possible demand for corrections detected early

 Disadvantages:
 Effort for the implementation of the simulators for the called functions
is high

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 70 - 123


Integration Strategies

Hardest-first Integration

 Idea:
 “Implement the most difficult/critical task first”
 Critical or error-prone components, or components that are essential
for further integration steps, are integrated early
 Calling function is simulated
 Called functions are simulated

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 71 - 123


Integration Strategies

Evaluation of Hardest-first Integration

 Advantages:
 Critical components / functions are checked early. If needed, even
major changes can be realized on time.
 Feasibility is examined early
 Benefits of usage may be possible before completion of the whole
system

 Disadvantages:
 Critical components / functions must be available early (planning)
 Effort for the implementation of the simulators

 Useful:
 If there are critical or extremely error-prone components
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 72 - 123
Integration Strategies

Integration “as Available”

 Idea:
 Integration commences as soon as at least two components with a
common interface have passed the respective component tests
 Calling function is simulated
 Called function is simulated

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 73 - 123


Integration Strategies

Evaluation of “Integration as Available”

 Advantages:
 Integration can start as soon as possible

 Disadvantages:
 May not be a prototype and therefore no early presentation
 Critical components are possibly integrated very late
 Effort for the implementation of simulators is high

 Useful:
 If no other strategy applies
 If you cannot wait any longer for components delivered late

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 74 - 123


Integration Strategies

Big-bang Integration

 Idea:
 “All at once” (or “a great deal at once”)
 All components are integrated in one step

 not incremental ! (but used often…)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 75 - 123


Integration Strategies

Evaluation of Big-bang Integration

 Advantages:
 No effort for the implementation of simulators

 Disadvantages:
 Localization of faults is costly
 Systematic testing is difficult, if not impossible (many interruptions)
 Often, integration and test can only be continued after the corrections

 Useful:
 If no other strategy applies (e.g. because a partial integration cannot
be performed due to the test environment)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 76 - 123


Integration Strategies

Function-oriented Integration

 Idea:
 Integrate components that realize a common system function.

 Example: In Word, integrate all the components for copying and


pasting portions of text

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 77 - 123


Integration Strategies

Evaluation of Function-oriented Integration

 Advantages:
 User-oriented, determined approach (often risk-minimizing)

 Disadvantages:
 Rarely used functions are tested rarely
(perilous functions are possibly neglected)
 “Tricky” faults (not occurring during the standard tests) remain
undetected
 Localization of faults is costly
 Possibly, tests begin very late

 Useful:
 In combination with other strategies (e.g. bottom-up)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 78 - 123


Integration Strategies

Transaction-oriented Integration

 Idea:
 Integrate components that realize a common transaction

 Example: In Word, cut, copy, paste text

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 79 - 123


Integration Strategies

Evaluation of Transaction-oriented Integration

 Advantages:
 Data-oriented, determined approach (often risk-minimizing)

 Disadvantages:
 Rarely used transactions are tested rarely
 “Tricky” faults (not occurring during the standard tests) remain
undetected
 Localization of faults is costly
 Possibly, tests begin very late

 Useful:
 In combination with other strategies

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 80 - 123


Integration Strategies

Summary (1)
 Design for Testability
 Loose coupling
 Strong cohesion
 Added test interfaces
 Added functionality for tests

 Design in Layers
 Project-specific, customer-specific, or user interface-specific components
 Basis components with standard functionality
 Hardware-related components

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 81 - 123


Integration Strategies

Summary (2)

 Choice of the right combination of strategies


For example:
 Hardest-first integration for critical components
 Top-down and function-oriented integration for GUI
 Bottom-up and partially data-oriented integration for the remaining
components

 Adapt project planning according to required components


 Get commitment / support
 Select and plan the strategy early in order to be able to adapt the
project plan in time

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 82 - 123


System Test
System Test

Placement in the V Model

Requirements Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 84 - 123


Testing Phases in the V Model

System Test

 Testing aspects
 Completeness of the functional qualities
 Completeness of the non-functional qualities, e.g.
 Efficiency and data capacity
 Ease of use and robustness
 Security and reliability
 Compatibility and maintainability

 Fault classes
 Requirement not fulfilled
 Requirement wrongly fulfilled

 Prerequisites
 Requirement specification is available
 Requirements are quantified

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 85 - 123


System Test

Goal of the System Test

 Validation of all system requirements

 Investigation whether the system is ready for acceptance


 Points of interest:
 The system does not contain severe faults
 Degree of correctness as high as possible for the complete system

 Qualities in system testing


 Complete fulfillment of all important requirements
 Fulfillment of all other requirements as well as possible

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 86 - 123


System Test

Functional System Tests

 Completeness of the functional properties:


Are all the functional properties fulfilled?

 At a minimum, there should be one test case per functional requirement


 Ability of the “Software under Test” to carry out the functions specified for
the intended use under given conditions
 Computations, services, ...
 Focus is on the user scenarios/use cases and business processes
in the functional/requirement specification

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 87 - 123


System Test

Non-functional System Tests

 Looking at the completeness of the non-functional properties: are all the non-functional properties fulfilled?


 Efficiency, data capacity and robustness
 Usability
 Security and reliability
 Compatibility
 Portability
 Changeability and maintainability
 Performance, scalability,…
 Note: ISO standard 9126 (quality attributes)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 88 - 123
System Test

Volume Test

 Focus:
 Reliability, correctness, efficiency of the system when dealing with
large amount of data

 Test cases:
 Transactions with large amounts of data

 Examples for faults:


 Data items still being changed by the transaction (n-1)
are evaluated too soon in the nth transaction
 Data items are changed in transaction n but transaction (n-1) has not
yet finished

 Useful for data base systems and transaction-oriented systems

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 89 - 123


System Test

Load Test

 Focus:
 Reliability, correctness and efficiency of the system under a high or
growing computational load

 Test cases:
 Activities consuming large amounts of computing time
 Large number of parallel requests

 Examples of failures:
 System crash of internet servers, database servers, ...

 Useful for proving scalability of the system
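
A very small load-test sketch using only the Python standard library; the URL, request count and worker count are assumptions made for illustration, not part of the course material:

```python
import concurrent.futures
import urllib.request

URL = "http://localhost:8080/"   # assumed address of a test deployment (not production!)
REQUESTS = 200

def call_once(_):
    """One request; returns True if the server answered with HTTP 200."""
    try:
        with urllib.request.urlopen(URL, timeout=5) as response:
            return response.status == 200
    except Exception:
        return False

with concurrent.futures.ThreadPoolExecutor(max_workers=50) as pool:
    results = list(pool.map(call_once, range(REQUESTS)))

print(f"{sum(results)}/{REQUESTS} parallel requests succeeded under load")
```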

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 90 - 123


System Test

Probe effect

 The probe effect is the effect on the component or system by the


measurement instrument when the component or system is being
measured, e.g. by a performance testing tool or monitor.

 For example, performance may be slightly worse when


performance testing tools are being used.

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 91 - 123


System Test

Stress Test

 Focus:
 Reliability, correctness and efficiency of the system under a high
communication load

 Test cases:
 High rate of equal requests
 High rate of different requests

 Examples for faults:


 Queue overflow after execution of nth statement (n usually large)

 Useful for distributed systems

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 92 - 123


System Test

Performance/Efficiency Test

 Focus:
 Acceptable response times and efficient resource usage of the system
under low and high load

 Test cases:
 Activities consuming high and low amounts of computing time

 Examples for failures:


 Consumption of resources is too high
 Response times are unacceptable for computational-intensive
activities

 Useful for proving scalability of the system
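
Response times can be measured with a simple timing helper; the sketch below is illustrative (the operation under test and the 50 ms limit are assumptions):

```python
import time

def average_response_time(operation, *args, repetitions=100):
    """Average wall-clock time of a single call to the operation under test."""
    start = time.perf_counter()
    for _ in range(repetitions):
        operation(*args)
    return (time.perf_counter() - start) / repetitions

avg = average_response_time(sorted, list(range(10_000, 0, -1)))
print(f"average response time: {avg * 1000:.2f} ms")
assert avg < 0.05, "response time exceeds the assumed limit of 50 ms"
```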

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 93 - 123


System Test

Security Test

 Focus:
 Security of the system regarding unauthorized access / actions

 Test cases:
 “Forbidden” login, “forbidden“ data base queries

 Examples for failures:


 Unauthorized login
 Unauthorized user can read/modify data

 Useful for systems with a rights management
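
A security test case for a “forbidden” login could be sketched as follows; login() and the credentials are purely hypothetical:

```python
import unittest

def login(user, password):
    """Hypothetical authentication check: only the one known pair is accepted."""
    return (user, password) == ("alice", "correct-horse")

class SecurityTests(unittest.TestCase):
    def test_wrong_password_is_rejected(self):
        self.assertFalse(login("alice", "guess"))

    def test_unknown_user_is_rejected(self):
        self.assertFalse(login("mallory", "correct-horse"))

if __name__ == "__main__":
    unittest.main()
```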

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 94 - 123


System Test

Stability/Reliability Test

 Focus:
 Stability under difficult conditions

 Test cases:
 Abort function, interrupts, radio interference or interference on the system's bus

 Examples for failures:


 Deadlocks
 (similar to a car intersection when no one can move in any direction)
 No ”undo“ function
 Inconsistencies

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 95 - 123


System Test

Robustness and Recovery Test

 Focus:
 Wrong operation by user
 Successful (system) restart after (system) abort (e.g. power breakdown,
hardware failure)

 Test cases:
 Wrong input by user
 Recovery after abort in different situations

 Examples of failures:
 Deadlocks
 Inconsistencies
 System crash after wrong input

 Useful for data-oriented systems, as well as for embedded systems

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 96 - 123


System Test

Compatibility Test

 Focus:
 Compatibility / data conversion / portability

 Test cases:
 Compliance with standards
 Porting to another hardware platform
 Conversion to another transaction protocol

 Examples for failures:


 Deadlocks
 Data field overrun

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 97 - 123


System Test

Usability Test

 Focus:
 Usability

 Test cases:
 Application-relevant sequence of operations, installation and
maintenance problems

 Examples for failures:


 Usage too complicated, incomprehensible, not intuitive, …
 Incomprehensible or ambiguous information or error messages
 Help system does not provide useful information

 Useful for systems with human interaction

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 98 - 123


System Test

Configuration Test

 Focus:
 Configurability

 Test cases:
 Creation of different configurations
 Test of the software in different configurations

 Examples for faults:


 Insufficiencies when creating the configuration
 Certain combinations of configurations are faulty/invalid

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 99 - 123


System Test

Check of Maintainability (Static Test)

 Focus:
 Complexity of the activities for small changes
(hopefully, the small changes were defined)

 Test cases:
 Extension of allowed margins, input values, states, …
 Comprehensibility of the documentation in order to perform the
adaptation
 Update of the documentation regarding the adaptation

 Examples for faults:


 Side effects
 Insufficiencies in the documentation

 Useful if respective development guidelines are available


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 100 - 123
System Test – Test Environment

 The test environment should be as similar as possible to the subsequent operational environment (apart from the test tools)
 The test environment should not be the productive environment of the customer
 since system crashes might cause the loss of important data
 since otherwise the testers would have only limited control over the system

(Example: hardware-in-the-loop simulation – a test computer runs a car model and exchanges simulated sensor and actuator signals with the controller under test via sensor and actuator hardware interfaces.)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 101 - 123


System Test

Check of Documentation (Static Documentation Test)

 Focus: Document quality


 Is the document:
 current (is it up to date?)
 complete,
 contents have value? - is it of benefit to the user?
 Is there a table of contents? - an index?

 Examples for faults:


 Changes not incorporated/inconsistencies
 New functionality not documented
 Glossary/index missing or incomplete
 Level of detail is unreasonable
 Too complicated, incomprehensible, not intuitive, …

 Useful together with documentation guidelines

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 102 - 123


System Test

Summary

 System test is the last test before acceptance


 The last chance for QA to point out defects
 Last chance for the contractor to gain confidence in the software
before acceptance

 Validation of the demanded (important) properties


 Initial requirements
 Including all changes agreed upon later

 Great risk if there are defects in the test process


 Plan and perform tests risk-based
 Plan tests according to later use of the software

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 103 - 123


Acceptance Test
Acceptance Test

Placement within the V model

Requirements Acceptance Testing

Functional Design System Testing

Technical Design Integration Testing

Component Design Component Testing

Component Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 105 - 123


Testing Phases in the V Model

Acceptance Test

 Testing aspects
 “Confidence-building measure”
 Practical suitability
 Completeness

 Fault classes
 Requirement not fulfilled
 Requirement wrongly fulfilled

 Prerequisites
 Requirement specification is available
 Customer is available to provide feedback

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 106 - 123


Acceptance Test

 Test for user acceptance


 Last validation step: the customer validates the usability
 The validation is performed
 within the application environment or
 within a model environment that is as similar as possible

 Test for contractual acceptance


 Validation of the contractual acceptance criteria

 The product is accepted after a successful acceptance test!

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 107 - 123


Acceptance Test

Testing of Pre-release Software

 Alpha/beta testing
 Validation of software before being released; performed by
representatives of the users
 The situation is as if the product had been bought already
(customer environment)
 Plan a feedback loop

 This kind of testing of pre-release software is often called alpha or


beta testing
 Alpha testing is performed at the development site
 Beta testing at the customer’s site (also called field testing)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 108 - 123


Acceptance Test

Product Acceptance

 Prerequisites
 Acceptance requirements are defined (at the beginning of the project!)

 Use acceptance protocols


 Partial acceptance
 Complete acceptance
 Documentation of the beginning of the warranty
 Acceptance protocol is a legal document

 Document the deliveries


 To whom? Date?
 Document deliveries to and from the customer

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 109 - 123


Acceptance Test

Product Acceptance

 Acceptance criteria:
 Acceptance as mainly fulfilling requirements
 Acceptance with minor defects
 Acceptance despite major defects
 Acceptance deferred due to major defects
 Acceptance without validation

 Defect elimination until ...

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 110 - 123


Acceptance Test

Product Acceptance

 Common scenarios
 Customer reports a defect (justified)
 Customer claims mandatory functionality is missing
 Customer repeatedly reports severe defects in order to defer acceptance

 Do not deny services because of unpaid bills


(always get legal advice)
 If denial is unjustified: liability for compensation

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 112 - 123


Acceptance Test

Tactical Remarks (1)

 Discuss the acceptance procedure very early with the customer

 Prepare the protocols before the acceptance

 Acceptance is a common event (customer and supplier)


 An agreement should have been made earlier
 Hint: perform a stakeholder acceptance test well in advance (with
protocol and signature)
 Should only be a formality

 Where does acceptance take place?


 Test system (alpha test)
 Production system (beta test)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 113 - 123


Acceptance Test

Tactical Remarks (2)

 Planning
 Who takes part?
 Schedule
 Criteria must be defined

 The contract should contain agreements regarding the schedule, e.g.


 The project is automatically accepted 10 days after delivery
 If commercial use occurs

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 114 - 123


Contract and Regulation
Acceptance Test
Contract & Regulation Acceptance Testing

 Contract Acceptance Testing


 Performed against a contract’s acceptance criteria for producing
customer-developed software.
 Acceptance criteria should be defined when the contract is agreed
upon.

 Regulation Acceptance Testing


 Performed against any regulations which must be adhered to, such as
governmental, legal, or safety regulations.
 Examples: FDA, FAA, HIPAA

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 116 - 123


Maintenance Test
Maintenance Test

Original software version (1.0):

Requirements – Acceptance Testing
Functional Design – System Testing
Technical Design – Integration Testing
Component Design – Component Testing
Coding

a) Maintained software version (1.1) (e.g. “faults eliminated”, update of operating system, database or compiler)
b) Enhanced software (2.0) (“big changes”)

 In both cases: maintenance test – the V model is passed through again:

Requirements – Acceptance Testing
Functional Design – System Testing
Technical Design – Integration Testing
Component Design – Component Testing
Coding

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 118 - 123


Maintenance Test

Regression Test  Retest

Regression tests take place after modification of an already tested program and should verify that no new defects were introduced into the software by the modification.

Retests take place at each test level after a failure was reported as corrected. Retests should validate the proper error correction.
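
In practice, the retest of a corrected failure is usually kept as a permanent regression test; a sketch of this idea (the discount() defect is invented for illustration):

```python
import unittest

def discount(price, percent):
    """Corrected function: an earlier version returned a negative price for percent == 100."""
    return max(price * (1 - percent / 100), 0.0)

class RegressionSuite(unittest.TestCase):
    def test_reported_defect_is_fixed(self):          # retest of the corrected failure
        self.assertEqual(discount(50.0, 100), 0.0)

    def test_existing_behaviour_unchanged(self):      # regression check
        self.assertEqual(discount(50.0, 10), 45.0)

if __name__ == "__main__":
    unittest.main()
```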

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 119 - 123


Maintenance Test

Ideal Conditions

Development documents available Validation documents available

Requirements Acceptance Test

Functional Design System Test

Technical Design Integration Test

Component Design Component Test

Component Coding

Source code available


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 120 - 123
Maintenance Test

Task: Perform All (or Selected) Regression Tests

Development documents available Validation documents available

Requirements Acceptance Test

Functional Design System Test

Technical Design Integration Test

Component Design Component Test

Maintenance
Component Coding Test

Source code available


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 121 - 123
Maintenance Test

Ideal: Few Changes in the Development Documents

Efficient adaptation – what to change?

 Requirements in the relevant specifications
 Affected design
 Affected test specifications
 Affected code: few components with loose coupling

(V model levels: Requirements, Functional Design, Technical Design, Component Design, Component Coding)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 122 - 123


Maintenance Test

Ideal: Repetition of some Tests

(Diagram: the changed module sits in the call hierarchy below main; the tests corresponding to that module are repeated from component coding upwards.)

Efficient maintenance test – what to repeat?

 Component test
 Integration test
 System test (condensed)
 Acceptance test (condensed)
© 2006 www.methodpark.com | Testing in the Software Lifecycle | 123 - 123
Maintenance Test

The Reality often Looks different:

Development documents are not available or only partially available; validation documents are not available or only partially available.
Requirements Acceptance Test

Functional Design System Test

Technical Design Integration Test

Component Design Component Test

Component Coding

Source code available


© 2006 www.methodpark.com | Testing in the Software Lifecycle | 124 - 123
Maintenance Test

Task 1: Reengineering of the development documents from the code.

Task 2: Creation of the corresponding test specifications.

Task 3: Perform the necessary tests.

(Source code available; V model levels: Requirements – Acceptance Test, Functional Design – System Test, Technical Design – Integration Test, Component Design – Component Test, Component Coding.)

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 125 - 123


Testing in the Software Life Cycle

Summary (1)

 Testing should be done in four phases, each concentrating on different aspects
 Acceptance test
 Customer’s trust
 Usability
 System test
 Completeness of functional and non-functional requirements
 Integration test
 Correct interaction of components and subsystems
 Component test
 Basic functionality of individual components / units / classes

© 2006 www.methodpark.com | Testing in the Software Lifecycle | 126 - 123
