
Test architecture

- Ravi Sharda, April 2011

Setting the context


Cem Kaner (adapted):

A toxic myth about testing: testing = verification

Only if you have contracted for delivery of software, and the contract contains a complete and correct specification (requirements and/or design), does verification cover a good part of testing

For example, w.r.t. requirements (adapted from [BergerUsecases01]):


They might be ambiguous
They might be incomplete
They may not describe enough detail of use
Not enough of them, missing entire areas of functionality
They might be inaccurate
They might not have been updated when requirements changed (or change requests arrived and were accepted)
They may have assumed general standards of quality attributes (usual response times, fail gracefully, etc.)

Kaner: Verification cannot tell you whether the software will meet stakeholder needs or elicit a positive reaction from users

Berger (as implied by [BergerUsecases01]): The goal of testing is to find bugs, rather than to make sure the software works

Still setting the context

Software testing definition (Cem Kaner):

Software testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of the product or service under test.

Still setting the context

Alistair Cockburn: software engineering is built on three legs: craft, cooperative gaming, and lessons from lean manufacturing.

Craft

Lifelong learning
Deepening the proficiency in one's craft
New tools and technologies

Test Architect: 1) What are the available testing techniques, practices, tools, technologies, etc.? 2) Can I invent one?

Cooperative gaming

Every situation (game) is different
No formula for winning the game
Quality of the move in the game is not absolute
Quality of community and communication among members matter enormously

Test Architect: What testing techniques, practices, tools, technologies do we need to use for this product? How do we plan to cover the product so as to develop an adequate assessment of quality?

Lessons from lean manufacturing

People hand others decisions; people wait on each other for decisions
Some people have a bigger backlog of decisions than they can handle at the moment (by implication, outputs too)

Project Mgr: How do I optimize the dependency network?

Functional testing (this does this)

The system under test is viewed as a black box

Emphasis is on the external behavior of the software


Selection of tests is based on specifications: requirements and/or design specifications

Is often used interchangeably with black-box testing, feature testing, behavioral testing

Behavioral testing: a more specific form of functional testing

Is based on requirements specification

Functional testing (or more specifically, behavioral testing) isn't all testing

In fact, it represents just 35-65% of all testing. If we stick to a functional story (this does this), we'll miss all kinds of problems that data and other structural elements can trigger (this does this with that)

Adapted from [LouTechniques], [Beiz95], [BoltonModel05]

Structural testing

The software is viewed as a white-box or a glass-box

Selection of test cases is based on the implementation of the software


Examples:
Static analysis (code complexity), code churn measures, code paths, etc.
Path coverage, branch coverage, data-flow coverage, etc.
Specific statements, program branches, or paths

Goal: Cause the execution of specific spots in the software entity


Execute every statement at least once
Test the use of all data objects

Expected results are evaluated based on a set of coverage criteria

A product can be manifested in (or on) concrete, physical parts


[BoltonModel05]

Object code, templates, sample data, configuration files, registry settings, user manuals, etc.
Does your test strategy incorporate ideas informed by these physical objects?

Adapted from [LouTechniques], [Beiz95], [BoltonModel05]

Other terms

Test architecture vs. design

Test architecture: non-local (things that affect most or a large part of an application, or a group of applications)

Test design: local (things that affect local parts)

Bugs vs. faults [Beiz95]


Fault: implies someone is to blame (carelessness during programming, incompetence, etc.)

Bug: just happens; no one is to blame

Heuristic models [BoltonModel05]

When we model something, we focus on certain attributes of it while ignoring others

Gives opportunity to understand some important aspect

Risk is we might be oblivious to other important things

Good models are often heuristic


Set of guidelines to help us solve a problem
But they are provisional: used for a specific, temporary purpose

They are fallible.

Higher quality
Repeatability
Consistency
Improved productivity
Predictability
Improved estimation
Better prioritization

Test Architecture COE: 1) What are the available testing techniques, practices, tools, technologies, etc.? 2) Do we need to invent one? 3) How do I propagate these and help teams in using these?

CRAFT

Lifelong learning
Deepening the proficiency in one's craft
New tools and technologies

Testing techniques - Classification

"A wise navigator never relies solely on one technique." - N. Bowditch

Coverage-based techniques:
Function testing
Feature or function integration testing
Menu tour
Domain testing
Equivalence class analysis
Boundary testing
Best representative testing
Input field test catalogs
Logic testing
State-based testing
Path testing
Statement or branch coverage
Configuration coverage
Specification-based testing
Combination testing

Activity-based techniques:
Regression testing
Scripted testing
Smoke testing
Exploratory testing
Guerilla testing
Scenario testing
Installation testing
Load testing
Long sequence testing
Performance testing

Testing techniques - Another classification

Black-box or functional testing techniques:
Function tests
Domain testing
Specification-based testing
Risk-based testing
Stress testing
Regression testing
Performance-based testing
Finite-state-machine-based testing
Exploratory testing
Decision tables
Orthogonal arrays and all-pairs testing
Etc.

White-box or structural testing techniques:
Control-flow testing
Data-flow testing
Mutation testing
Reference models for code-based testing
Etc.

Testing techniques: function testing


Identify all functions or features (from requirements, user manuals, walking through the interface, etc.)
Test them one at a time
Function tests are highly credible and easy to evaluate, but not particularly powerful


Adapted from Cem Kaner

Testing techniques: specification-based testing

Check the program against every claim made in:

requirements, design, user interface description, published model, user manual, etc.

When specs are taken seriously, spec-based testing is important

The spec is part of the contract; or products must conform to their advertisements, etc.

Spec-based tests are often weak on their own


Adapted from Cem Kaner


Testing techniques: specification-based

Structural risk analysis example from a design spec.

[pointing at a box] What if this function fails?
Can this function ever be invoked at the wrong time?
[pointing at any part of the diagram] What error checking do you do here?
[pointing at an arrow] What exactly does this arrow mean? What would happen if it was broken?
Try annotating the box with icons for test ideas

[Diagram: auction server deployment - Browser, Web Server, App Server, Auction Server, Database Layer, Real-time Monitoring]

Src: Michael Bolton

Testing techniques

Src: Michael Bolton

Testing techniques: domain testing

Stratified sampling strategy for choosing a few test cases from the near-infinity of candidate test cases

Divide/partition a domain into subdomains (equivalence classes)
Then select representatives of each subdomain:
Every boundary value
Every extreme value

E.g., a good set of domain tests for a numeric variable hits:
Min, max, a value barely below the min, a value barely above the max
Empty, null, negative (when positive expected), 0 (when non-zero expected)

Boundary/extreme-value errors are very common in practice. Hence, tests that target them have higher power than tests that:
Don't use best representatives
Skip some of the subdomains (e.g., people often skip cases that are expected to lead to error messages)

Src: Adapted from [BoltonDom06], [KanerDomain]
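To make the partitioning concrete, here is a minimal sketch in Python. The quantity field, its 1-100 range, and the validate_quantity() stub are illustrative assumptions rather than part of any real product; the point is how boundary and equivalence-class representatives are enumerated.

```python
# A minimal sketch of domain testing for a hypothetical numeric field that is
# specified to accept integer quantities from 1 to 100. The field, the range,
# and validate_quantity() are assumptions made for illustration.

def validate_quantity(value):
    """Toy stand-in for the system under test."""
    return isinstance(value, int) and 1 <= value <= 100

# Partition the input domain into subdomains and pick boundary representatives.
test_values = {
    "barely below min": (0, False),
    "min":              (1, True),
    "max":              (100, True),
    "barely above max": (101, False),
    "negative":         (-5, False),
    "non-integer":      ("ten", False),
    "empty/null":       (None, False),
}

for label, (value, expected) in test_values.items():
    actual = validate_quantity(value)
    status = "PASS" if actual == expected else "FAIL"
    print(f"{status}: {label} -> validate_quantity({value!r}) == {actual}")
```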

Testing techniques: domain testing

How to divide into subdomains:

Intuitive equivalence: two test values are equivalent if they are so similar to each other that it seems pointless to test both
Specified equivalence: two test values are equivalent if the specification says that the program handles them in the same way
Paths: two test values are equivalent if they would drive the program down the same path (e.g., execute the same branch of an IF)
Risk-based: two test values are equivalent if, given your theory of possible error, you expect the same result from each
Etc.

Examples of usage:
http://www.testingeducation.org/k04/DomainExamples.htm
http://www.testingeducation.org/k04/documents/bbst5_2005.pdf
http://www.testingeducation.org/k04/documents/bbst6_2005.pdf

Src: Adapted from [BoltonDom06], [KanerDomain]

Testing techniques: risk-based testing

A program is a collection of opportunities for things to go wrong

For each way that you can imagine the program failing, design tests to determine whether the program actually will fail in that way

Some bugs are not functional problems, but fall into other quality risk categories:

States, installation or uninstallation, operations, maintenance, regression, data quality, date and time handling, configuration and compatibility, performance and reliability, stress and capacity, etc.

Examples:
Customer-facing portal: have you tested for cross-site scripting, SQL injection, and other typical web application attacks?
B2B gateway application: can the system handle multiple identical messages coming in (idempotency)?
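As an illustration of the B2B gateway example above, here is a rough sketch of an idempotency check. The endpoint, payload shape, and the orders_for() lookup helper are hypothetical; a real test would target the gateway's actual API and a controlled test data store.

```python
# A sketch of an idempotency check for a hypothetical B2B gateway. The URL,
# payload, and orders_for() helper are assumptions made for illustration.
import requests  # third-party HTTP client: pip install requests

GATEWAY_URL = "https://gateway.example.com/api/orders"   # hypothetical
message = {"message_id": "PO-12345", "sku": "ABC", "qty": 10}

def orders_for(message_id):
    """Hypothetical helper: query the backing store for records created
    for a given business message id."""
    resp = requests.get(GATEWAY_URL, params={"message_id": message_id}, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Deliver the same business message twice, as a flaky partner system might.
first = requests.post(GATEWAY_URL, json=message, timeout=10)
second = requests.post(GATEWAY_URL, json=message, timeout=10)

assert first.status_code in (200, 201)
# A duplicate should be accepted or politely rejected, never create a second order.
assert len(orders_for("PO-12345")) == 1, "duplicate message created a second order"
```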

Testing techniques: risk-based testing

A Generic risk list


Complex: anything disproportionately large, intricate, or convoluted
New: anything that has no history in the product
Changed: anything that has been tampered with or "improved"
Upstream dependency: anything whose failure will cause cascading failure in the rest of the system
Downstream dependency: anything that is especially sensitive to failures in the rest of the system
Critical: anything whose failure could cause substantial damage
Precise: anything that must meet its requirements exactly
Popular: anything that will be used a lot
Strategic: anything that has special importance to your business, such as a feature that sets you apart from the competition
Third-party: anything used in the product, but developed outside the project
Distributed: anything spread out in time or space, yet whose elements must work together
Buggy: anything known to have a lot of problems
Recent failure: anything with a recent history of failure

Src: James Bach, "Heuristic Risk-Based Testing", Software Testing and Quality Engineering Magazine, 11/99

Testing techniques: finite-state machine based

Model a program as a finite state machine that runs from state to state in response to events (such as new inputs)
Tests can be selected in order to cover the states and transitions of the machine

In each state, does it respond correctly to each event?

Suited for transaction-processing, reactive, embedded, and real-time systems

Src: [Sweebok01] [KanerDesign]
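A minimal sketch of how state and transition coverage might be derived from such a model. The order workflow, its states, and its events are invented for illustration and do not come from the referenced sources.

```python
# A minimal sketch of finite-state-machine-based test selection for a
# hypothetical order workflow. States, events, and transitions are assumptions.

TRANSITIONS = {
    ("new",     "submit"):  "pending",
    ("pending", "approve"): "shipped",
    ("pending", "cancel"):  "cancelled",
    ("shipped", "deliver"): "closed",
}
EVENTS = {"submit", "approve", "cancel", "deliver"}

def next_state(state, event):
    """Model of expected behaviour: unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# Transition coverage: one check per valid transition...
for (state, event), expected in TRANSITIONS.items():
    assert next_state(state, event) == expected

# ...plus, in each state, check the response to every *other* event
# ("in each state, does it respond correctly to each event?").
for state in {s for s, _ in TRANSITIONS}:
    for event in EVENTS:
        if (state, event) not in TRANSITIONS:
            assert next_state(state, event) == state, (state, event)

print("All state/transition checks passed for the model.")
```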

Testing techniques: exploratory testing

Not a replacement for the sustained engineering necessary for the long-term maintenance of software releases
Not purely spontaneous; needs extensive research:

Studying other competitive products/systems, failure histories of this and analogous systems, and the weaknesses of the product
Interviewing programmers, reading specifications, etc.

Might use any or all of these techniques:

Domain testing
Specification-based testing
Stress testing
Risk-based testing

Testing techniques: exploratory testing

Example


Src: James Bach

Testing Web applications

Concerns

Functional correctness aspects
Recoverability from errors
Browser compatibility and configuration
Usability: understandability, learnability, operability, attractiveness
Business rules: checking for accurate representation of business rules
Transaction accuracy: checking whether transactions complete accurately and whether cancelled transactions are appropriately rolled back
Data validity and integrity: valid formats of enterable data and proper character sets
Security: vulnerability analysis (unvalidated input, broken access control and session management, cross-site scripting flaws, SQL injection flaws, improper error handling, external intrusion, protection of secured transactions, viruses, access control, etc.)
Performance: concurrency, stress, throughput, response times
Etc.

Tools: HTML test tools, site validation tools, general-purpose Web test tools (GUI capture and playback), Web security tools, Web load and performance testing tools, site monitoring tools

More tools and technologies

Web services testing

Concerns

Testing the transport layer (HTTP/S, JMS, FTP, etc.)
Functional correctness: regular Web services testing using good, bad, and unexpected inputs
Service design (WSDL) validation
Message and schema validation, schema version verification, and data transformation
Security policy validation
Performance and load testing
Testing for WS-* standards and related industry standards
SLA and QoS testing
Interoperability testing, say using the WS-I Basic Profile, for Web services
Communication protocol compatibility tests
Testing for idempotency, etc.
Etc.

Tools: Web services testing tools, including SOAtest, SoapUI, etc.
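A rough sketch of the "good, bad and unexpected inputs" idea against a hypothetical REST-style service, using Python's requests library. The endpoint, payloads, expected status codes, and response fields are assumptions; dedicated tools such as SoapUI cover far more (WSDL validation, WS-* policies, load, etc.).

```python
# A sketch of functional correctness checks for a hypothetical Web service.
# The URL, payloads, and expected response schema are illustrative assumptions.
import requests

SERVICE_URL = "https://api.example.com/v1/quote"   # hypothetical endpoint

cases = [
    ("good input",       {"symbol": "ACME"},     200),
    ("bad input",        {"symbol": ""},         400),
    ("unexpected input", {"symbol": "A" * 5000}, 400),
]

for name, payload, expected_status in cases:
    resp = requests.post(SERVICE_URL, json=payload, timeout=15)
    assert resp.status_code == expected_status, f"{name}: got {resp.status_code}"
    if resp.status_code == 200:
        body = resp.json()
        # Lightweight message/schema validation on the happy path.
        assert {"symbol", "price", "currency"} <= body.keys(), f"{name}: missing fields"

print("Service behaved as expected for all cases.")
```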

Test data generation tools and strategies


Production sampling, starting from scratch, seeding data, generating from databases
Reverting data to a known state
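A small sketch of the "starting from scratch" strategy: generating seeded, reproducible records rather than sampling production data. The customer record shape and the value choices are assumptions for illustration.

```python
# Generate deterministic, repeatable test records. Seeding the generator means
# the data set can be recreated exactly, which helps with reverting to a known state.
import random
import string

random.seed(42)  # reproducible generation

def make_customer(i):
    return {
        "id": i,
        "name": "".join(random.choices(string.ascii_uppercase, k=8)),
        "credit_limit": random.choice([0, 1, 999, 1000, 10**6]),  # include boundary values
        "country": random.choice(["US", "IN", "DE", "JP"]),
    }

customers = [make_customer(i) for i in range(1, 101)]
print(customers[0])
```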

Testing a data warehouse

Concerns [PerryTesting00]:

Inaccurate or incomplete data in a data warehouse
Losing an update to a single data item
Inadequate audit trail to reconstruct transactions
Unauthorized access to data
Inadequate service level
Placing data in a wrong calendar period
Improper use of data
Loss of continuity of processing
Etc.

Security

Concerns

Whether the application meets security needs. Examples include user authentication, secure data storage and transmission of specific fields (such as encryption), and verifying that sensitive data is not stored in logs.
Identifying security vulnerabilities of applications in the given environment. Examples include buffer overflow, SQL injection, cross-site scripting, parameter tampering, cookie poisoning, hidden fields, debug options, unvalidated input, broken authorization, broken authentication, and session management.

Tools: network scanning, vulnerability scanning, penetration testing, etc.
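Two of the listed vulnerability classes (reflected cross-site scripting and SQL injection) can be probed with very simple checks, sketched below. The search URL and parameter are hypothetical, and such probes complement rather than replace proper vulnerability scanning and penetration testing.

```python
# Minimal input-validation probes against a hypothetical search endpoint.
# The URL and parameter name are assumptions made for illustration.
import requests

SEARCH_URL = "https://app.example.com/search"   # hypothetical

xss_probe = "<script>alert('xss')</script>"
sqli_probe = "' OR '1'='1"

# Reflected XSS: the raw probe must never come back unescaped in the HTML.
resp = requests.get(SEARCH_URL, params={"q": xss_probe}, timeout=10)
assert xss_probe not in resp.text, "search page reflects unescaped script input"

# SQL injection: a classic tautology should not change behaviour or leak errors.
resp = requests.get(SEARCH_URL, params={"q": sqli_probe}, timeout=10)
assert resp.status_code != 500, "injection probe caused a server error"
assert "SQL syntax" not in resp.text, "database error details leaked to the client"
```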

Testing for robustness

Concerns:

How sensitive is the system to erroneous inputs and changes in the operational environment?
Verify that the application can recover using current backups
Test failover and redundancy operations (DB, application server, etc.)
High-availability tests, say verifying that the system gracefully and quickly recovers from hardware and software failures without adversely impacting the operation of the system
Verifying that failed transactions roll back correctly

Mostly manual

Performance

Concerns

Volume, load, and stress tests
Identification of critical transactions
Initial analysis of performance data

Tools: LoadRunner, WinRunner, etc.

More tools and technologies

Mobile web application testing

Concerns [NguyenWebTest03] :

Add-on installation tests
Data synchronization related tests
UI implementation and limited usability tests
Browser-specific tests
Platform-specific tests
Configurability or compatibility tests (cross devices, cross OS, cross browsers, cross versions, cross languages, graphic formats, sound formats, video formats, etc.)
Connectivity tests
Performance and security tests
Etc.

Tools: device and browser emulators, Web-based mobile phone emulators and WML validators, desktop WAP browsers, etc.

Other examples

Rich Internet Applications: AJAX, Flash/Silverlight, FlashFX, AIR, etc.
Database testing
Etc.

Testing

Ajax testing

Concerns

Back button, bookmarking, and browser loading controls, especially when you use Ajax to affect navigation or workflow
Tests on different browser types
UI testing: since Ajax Web apps rely on stateful asynchronous client/server communication and client-side manipulation of the DOM tree, they are fundamentally harder to test automatically
Ajax security

Server-side: the same server-side security schemes as regular web apps
Client-side: JS code is visible to a user/hacker (obfuscation or compression may help)
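Because the DOM is updated asynchronously, UI tests typically need explicit waits instead of fixed sleeps. Below is a sketch using Selenium WebDriver in Python; the page URL and element IDs are hypothetical, and the back-button check reflects the navigation concern noted above.

```python
# A sketch of a UI test for an Ajax-driven page using Selenium explicit waits.
# The URL and element IDs are assumptions; requires the selenium package and a driver.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
try:
    driver.get("https://app.example.com/orders")          # hypothetical page
    driver.find_element(By.ID, "search-box").send_keys("ACME")
    driver.find_element(By.ID, "search-button").click()

    # Wait explicitly for the asynchronous update to land in the DOM.
    results = WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "search-results"))
    )
    assert "ACME" in results.text

    # Ajax-specific concern from the slide: the back button should restore state.
    driver.back()
    WebDriverWait(driver, 10).until(
        EC.visibility_of_element_located((By.ID, "search-box"))
    )
finally:
    driver.quit()
```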

Rules testing
Business process management and workflow testing
Desktop UI testing

Test coverage

Functional
Data
Platform
Operations
Time

"There were more than a million test cases written for Microsoft Office 2007." - Alan Page et al.

Test coverage: functional coverage

Print testing example


Test what it does

Setup, preview, zoom
Print range, print copies, scale to paper
Print all, current page, or specific range
Choose printer, printer properties, paper size and type
Print to file instead of to printer

Focusing on functional coverage

Menu and dialog tours: choose every option, click every button, fill in every field
Mouse tours: don't forget right-clicks, Shift-click, Ctrl-click, Alt-click
Click frenzy: drag and drop
Keyboard tours: don't forget key combinations
Error tours: search for error messages inside resource tables
Other forms of guided tours: see Mike Kelly's FCC CUTS VIDS (Google it!)
Src: Michael Bolton, Understanding Test Coverage


Test coverage: data coverage

Test what it does it to

Print testing example


Content in documents (text, graphics, links, objects)
Types of documents (letter, envelope, book)
Size or structure of documents (empty, huge, complex)
Data about how to print (zoom factor, number of copies)
Data about the hardware

Focusing on data coverage

Test using known equivalence classes and boundaries
Orient testing towards discovery of previously unknown classifications and boundaries, not just testing at known or rumoured boundaries
Increase the quantity and ranges of input, driving toward infinity and zero
Don't forget to drive output values too
Descriptions of elements of a system include data; what data can we vary in those descriptions?
Src: Michael Bolton, Understanding Test Coverage
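One way to drive these data-coverage ideas is a table of boundary and equivalence-class values, sketched here with pytest. The request_copies() function and its 1-999 limit are assumptions standing in for the print dialog's "number of copies" input.

```python
# A sketch of table-driven data coverage for a hypothetical "number of copies"
# input that accepts 1-999. The function and its limits are assumptions; run with pytest.
import pytest

def request_copies(n):
    """Toy stand-in for the system under test."""
    if not isinstance(n, int) or not 1 <= n <= 999:
        raise ValueError("copies must be an integer between 1 and 999")
    return n

@pytest.mark.parametrize("value, should_pass", [
    (1, True), (999, True),          # known boundaries
    (0, False), (1000, False),       # barely outside the boundaries
    (-1, False), (10**9, False),     # drive toward zero and toward "infinity"
    (2.5, False), ("two", False),    # wrong-type equivalence classes
])
def test_copies_data_coverage(value, should_pass):
    if should_pass:
        assert request_copies(value) == value
    else:
        with pytest.raises(ValueError):
            request_copies(value)
```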


Test coverage: time coverage

Print testing example


Test how it's affected by time:

Try different network or port speeds
Print documents right after another and after long intervals
Try time-related constraints: spooling, buffering, timeouts
Try printing hourly, daily, month-end, and year-end reports
Try printing from two workstations at the same time
Try printing again, later

Src: Michael Bolton, Understanding Test Coverage

Extent of test coverage

Smoke and sanity: Can this thing even be tested at all?

Common and critical: Can this thing do the things it must do? Does it handle happy paths and regular input? Can it work?

Complex, extreme and exceptional: Will this thing handle challenging tests, complex data flows, and malformed input, etc.? Will it work?

Src: Michael Bolton, Understanding Test Coverage

Craft: role of test architecture COE

Development of competency in different technical domains

Testing techniques, tools, technologies
Organizing training and coaching

Development of competency in vertical domains

Context-Driven Testing http://www.context-driventesting.com/

Test Architecture COE: In the given situation/context, what testing techniques, practices, tools, technologies do we need to use?

COOPERATIVE GAMING

Every situation (game) is different
No formula for winning the game
Quality of the move in the game is not absolute
Quality of community and communication among members matter enormously

Choosing test techniques, testing tools & technologies

A multi-dimensional problem [KanerDesign]

Forces:

Objectives (of a given testing project)
The context:

Product/software
Quality criteria
Risks
Project factors (constraints, etc.)

No one testing technique fits all needs; you often need many of them.

Experience matters. How do we capture collective experience and make it easily accessible to all?

Cooperative gaming: role of test architecture COE

Influencing, mentoring, coaching, and training execution teams

Techniques, tools, and technologies that are relevant to a given project
Collaboration with specialists on specific types of testing
Definition of a process that forms the basis for optimizing the test architecture lifecycle dependency network
Reviewing and providing inputs and technical assistance on test design and strategy to execution teams

The importance of domain knowledge


An expert tester in Microsoft's Outlook group may not be an expert tester in another context, say Boeing's avionics group

A domain: a loose grouping of similar things

Industry domains (telecom OSS/BSS) and sub-domains (fulfillment, assurance, billing, etc.)
Technical domains (rules and related infrastructure, business process management, Rich Internet Applications, and so forth)
Company-specific domains (Mass Markets Ordering, Out of Region Billing, eCommerce, etc.)

How does it help?

Constrains the set of techniques, practices, tools, and technologies one must learn
Enables testers to use a vocabulary that is understood by others in the same domain
Failures commonly found in a domain can be used to determine risks in the system under test

A poorly stated (and conceived) test strategy

"We will use black box testing, cause-effect graphing, boundary testing, and white box testing to test this product against its specification. Test cases and procedures should manifest the test strategy."

Src: James Bach, "Test Strategy: What is it? What does it look like?"

Test strategy: An example

What is the product?

An application to help people, and teams of people, make important decisions.

What are the key potential risks?

It will suggest the wrong decisions
People will use the product incorrectly
It will incorrectly compare scenarios
Scenarios may become corrupted
It will not be able to handle complex decisions

How could we test the product so as to evaluate the actual risks associated with it?

Test strategy: An example

How could we test the product so as to evaluate the actual risks associated with it?

Understand the underlying algorithm
Simulate the algorithm in parallel
Capability test each major function
Generate large numbers of decision scenarios
Create complex scenarios and compare them
Review documentation and help
Test for sensitivity to user error

References
[Berard94] Berard, Edward V. "Object Oriented Design." In MARC94, pp. 721-729.
[Northrup94] Northrup, Linda M. "Object-Oriented Development." In MARC94, pp. 729-737.
[Beiz95] Beizer, Boris. Black-Box Testing: Techniques for Functional Testing of Software and Systems. John Wiley & Sons, 1995.
[LouTechniques] Lu Lou, "Software Testing Techniques: Technology Maturation and Research Strategy", Institute for Software Research International, Carnegie Mellon University.
[Kaner09] Cem Kaner, Automated Testing @ RIM, 2009.
[BoltonDom06] Michael Bolton, "Master of Your Domain", Better Software, Vol. 8, No. 9, October 2006, http://www.developsense.com/articles/2006-10-MasterOfYourDomain.pdf
[Sweebok01] Guide to the Software Engineering Body of Knowledge (SWEBOK), IEEE Computer Society, May 2001.
[BoltonModel05] Michael Bolton, "Elemental Models", Better Software, Vol. 7, No. 8, October 2005, http://www.developsense.com/articles/2005-10-ElementalModels.pdf
[KanerDomain] Cem Kaner, "Teaching Domain Testing: A Status Report", http://www.testingeducation.org/a/tdtsr.pdf
[KanerDesign] Cem Kaner and James Bach, Black Box Software Testing, Part 8: Test Design, http://www.testingeducation.org/BBST/index.html
[BergerUsecases01] Bernie Berger, "The Dangers of Use Cases Employed as Test Cases", 2001, http://www.testassured.com/docs/Dangers.htm

More references
[NailTesting08] Naik, Sagar, and Piyu Tripathy. Software Testing and Quality Assurance: Theory and Practice. John Wiley & Sons, 2008.
[KanerContext02] Kaner, Cem, James Bach, and Bret Pettichord. "Chapter 2 - Thinking Like a Tester". Lessons Learned in Software Testing: A Context-Driven Approach. John Wiley & Sons, 2002.
[PerryTesting00] Perry, William E. "Chapter 25 - Testing a Data Warehouse". Effective Methods for Software Testing, Second Edition. John Wiley & Sons, 2000. Books24x7.
[NguyenWebTest03] Nguyen, Hung Q., Bob Johnson, and Michael Hackett. "Chapter 20 - Testing Mobile Web Applications". Testing Applications on the Web: Test Planning for Mobile and Internet-Based Systems, Second Edition. John Wiley & Sons, 2003.
[BinderTestPatterns] Binder, Robert. Testing Object-Oriented Systems: Models, Patterns, and Tools.
[PageTestAtMicrosoft09] Page, Alan, Ken Johnston, and Bj Rollison. How We Test Software at Microsoft. Microsoft Press, 2009.
[FarellManagingTesting08] Farrell-Vinay, Peter. Manage Software Testing. Auerbach Publications, 2008.

Version history
Version No. | Date          | Author(s)   | Details of Change
1.0         | March 8, 2010 | Ravi Sharda | First published version
1.1, 1.2    | Nov. 10, 2010 | Ravi Sharda | Made some minor changes to add Web testing, Ajax testing, mobile web testing, etc.
