Elaine Weyuker AT&T Labs Research Florham Park, NJ November 11, 2002
Goals of Testing
Detect faults.
Establish confidence in software.
Evaluate properties of software: reliability, performance, memory usage, security, usability.
Stages of Testing
Testing in the Small
Unit Testing Feature Testing Integration Testing
Unit Testing
Tests the smallest individually executable code units. Usually done by programmers. Test cases might be selected based on code, specification, intuition, etc. Tools: test driver/harness, code coverage analyzer, automatic test case generator.
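A test driver for a single unit can be sketched with Python's standard unittest harness. The unit under test (classify_triangle) and the test cases are hypothetical illustrations, chosen from the specification, boundaries, and intuition as described above:

```python
import unittest

def classify_triangle(a, b, c):
    """Hypothetical unit under test: classify a triangle by side lengths."""
    if a <= 0 or b <= 0 or c <= 0 or a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

class TriangleTest(unittest.TestCase):
    # Test cases selected from the specification and intuition.
    def test_equilateral(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")

    def test_isosceles(self):
        self.assertEqual(classify_triangle(3, 3, 2), "isosceles")

    def test_degenerate_boundary(self):
        # Boundary case: sides that just fail the triangle inequality.
        self.assertEqual(classify_triangle(1, 2, 3), "invalid")

if __name__ == "__main__":
    unittest.main(exit=False)  # run the suite without exiting the process
```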
Integration Testing
Tests interactions between two or more units or components. Usually done by programmers. Emphasizes interfaces. Issues: In what order are units combined? How do you assure the compatibility and correctness of externally-supplied components?
Integration Testing
How are units integrated? What are the implications of this order? Top-down => need stubs; top-level tested repeatedly. Bottom-up => need drivers; bottom-levels tested repeatedly. Critical units first => stubs & drivers needed; critical units tested repeatedly.
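The stub idea above can be sketched as follows. In this hypothetical top-down example, the top-level ReportGenerator is tested before the real database unit exists, so a hand-written stub stands in for it (all class and method names are illustrative):

```python
class DatabaseStub:
    """Stands in for the real, not-yet-integrated database unit."""
    def fetch_balances(self):
        # Canned data chosen by the tester, not a real query.
        return {"alice": 120, "bob": -30}

class ReportGenerator:
    def __init__(self, db):
        self.db = db  # dependency is injected, so a stub can replace it

    def overdrawn_accounts(self):
        return sorted(name for name, bal in self.db.fetch_balances().items()
                      if bal < 0)

# Top-down integration: exercise the top level against the stub.
report = ReportGenerator(DatabaseStub())
print(report.overdrawn_accounts())  # → ['bob']
```

When the real database unit is integrated later, the same ReportGenerator tests are rerun with the stub replaced, which is why the top level gets tested repeatedly.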
Integration Testing
Potential Problems: Inadequate unit testing. Inadequate planning & organization for integration testing. Inadequate documentation and testing of externally-supplied components.
Stages of Testing
Testing in the Large
System Testing, End-to-End Testing, Operations Readiness Testing, Beta Testing, Load Testing, Stress Testing, Performance Testing, Reliability Testing, Regression Testing
System Testing
Test the functionality of the entire system. Usually done by professional testers.
Subdomains are not necessarily disjoint, even though the testing literature frequently refers to them as partitions.
Operational Distributions
An operational distribution is a probability distribution that describes how the system is used in the field.
Can be difficult and expensive to collect necessary data. Not suitable if the usage distribution is uniform (which it never is). Does not take consequence of failure into consideration.
Look at characteristics of the input domain or subdomains. Consider typical, boundary, and near-boundary cases (these can sometimes be generated automatically). This sort of boundary analysis may be meaningless for non-numeric inputs: what are the boundaries of {Rome, Paris, London, ...}? A similar analysis can be applied to output values, producing output-based test cases.
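A minimal sketch of boundary-value generation for a numeric input, assuming the input is specified as an integer in a closed range [lo, hi]; the helper name is hypothetical:

```python
def boundary_cases(lo, hi):
    """For an input specified as an integer in [lo, hi], return typical,
    boundary, near-boundary, and just-outside test values."""
    mid = (lo + hi) // 2  # a typical, interior value
    return sorted({lo - 1, lo, lo + 1, mid, hi - 1, hi, hi + 1})

print(boundary_cases(1, 100))  # → [0, 1, 2, 50, 99, 100, 101]
```

Note that the just-outside values (lo - 1 and hi + 1) are deliberately invalid inputs, included to check how the system rejects them.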
Random Testing
Random testing involves selecting test cases based on a probability distribution. It is NOT the same as ad hoc testing. Typical distributions are:
uniform: test cases are chosen with equal probability
operational: test cases are chosen according to the operational distribution, so that testing mirrors field usage
If the domain is well-structured, automatic generation can be used, allowing many more test cases to be run than if tests are manually generated. If an operational distribution is used, then it should approximate user behavior.
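The uniform-versus-operational contrast can be sketched with Python's standard random module; the operations and their field-usage weights below are hypothetical:

```python
import random

random.seed(42)  # make the generated suites reproducible

operations = ["withdraw", "balance", "transfer", "payment"]
# Hypothetical operational distribution estimated from field-usage data.
field_weights = [0.60, 0.25, 0.10, 0.05]

# Uniform random testing: every operation is equally likely.
uniform_suite = [random.choice(operations) for _ in range(1000)]

# Operational-distribution testing: frequencies approximate user behavior.
operational_suite = random.choices(operations, weights=field_weights, k=1000)

print(uniform_suite.count("withdraw") / 1000)      # near 0.25
print(operational_suite.count("withdraw") / 1000)  # near 0.60
```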
Risk-based Testing
Risk is the expected loss attributable to failures caused by faults remaining in the software. Risk is based on failure likelihood (likelihood of occurrence) and failure consequence. Risk-based testing therefore selects test cases to minimize risk, making sure the most likely and the highest-consequence inputs are covered.
Risk-based Testing
Example: ATM. Functions: withdraw cash, transfer money, read balance, make payment, buy train ticket. Attributes: security, ease of use, availability.
Occurrence Likelihood   Failure Consequence   Priority (L x C)
High = 3                High = 3              9
Medium = 2              Medium = 2            4
Low = 1                 Low = 1               1
Low = 1                 High = 3              3
High = 3                Low = 1               3
Medium = 2              High = 3              6
The same rows, sorted by descending priority:

Occurrence Likelihood   Failure Consequence   Priority (L x C)
High = 3                High = 3              9
Medium = 2              High = 3              6
Medium = 2              Medium = 2            4
Low = 1                 High = 3              3
High = 3                Low = 1               3
Low = 1                 Low = 1               1
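The priority computation behind these tables can be reproduced in a few lines; sorting by descending priority yields the ordering of the second table:

```python
LEVEL = {"High": 3, "Medium": 2, "Low": 1}

# The six (likelihood, consequence) pairs from the first table.
items = [("High", "High"), ("Medium", "Medium"), ("Low", "Low"),
         ("Low", "High"), ("High", "Low"), ("Medium", "High")]

# Priority = likelihood level x consequence level.
scored = [(l, c, LEVEL[l] * LEVEL[c]) for l, c in items]

# Sort by descending priority to decide what to test first.
scored.sort(key=lambda t: -t[2])
print([p for _, _, p in scored])  # → [9, 6, 4, 3, 3, 1]
```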
Acceptance Testing
The end user runs the system in their environment to evaluate whether it meets their criteria. The outcome determines whether the customer will accept the system. Acceptance testing is often part of a contractual agreement.
Regression Testing
Test modified versions of a previously validated system. Usually done by testers. The goal is to ensure that changes to the system have not introduced new faults (caused the system to regress). The primary issue is how to choose an effective regression test suite from existing, previously run test cases.
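One coverage-based selection sketch: assume each existing test case has been mapped (e.g., by a coverage analyzer) to the units it exercises, and rerun only the tests that touch a changed unit. The test names, unit names, and coverage map here are all hypothetical:

```python
# Hypothetical map from existing test cases to the units they exercise,
# as a coverage analyzer might report it.
coverage = {
    "test_login":    {"auth", "session"},
    "test_transfer": {"accounts", "ledger"},
    "test_balance":  {"accounts"},
    "test_help":     {"ui"},
}

changed_units = {"accounts"}  # units modified in the new version

# Select every previously run test whose covered units intersect the change.
selected = sorted(t for t, units in coverage.items()
                  if units & changed_units)
print(selected)  # → ['test_balance', 'test_transfer']
```

This is a safe over-approximation for the mapped units: tests that cannot reach the change are skipped, while everything that might be affected is rerun.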
White-box Testing
Methods based on the internal structure of code: Statement coverage Branch coverage Path coverage Data-flow coverage
White-box Testing
White-box methods can be used for Test case selection or generation. Test case adequacy assessment. In practice, the most common use of white-box methods is as adequacy criteria after tests have been generated by some other method.
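A hand-instrumented sketch of branch coverage used as an adequacy criterion: run a candidate suite, then check which branches it never took. The withdraw function and the branch labels are illustrative, not a real coverage tool:

```python
branches_taken = set()  # branch ids recorded during test execution

def withdraw(balance, amount):
    if amount <= 0:
        branches_taken.add("amount_invalid")
        raise ValueError("amount must be positive")
    branches_taken.add("amount_valid")
    if amount > balance:
        branches_taken.add("insufficient")
        return balance  # refuse the withdrawal
    branches_taken.add("sufficient")
    return balance - amount

ALL_BRANCHES = {"amount_invalid", "amount_valid", "insufficient", "sufficient"}

# A candidate suite of one test: it exercises only the happy path.
withdraw(100, 40)

# Adequacy check: the suite is branch-adequate only if nothing is missing.
missing = ALL_BRANCHES - branches_taken
print(sorted(missing))  # → ['amount_invalid', 'insufficient']
```

The report tells the tester which inputs to add next: a non-positive amount and an amount exceeding the balance.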
Test Automation
Test execution: Run large numbers of test cases/suites without human intervention. Test generation: Produce test cases by processing the specification, code, or model. Test management: Log test cases & results; map tests to requirements & functionality; track test progress & completeness
More testing can be accomplished in less time. Testing is repetitive, tedious, and error-prone. Test cases are valuable - once they are created, they can and should be used again, particularly during regression testing.
*NIST Report: The Economic Impact of Inadequate Infrastructure for Software Testing, 2002.