Introduction
My name is Jim Giangrande. I have been involved in testing for 20 years and hold degrees in economics, computer science, and business. The focus today is on when to stop testing, looked at mostly from a functional-test perspective; I will not be covering specialized testing such as performance, stress, volume, interface design, or disaster recovery. This is a high-level discussion, intended more to make you think than to provide specific solutions.
Software Coverage
There are many ways software can be considered covered, and a distinction to draw between necessary and sufficient testing. A white box approach is based on program structure and content; a black box approach is functionality based or usage based; a gray box approach is a combination of white and black. Note that complete coverage from a decision perspective is not complete logic coverage: with two conditions, each true or false, full logic coverage requires four tests, (T,T), (T,F), (F,T), (F,F), and you must ask whether all of these are valid and possible. Outside-the-box approaches include random testing, exploratory testing, and statistical testing.
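The decision-versus-logic distinction above can be sketched with a small hypothetical predicate (the function and test values are illustrative, not from the talk): decision coverage is satisfied by two tests, while full logic coverage of the same decision needs all four condition combinations.

```python
def eligible(age_ok: bool, credit_ok: bool) -> bool:
    """Approve only when both conditions hold (one two-way decision)."""
    if age_ok and credit_ok:
        return True
    return False

# Decision coverage: two tests suffice, one per branch outcome.
decision_tests = [(True, True), (False, True)]

# Full logic (multiple-condition) coverage: all four combinations.
condition_tests = [(True, True), (True, False), (False, True), (False, False)]

for age_ok, credit_ok in condition_tests:
    # Each combination exercises a distinct truth assignment of the decision.
    assert eligible(age_ok, credit_ok) == (age_ok and credit_ok)
```

Note that only the (True, True) case takes the true branch; the other three all take the false branch, which is why decision coverage alone can leave condition combinations untested.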
What's Missing?
Several items are missing from the structural testing approaches described above. The concept of an ideal set of tests is sound, but there is little guidance on how to achieve it: how to develop, design, or find that ideal set.
Functional Coverage
Cover all paths: happy paths and alternative paths (variants of the happy path). You also need to look at exception handling and how to deliberately drive each exception.
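A minimal sketch of the three path types, using a hypothetical function invented for illustration: a happy path, an alternative path, and a deliberately driven exception path.

```python
def average(values):
    """Return the mean; raise ValueError on empty input (the exception path)."""
    if not values:
        raise ValueError("cannot average an empty list")
    return sum(values) / len(values)

# Happy path: typical valid input.
assert average([2.0, 4.0]) == 3.0

# Alternative path: a single element, a variant of the happy path.
assert average([5.0]) == 5.0

# Exception path: drive the error condition on purpose,
# rather than hoping production input never triggers it.
try:
    average([])
    raised = False
except ValueError:
    raised = True
assert raised
```

The key point is the last case: exception paths rarely get covered unless a test constructs the failing input explicitly.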
Business Usage Coverage
Critical for credibility with the customer or business user.
It cannot be the only kind of testing done; it is too narrow and will not easily find certain classes of defects.
Using Metrics
This is an area of test guidance that is underused. You need to be able to analyze past test results (defects found) and translate them into better tests: additional tests focused on specific areas, functions, interfaces, or modules.
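One simple way to turn past defect finds into test focus is to count defects per module and target the clusters. The defect log below is entirely hypothetical sample data, not from any real project:

```python
from collections import Counter

# Hypothetical defect log: (module, severity) pairs from past test cycles.
defects = [
    ("billing", "high"), ("billing", "high"), ("billing", "low"),
    ("auth", "high"), ("reports", "low"), ("billing", "medium"),
]

# Count defects per module to see where they cluster.
by_module = Counter(module for module, _ in defects)

# Focus additional tests on modules with repeated historical defects.
focus = [module for module, count in by_module.most_common() if count >= 2]
print(focus)  # in this sample, billing dominates
```

The threshold of two defects is arbitrary here; in practice the cutoff would come from the project's own defect history and severity weighting.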
No Silver Bullets
It is hard to know when to stop, so use multiple approaches:
- Build a good understanding of functionality, program structure, business usage, and interfaces to other systems
- Do historical analysis of code reviews and defect sources
- Use ongoing test metrics to focus where to do more extensive testing and where to beef up test cases (error run rates, clustering, types of errors, etc.)
- Use defect prediction models or past experience to gauge expected defect levels and severities
- Assess test data adequacy on a program-by-program basis (one size does not fit all)
- Don't expect perfection (testers always do); learn from experience and do a better job in the next cycle or project
Interface Testing
The purpose of interface testing is to exercise the system's interfaces, particularly its external interfaces. The emphasis is on verifying the exchange of data, transmission and control, and processing times. External interface testing usually occurs as part of System Test.
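Verifying the exchange of data across an interface often comes down to a round-trip check: what one side sends is exactly what the other side receives, with all required fields intact. A minimal sketch, assuming a hypothetical JSON message format (the field names and schema are illustrative, not from the talk):

```python
import json

# Hypothetical external interface schema: required fields per message.
REQUIRED_FIELDS = {"account_id", "amount", "timestamp"}

def encode(record: dict) -> str:
    """Serialize a record for transmission across the interface."""
    return json.dumps(record, sort_keys=True)

def decode(payload: str) -> dict:
    """Parse an incoming payload and verify required fields arrived intact."""
    record = json.loads(payload)
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        raise ValueError(f"payload missing fields: {sorted(missing)}")
    return record

# Round-trip check: what was sent is exactly what is received.
sent = {"account_id": "A-1", "amount": 19.95, "timestamp": "2024-01-01T00:00:00Z"}
assert decode(encode(sent)) == sent
```

Real interface tests would add checks for transmission failures and processing-time limits; this sketch covers only the data-exchange half of the emphasis described above.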