
Software Testing

Who Tests the Software?

The developer understands the system, but will test "gently" and is driven by "delivery". An independent tester must learn about the system, but will attempt to break it and is driven by quality. The developer and the Independent Test Group (ITG) must work together throughout the software project to ensure that thorough tests are conducted.

Testing Strategy

Testing is the process of exercising a program with the specific intent of finding errors prior to delivery to the end user. It must be planned carefully, to avoid wasting development time and resources, and conducted systematically. A testing strategy identifies the steps to be undertaken, when these steps are undertaken, and how much effort, time, and resources will be required.

Overall Software Testing Strategy

Any testing strategy must incorporate:
o Test planning
o Test case design
o Test execution
o Resultant data collection and evaluation
The strategy should provide guidance for the practitioners and a set of milestones for the manager. Generic characteristics of the software testing strategies that have been proposed in the literature: testing is viewed in the context of the spiral, and it begins by testing-in-the-small and moves toward testing-in-the-large, through the following levels:
o Unit Testing
o Integration Testing
o Validation Testing
o System Testing

Unit Testing

Focuses on assessing:
o Internal processing logic and data structures within the boundaries of a component (module).
o Proper information flow across module interfaces.
o Local data, to ensure that integrity is maintained.
o Boundary conditions.
o Basis (independent) paths.
o All error-handling paths.
If resources are too scarce for comprehensive unit testing, select the critical or complex modules and unit test only those.
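As a rough illustration of these focus areas, the sketch below unit tests a small, hypothetical function (the function and its rules are assumptions, not taken from these notes), exercising the boundaries of its valid input range and one error-handling path with Python's unittest.

```python
# Minimal sketch of a unit test for a hypothetical module function.
import unittest


def apply_discount(price, percent):
    """Return price reduced by percent; percent must be in [0, 100]."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (1 - percent / 100)


class ApplyDiscountUnitTest(unittest.TestCase):
    def test_boundary_conditions(self):
        # Exercise the boundaries of the valid input domain (0 and 100).
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_error_handling_path(self):
        # Exercise the error-handling path with out-of-range inputs.
        with self.assertRaises(ValueError):
            apply_discount(50.0, -1)
        with self.assertRaises(ValueError):
            apply_discount(50.0, 101)


if __name__ == "__main__":
    unittest.main()
```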

Integration Testing

After unit testing, the individual modules are combined together into a system. The question commonly asked once all the modules have been unit tested is how to put them together, that is, how to test their interfacing. Incremental integration testing strategies:
o Bottom-up integration
o Top-down integration
o Regression testing
o Smoke testing

Bottom-up Integration
An approach where the lowest-level modules are tested first, then used to facilitate the testing of higher-level modules. It is helpful only when all or most of the modules at the same development level are ready. The steps (for a structure in which module C calls modules D and E):
1. Test D and E individually (using a dummy program called a driver).
2. Low-level components are combined into clusters that perform a specific software function.
3. Test C such that it calls D (if an error occurs, we know that the problem is in C or in the interface between C and D).
4. Test C such that it calls E (if an error occurs, we know that the problem is in C or in the interface between C and E).
5. The cluster is tested.
6. Drivers are removed and clusters are combined, moving upward in the program structure.

Top-down Integration
The steps:
1. The main/top module is used as a test driver, and stubs are substituted for the modules directly subordinate to it.
2. Subordinate stubs are replaced one at a time with real modules (following a depth-first or breadth-first approach).
3. Tests are conducted as each module is integrated.
4. On completion of each set of tests, another stub is replaced with a real module.
5. Regression testing may be used to ensure that new errors are not introduced.
The process continues from step 2 until the entire program structure is built. Example steps (for a structure in which module A calls module B):
1. Test A individually (use stubs for the other modules).
2. Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual components.
3. In a depth-first approach, test A such that it calls B (using stubs for the other modules); if an error occurs, we know that the problem is in B or in the interface between A and B.
4. Replace stubs one at a time, depth-first, and re-run the tests.
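The sketch below is one possible way to realize drivers and stubs in code; the module functions and the use of unittest.mock for stubbing are illustrative assumptions, not the method prescribed above.

```python
# Sketch of integration testing with a driver and a stub (the module and
# function names are hypothetical, chosen only to illustrate the approaches).
import unittest
from unittest import mock


# Low-level module (would normally live in its own file).
def fetch_totals():
    """Pretend to read totals from storage; a real module would do I/O."""
    return [10, 20, 30]


# Higher-level module that calls the low-level one.
def build_report():
    """Summarise the totals returned by the lower-level module."""
    totals = fetch_totals()
    return {"count": len(totals), "sum": sum(totals)}


class BottomUpDriver(unittest.TestCase):
    """Bottom-up: a driver exercises the low-level module first."""

    def test_fetch_totals_returns_numbers(self):
        totals = fetch_totals()
        self.assertTrue(all(isinstance(t, (int, float)) for t in totals))


class TopDownWithStub(unittest.TestCase):
    """Top-down: the top module is tested with its subordinate module stubbed."""

    def test_build_report_with_stubbed_dependency(self):
        # Replace the subordinate call with a stub; a failure here points to
        # build_report or to its interface with fetch_totals.
        with mock.patch(f"{__name__}.fetch_totals", return_value=[1, 2, 3]):
            report = build_report()
        self.assertEqual(report, {"count": 3, "sum": 6})


if __name__ == "__main__":
    unittest.main()
```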

Regression Testing
Focuses on retesting after changes are made. In traditional regression testing, we reuse the same tests. In risk-oriented regression testing, we test the same areas as before, but we use different (increasingly complex) tests. Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools.

Smoke Testing
A common approach for creating daily builds for product software. Smoke testing steps:
1. Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
2. A series of tests is designed to expose errors that will keep the build from properly performing its function. The intent should be to uncover "show stopper" errors that have the highest likelihood of throwing the software project behind schedule.
3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily. The integration approach may be top-down or bottom-up.

Validation Testing
Focuses on uncovering errors at the software requirements level. The SRS might contain a Validation Criteria section that forms the basis for a validation-testing approach. Validation-test criteria ensure that:
i. all functional requirements are satisfied,
ii. all behavioral characteristics are achieved,
iii. all content is accurate and properly presented,
iv. all performance requirements are attained and documentation is correct, and
v. usability and other requirements are met.
An important element of the validation process is a configuration review/audit. A series of acceptance tests is conducted to enable the customer to validate all requirements.

System Testing
A series of different tests to verify that system elements have been properly integrated and perform their allocated functions. Types of system tests:
i. Recovery Testing - forces the software to fail in a variety of ways and verifies that recovery is properly performed.
ii. Security Testing - verifies that the protection mechanisms built into a system will, in fact, protect it from improper penetration.
iii. Stress Testing - executes a system in a manner that demands resources in abnormal quantity, frequency, or volume.
iv. Performance Testing - tests the run-time performance of software within the context of an integrated system.
v. Deployment Testing - examines all installation procedures and specialized installation software that will be used by customers, and all documentation that will be used to introduce the software to end users.
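As a minimal illustration of a performance/stress-style check (the operation, the data volume, and the time budget are all assumed for the sketch, not taken from the notes), the test below demands an abnormally large workload and asserts that run time stays within a budget.

```python
# Hedged sketch of a performance/stress-style system check.
import time
import unittest


def sort_records(records):
    """Stand-in for an operation whose run-time performance matters."""
    return sorted(records)


class PerformanceCheck(unittest.TestCase):
    def test_sort_stays_within_time_budget_under_heavy_load(self):
        # Demand an abnormally large volume of data, then check elapsed time.
        records = list(range(1_000_000, 0, -1))
        start = time.perf_counter()
        sort_records(records)
        elapsed = time.perf_counter() - start
        self.assertLess(elapsed, 2.0, "run-time budget exceeded under load")


if __name__ == "__main__":
    unittest.main()
```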

Test-case Design

Focuses on a set of techniques for the creation of test cases that meet the overall testing objectives and the testing strategy. These techniques provide systematic guidance for designing tests that:
1. Exercise the internal logic and interfaces of every software component/module.
2. Exercise the input and output domains of the program to uncover errors in program function, behaviour, and performance.
For a conventional application, software is tested from two perspectives:
1. White-box testing focuses on the program control structure (the internal program logic). Test cases are derived to ensure that all statements in the program have been executed at least once during testing and that all logical conditions have been exercised. It is performed early in the testing process.
2. Black-box testing examines some fundamental aspect of a system with little regard for the internal logical structure of the software. It is performed during the later stages of testing.
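A small sketch of the black-box perspective, assuming a hypothetical function and specification: the test cases are chosen purely from the specified input and output domains, with no regard for how the function is implemented internally.

```python
# Black-box test cases derived only from a (hypothetical) specification.
import unittest


def classify_triangle(a, b, c):
    """Spec: return 'equilateral', 'isosceles', or 'scalene' for valid sides."""
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"


class TriangleBlackBoxTest(unittest.TestCase):
    """Cases chosen from the specification, not from the code's structure."""

    def test_output_domain_is_covered(self):
        self.assertEqual(classify_triangle(3, 3, 3), "equilateral")
        self.assertEqual(classify_triangle(3, 3, 2), "isosceles")
        self.assertEqual(classify_triangle(3, 4, 5), "scalene")


if __name__ == "__main__":
    unittest.main()
```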

White-box Testing

Using white-box testing methods, you may derive test cases that:
i. guarantee that all independent paths within a module have been exercised at least once,
ii. exercise all logical decisions on their true and false sides,
iii. execute all loops at their boundaries and within their operational bounds, and
iv. exercise internal data structures to ensure their validity.

Deriving Test Cases

Basis path testing: test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing. Steps to derive the test cases (by applying the basis path testing method):
i. Using the design or code, draw the corresponding flow graph.
ii. The flow graph depicts the logical control flow using flow-graph notation (refer to Figure 18.2 on page 486 for a comparison between a flowchart and a flow graph).
iii. Calculate the cyclomatic complexity V(G) of the flow graph.
iv. Determine a basis set of independent paths.
v. Prepare test cases that will force execution of each path in the basis set.
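A worked sketch of these steps on a small, hypothetical function: its flow graph has 6 nodes, 7 edges, and 2 predicate nodes, so V(G) = E - N + 2 = 7 - 6 + 2 = 3 (equivalently, predicate nodes + 1), and three test cases, one per basis path, execute every statement at least once.

```python
# Basis path testing applied to a small hypothetical function.
import unittest


def classify(n):
    """Classify an integer as 'negative', 'zero', or 'positive'."""
    if n < 0:           # predicate node 1
        result = "negative"
    elif n == 0:        # predicate node 2
        result = "zero"
    else:
        result = "positive"
    return result


class BasisPathTests(unittest.TestCase):
    """One test case per path in the basis set (V(G) = 3 paths)."""

    def test_path_negative_branch(self):
        self.assertEqual(classify(-5), "negative")

    def test_path_zero_branch(self):
        self.assertEqual(classify(0), "zero")

    def test_path_positive_branch(self):
        self.assertEqual(classify(7), "positive")


if __name__ == "__main__":
    unittest.main()
```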

When to Stop Testing?

Testing is potentially endless:
i. We cannot test until all the defects are unearthed and removed; it is simply impossible.
ii. At some point, we have to stop testing and ship the software. The question is when.
iii. According to Pan (1999), testing is realistically a trade-off between budget, time, and quality, and it is driven by profit models.
The pessimistic, and unfortunately most often used, approach is to stop testing whenever some or any of the allocated resources (time, budget, or test cases) are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost.
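Purely as an illustration of the optimistic stopping rule, the sketch below encodes both halves of the rule with assumed figures; the failure-intensity estimate and the simple cost model are assumptions for the sketch, not part of the notes or of Pan (1999).

```python
# Illustrative sketch of the "optimistic" stopping rule under assumed numbers.

def should_stop_testing(failures_last_week, hours_tested,
                        required_failures_per_hour,
                        cost_per_escaped_defect,
                        weekly_testing_cost):
    """Return True when either half of the optimistic stopping rule holds."""
    # Crude reliability estimate: observed failure intensity vs. requirement.
    failure_intensity = failures_last_week / hours_tested
    reliability_met = failure_intensity <= required_failures_per_hour
    # Crude benefit estimate: defects we expect to catch next week times the
    # cost each would incur if it escaped to the field.
    expected_benefit = failures_last_week * cost_per_escaped_defect
    benefit_justifies_cost = expected_benefit >= weekly_testing_cost
    return reliability_met or not benefit_justifies_cost


if __name__ == "__main__":
    # Example: 2 failures in 40 hours of testing, requirement of 0.01
    # failures/hour, 500 per escaped defect, 3000 weekly testing cost.
    print(should_stop_testing(2, 40, 0.01, 500, 3000))  # True: benefit < cost
```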
