
Testing is considered a part of the System Development Life Cycle, but it can also be treated as its own cycle, termed the Software Testing Life Cycle or Test Development Life Cycle.


Software Testing Life Cycle consists of the following phases:
1. Planning
2. Analysis
3. Design
4. Execution
5. Cycles
6. Final Testing and Implementation
7. Post Implementation

1. Test Planning (Product Definition Phase):


The test planning phase mainly involves preparation of a test plan. A test plan is a high-level planning document derived from the project plan (if one exists) and details the future course of testing. Sometimes a quality assurance plan, which is broader in scope than a test plan, is also prepared.

Contents of a Test Plan are as follows:


• Scope of testing
• Entry Criteria (when will testing begin?)
• Exit Criteria (when will testing stop?)
• Testing Strategies (Black Box, White Box, etc.)
• Testing Levels (Integration testing, Regression testing, etc.)
• Limitations (if any)
• Planned Reviews and Code Walkthroughs
• Testing Techniques (Boundary Value Analysis, Equivalence Partitioning, etc.; a short sketch follows this list)
• Testing Tools and Databases (automated testing tools, performance testing tools)
• Reporting (how bugs will be reported)
• Milestones
• Resources and Training
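
To illustrate the Boundary Value Analysis and Equivalence Partitioning techniques named above, here is a minimal sketch in Python (using pytest) for a hypothetical is_valid_age() rule that accepts ages 18 to 60. The function, the range and the test data are assumptions made for this example only.

    import pytest

    # Hypothetical rule under test: valid ages are 18..60 inclusive.
    def is_valid_age(age: int) -> bool:
        return 18 <= age <= 60

    # Equivalence partitioning: one representative value per class
    # (below range, inside range, above range).
    # Boundary value analysis: values at and just around 18 and 60.
    @pytest.mark.parametrize("age, expected", [
        (10, False),   # EP: below-range class
        (35, True),    # EP: in-range class
        (75, False),   # EP: above-range class
        (17, False),   # BVA: just below the lower boundary
        (18, True),    # BVA: lower boundary
        (60, True),    # BVA: upper boundary
        (61, False),   # BVA: just above the upper boundary
    ])
    def test_is_valid_age(age, expected):
        assert is_valid_age(age) == expected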

Contents of an SQA Plan are broader than those of a test plan; the IEEE standard for SQA Plan preparation contains the following outline:

• Purpose
• Reference Documents
• Management
• Documentation
• Standards, Practices and Conventions
• Reviews and Audits
• Software Configuration Management
• Problem Reporting and Corrective Action (Software Metrics to be used can be
identified at this stage)
• Tools, Techniques and Methodologies
• Code Control
• Media Control
• Supplier Control
• Records Collection, Maintenance and Retention

2. Test Analysis (Documentation Phase):


The analysis phase is essentially an extension of the planning phase. Whereas the planning phase pertains to high-level plans, the analysis phase is where detailed plans are documented. This is when actual test cases and scripts are planned and documented.

This phase can be further broken down into the following steps:
• Review Inputs: The requirement specification document, feature specification document and other project planning documents are taken as inputs, and the test plan is broken down into lower-level test cases.
• Formats: Generally, a functional validation matrix based on business requirements is created at this stage. Then the test case format is finalized and software metrics are designed. Using a scheduling tool such as Microsoft Project, the testing timeline and milestones are created.
• Test Cases: Based on the functional validation matrix and other input documents, test cases are written, and a mapping is maintained between features and test cases (a traceability sketch follows this list).
• Plan Automation: While creating test cases, the cases that should be automated are identified. Ideally, the test cases relevant for regression testing are identified for automation. Areas for performance, load and stress testing are also identified.
• Plan Regression and Correction Verification Testing: The testing cycles, i.e. the number of times testing will be repeated to verify that bug fixes have not introduced new errors, are planned.
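
As a sketch of the feature-to-test-case mapping mentioned above, the snippet below records a functional validation (traceability) matrix as a simple Python dictionary and reports requirements that no test case covers. The requirement and test case IDs are invented for illustration; real projects usually keep this matrix in a spreadsheet or test management tool.

    # Hypothetical requirement and test case IDs, for illustration only.
    requirements = ["REQ-001", "REQ-002", "REQ-003"]

    traceability_matrix = {
        "REQ-001": ["TC-001", "TC-002"],   # e.g. login validation
        "REQ-002": ["TC-003"],             # e.g. password reset
        # REQ-003 deliberately left unmapped to demonstrate the gap report
    }

    def uncovered(reqs, matrix):
        """Return the requirements that have no test case mapped to them."""
        return [r for r in reqs if not matrix.get(r)]

    print("Uncovered requirements:", uncovered(requirements, traceability_matrix))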

3. Test Design (Architecture Document and Review Phase):


One has to realize that the testing life cycle runs parallel to the software development life cycle. So by the time one reaches this phase, the development team would have created some code, at least a prototype, or at a minimum a design document.
Hence, in the test design (architecture document) phase, all the plans, test cases, etc. from the analysis phase are revised and finalized. In other words, looking at the work product or design, the test cases, test cycles and other plans are finalized, and newer test cases are added. Risk assessment criteria are also developed, and writing of automated testing scripts begins. Finally, the testing reports (especially unit testing reports) are finalized. Quality checkpoints, if any, are included in the test cases based on the SQA Plan.
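
As an example of the kind of automated testing script whose writing begins in this phase, the sketch below exercises a hypothetical discount() business rule with pytest-style test functions. The rule and the module layout are assumptions; in a real project the function would be imported from the product code rather than defined in the test file.

    # test_discount.py -- illustrative automated functional test.
    # A stub of the rule is included so the sketch is self-contained;
    # normally it would be imported, e.g. "from pricing import discount".
    import pytest

    def discount(order_total: float) -> float:
        """Stub business rule: 10% off orders of 100 or more."""
        return order_total * 0.9 if order_total >= 100 else order_total

    def test_no_discount_below_threshold():
        assert discount(50.0) == 50.0

    def test_discount_applied_at_threshold():
        assert discount(100.0) == pytest.approx(90.0)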

4. Test Execution (Unit / Functional Testing Phase):


By this time, the development team would have completed creation of the work products. Of course, the work products would still contain bugs. So, in the execution phase, developers carry out unit testing with the testers' help, if required. Testers execute the test plans, automated testing scripts are completed, stress and performance testing is executed, and white box testing, code reviews, etc. are conducted. As and when bugs are found, they are reported.
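
Simple stress and performance checks can start as timing assertions long before dedicated tools are used. The sketch below times repeated calls to a stand-in function with time.perf_counter and fails if the average call exceeds a budget; the function and the 1 millisecond budget are placeholders chosen for the example.

    import time

    def process_record(record: dict) -> dict:
        """Stand-in for the operation whose performance is being checked."""
        return {key: str(value).upper() for key, value in record.items()}

    def test_process_record_stays_within_time_budget():
        record = {"id": 1, "name": "sample", "status": "new"}
        runs = 1000
        start = time.perf_counter()
        for _ in range(runs):
            process_record(record)
        average = (time.perf_counter() - start) / runs
        assert average < 0.001, f"average {average:.6f}s exceeds the 1 ms budget"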

5. Test Cycle (Re-Testing Phase):


By this time, at least one test cycle (one round of test execution) would have been completed and bugs would have been reported. Once the development team fixes the bugs, a second round of testing begins. This could be mere correction verification testing, i.e. checking only the part of the code that has been corrected. It could also be regression testing, where the entire work product is tested to verify that the correction has not affected other parts of the code.
Hence this process of:
Testing --> Bug reporting --> Bug fixing (and enhancements) --> Retesting
is carried out as planned. This is where automated tests are extremely useful for repeating the same test cases again and again.
During this phase, a review of the test cases and test plan could also be carried out.
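
One way to automate the regression part of each cycle is to tag regression-worthy cases and re-run only that subset after every fix; the sketch below uses a custom pytest marker. The marker name and functions are invented for the example, and the marker would normally also be registered in pytest.ini to avoid an unknown-marker warning.

    import pytest

    def add(a, b):
        return a + b          # stand-in for previously fixed functionality

    @pytest.mark.regression   # custom marker for the regression subset
    def test_add_handles_negative_operand():
        # Guards a hypothetical, previously reported defect from recurring.
        assert add(-1, 1) == 0

    def test_add_basic():
        assert add(2, 3) == 5

    # After each bug-fix build, repeat just the regression cases with:
    #     pytest -m regression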

6. Final Testing and Implementation (Code Freeze Phase):


When the exit criteria are met or the planned test cycles are completed, final testing is done. Ideally, this is system or integration testing. Any remaining stress and performance testing is also carried out. Inputs for process improvement, in terms of software metrics, are given and test reports are prepared. If required, a test release note releasing the product for rollout could be prepared. Other remaining documentation is completed.

7. Post Implementation (Process Improvement Phase):


This phase, though it looks good on paper, is seldom carried out. In it, the testing effort is evaluated and lessons learnt are documented. Software metrics (bug analysis metrics) are analyzed statistically and conclusions are drawn. Strategies to prevent similar problems in future projects are identified, and process improvement suggestions are implemented. Finally, the testing environment is cleaned up, and test cases, records and reports are archived.

Unit Test Checklist

• Is the number of input parameters equal to the number of arguments?
• Do parameter and argument attributes match?
• Do parameter and argument units systems match?
• Is the number of arguments transmitted to called modules equal to the number of parameters?
• Are the attributes of arguments transmitted to called modules equal to the attributes of parameters?
• Is the units system of arguments transmitted to called modules equal to the units system of parameters?
• Are the number of attributes and the order of arguments to built-in functions correct?
• Are any references to parameters not associated with the current point of entry?
• Are input-only arguments altered?
• Are global variable definitions consistent across modules?
• Are constraints passed as arguments?
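
Several of the interface questions above, such as whether the caller passes the right number and type of arguments, can be spot-checked in code. This sketch uses Python's inspect.signature to compare a hypothetical called function's declared parameters against the arguments the caller intends to send; the function and values are invented for the example.

    import inspect

    def compute_interest(principal: float, rate: float, years: int) -> float:
        """Hypothetical called module."""
        return principal * rate * years

    intended_arguments = (1000.0, 0.05, 2)   # what the caller will pass

    def test_argument_count_matches_parameter_count():
        parameters = inspect.signature(compute_interest).parameters
        assert len(parameters) == len(intended_arguments)

    def test_argument_types_match_parameter_annotations():
        annotations = [p.annotation for p in
                       inspect.signature(compute_interest).parameters.values()]
        for value, expected_type in zip(intended_arguments, annotations):
            assert isinstance(value, expected_type)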

When a module performs external I/O, additional interface tests must be conducted:
• File attributes correct?
• OPEN/CLOSE statements correct?
• Format specification matches I/O statement?
• Buffer size matches record size?
• Files opened before use?
• End-of-file conditions handled?
• I/O errors handled?
• Any textual errors in output information?
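
A few of these I/O questions translate directly into unit tests. The sketch below, written against a hypothetical read_all_lines() helper and using pytest's tmp_path fixture, checks that the file is opened before use, that end-of-file is reached cleanly, and that a missing file surfaces as a catchable error.

    import pytest

    def read_all_lines(path):
        """Function under test: opens the file before use, reads to EOF."""
        with open(path, "r", encoding="utf-8") as handle:
            return handle.readlines()       # readlines() stops at end-of-file

    def test_reads_every_record_until_eof(tmp_path):
        data_file = tmp_path / "records.txt"
        data_file.write_text("row1\nrow2\n", encoding="utf-8")
        assert read_all_lines(data_file) == ["row1\n", "row2\n"]

    def test_missing_file_raises_a_handled_error():
        with pytest.raises(FileNotFoundError):
            read_all_lines("no_such_file.txt")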

The local data structure for a module is a common source of errors. Test cases should be designed to uncover errors in the following categories:
• Improper or inconsistent typing
• Erroneous initialization or default values
• Incorrect (misspelled or truncated) variable names
• Inconsistent data types
• Underflow, overflow and addressing exceptions
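
Tests aimed at these categories often just probe default values and numeric boundaries. The sketch below checks the defaults of a hypothetical record type and demonstrates an overflow check using Python's struct module, which raises an error when a value does not fit the declared field size.

    import struct
    from dataclasses import dataclass, field

    import pytest

    @dataclass
    class Counter:
        """Hypothetical local data structure whose defaults are under test."""
        name: str = "unnamed"
        value: int = 0
        history: list = field(default_factory=list)

    def test_defaults_are_initialized_correctly():
        counter = Counter()
        assert counter.name == "unnamed"
        assert counter.value == 0
        assert counter.history == []        # not shared between instances

    def test_signed_byte_overflow_is_detected():
        struct.pack("b", 127)               # largest value a signed byte holds
        with pytest.raises(struct.error):
            struct.pack("b", 128)           # one past the boundary overflows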

From a strategic point of view, the following questions should be addressed:
• Has the component interface been fully tested?
• Have local data structures been exercised at their boundaries?
• Has the cyclomatic complexity of the module been determined?
• Have all independent basis paths been tested?
• Have all loops been tested appropriately?
• Have data flow paths been tested?
• Have all error-handling paths been tested?
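
To make the complexity and basis-path questions concrete, the small invented function below has two decisions, so its cyclomatic complexity is V(G) = decisions + 1 = 3, and three tests, one per independent path, cover every basis path.

    def classify_temperature(celsius: float) -> str:
        """Two decisions, so cyclomatic complexity V(G) = 2 + 1 = 3."""
        if celsius < 0:
            return "freezing"
        elif celsius < 30:
            return "moderate"
        else:
            return "hot"

    # One test per independent basis path.
    def test_path_freezing():
        assert classify_temperature(-5) == "freezing"

    def test_path_moderate():
        assert classify_temperature(20) == "moderate"

    def test_path_hot():
        assert classify_temperature(35) == "hot"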

Validation Test Checklist

During validation, the software is tested against requirements. This checklist addresses some of the key issues that will lead to effective validation. For this checklist, the more questions that elicit a negative response, the higher the risk that validation testing will not adequately achieve its purpose.

• Is a complete software requirements specification available?
• Are requirements bounded?
• Have equivalence classes been defined to exercise input?
• Have boundary tests been derived to exercise the software at its boundaries?
• Have test suites been developed to validate each software function?
• Have test suites been developed to validate all data structures?
• Have test suites been developed to assess software performance?
• Have test suites been developed to test software behavior?
• Have test suites been developed to fully exercise the user interface?
• Have test suites been developed to exercise all error handling?
• Are use-cases available to perform scenario testing?
• Is statistical use testing (SEPA, 5/e, Chapter 26) being considered as an element of
validation?
• Have tests been developed to exercise the software against procedures defined in user
documentation and help facilities?
• Have error reporting and correction mechanisms been established?
• Has a deficiency list been created?
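
Where use cases exist, scenario tests simply string the steps of one use case together and check the end state. The sketch below walks a hypothetical "add items and review total" scenario against a stub shopping cart; the class, item names and prices are invented so the example stands alone.

    class Cart:
        """Stub standing in for the application under validation."""
        def __init__(self):
            self.items = {}     # item name -> quantity
            self.prices = {}    # item name -> unit price

        def add(self, name, price, qty=1):
            self.items[name] = self.items.get(name, 0) + qty
            self.prices[name] = price

        def total(self):
            return sum(self.prices[n] * q for n, q in self.items.items())

    def test_scenario_add_items_and_review_total():
        # Use case: the shopper adds two products, then reviews the total.
        cart = Cart()
        cart.add("notebook", 3.50, qty=2)
        cart.add("pen", 1.25)
        assert cart.items == {"notebook": 2, "pen": 1}
        assert cart.total() == 8.25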

Test Case Checklist


Quality Attributes
• Accurate: tests what the description says it will test.
• Economical: has only the steps needed for its purpose.
• Repeatable, self-standing: same results no matter who tests it.
• Appropriate: for both immediate and future testers.
• Traceable: to a requirement.
• Self-cleaning: returns the test environment to clean state.
Structure and testability
• Has a name and number
• Has a stated purpose that includes what requirement is being tested
• Has a description of the method of testing
• Specifies setup information - environment, data, prerequisite tests, security access
• Has actions and expected results
• States if any proofs, such as reports or screen grabs, need to be saved
• Leaves the testing environment clean
• Uses active case language
• Does not exceed 15 steps
• Matrix does not take longer than 20 minutes to test
• Automated script is commented with purpose, inputs, expected results
• Setup offers alternative to prerequisite tests, if possible
• Is in correct business scenario order with other tests
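
Pulling several of the structure and testability items above together, the sketch below shows one way a single automated test case can carry its ID, purpose, requirement reference, setup, action, expected result and cleanup. The case ID, requirement number and file names are invented for illustration.

    import pytest

    @pytest.fixture
    def workspace(tmp_path):
        """Setup: create the working directory the case needs.
        pytest manages the temporary directory, so the test leaves the
        environment clean without manual teardown."""
        area = tmp_path / "workspace"
        area.mkdir()
        return area

    def test_tc_042_save_report_creates_one_file(workspace):
        """
        Test case : TC-042 (hypothetical ID)
        Purpose   : Verify that saving a report writes exactly one file.
        Traces to : hypothetical requirement REQ-017.
        Method    : Functional, positive path.
        """
        # Action: the step the tester would perform.
        report = workspace / "daily_report.txt"
        report.write_text("totals: 42", encoding="utf-8")

        # Expected results: one readable report file exists in the workspace.
        assert report.read_text(encoding="utf-8") == "totals: 42"
        assert len(list(workspace.iterdir())) == 1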

Configuration management
• Employs naming and numbering conventions
• Saved in specified formats, file types
• Is versioned to match software under test
• Includes test objects needed by the case, such as databases
• Stored as read-only
• Stored with controlled access
• Stored where network backup operates
• Archived off-site
