
Designed by B. Balakrishna Reddy, Millennium Software Solutions

TESTING REVIEWS

A

Acceptance Testing
The process of comparing a program to its requirements.

Ad-Hoc Testing
Appropriate, and very often indicated, when the tester wants to become familiar with the product, or when the technical/testing materials are not 100% complete. It is largely based on a general understanding of software product functionality and testing, and on normal human common sense.
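
As a sketch of the Acceptance Testing idea above, the pytest example below checks a program against two stated requirements; the discount() function, the requirement wording, and the values are illustrative assumptions, not part of the glossary.

```python
# Acceptance-test sketch (pytest). The requirements, the discount() function,
# and all values are hypothetical illustrations, not taken from the glossary.

def discount(order_total: float) -> float:
    """Toy implementation under test: 10% off orders of 100.00 or more."""
    return order_total * 0.9 if order_total >= 100.0 else order_total


def test_requirement_r1_discount_applied_at_threshold():
    # Assumed requirement R1: orders of 100.00 or more receive a 10% discount.
    assert discount(100.0) == 90.0


def test_requirement_r2_no_discount_below_threshold():
    # Assumed requirement R2: orders under 100.00 are charged in full.
    assert discount(99.99) == 99.99
```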

B-C

Build Acceptance Test
A simple check of a product's functionality to determine whether the product is capable of being tested to a greater extent. Every new build should undergo a build acceptance test before further testing is executed. Examples of build acceptance criteria:
- The product can be installed with no crashes or errors that terminate the installation. (Development needs to install the software from the same source accessed by QA, e.g. Drop Zone, CD-ROM, Electronic Software Distribution archives, etc.)
- Clients can connect to the associated servers.
- Simple client/server communication can be achieved.

Bottom-Up
Start testing from the bottom of the program, with the lowest-level modules. In the bottom-up strategy the program as a whole does not exist until the last module is added.

CET (Customer Experience Test)
An in-house test performed before the Alpha, Beta, and FCS milestones, used to determine whether the product can be installed and used without any problems, assistance, or support from others.
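
A minimal sketch of what an automated Build Acceptance Test (see the entry above) might look like in Python; the server endpoint, install log path, and "FATAL" marker are assumptions standing in for a real product's install output and client/server setup.

```python
# Build-acceptance (smoke) sketch. SERVER_URL and INSTALL_LOG are hypothetical;
# adapt them to the product's real install output and server address.
import os
import socket

SERVER_URL = ("localhost", 8080)          # assumed client/server endpoint
INSTALL_LOG = "install.log"               # assumed installer log file


def test_install_completed_without_fatal_errors():
    # Criterion: the product installed with no errors that terminated the install.
    assert os.path.exists(INSTALL_LOG), "installer did not produce a log"
    with open(INSTALL_LOG, encoding="utf-8") as log:
        assert "FATAL" not in log.read()


def test_client_can_connect_to_server():
    # Criterion: clients can connect to the associated server.
    with socket.create_connection(SERVER_URL, timeout=5):
        pass  # simple client/server communication achieved if no exception
```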

Client-Server Testing
Testing systems that operate in client/server environments.

Compatibility Test
This test is used to test compatibility between different client/server version combinations, as well as with other supported products.

Confidence Test
The confidence test ensures that a product functions as expected by verifying that platform-specific bugs are not introduced and that functionality has not regressed from drop to drop. A typical confidence test is designed to touch all major areas of a product's functionality. These tests are run regularly throughout the remaining development cycle once the Functional Freeze milestone is reached.

Configuration Tests
These tests run the product across various system configuration combinations. Examples of configurations:
- Cross-platform (e.g. Windows clients against a UNIX server).
- Client/server network configurations.
- Operating system and database combinations (including version combinations).
- Web servers and web browsers (for web products).
The system configurations to test are determined from the product's compatibility matrix. This test is sometimes called a 'Platform Test'.
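
The Configuration Tests entry above is commonly automated by parameterizing one functional check over the compatibility matrix; the sketch below assumes pytest and a hypothetical connect() helper and configuration matrix.

```python
# Configuration-test sketch using pytest.mark.parametrize. The configuration
# matrix and the connect() helper are hypothetical stand-ins for a real
# compatibility matrix and product client.
import pytest

CONFIG_MATRIX = [
    ("Windows", "Oracle 19c"),
    ("Windows", "PostgreSQL 15"),
    ("UNIX", "Oracle 19c"),
    ("UNIX", "PostgreSQL 15"),
]


def connect(client_os: str, database: str) -> bool:
    """Placeholder for launching the client on `client_os` against `database`."""
    return True  # assume success so the sketch runs standalone


@pytest.mark.parametrize("client_os,database", CONFIG_MATRIX)
def test_client_connects_on_configuration(client_os, database):
    # The same functional check is repeated for every supported combination.
    assert connect(client_os, database)
```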

D-E

Depth Test
The depth tests are designed to test all of the product's functionality in depth.

Error Test
The error test is designed to test the dialogs, alerts, and other feedback provided to the user when an error situation occurs. The difference between this test and a Negative Test is that an Error Test simply verifies that the correct dialogs are seen, whereas the Negative Test is primarily looking at the robustness and recovery facets.

Event-Driven
Testing event-driven processes, such as unpredictable sequences of interrupts from input devices or sensors, or sequences of mouse clicks in a GUI environment.

F-G

Final Installation Test
Verification that the final media, prior to hand-off to Operations for duplication, contains the correct code that was previously tested and is installable on all the supported platforms and databases. The product demo is executed and the product Release Notes are verified.

Functionality Test
This is designed to test the full functionality, features, and user interfaces of the software based upon the functional specifications.

Full Test
A full test is Build Acceptance + Sanity + Confidence + Depth. It exercises the full functionality, features, and user interfaces of the software based upon the functional specifications.

Graphical User Interface (GUI)
Testing the front-end user interfaces of applications that use GUI support systems and standards such as MS Windows or Motif.

GUI Roadmaps
A step-by-step walkthrough of a tool or application, exercising each screen or window's menus, toolbars, and dialog boxes to verify the execution and correctness of the Graphical User Interface. Typically this is handled by automated scripts and is rarely used as a manual test, due to the low number of bugs found this way.
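
The GUI Roadmaps entry above mentions automated scripts; the Selenium sketch below, with a hypothetical URL, window title, and element IDs, shows the kind of menu-and-dialog walkthrough such a script might perform.

```python
# GUI-roadmap sketch with Selenium WebDriver. The URL, window title, and
# element IDs are hypothetical; a real roadmap would walk every menu,
# toolbar, and dialog of the application under test.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("http://localhost:8080/app")               # assumed application URL
    assert "MyApp" in driver.title                         # window title is correct

    driver.find_element(By.ID, "menu-file").click()        # open the File menu
    driver.find_element(By.ID, "menu-file-open").click()   # open the Open dialog
    assert driver.find_element(By.ID, "open-dialog").is_displayed()

    driver.find_element(By.ID, "open-dialog-cancel").click()  # close the dialog
finally:
    driver.quit()
```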

H-I

J-K-L-M-N-O

Module Testing
To test a large program it is necessary to use module testing. Module testing (or unit testing) is the process of testing individual subprograms (small blocks) rather than testing the program as a whole. Module testing eases the task of debugging: when an error is found, it is known in which particular module it lies.

Multi-user Test
Test with the maximum number of users specified in the design working concurrently, to simulate the real environment in which users use the product.
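
A small module (unit) test sketch for the Module Testing entry above, written with pytest; the word_count() subprogram is a made-up example, not part of the glossary.

```python
# Module (unit) test sketch. word_count() stands in for one small subprogram
# tested on its own rather than through the whole program.

def word_count(text: str) -> int:
    """Example module under test: counts whitespace-separated words."""
    return len(text.split())


def test_counts_words_in_simple_sentence():
    assert word_count("testing one two three") == 4


def test_empty_string_has_zero_words():
    assert word_count("") == 0
```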

Negative Test
Tests that deliberately introduce an error to check an application's behavior and robustness. For example, erroneous data may be entered, or attempts made to force the application to perform an operation that it should not be able to complete. Generally a message box is generated to inform the user of the problem; if the program terminates, it should exit gracefully.

Object-Oriented
Testing systems designed or coded using an object-oriented approach or development environment, such as C++ or Smalltalk.
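
A hedged sketch of the Negative Test entry above: erroneous data is deliberately fed to a hypothetical parse_age() routine, and pytest checks that it fails cleanly with a clear error instead of crashing.

```python
# Negative-test sketch (pytest): deliberately feed erroneous data and check
# that the code fails gracefully with a clear error rather than crashing.
import pytest


def parse_age(value: str) -> int:
    """Hypothetical input parser: ages must be whole numbers from 0 to 150."""
    age = int(value)                      # raises ValueError for non-numeric input
    if not 0 <= age <= 150:
        raise ValueError(f"age out of range: {age}")
    return age


def test_non_numeric_input_is_rejected():
    with pytest.raises(ValueError):
        parse_age("forty-two")            # erroneous data a user might enter


def test_out_of_range_input_is_rejected():
    with pytest.raises(ValueError, match="out of range"):
        parse_age("900")                  # operation that should not succeed
```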

P

Parallel Testing
Testing by processing the same (or at least a closely comparable) test workload against both the existing and the new version of a product, then comparing the results.
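
A sketch of Parallel Testing under stated assumptions: tax_v1() stands in for the existing version, tax_v2() for the new one, and the same small workload is run through both and compared.

```python
# Parallel-testing sketch: run the same workload through the existing and the
# new version of a routine and compare results. Both implementations are
# hypothetical stand-ins for the old and new product versions.

def tax_v1(amount: float) -> float:
    """Existing (baseline) implementation."""
    return round(amount * 0.07, 2)


def tax_v2(amount: float) -> float:
    """New implementation that should produce the same results."""
    return round(amount * 7 / 100, 2)


def test_new_version_matches_existing_version():
    workload = [0.0, 9.99, 100.0, 12345.67]      # closely comparable test workload
    for amount in workload:
        assert tax_v2(amount) == tax_v1(amount), f"divergence for {amount}"
```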

Performance
Measurement and prediction of performance (e.g. response time and/or throughput) for a given benchmark workload.
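
A minimal performance-measurement sketch; operation() is a hypothetical unit of product work, and the workload size, response-time percentile, and throughput calculation are illustrative choices.

```python
# Performance-test sketch: measure response time and throughput for a given
# benchmark workload. The operation() being timed is a hypothetical stand-in.
import time


def operation() -> None:
    """Placeholder for one unit of product work (e.g. one request)."""
    sum(range(10_000))


def measure(iterations: int = 1_000) -> None:
    start = time.perf_counter()
    latencies = []
    for _ in range(iterations):
        t0 = time.perf_counter()
        operation()
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    latencies.sort()
    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"throughput: {iterations / elapsed:.1f} ops/s")
    print(f"p95 response time: {p95 * 1000:.3f} ms")


if __name__ == "__main__":
    measure()
```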

Phased Approach
A testing strategy where test cases are developed in stages so that a minimally acceptable level of testing can be completed at any time. As new features are coded and frozen, they receive priority for a given amount of time, so that a concentrated effort is directed toward testing those new features before the effort returns to validating the pre-existing functionality. When no new features are available, pre-existing features are targeted, with priorities set by the Project Leads. The levels are:
1st level - Build Acceptance Test
2nd level - Sanity Test
3rd level - Confidence Test
4th level - Depth Test
5th level - Error, Negative, and other Tests
6th level - System level tests
A condensed five-level variant of the same approach: 1st level - Minimal Acceptance Test; 2nd level - Confidence Tests; 3rd level - Full Functionality Test; 4th level - Error, Negative, and other Tests; 5th level - System level tests.
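
One possible way (an assumption, not something the glossary prescribes) to operationalize the Phased Approach is to tag tests with their level using pytest markers, so a minimally acceptable subset can be run at any time with `pytest -m <level>`.

```python
# Phased-approach sketch: tag tests with their level so a minimally acceptable
# subset can be run at any time, e.g. `pytest -m build_acceptance` or
# `pytest -m "build_acceptance or sanity"`. The marker names are assumptions.
import pytest

# In pytest.ini (or pyproject.toml) the markers would be registered, e.g.:
#   markers =
#       build_acceptance: 1st level - build acceptance tests
#       sanity: 2nd level - sanity tests
#       confidence: 3rd level - confidence tests
#       depth: 4th level - in-depth functional tests


@pytest.mark.build_acceptance
def test_product_starts():
    assert True  # placeholder: product launches without errors


@pytest.mark.sanity
def test_high_level_functionality():
    assert True  # placeholder: headline features respond


@pytest.mark.depth
def test_edge_cases_in_depth():
    assert True  # placeholder: exhaustive functional checks
```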

Q

R

Regression Tests
These tests comprehensively re-test the software to validate that all functionality and features of previous builds (or releases) have maintained their integrity. This suite of tests includes the Full Functionality Tests and bug regression tests (automated and manual).
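
A bug-regression sketch for the Regression Tests entry above: each previously fixed defect keeps a test that replays its original failing scenario on every build. The defect number and the strip_markup() routine are hypothetical.

```python
# Bug-regression sketch: each fixed defect gets a test that re-runs its
# original failing scenario on every build. The defect ID and strip_markup()
# function are hypothetical.

def strip_markup(text: str) -> str:
    """Example routine that once crashed on empty input (assumed defect 1234)."""
    if not text:
        return ""
    return text.replace("<b>", "").replace("</b>", "")


def test_defect_1234_empty_input_no_longer_crashes():
    # Replays the originally reported scenario; must keep passing build after build.
    assert strip_markup("") == ""


def test_defect_1234_fix_did_not_break_normal_input():
    assert strip_markup("<b>bold</b>") == "bold"
```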

S

Sanity Test
Sanity tests are subsets of the confidence test and are used only to validate high-level functionality.

Security Testing
Tests how easy it is to break the program's security system.

Stress Test
These tests are used to validate software functionality at its limits (e.g. maximum throughput) and then to test at and beyond those limits.

System Level Test
These tests check factors such as cross-tool testing, memory management, and other operating-system factors.
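
A sketch of a Stress Test under assumptions: a bounded queue with a hypothetical documented limit is exercised exactly at that limit and then pushed beyond it, expecting a controlled failure rather than a crash.

```python
# Stress-test sketch: exercise a routine at its stated limit and beyond it,
# expecting correct results at the limit and a controlled failure past it.
# MAX_ITEMS and BoundedQueue are hypothetical.
import pytest

MAX_ITEMS = 10_000        # assumed documented limit


class BoundedQueue:
    def __init__(self, capacity: int = MAX_ITEMS):
        self.capacity = capacity
        self.items = []

    def enqueue(self, item):
        if len(self.items) >= self.capacity:
            raise OverflowError("queue is full")
        self.items.append(item)


def test_behaves_correctly_at_the_limit():
    q = BoundedQueue()
    for i in range(MAX_ITEMS):
        q.enqueue(i)
    assert len(q.items) == MAX_ITEMS


def test_fails_gracefully_beyond_the_limit():
    q = BoundedQueue()
    for i in range(MAX_ITEMS):
        q.enqueue(i)
    with pytest.raises(OverflowError):
        q.enqueue("one too many")
```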

T-U-V-W-X-Y-Z

Top-Down Strategy
Start testing from the top of the program, with the highest-level modules.

Volume Testing
The process of feeding a program a heavy volume of data.

Usability
The effectiveness, efficiency, and satisfaction with which specified users can achieve specified goals in a particular environment. Synonymous with "ease of use".
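
A Volume Testing sketch, assuming a hypothetical parse_lines() routine and a generated one-million-line input; the point is simply that a heavy volume of data is fed through and fully processed.

```python
# Volume-test sketch: feed the program a heavy volume of data and check it is
# processed completely. The line count and parse_lines() routine are assumptions.
import io


def parse_lines(stream) -> int:
    """Hypothetical routine under test: counts non-empty lines in a stream."""
    return sum(1 for line in stream if line.strip())


def test_one_million_lines_are_processed():
    heavy_input = io.StringIO("record\n" * 1_000_000)   # large generated workload
    assert parse_lines(heavy_input) == 1_000_000
```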
