Table of Contents

1 Introduction to Software Testing
  1.1 Evolution of the Software Testing Discipline
  1.2 The Testing Process and the Software Testing Life Cycle
  1.3 Broad Categories of Testing
  1.4 Widely Employed Types of Testing
  1.5 The Testing Techniques
  1.6 Chapter Summary
2 Black Box and White Box Testing
  2.1 Introduction
  2.2 Black Box Testing
  2.3 Testing Strategies/Techniques
  2.4 Black Box Testing Methods
  2.5 Black Box (vs) White Box
  2.6 White Box Testing
3 GUI Testing
  3.1 Section 1 - Windows Compliance Testing
  3.2 Section 2 - Screen Validation Checklist
  3.3 Specific Field Tests
  3.4 Validation Testing - Standard Actions
4 Regression Testing
  4.1 What is Regression Testing
  4.2 Test Execution
  4.3 Change Request
  4.4 Bug Tracking
  4.5 Traceability Matrix
5 Phases of Testing
  5.1 Introduction
  5.2 Types and Phases of Testing
  5.3 The V-Model
6 Integration Testing
  6.1 Generalization of Module Testing Criteria
7 Acceptance Testing
  7.1 Introduction - Acceptance Testing
  7.2 Factors Influencing Acceptance Testing
  7.3 Conclusion
8 System Testing
  8.1 Introduction to System Testing
  8.2 Need for System Testing
  8.3 System Testing Techniques
  8.4 Functional Techniques
  8.5 Conclusion
9 Unit Testing
  9.1 Introduction to Unit Testing
  9.2 Unit Testing Flow
    Results
    Unit Testing - Black Box Approach
    Unit Testing - White Box Approach
    Unit Testing - Field Level Checks
    Unit Testing - Field Level Validations
    Unit Testing - User Interface Checks
  9.3 Execution of Unit Tests
    Unit Testing Flow
    Disadvantage of Unit Testing
    Method for Statement Coverage
    Race Coverage
  9.4 Conclusion
10 Test Strategy
  10.1 Introduction
  10.2 Key Elements of Test Management
  10.3 Test Strategy Flow
  10.4 General Testing Strategies
  10.5 Need for Test Strategy
  10.6 Developing a Test Strategy
  10.7 Conclusion
11 Test Plan
  11.1 What is a Test Plan?
    Contents of a Test Plan
  11.2 Contents (in Detail)
12 Test Data Preparation - Introduction
  12.1 Criteria for Test Data Collection
  12.2 Classification of Test Data Types
  12.3 Organizing the Data
  12.4 Data Load and Data Maintenance
  12.5 Testing the Data
  12.6 Conclusion
13 Test Logs - Introduction
  13.1 Factors Defining the Test Log Generation
  13.2 Collecting Status Data
14 Test Report
  14.1 Executive Summary
15 Defect Management
  15.1 Defect
  15.2 Defect Fundamentals
  15.3 Defect Tracking
  15.4 Defect Classification
  15.5 Defect Reporting Guidelines
16 Automation
  16.1 Why Automate the Testing Process?
  16.2 Automation Life Cycle
  16.3 Preparing the Test Environment
  16.4 Automation Methods
17 General Automation Tool Comparison
  17.1 Functional Test Tool Matrix
  17.2 Record and Playback
  17.3 Web Testing
  17.4 Database Tests
  17.5 Data Functions
  17.6 Object Mapping
  17.7 Image Testing
  17.8 Test/Error Recovery
  17.9 Object Name Map
  17.10 Object Identity Tool
  17.11 Extensible Language
  17.12 Environment Support
  17.13 Integration
  17.14 Cost
  17.15 Ease of Use
  17.16 Support
  17.17 Object Tests
  17.18 Matrix
  17.19 Matrix Score
18 Sample Test Automation Tool
  18.1 Rational Suite of Tools
  18.2 Rational Administrator
  18.3 Rational Robot
  18.4 Robot Login Window
  18.5 Rational Robot Main Window - GUI Script
  18.6 Record and Playback Options
  18.7 Verification Points
  18.8 About SQABasic Header Files
  18.9 Adding Declarations to the Global Header File
  18.10 Inserting a Comment into a GUI Script
  18.11 About Data Pools
  18.12 Debug Menu
  18.13 Compiling the Script
  18.14 Compilation Errors
19 Rational Test Manager
  19.1 Test Manager - Results Screen
20 Supported Environments
  20.1 Operating System
  20.2 Protocols
  20.3 Web Browsers
  20.4 Markup Languages
  20.5 Development Environments
21 Performance Testing
  21.1 What is Performance Testing?
  21.2 Why Performance Testing?
  21.3 Performance Testing Objectives
  21.4 Pre-Requisites for Performance Testing
  21.5 Performance Requirements
22 Performance Testing Process
  22.1 Phase 1 - Requirements Study
  22.2 Phase 2 - Test Plan
  22.3 Phase 3 - Test Design
  22.4 Phase 4 - Scripting
  22.5 Phase 5 - Test Execution
  22.6 Phase 6 - Test Analysis
  22.7 Phase 7 - Preparation of Reports
  22.8 Common Mistakes in Performance Testing
  22.9 Benchmarking Lessons
23 Tools
  23.1 LoadRunner 6.5
  23.2 WebLoad 4.5
  23.3 Architecture Benchmarking
  23.4 General Tests
24 Performance Metrics
  24.1 Client Side Statistics
  24.2 Server Side Statistics
  24.3 Network Statistics
  24.4 Conclusion
25 Load Testing
  25.1 Why is Load Testing Important?
  25.2 When Should Load Testing Be Done?
26 Load Testing Process
  26.1 System Analysis
  26.2 User Scripts
  26.3 Settings
  26.4 Performance Monitoring
  26.5 Analyzing Results
  26.6 Conclusion
27 Stress Testing
  27.1 Introduction to Stress Testing
  27.2 Background to Automated Stress Testing
  27.3 Automated Stress Testing Implementation
  27.4 Programmable Interfaces
  27.5 Graphical User Interfaces
  27.6 Data Flow Diagram
  27.7 Techniques Used to Isolate Defects
28 Test Case Coverage
  28.1 Test Coverage
  28.2 Test Coverage Measures
  28.3 Procedure-Level Test Coverage
  28.4 Line-Level Test Coverage
  28.5 Condition Coverage and Other Measures
  28.6 How Test Coverage Tools Work
  28.7 Test Coverage Tools at a Glance
29 Test Case Points - TCP
  29.1 What is a Test Case Point (TCP)?
  29.2 Calculating the Test Case Points
  29.3 Chapter Summary
1.2 The Testing Process and the Software Testing Life Cycle
Every testing project has to follow the waterfall model of the testing process, given below:
1. Test Strategy & Planning
2. Test Design
3. Test Environment Setup
4. Test Execution
5. Defect Analysis & Tracking
6. Final Reporting
The scope of testing can be tailored to the respective project, but the process mentioned above is common to any testing activity. Software testing has been accepted as a separate discipline to the extent that there is a separate life cycle for the testing activity. Involving software testing in all phases of the
software development life cycle has become a necessity as part of the software quality assurance process. From the requirements study through implementation, testing needs to be done at every phase. The V-Model of the Software Testing Life Cycle, shown below alongside the Software Development Life Cycle, indicates the various phases or levels of testing.
[Figure: SDLC - STLC. The development phases (Requirement Study, High Level Design, Low Level Design, Coding) pair with the testing levels (Production Verification Testing, User Acceptance Testing, System Testing, Integration Testing, Unit Testing).]
System Testing: Testing the software for the required specifications on the intended hardware.
Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria, which enables a customer to determine whether to accept the system or not.
Performance Testing: Evaluating the time taken or response time of the system to perform its required functions, in comparison with the stated performance requirements.
Stress Testing: Evaluating a system beyond the limits of the specified requirements or system resources (such as disk space, memory, or processor utilization) to ensure the system does not break unexpectedly.
Load Testing: Load testing, a subset of stress testing, verifies that a web site can handle a particular number of concurrent users while maintaining acceptable response times.
Alpha Testing: Testing of a software product or system conducted at the developer's site by the customer.
Beta Testing: Testing conducted at one or more customer sites by the end users of a delivered software product or system.
This chapter covered the introduction and basics of software testing: the evolution of software testing, the testing process and life cycle, the broad categories of testing, the widely employed types of testing, and the testing techniques.
Black-box test design treats the system as a literal "black box", so it doesn't explicitly use knowledge of the internal structure. It is usually described as focusing on testing functional requirements. Synonyms for black-box include: behavioral, functional, opaque-box, and closed-box. White-box test design allows one to peek inside the "box", and it focuses specifically on using internal knowledge of the software to guide the selection of test data. It is used to detect errors by means of execution-oriented test cases. Synonyms for white-box include: structural, glass-box and clear-box. While black-box and white-box are terms that are still in popular use, many people prefer the terms "behavioral" and "structural". Behavioral test design is slightly different from black-box test design because the use of internal knowledge isn't strictly forbidden, but it's still discouraged. In practice, it hasn't proven useful to use a single test design method. One has to use a mixture of different methods so that they aren't hindered by the limitations of a particular one. Some call this "gray-box" or "translucent-box" test design, but others wish we'd stop talking about boxes altogether!
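To make the distinction concrete, here is a minimal sketch in Java. The max3 utility and both sets of test cases are hypothetical: the black-box cases are derived only from the stated specification, while the white-box cases are chosen by looking at the branches in the implementation (run with java -ea so the assertions are checked).

    // Hypothetical utility under test: returns the largest of three ints.
    final class MathUtil {
        static int max3(int a, int b, int c) {
            int m = (a > b) ? a : b;
            return (m > c) ? m : c;
        }
    }

    public class BoxStyles {
        public static void main(String[] args) {
            // Black-box: cases derived only from the specification
            // ("return the largest of the three arguments").
            assert MathUtil.max3(1, 2, 3) == 3;
            assert MathUtil.max3(5, 5, 5) == 5;
            assert MathUtil.max3(-1, -2, -3) == -1;

            // White-box: cases chosen so that every branch of the
            // implementation (a > b, then m > c) is taken both ways.
            assert MathUtil.max3(9, 1, 0) == 9;   // a > b true,  m > c true
            assert MathUtil.max3(1, 9, 0) == 9;   // a > b false, m > c true
            assert MathUtil.max3(1, 0, 9) == 9;   // m > c false
            System.out.println("all cases passed");
        }
    }

Note that the same method is exercised either way; only the source of the test cases differs, which is exactly the behavioral/structural distinction described above.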
An example of the organisational problem of implementing a translation memory is the language service of a big automobile manufacturer, where the major implementation problem is not the technical environment but the fact that many clients still submit their orders as print-outs, that neither source texts nor target texts are properly organised and stored, and, last but not least, that individual translators are not too motivated to change their working habits. Laboratory tests are mostly performed to assess the general usability of the system. Due to the high cost of laboratory equipment, laboratory tests are mostly performed only at big software houses such as IBM or Microsoft. Since laboratory tests provide testers with many technical possibilities, data collection and analysis are easier than for field tests.
- May leave many program paths untested.
- Cannot be directed toward specific segments of code which may be very complex (and therefore more error-prone).
- Most testing-related research has been directed toward glass box testing.
Let's use this system to understand and clarify the characteristics of black box and white box testing.

People: Who does the testing?
Some people know how the software works (developers) and others just use it (users). Accordingly, any testing by users or other non-developers is sometimes called black box testing, while developer testing is called white box testing. The distinction here is based on what the person knows or can understand.

Coverage: What is tested?
If we draw the box around the system as a whole, black box testing becomes another name for system testing, and testing the units inside the box becomes white box testing. This is one way to think about coverage. Another is to contrast testing that aims to cover all the requirements with testing that aims to cover all the code. These are the two most commonly used coverage criteria. Both are supported by extensive literature and commercial tools. Requirements-based testing could be called black box because it makes sure that all the customer requirements have been verified. Code-based testing is often called white box because it makes sure that all the code (the statements, paths, or decisions) is exercised.

Risks: Why are you testing?
Sometimes testing is targeted at particular risks. Boundary testing and other attack-based techniques are targeted at common coding errors. Effective security testing also requires a detailed understanding of the code and the system architecture. Thus, these techniques might be classified as white box. Another set of risks concerns whether the software will actually provide value to users. Usability testing focuses on this risk, and could be termed black box.

Activities: How do you test?
A common distinction is made between behavioral test design, which defines tests based on functional requirements, and structural test design, which defines tests based on the code itself. These are two design approaches. Since behavioral testing is based on external functional definitions, it is often called black box, while structural testing (based on the code internals) is called white box. Indeed, this is probably the most commonly cited definition of black box and white box testing. Another activity-based distinction contrasts dynamic test execution with formal code inspection. In this case, the metaphor maps test execution (dynamic testing) to black box testing, and code inspection (static testing) to white box testing. We could also focus on the tools used. Some tool vendors refer to code-coverage tools as white box tools, and tools that facilitate applying inputs and capturing outputs (most notably GUI capture/replay tools) as black box tools. Testing is then categorized based on the types of tools used.

Evaluation: How do you know if you've found a bug?
There are certain kinds of software faults that don't always lead to obvious failures. They may be masked by fault tolerance or simply luck. Memory leaks and wild pointers are examples. Certain test techniques seek to make these kinds of problems more visible. Related techniques capture code history and stack information when faults occur, helping with diagnosis. Assertions are another technique for helping to make problems more visible. All of these techniques could be considered white box test techniques, since they use code instrumentation to make the internal workings of the software more visible. These contrast with black box techniques that simply look at the official outputs of a program.

White box testing is concerned only with testing the software product; it cannot guarantee that the complete specification has been implemented. Black box testing is concerned only with testing the specification; it cannot guarantee that all parts of the implementation have been tested. Thus black box testing is testing against the specification and will discover faults of omission, indicating that part of the specification has not been fulfilled. White box testing is testing against the implementation and will discover
faults of commission, indicating that part of the implementation is faulty. In order to fully test a software product, both black and white box testing are required.

White box testing is much more expensive than black box testing. It requires the source code to be produced before the tests can be planned, and it is much more laborious to determine suitable input data and to determine whether the software is or is not correct. The advice given is to start test planning with a black box test approach as soon as the specification is available. White box planning should commence as soon as all black box tests have been successfully passed, with the production of flowgraphs and the determination of paths. The paths should then be checked against the black box test plan and any additional required test runs determined and applied. The consequences of test failure at this stage may be very expensive. A failure of a white box test may result in a change which requires all black box testing to be repeated and the white box paths to be re-determined.

To conclude, apart from the above-described analytical methods of both glass and black box testing, there are further constructive means to guarantee high-quality software end products. Among the most important constructive means are the usage of object-oriented programming tools, the integration of CASE tools, rapid prototyping, and, last but not least, the involvement of users in both software development and testing procedures.

Summary: Black box testing can sometimes describe user-based testing (people); system or requirements-based testing (coverage); usability testing (risk); or behavioral testing or capture/replay automation (activities). White box testing, on the other hand, can sometimes describe developer-based testing (people); unit or code-coverage testing (coverage); boundary or security testing (risks); structural testing, inspection or code-coverage automation (activities); or testing based on probes, assertions, and logs (evaluation).
The purpose of white box testing:
- Initiate a strategic initiative to build quality throughout the life cycle of a software product or service.
- Provide a complementary function to black box testing.
- Perform complete coverage at the component level.
- Improve quality by optimizing performance.

Practices:
This section outlines some of the general practices comprising the white-box testing process. In general, white-box testing practices have the following considerations:
1. The allocation of resources to perform class and method analysis and to document and review the same.
2. Developing a test harness made up of stubs, drivers and test object libraries.
3. Development and use of standard procedures, naming conventions and libraries.
4. Establishment and maintenance of regression test suites and procedures.
5. Allocation of resources to design, document and manage a test history library.
6. The means to develop or acquire tool support for automation of capture/replay/compare, test suite execution, results verification and documentation capabilities.
basis set of execution paths. Test cases that exercise the basis set are guaranteed to execute every statement in the program at least once.
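As a minimal illustration of the basis-path idea (the function and cases below are hypothetical), a routine with two decisions has cyclomatic complexity 3, so three tests suffice to exercise a basis set and execute every statement at least once; run with java -ea:

    // Hypothetical function with two decisions (loop, if),
    // so cyclomatic complexity is 3 and a basis set has 3 paths.
    public class BasisPaths {
        static int sumPositives(int[] xs) {
            int sum = 0;
            for (int x : xs) {          // decision 1: loop continue/exit
                if (x > 0) {            // decision 2: positive or not
                    sum += x;
                }
            }
            return sum;
        }

        public static void main(String[] args) {
            // Three tests exercising the basis set; together they
            // execute every statement at least once.
            assert sumPositives(new int[] {}) == 0;    // loop body never entered
            assert sumPositives(new int[] {4}) == 4;   // loop entered, if true
            assert sumPositives(new int[] {-4}) == 0;  // loop entered, if false
            System.out.println("basis set executed");
        }
    }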
Note that unstructured loops are not to be tested; rather, they are redesigned.

2 Design by Contract (DbC)
DbC is a formal way of using comments to incorporate specification information into the code itself. Basically, the code specification is expressed unambiguously using a formal language that describes the code's implicit contracts. These contracts specify such requirements as:
- Conditions that the client must meet before a method is invoked.
- Conditions that a method must meet after it executes.
- Assertions that a method must satisfy at specific points of its execution.
Tools that check DbC contracts at runtime, such as JContract [http://www.parasoft.com/products/jtract/index.htm], are used to perform this function (a hand-rolled sketch follows section 3 below).

3 Profiling
Profiling provides a framework for analyzing Java code performance for speed and heap memory use. It identifies routines that are consuming the majority of the CPU time so that problems may be tracked down to improve performance. Options include the Microsoft Java Profiler API and Sun's profiling tools that are bundled with the JDK. Third-party tools such as JaViz [http://www.research.ibm.com/journal/sj/391/kazi.html] may also be used to perform this function.
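The following is a hand-rolled sketch of the DbC idea from item 2 above, using plain Java assertions instead of a contract-checking tool; the Account class and its contract are hypothetical, and the checks run only when assertions are enabled (java -ea).

    // Design-by-Contract sketch using plain Java assertions (a DbC
    // tool would derive these checks from comment annotations;
    // here they are written by hand for illustration).
    public class Account {
        private int balance;  // invariant: balance >= 0

        Account(int opening) {
            assert opening >= 0 : "precondition: opening balance non-negative";
            balance = opening;
        }

        int withdraw(int amount) {
            // Precondition: what the client must guarantee before the call.
            assert amount > 0 && amount <= balance : "precondition violated";
            int before = balance;
            balance -= amount;
            // Postcondition: what the method guarantees after it executes.
            assert balance == before - amount : "postcondition violated";
            // Invariant: must hold at every observable point.
            assert balance >= 0 : "invariant violated";
            return balance;
        }

        public static void main(String[] args) {
            Account a = new Account(100);
            a.withdraw(30);   // runs all contract checks
            System.out.println("contracts held");
        }
    }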
4 Error Handling
Exception and error handling is checked thoroughly by simulating partial and complete fail-over, operating on error-causing test vectors. Proper error recovery, notification and logging are checked against references to validate the program design.

5 Transactions
Systems that employ transactions, local or distributed, may be validated to ensure the ACID properties (Atomicity, Consistency, Isolation, Durability). Each of the individual properties is tested individually against a reference data set. Transactions are checked thoroughly for partial/complete commits and rollbacks encompassing databases and other XA-compliant transaction processors. A sketch of such an atomicity check follows the advantages list below.

Advantages of White Box Testing
- Forces the test developer to reason carefully about the implementation.
- Approximates the partitioning done by execution equivalence.
- Reveals errors in "hidden" code.
- Beneficent side effects.
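Below is a minimal sketch of the atomicity check mentioned under Transactions above. It assumes the H2 in-memory database is on the classpath purely for illustration (any JDBC database would do); the table, amounts, and simulated failure are hypothetical, and the assert requires running with java -ea.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;

    // Atomicity sketch: force a failure mid-transaction, roll back,
    // and verify that the partial work is not visible afterwards.
    public class AtomicityCheck {
        public static void main(String[] args) throws SQLException {
            try (Connection con = DriverManager.getConnection("jdbc:h2:mem:testdb")) {
                try (Statement st = con.createStatement()) {
                    st.execute("CREATE TABLE accounts(id INT PRIMARY KEY, balance INT)");
                    st.execute("INSERT INTO accounts VALUES (1, 100), (2, 0)");
                }
                con.setAutoCommit(false);   // begin an explicit transaction
                try (Statement st = con.createStatement()) {
                    st.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1");
                    // Simulated failure before the matching credit is applied:
                    throw new SQLException("simulated mid-transaction failure");
                } catch (SQLException e) {
                    con.rollback();         // the partial update must not survive
                }
                try (Statement st = con.createStatement()) {
                    ResultSet rs = st.executeQuery("SELECT balance FROM accounts WHERE id = 1");
                    rs.next();
                    assert rs.getInt(1) == 100 : "rollback failed: partial update visible";
                }
                System.out.println("atomicity verified");
            }
        }
    }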
Disadvantages of White Box Testing
- Expensive.
- Cases omitted in the code could be missed out.
3 GUI Testing
What is GUI Testing? GUI is the abbreviation for Graphical User Interface. It is absolutely essential that any application be user-friendly. The end user should be comfortable while using all the components on screen, and the components should also perform their functionality with utmost clarity. Hence it becomes very essential to test the GUI components of any application. GUI testing can refer to just ensuring that the look-and-feel of the application is acceptable to the user, or it can refer to testing the functionality of each and every component involved. The following is a set of guidelines to ensure effective GUI testing; it can even be used as a checklist while testing a product/application.
- Fields which are never updateable should be displayed with black text on a gray background with a black label.
- All text should be left-justified, followed by a colon tight to it.
- In a field that may or may not be updateable, the label text and contents change from black to gray depending on the current status.
- List boxes always have a white background with black text, whether they are disabled or not. All others are gray.
- In general, double-clicking is not essential.
- In general, everything can be done using both the mouse and the keyboard.
- All tab buttons should have a distinct letter.
- Spacing should be compatible with the existing windows' spacing (Word, etc.).
- Items should be in alphabetical order, with the exception of blank/none, which goes at the top or the bottom of the list box.
- A drop-down with an item selected should display the list with the selected item at the top.
- Make sure only one space appears; there shouldn't be a blank line at the bottom.
10. Can the cursor be placed in read-only fields by clicking in the field with the mouse?
11. Is the cursor positioned in the first input field or control when the screen is opened?
12. Is there a default button specified on the screen?
13. Does the default button work correctly?
14. When an error message occurs, does the focus return to the field in error when the user cancels it?
15. When the user Alt+Tabs to another application, does this have any impact on the screen upon return to the application?
16. Do all the field edit boxes indicate the number of characters they will hold by their length? e.g. a 30-character field should be a lot longer.
6. In drop-down list boxes, assure that the list and each entry in the list can be accessed via appropriate key / hot key combinations.
7. Ensure that duplicate hot keys do not exist on each screen.
8. Ensure the proper usage of the escape key (which is to undo any changes that have been made) and that it generates a caution message "Changes will be lost - Continue yes/no".
9. Assure that the cancel button functions the same as the escape key.
10. Assure that the Cancel button operates as a Close button when changes have been made that cannot be undone.
11. Assure that only command buttons which are used by a particular window, or in a particular dialog box, are present (i.e., make sure they don't work on the screen behind the current screen).
12. When a command button is used sometimes and not at other times, assure that it is grayed out when it should not be used.
13. Assure that OK and Cancel buttons are grouped separately from other command buttons.
14. Assure that command button names are not abbreviations.
15. Assure that all field labels/names are not technical labels, but rather are names meaningful to system users.
16. Assure that command buttons are all of similar size and shape, and the same font & font size.
17. Assure that each command button can be accessed via a hot key combination.
18. Assure that command buttons in the same window/dialog box do not have duplicate hot keys (an automated check for duplicate hot keys is sketched after this checklist).
19. Assure that each window/dialog box has a clearly marked default value (command button, or other object) which is invoked when the Enter key is pressed - and NOT the Cancel or Close button.
20. Assure that focus is set to an object/button which makes sense according to the function of the window/dialog box.
21. Assure that all option button (and radio button) names are not abbreviations.
22. Assure that option button names are not technical labels, but rather are names meaningful to system users.
23. If hot keys are used to access option buttons, assure that duplicate hot keys do not exist in the same window/dialog box.
24. Assure that option box names are not abbreviations.
25. Assure that option boxes, option buttons, and command buttons are logically grouped together in clearly demarcated areas ("Group Box").
26. Assure that the Tab key sequence which traverses the screens does so in a logical way.
27. Assure consistency of mouse actions across windows.
28. Assure that the color red is not used to highlight active objects (many individuals are red-green color blind).
29. Assure that the user will have control of the desktop with respect to general color and highlighting (the application should not dictate the desktop background characteristics).
30. Assure that the screen/window does not have a cluttered appearance.
31. Ctrl+F6 opens the next tab within a tabbed window.
32. Shift+Ctrl+F6 opens the previous tab within a tabbed window.
33. Tabbing will open the next tab within a tabbed window if on the last field of the current tab.
34. Tabbing will go onto the 'Continue' button if on the last field of the last tab within a tabbed window.
35. Tabbing will go onto the next editable field in the window.
36. Banner style, size & display are exactly the same as existing windows.
37. If there are 8 or fewer options in a list box, display all options on open of the list box - there should be no need to scroll.
38. Errors on continue will cause the user to be returned to the tab, and the focus should be on the field causing the error (i.e. the tab is opened, highlighting the field with the error on it).
39. Pressing continue while on the first tab of a tabbed window (assuming all fields are filled correctly) will not open all the tabs.
40. On open of a tab, focus will be on the first editable field.
41. All fonts are to be the same.
42. Alt+F4 will close the tabbed window and return you to the main screen or previous screen (as appropriate), generating a "changes will be lost" message if necessary.
43. Microhelp text for every enabled field & button.
44. Ensure all fields are disabled in read-only mode.
45. Progress messages on load of tabbed screens.
46. Return operates Continue.
47. If retrieve on load of a tabbed window fails, the window should not open.
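Checklist items 7, 18, and 23 lend themselves to automation. Below is a minimal sketch, assuming labels follow the common "&" mnemonic convention ("&Save" means Alt+S); the window's label list is a hypothetical example.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch of an automated duplicate hot key check for one window.
    public class HotKeyCheck {
        static Character hotKeyOf(String label) {
            int i = label.indexOf('&');
            return (i >= 0 && i + 1 < label.length())
                    ? Character.toLowerCase(label.charAt(i + 1))
                    : null;  // control has no mnemonic
        }

        public static void main(String[] args) {
            // Hypothetical control labels from one window/dialog box.
            List<String> labels = List.of("&Save", "&Search", "&Cancel", "Help");
            Map<Character, String> seen = new HashMap<>();
            for (String label : labels) {
                Character key = hotKeyOf(label);
                if (key == null) continue;
                String clash = seen.putIfAbsent(key, label);
                if (clash != null) {
                    System.out.println("Duplicate hot key '" + key + "': "
                            + clash + " and " + label);
                }
            }
        }
    }

Running this against the sample labels reports that "&Save" and "&Search" clash on the 'S' mnemonic, which is exactly the condition items 7 and 18 forbid.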
- Include the value zero in all calculations.
- Include at least one in-range value.
- Include maximum and minimum range values.
- Include out-of-range values above the maximum and below the minimum.
- Assure that upper and lower values in ranges are handled correctly.
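A minimal sketch of these guidelines applied to a hypothetical field specified to accept values from 1 to 100 (the validator and range are assumptions; run with java -ea):

    // Boundary-value cases for a field specified to accept 1..100.
    public class BoundaryValues {
        static boolean inRange(int v) {
            return v >= 1 && v <= 100;
        }

        public static void main(String[] args) {
            assert !inRange(0);     // zero
            assert inRange(50);     // one in-range value
            assert inRange(1);      // minimum
            assert inRange(100);    // maximum
            assert !inRange(101);   // just above the maximum
            assert !inRange(-1);    // below the minimum
            System.out.println("boundary cases passed");
        }
    }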
Standard Windows shortcut keys to verify (key/modifier combinations not listed here were N/A):
Ctrl+F4 - Close the child window.
Alt+F4 - Close the document / application.
Ctrl+F6 - Move to the next open document or child window (adding Shift reverses the order of movement).
F8 - Toggle extend mode, if supported.
Shift+F8 - Toggle Add mode, if supported.
Alt+Tab - Switch to the previously used application (holding down the Alt key displays all open applications).
* These shortcuts are suggested for text formatting applications, in the context for which they make sense. Applications may use other modifiers for these operations.
The selective retesting of a software system that has been modified, to ensure that any defects have been fixed and that no previously working functions have failed as a result of the changes.
The sanity cycle checks the entire system at a basic level (breadth, rather than depth) to see that it is functional and stable. This cycle should include basic-level tests containing mostly positive checks.

The normal cycle tests the system a little more in depth than the sanity cycle. This cycle can group medium-level tests, containing both positive and negative checks.

The advanced cycle tests both breadth and depth. This cycle can be run when more time is available for testing. The tests in the cycle cover the entire application (breadth), and also test advanced options in the application (depth).

The regression cycle tests maintenance builds. The goal of this type of cycle is to verify that a change to one part of the software did not break the rest of the application. A regression cycle includes sanity-level tests for testing the entire software, as well as in-depth tests for the specific area of the application that was modified.
With manual test execution, you follow the instructions in the test steps of each test. You use the application, enter input, compare the application output with the expected output, and log the results. For each test step you assign either pass or fail status.

During automated test execution, you create a batch of tests and launch the entire batch at once. The testing tool runs the tests one at a time, then imports the results, providing outcome summaries for each test.
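A toy sketch of batch execution and outcome summaries (the two sample tests and their names are hypothetical placeholders for real test procedures):

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.function.BooleanSupplier;

    // Run every test in one launch, record pass/fail, print a summary.
    public class BatchRunner {
        public static void main(String[] args) {
            Map<String, BooleanSupplier> batch = new LinkedHashMap<>();
            batch.put("login accepts valid user", () -> 2 + 2 == 4);
            batch.put("login rejects bad password", () -> "abc".isEmpty());

            int passed = 0;
            for (Map.Entry<String, BooleanSupplier> t : batch.entrySet()) {
                boolean ok;
                try {
                    ok = t.getValue().getAsBoolean();
                } catch (RuntimeException e) {
                    ok = false;  // a crash counts as a failure
                }
                System.out.println((ok ? "PASS " : "FAIL ") + t.getKey());
                if (ok) passed++;
            }
            System.out.println(passed + "/" + batch.size() + " tests passed");
        }
    }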
Information about bugs must be detailed and organized in order to schedule bug fixes and determine software release dates.

First, you report New bugs to the database and provide all the information necessary to reproduce, fix, and follow up on the bug. Software developers fix the Open bugs and assign them the status Fixed. QA personnel then test a new build of the application: if a bug does not recur, it is Closed; if a bug is detected again, it is Reopened.

Communication is an essential part of bug tracking; all members of the development and quality assurance teams must be well informed in order to ensure that bug information is up to date and that the most important problems are addressed. The number of open or fixed bugs is a good indicator of the quality status of your application. You can use data analysis tools such as reports and graphs to interpret bug data.
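The life cycle just described can be captured as a small state machine; this is an illustrative sketch of the status flow (New, Open, Fixed, Closed, Reopened), not the schema of any particular bug-tracking tool.

    import java.util.Map;
    import java.util.Set;

    // Defect life cycle as a state machine with allowed transitions.
    public class DefectLifecycle {
        enum Status { NEW, OPEN, FIXED, CLOSED, REOPENED }

        static final Map<Status, Set<Status>> ALLOWED = Map.of(
                Status.NEW,      Set.of(Status.OPEN),
                Status.OPEN,     Set.of(Status.FIXED),
                Status.FIXED,    Set.of(Status.CLOSED, Status.REOPENED),
                Status.REOPENED, Set.of(Status.FIXED),
                Status.CLOSED,   Set.of());

        static Status transition(Status from, Status to) {
            if (!ALLOWED.get(from).contains(to)) {
                throw new IllegalStateException(from + " -> " + to + " not allowed");
            }
            return to;
        }

        public static void main(String[] args) {
            Status s = Status.NEW;
            s = transition(s, Status.OPEN);    // reported and acknowledged
            s = transition(s, Status.FIXED);   // developer fixes the bug
            s = transition(s, Status.CLOSED);  // QA verifies on a new build
            System.out.println("final status: " + s);
        }
    }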
A traceability matrix is created by associating requirements with the work products that satisfy them, so that the product is tested to meet the requirements. Below is a simple traceability matrix structure. There can be more things included in a traceability matrix than shown below. Traceability requires unique identifiers for each requirement and product. Numbers for products are established in a configuration management (CM) plan.
Traceability ensures completeness: that all lower-level requirements derive from higher-level requirements, and that all higher-level requirements are allocated to lower-level requirements. Traceability is also used in managing change and provides the basis for test planning.

SAMPLE TRACEABILITY MATRIX
A traceability matrix is a report from the requirements database or repository. The examples below show traceability between user and system requirements. User requirement identifiers begin with "U" and system requirements with "S."
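As an illustrative sketch of such a report check, the program below uses hypothetical identifiers following the U/S convention above and flags any system requirement that traces to a user requirement that does not exist:

    import java.util.Map;
    import java.util.Set;

    // Traceability check: every system requirement ("S...") must
    // trace to a defined user requirement ("U..."). All identifiers
    // here are hypothetical examples.
    public class TraceabilityCheck {
        public static void main(String[] args) {
            Set<String> userReqs = Set.of("U1", "U2", "U3");
            Map<String, String> systemToUser = Map.of(
                    "S10", "U1",
                    "S11", "U2",
                    "S12", "U9");   // U9 is not a defined user requirement

            systemToUser.forEach((sys, user) -> {
                if (!userReqs.contains(user)) {
                    System.out.println(sys + " traces to missing " + user
                            + ": eliminate it, rewrite it, or correct the trace");
                }
            });
        }
    }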
Tracing S12 to its source makes it clear this requirement is erroneous: it must be eliminated, rewritten, or the traceability corrected.
In addition to traceability matrices, other reports are necessary to manage requirements. What goes into each report depends on the information needs of those receiving it. Determine their information needs and document the information that will be associated with the requirements when you set up your requirements database or repository.
5 Phases of Testing
5.1 Introduction
The primary objective of the testing effort is to determine whether the system conforms to the requirements specified in the contracted documents. The integration of this code with the internal code is also an important objective. The goal is to evaluate the system as a whole, not its parts. Techniques can be structural or functional, and can be used in any stage that tests the system as a whole (system testing, acceptance testing, unit testing, installation, etc.).
[Figure: The V-Model. Development phases descend on the left from Requirements through Specification, Architecture, and Detailed Design to Coding; each pairs with a test level on the right: Acceptance Testing, System Testing, Integration Testing, and Unit Testing respectively.]
[Table: test artifacts prepared per phase. The Software Requirement, Functional Specification, Architecture Design, and Design documents drive the Unit, Integration, and System Test Case documents, as well as the Regression Test Cases and Performance Test Cases.]
[Figure: The V-Model with verification activities. Each phase pairs with a review and a test level: Requirements with the Requirements Review, Specification with the Specification Review and System Testing (Regression Round 2), Architecture with the Architecture Review and Integration Testing (Regression Round 1), and Detailed Design with the Code Walkthrough and Unit Testing.]
6 Integration Testing
One of the most significant aspects of a software development project is the integration strategy. Integration may be performed all at once, top-down, bottom-up, critical piece first, or by first integrating functional subsystems and then integrating the subsystems in separate phases using any of the basic strategies. In general, the larger the project, the more important the integration strategy.

Very small systems are often assembled and tested in one phase. For most real systems, this is impractical for two major reasons. First, the system would fail in so many places at once that the debugging and retesting effort would be impractical. Second, satisfying any white box testing criterion would be very difficult, because of the vast amount of detail separating the input data from the individual code modules. In fact, most integration testing has been traditionally limited to "black box" techniques.

Large systems may require many integration phases, beginning with assembling modules into low-level subsystems, then assembling subsystems into larger subsystems, and finally assembling the highest-level subsystems into the complete system. To be most effective, an integration testing technique should fit well with the overall integration strategy. In a multi-phase integration, testing at each phase helps detect errors early and keep the system under control. Performing only cursory testing at early integration phases and then applying a more rigorous criterion for the final stage is really just a variant of the high-risk "big bang" approach. However, performing rigorous testing of the entire software involved in each integration phase involves a lot of wasteful duplication of effort across phases. The key is to leverage the overall integration structure to allow rigorous testing at each phase while minimizing duplication of effort.

It is important to understand the relationship between module testing and integration testing. In one view, modules are rigorously tested in isolation using stubs and drivers before any integration is attempted. Then, integration testing concentrates entirely on module interactions, assuming that the details within each module are accurate. At the other extreme, module and integration testing can be combined, verifying the details of each module's implementation in an integration context. Many projects compromise, combining module testing with the lowest level of subsystem integration testing, and then performing pure integration testing at higher levels. Each of these views of integration testing may be appropriate for any given project, so an integration testing method should be flexible enough to accommodate them all.
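As a minimal sketch of the stub-and-driver approach mentioned above (the PricingService module and its TaxRates dependency are hypothetical; run with java -ea):

    // Module testing in isolation: a stub replaces the dependency,
    // a driver exercises the module under test.
    interface TaxRates {
        double rateFor(String region);
    }

    class PricingService {
        private final TaxRates rates;
        PricingService(TaxRates rates) { this.rates = rates; }

        double gross(double net, String region) {
            return net * (1.0 + rates.rateFor(region));
        }
    }

    public class PricingDriver {
        public static void main(String[] args) {
            // Stub: stands in for the real, not-yet-integrated module.
            TaxRates stub = region -> "EU".equals(region) ? 0.20 : 0.0;

            // Driver: exercises the module under test through its interface.
            PricingService svc = new PricingService(stub);
            assert Math.abs(svc.gross(100.0, "EU") - 120.0) < 1e-9;
            assert Math.abs(svc.gross(100.0, "US") - 100.0) < 1e-9;
            System.out.println("module verified in isolation");
        }
    }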
The design reduction technique helps identify the decisions involved with module calls, so that it is possible to exercise them independently during integration testing. The idea behind design reduction is to start with a module control flow graph, remove all control structures that are not involved with module calls, and then use the resultant "reduced" flow graph to drive integration testing. Figure 7-2 shows a systematic set of rules for performing design reduction. Although not strictly a reduction rule, the call rule states that function call ("black dot") nodes cannot be reduced. The remaining rules work together to eliminate the parts of the flow graph that are not involved with module calls. The sequential rule eliminates sequences of non-call ("white dot") nodes. Since application of this rule removes one node and one edge from the flow graph, it leaves the cyclomatic complexity unchanged. However, it does simplify the graph so that the other rules can be applied. The repetitive rule eliminates top-test loops that are not involved with module calls. The conditional rule eliminates conditional statements that do not contain calls in their bodies. The looping rule eliminates bottom-test loops that are not involved with module calls. It is important to preserve the module's connectivity when using the looping rule, since for poorly structured code it may be hard to distinguish the "top" of the loop from the "bottom." For the rule to apply, there must be a path from the module entry to the top of the loop and a path from the bottom of the loop to the module exit. Since the repetitive, conditional, and looping rules each remove one edge from the flow graph, they each reduce cyclomatic complexity by one. Rules 1 through 4 are intended to be applied iteratively until none of them can be applied, at which point the design reduction is complete. By this process, even very complex logic can be eliminated as long as it does not involve any module calls.
Incremental integration
Hierarchical system design limits each stage of development to a manageable effort, and it is important to limit the corresponding stages of testing as well. Hierarchical design is most effective when the coupling among sibling components decreases as the component size increases, which simplifies the derivation of data sets that test interactions among components. The remainder of this section extends the integration testing techniques of structured testing to handle the general case of incremental integration, including support for hierarchical design. The key principle is to test just the interaction among components at each integration stage, avoiding redundant testing of previously integrated sub-components.
To extend statement coverage to support incremental integration, it is required that all module call statements from one component into a different component be exercised at each integration stage. To form a completely flexible "statement testing" criterion, it is required that each statement be executed during the first phase (which may be anything from single modules to the entire program), and that at each integration phase all call statements that cross the boundaries of previously integrated components are tested. Given hierarchical integration stages with good cohesive partitioning properties, this limits the testing effort to a small fraction of the effort to cover each statement of the system at each integration phase.

Structured testing can be extended to cover the fully general case of incremental integration in a similar manner. The key is to perform design reduction at each integration phase using just the module call nodes that cross component boundaries, yielding component-reduced graphs, and to exclude from consideration all modules that do not contain any cross-component calls.

Figure 7-7 illustrates the structured testing approach to incremental integration. Modules A and C have been previously integrated, as have modules B and D. It would take three tests to integrate this system in a single phase. However, since the design predicate decision to call module D from module B has been tested in a previous phase, only two additional tests are required to complete the integration testing. Modules B and D are removed from consideration because they do not contain cross-component calls, the component module design complexity of module A is 1, and the component module design complexity of module C is 2.
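The arithmetic behind this example can be captured in a few lines. The sketch below assumes the structured-testing rule that the number of integration tests is the sum of the component module design complexities, minus the number of modules considered, plus one; modules without cross-component calls are excluded first, mirroring the Figure 7-7 discussion. Names and values are illustrative.

# A hedged sketch of counting incremental integration tests, assuming the
# formula: sum of component module design complexities - n modules + 1.

def incremental_integration_tests(component_complexity):
    """Map each module containing cross-component calls to its component
    module design complexity, then apply the formula."""
    n = len(component_complexity)
    return sum(component_complexity.values()) - n + 1

# B and D contain no cross-component calls and are excluded; A has
# complexity 1 and C has complexity 2, giving the two tests in the text.
print(incremental_integration_tests({"A": 1, "C": 2}))  # -> 2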
7 Acceptance Testing
7.1 Introduction Acceptance Testing
In software engineering, acceptance testing is formal testing conducted to determine whether a system satisfies its acceptance criteria and thus whether the customer should accept the system. The main types of software testing are: component, interface, system, acceptance, and release testing. Acceptance testing checks the system against the "requirements". It is similar to system testing in that the whole system is checked, but the important difference is the change in focus: system testing checks that the system that was specified has been delivered; acceptance testing checks that the system delivers what was requested. The customer, and not the developer, should always do acceptance testing. The customer knows what is required from the system to achieve value in the business and is the only person qualified to make that judgment. The forms of the tests may follow those in system testing, but at all times they are informed by the business needs. Acceptance testing comprises the test procedures that lead to formal 'acceptance' of new or changed systems. User Acceptance Testing (UAT) is a critical phase of any 'systems' project and requires significant participation by the 'end users'. To be of real use, an Acceptance Test Plan should be developed in order to plan precisely, and in detail, the means by which 'acceptance' will be achieved. The final part of the UAT can also include a parallel run to prove the system against the current system.
Problems found during acceptance testing are commonly graded by severity:
Critical Problem: testing can continue, but we cannot go into production (live) with this problem.
Major Problem: testing can continue, but this feature will cause severe disruption to business processes in live operation.
Medium Problem: testing can continue, and the system is likely to go live with only minimal departure from agreed business processes.
Minor Problem: both testing and live operations may progress. This problem should be corrected, but little or no change to business processes is envisaged.
Cosmetic Problem: e.g. colours, fonts, pitch size. However, if such features are key to the business requirements they will warrant a higher severity level.

The users of the system, in consultation with the executive sponsor of the project, must then agree upon the responsibilities and required actions for each category of problem. For example, you may demand that any problems of severity level 1 receive priority response and that all testing will cease until such level 1 problems are resolved; a sketch of such a policy follows below.

Caution: even where the severity levels and the responses to each have been agreed by all parties, the allocation of a problem to its appropriate severity level can be subjective and open to question. To avoid the risk of lengthy and protracted exchanges over the categorisation of problems, we strongly advise that a range of examples is agreed in advance to ensure that there are no fundamental areas of disagreement, or, if there are, that these are known in advance and your organisation is forewarned.

Finally, it is crucial to agree the criteria for acceptance. Because no system is entirely fault free, the maximum number of acceptable 'outstandings' in any particular category must be agreed between end user and vendor. Again, prior consideration of this is advisable. N.B. In some cases, users may agree to accept ('sign off') the system subject to a range of conditions. These conditions need to be analysed, as they may, perhaps unintentionally, seek additional functionality that would amount to scope creep. In any event, any and all fixes from the software developers must be subjected to rigorous system testing and, where appropriate, regression testing.
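As a hypothetical illustration of such an agreed policy, the sketch below encodes the five severity levels and a rule that level 1 problems halt testing. The structure, names, and wording are assumptions, not a prescribed scheme.

# A hypothetical encoding of an agreed severity scheme. Level numbers and
# the "level 1 halts testing" rule follow the example policy in the text.

SEVERITY = {
    1: "Critical - cannot go into production with this problem",
    2: "Major - severe disruption to business processes in live operation",
    3: "Medium - go live likely, minimal departure from agreed processes",
    4: "Minor - testing and live operations may progress",
    5: "Cosmetic - e.g. colours, fonts, pitch size",
}

def may_continue_testing(level):
    # Per the example policy above, level 1 problems halt all testing
    # until they are resolved.
    return level != 1

for level, meaning in SEVERITY.items():
    print(level, meaning, "| testing continues:", may_continue_testing(level))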
7.3 Conclusion
Hence the goal of acceptance testing should be to verify the overall quality, correct operation, scalability, completeness, usability, portability, and robustness of the functional components supplied by the software system.
8 SYSTEM TESTING
8.1 Introduction to SYSTEM TESTING
For most organizations, software and system testing represents a significant element of a project's cost in terms of money and management time. Making this function more effective can deliver a range of benefits, including reductions in risk and development costs and improved 'time to market' for new systems. Systems with software components, and software-intensive systems, are more and more complex every day. Industry sectors such as telecom, automotive, railway, aeronautical and space are good examples. It is generally agreed that testing is essential to manufacture reliable products. However, the validation process often does not receive the required attention. Moreover, the validation process is close to other activities such as conformance, acceptance and qualification testing.

The difference between function testing and system testing is that now the focus is on the whole application and its environment. Therefore the program has to be supplied in its entirety. This does not mean that the single functions of the whole program are tested again, because this would be too redundant. The main goal is rather to demonstrate the discrepancies of the product from its requirements and its documentation. In other words, this again includes the question, "Did we build the right product?" and not just, "Did we build the product right?" However, system testing does not only deal with this economical problem; it also covers aspects oriented on the word "system". This means that those tests should be done in the environment for which the program was designed, such as a multiuser network. Even security guidelines have to be included. Once again, it is beyond doubt that this test cannot be done completely; nevertheless, while this is one of the least complete test methods, it is one of the most important. A number of time-domain software reliability models attempt to predict the growth of a system's reliability during the system test phase of the development life cycle, for example by applying Poisson-process models to large systems for which system test was performed in parallel tracks using different strategies for test data selection.

System testing verifies that the functionality of your systems meets your specifications, integrating with whichever type of development methodology you are applying. We test for errors that users are likely to make as they interact with the application, as well as the application's ability to trap errors gracefully. These techniques can be applied flexibly, whether testing a financial system, e-commerce, an online casino or games. System testing is more than just functional testing, however, and can, when appropriate, also encompass many other types of testing, such as:
o security
o load/stress
o performance
o browser compatibility
o localisation
o Reduce rework and support overheads
o More effort spent on developing new functionality and less on "bug fixing" as quality increases
If it goes wrong, what is the potential impact on your commercial goals? Knowledge is power, so why take a leap of faith while your competition steps forward with confidence?
These benefits are achieved as a result of some fundamental principles of testing; for example, increased independence naturally increases objectivity. Your test strategy must take into consideration the risks to your organisation, both commercial and technical. You will have a personal interest in the project's success, in which case it is only human for your objectivity to be compromised.
Techniques can be structural or functional. In practice, testing is usually ad-hoc and looks a lot like debugging, though more structured approaches exist.
8.5 Conclusion:
Hence the System Test phase should begin once modules are integrated enough to perform tests in a whole-system environment. System testing can occur in parallel with integration testing, especially with the top-down method.
9 Unit Testing
mean that methods can get away with not being tested. The programmer should know that their unit testing is complete when the unit tests cover at the very least the functional requirements of all the code. The careful programmer will know that their unit testing is complete when they have verified that their unit tests cover every cluster of objects that form their application.
Concepts in Unit Testing:
o The most 'micro' scale of testing; tests particular functions or code modules.
o Typically done by the programmer and not by testers, as it requires detailed knowledge of the internal program design and code.
o Not always easily done unless the application has a well-designed architecture with tight code.
Unit Testing - User Interface Checks:
o Readability of the Controls
o Tool Tips Validation
o Ease of Use of the Interface
o Tab-related Checks
Advantage of Statement Coverage: Can be applied directly to object code and does not require processing source code. Performance profilers commonly implement this measure.
Method for Decision Coverage:
- Design a test case for the pass/failure of every decision point
- Select a unique set of test cases

This measure reports whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers. Also known as: branch coverage, all-edges coverage, basis path coverage, decision-decision-path testing. "Basis path" testing selects paths that achieve decision coverage.
ADVANTAGE: Simplicity without the problems of statement coverage.
DISADVANTAGE: This measure ignores branches within Boolean expressions which occur due to short-circuit operators.

Method for Condition Coverage:
- Test every condition (sub-expression) in a decision for true/false
- Select a unique set of test cases
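The difference between the two methods can be seen on a small, assumed predicate. The sketch below shows one test set that achieves decision coverage and one that also achieves condition coverage; the function and values are invented for illustration.

# An illustrative comparison of decision coverage and condition coverage
# on an assumed predicate with two sub-expressions joined by logical-and.

def eligible(age, member):
    if age >= 18 and member:   # one decision, two conditions
        return "accept"
    return "reject"

# Decision coverage: the whole predicate evaluates both true and false.
decision_tests = [(25, True), (10, False)]

# Condition coverage: each sub-expression (age >= 18, member) must be
# seen both true and false. Note the short-circuit: when age >= 18 is
# false, `member` is never evaluated, so it must vary while age passes.
condition_tests = [(25, True), (25, False), (10, True)]

for age, member in condition_tests:
    print(age, member, "->", eligible(age, member))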
CONDITION COVERAGE: Reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other.

MULTIPLE CONDITION COVERAGE: Reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition; a small enumeration sketch follows below.
DISADVANTAGES:
- Tedious to determine the minimum set of test cases required, especially for very complex Boolean expressions
- The number of test cases required could vary substantially among conditions that have similar complexity

CONDITION/DECISION COVERAGE: A hybrid measure composed of the union of condition coverage and decision coverage. It has the advantage of simplicity but without the shortcomings of its component measures.

PATH COVERAGE: This measure reports whether each of the possible paths in each function has been followed. A path is a unique sequence of branches from the function entry to the exit. Also known as predicate coverage; predicate coverage views paths as possible combinations of logical conditions. Path coverage has the advantage of requiring very thorough testing.
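For multiple condition coverage, the truth table can simply be enumerated. The predicate below is an assumption chosen only to show the idea.

# Enumerating the logical operator truth table for an assumed predicate:
# full multiple condition coverage needs every combination exercised.
from itertools import product

for a, b, c in product((True, False), repeat=3):
    print(f"a={a!s:5} b={b!s:5} c={c!s:5} -> {a and (b or c)}")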
FUNCTION COVERAGE: This measure reports whether you invoked each function or procedure. It is useful during preliminary testing to assure at least some coverage in all areas of the software. Broad, shallow testing finds gross deficiencies in a test suite quickly.
LOOP COVERAGE: This measure reports whether you executed each loop body zero times, exactly once, twice, and more than twice (consecutively), as sketched below. For do-while loops, loop coverage reports whether you executed the body exactly once and more than once. The valuable aspect of this measure is determining whether while-loops and for-loops execute more than once, information not reported by other measures.

RACE COVERAGE: This measure reports whether multiple threads execute the same code at the same time. It helps detect failure to synchronize access to resources, and is useful for testing multi-threaded programs such as an operating system.
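A minimal loop coverage test set for a simple, assumed function drives the same loop zero times, exactly once, and more than once:

# A loop coverage sketch; the function under test is an assumption.

def total(values):
    s = 0
    for v in values:   # the loop body under measurement
        s += v
    return s

loop_cases = {
    "zero iterations": [],
    "one iteration":   [5],
    "many iterations": [1, 2, 3],
}
for label, data in loop_cases.items():
    print(label, "->", total(data))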
9.4 Conclusion
Testing, irrespective of the phase, should encompass the following:
o The cost of failure associated with defective products getting shipped and used by the customer is enormous
o Find out whether the integrated product works as per the customer requirements
o Evaluate the product with an independent perspective
o Identify as many defects as possible before the customer finds them
o Reduce the risk of releasing the product
10 Test Strategy
10.1 Introduction
This document provides a better insight into the Test Strategy and its methodology. It is the role of test management to ensure that new or modified service products meet the business requirements for which they have been developed or enhanced. The testing strategy should define the objectives of all test stages and the techniques that apply. The testing strategy also forms the basis for the creation of a standardized documentation set, and facilitates communication of the test process and its implications outside of the test discipline. Any test support tools introduced should be aligned with, and in support of, the test strategy. Test Approach and Test Architecture are alternative terms for Test Strategy. Test management is also concerned with both test resource and test environment management.
responsibility for testing and commissioning is buried deep within the supply chain as a sub-contract of a sub-contract. It is possible to gain greater control of this process and the associated risk through the use of specialists such as Systems Integration, who can be appointed as part of the professional team. The time necessary for testing and commissioning will vary from project to project depending upon the complexity of the systems and services that have been installed. The Project Sponsor should ensure that the professional team and the contractor consider realistically how much time is needed.

Fitness for purpose checklist:
o Is there a documented testing strategy that defines the objectives of all test stages and the techniques that may apply, e.g. non-functional testing and the associated techniques such as performance, stress and security testing?
o Does the test plan prescribe the approach to be taken for intended test activities, identifying: the items to be tested, the testing to be performed, test schedules, resource and facility requirements, reporting requirements, evaluation criteria, and risks requiring contingency measures?
o Are test processes and practices reviewed regularly to assure that the testing processes continue to meet specific business needs? For example, e-commerce testing may involve new user interfaces, and a business focus on usability may mean that the organization must review its testing strategies.
o Create a means to generate and apply large numbers of decision scenarios to the product. This will be done using the GUI test automation system or through the direct generation of Decide Right scenario files that would be loaded into the product during test.
o Review the documentation, and the design of the user interface and functionality, for its sensitivity to user error.
o Test with decision scenarios that are near the limit of complexity allowed by the product.
o Compare complex scenarios.
o Test the product for the risk of silent failures or corruptions in decision analysis.

Issues in Execution of the Test Strategy:
o The difficulty of understanding and simulating the decision algorithm
o The risk of coincidental failure of both the simulation and the product
o The difficulty of automating decision tests
Test Factor - The risk or issue that needs to be addressed as part of the test strategy. The strategy will select those factors that need to be addressed in the testing of a specific application system.
Test Phase - The phase of the systems development life cycle in which testing will occur.
Not all the test factors will be applicable to all software systems. The development team will need to select and rank the test factors for the specific software system being developed. The test phases will vary based on the testing methodology used. For example, the test phases in a traditional waterfall life cycle methodology will be much different from the phases in a Rapid Application Development methodology.
10.7 Conclusion:
A Test Strategy should be developed in accordance with the business risks associated with the software when the test team develops the test tactics. Thus the test team needs to acquire and study the test strategy, questioning the following:
o What is the relationship of importance among the test factors?
o Which of the high-level risks are the most significant?
o What damage can be done to the business if the software fails to perform correctly?
o What damage can be done to the business if the software is not completed on time?
o Who are the individuals most knowledgeable in understanding the impact of the identified business risks?
Hence the Test Strategy must address the risks and present a process that can reduce those risks. The strategy accordingly focuses on risks and thereby establishes the objectives for the test process.
11 TEST PLAN
11.1 What is a Test Plan?
A Test Plan can be defined as a document that describes the scope, approach, resources and schedule of intended test activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning. The main purpose of preparing a Test Plan is that everyone concerned with the project is in sync with regard to the scope, responsibilities, deadlines and deliverables for the project. It is in this respect that reviews and a sign-off are very important, since they mean that everyone is in agreement on the contents of the test plan, and this also helps in case of any dispute during the course of the project (especially between the developers and the testers).
Scope
This section should talk about the areas of the application which are to be tested by the QA team, and specify those areas which are definitely out of scope (screens, database, mainframe processes etc.).

Test Approach
This would contain details on how the testing is to be performed and whether any specific strategy is to be followed (including configuration management).

Entry Criteria
This section explains the various steps to be performed before the start of a test, i.e. pre-requisites. For example: timely environment set-up, starting the web server / app server, successful implementation of the latest build etc.

Resources
This section should list the people who would be involved in the project, their designation etc.

Tasks / Responsibilities
This section talks about the tasks to be performed and the responsibilities assigned to the various members in the project.

Exit Criteria
Contains tasks like bringing down the system / server, restoring the system to the pre-test environment, database refresh etc.

Schedules / Milestones
This section deals with the final delivery date and the various milestone dates to be met in the course of the project.

Hardware / Software Requirements
This section would contain the details of PCs / servers required (with the configuration) to install the application or perform the testing; specific software that needs to be installed on the systems to get the application running or to connect to the database; connectivity related issues etc.

Risks & Mitigation Plans
This section should list all the possible risks that can arise during the testing and the mitigation plans that the QA team plans to implement in case the risk actually turns into a reality.

Tools to be used
This would list the testing tools or utilities (if any) that are to be used in the project, e.g. WinRunner, Test Director, PCOM, WinSQL.

Deliverables
This section contains the various deliverables that are due to the client at various points of time, i.e. daily, weekly, start of the project, end of the project etc. These could include Test Plans, Test Procedures, Test Matrices, Status Reports, Test Scripts etc. Templates for all of these could also be attached.
References
Procedures, Templates (client specific or otherwise), Standards / Guidelines (e.g. QView), and project related documents (RSD, ADD, FSD etc.).

Annexure
This could contain embedded documents or links to documents which have been / will be used in the course of testing, e.g. templates used for reports, test cases etc. Referenced documents can also be attached here.

Sign-Off
This should contain the mutual agreement between the client and the QA team, with both leads / managers signing off their agreement on the Test Plan.
Configuration data can dictate control flow, data manipulation, presentation and user interface. A system can be configured to fit several business models, work (almost) seamlessly with a variety of cooperative systems, and provide tailored experiences to a host of different users. A business may look to an application's configurability to allow it to keep up with the market without being slowed by the development process; an individual may look for a personalized experience from commonly-available software.

FUNCTIONAL TESTING SUFFERS IF DATA IS POOR
Tests with poor data may not describe the business model effectively, they may be hard to maintain, or they may require lengthy and difficult setup. They may obscure problems or avoid them altogether. Poor data tends to result in poor tests that take longer to execute.

GOOD DATA IS VITAL TO RELIABLE TEST RESULTS
An important goal of functional testing is to allow the test to be repeated with the same result, and varied to allow diagnosis. Without this, it is hard to communicate problems to coders, and it can become difficult to have confidence in the QA team's results, whether they are good or bad. Good data allows diagnosis, effective reporting, and allows tests to be repeated with confidence.

GOOD DATA CAN HELP TESTING STAY ON SCHEDULE
An easily comprehensible and well-understood dataset is a tool to help communication. Good data can greatly assist in speedy diagnosis and rapid re-testing. Regression testing and automated test maintenance can be made speedier and easier by using good data, while an elegantly-chosen dataset can often allow new tests without the overhead of new data.

A formal test plan is a document that provides and records important information about a test project, for example:
o project and quality assumptions
o project background information
o resources
o schedule & timeline
o entry and exit criteria
o test milestones
o tests to be performed
In order to ensure consistency when measuring the results, the tests should be independently monitored. This task would normally be carried out by a nominated member of the Business Recovery Team or a member of the Business Continuity Planning Team. This section of the BCP will contain the names of the persons nominated to monitor the testing process throughout the organization. It will also contain a list of the duties to be undertaken by the monitoring staff.

Prepare Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the tests. This feedback will hopefully enable weaknesses within the Business Recovery Process to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the testing process. The forms should be completed either during the tests (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a Feedback Questionnaire.

Prepare Budget for Testing Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. The 'Preparing for a Possible Emergency' phase of the BCP process will involve the identification and implementation of strategies for back-up and recovery of data files or a part of a business process. It is inevitable that these back-up and recovery processes will involve additional costs. Critical parts of the business process, such as the IT systems, may require particularly expensive back-up strategies to be implemented. Where the costs are significant they should be approved separately, with a specific detailed budget for the establishment costs and the ongoing maintenance costs. This section of the BCP will contain a list of the testing phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.
This section of the BCP is to contain a list of each business process with a test schedule and information on the simulated conditions being used. The testing co-ordination and monitoring will endeavor to ensure that the simulated environments are maintained throughout the testing process in a realistic manner.

Test Accuracy of Employee and Vendor Emergency Contact Numbers
During the testing process the accuracy of employee and vendor emergency contact information is to be re-confirmed. All contact numbers are to be validated for all involved employees. This is particularly important for management and key employees who are critical to the success of the recovery process. This activity will usually be handled by the HRM Department or Division. Where, in the event of an emergency occurring outside of normal business hours, a large number of persons are to be contacted, a hierarchical process could be used whereby one person contacts five others. This process must have safety features incorporated to ensure that if one person is not contactable for any reason, this is notified to a nominated controller. This will enable alternative contact routes to be used.

Assess Test Results
Prepare a full assessment of the test results for each business process. The following questions may be appropriate:
o Were the objectives of the Business Recovery Process and the testing process met? If not, provide further comment.
o Were simulated conditions reasonably "authentic"? If not, provide further comment.
o Was test data representative? If not, provide further comment.
o Did the tests proceed without any problems? If not, provide further comment.
o What were the main comments received in the feedback questionnaires?
Each test should be assessed as either fully satisfactory, adequate, or requiring further testing.

Training Staff in the Business Recovery Process
All staff should be trained in the business recovery process. This is particularly important when the procedures are significantly different from those pertaining to normal operations. This training may be integrated with the training phase or handled separately. The training should be carefully planned and delivered on a structured basis. The training should be assessed to verify that it has achieved its objectives and is relevant for the procedures involved. Training may be delivered either using in-house resources or external resources, depending upon available skills and related costs.

Managing the Training Process
For the BCP training phase to be successful it has to be both well managed and structured. It will be necessary to identify the objective and scope for the training, what specific training is required, and who needs it, and a budget must be prepared for the additional costs associated with this phase.

Develop Objectives and Scope of Training
The objectives and scope of the BCP training activities are to be clearly stated within the plan. The BCP should contain a description of the objectives and scope of the training phase. This will enable the training to be consistent and organized in a manner where the results can be measured and the training fine-tuned as appropriate.
The objectives for the training could be as follows: "To train all staff in the particular procedures to be followed during the business recovery process". The scope of the training could be along the following lines: "The training is to be carried out in a comprehensive and exhaustive manner so that staff become familiar with all aspects of the recovery process. The training will cover all aspects of the Business Recovery activities section of the BCP, including IT systems recovery". Consideration should also be given to the development of a comprehensive corporate awareness program for communicating the procedures for the business recovery process.

Training Needs Assessment
The plan must specify which person or group of persons requires which type of training. It is necessary for all new or revised processes to be explained carefully to the staff. For example, it may be necessary to carry out some processes manually if the IT system is down for any length of time. These manual procedures must be fully understood by the persons who are required to carry them out. For larger organizations it may be practical to carry out the training in a classroom environment; however, for smaller organizations the training may be better handled in a workshop style. This section of the BCP will identify, for each business process, what type of training is required and which persons or group of persons need to be trained.

Training Materials Development Schedule
Once the training needs have been identified, it is necessary to specify and develop suitable training materials. This can be a time-consuming task, and unless priority is given to critical training programmes, it could delay the organization in reaching an adequate level of preparedness. This section of the BCP contains information on each of the training programmes, with details of the training materials to be developed, an estimate of resources and an estimate of the completion date.

Prepare Training Schedule
Once it has been agreed who requires training and the training materials have been prepared, a detailed training schedule should be drawn up. This section of the BCP contains the overview of the training schedule and the groups of persons receiving the training.

Communication to Staff
Once the training is arranged to be delivered to the employees, it is necessary to advise them about the training programmes they are scheduled to attend. This section of the BCP contains a draft communication to be sent to each member of staff to advise them about their training schedule. The communication should provide for feedback from the staff member where the training dates given are inconvenient. A separate communication should be sent to the managers of the business units advising them of the proposed training schedule to be attended by their staff. Each member of staff will be given information on their role and responsibilities applicable in the event of an emergency.

Prepare Budget for Training Phase
Each phase of the BCP process which incurs a cost requires that a budget be prepared and approved. Depending upon the cross-charging system employed by the organization, the training costs will vary greatly. However, it has to be recognized that, however well justified, training incurs additional costs and these should be approved by the appropriate authority within the organization.
This section of the BCP will contain a list of the training phase activities and a cost for each. It should be noted whenever part of the costs is already incorporated within the organization's overall budgeting process.

Assessing the Training
The individual BCP training programmes and the overall BCP training process should be assessed to ensure their effectiveness and applicability. This information will be gathered from the trainers and also the trainees through the completion of feedback questionnaires.

Feedback Questionnaires
It is vital to receive feedback from the persons managing and participating in each of the training programmes. This feedback will enable weaknesses within the Business Recovery Process, or the training, to be identified and eliminated. Completion of feedback forms should be mandatory for all persons participating in the training process. The forms should be completed either during the training (to record a specific issue) or as soon after finishing as practical. This will enable observations and comments to be recorded whilst the event is still fresh in the person's mind. This section of the BCP should contain a template for a Feedback Questionnaire for the training phase.

Assess Feedback
The completed questionnaires from the trainees, plus the feedback from the trainers, should be assessed. Identified weaknesses should be notified to the BCP Team Leader and the process strengthened accordingly. The key issues raised by the trainees should be noted, and consideration given to whether the findings are critical to the process or not. If a significant number of negative issues are raised, then consideration should be given to possible re-training once the training materials, or the process, have been improved. This section of the BCP will contain a format for assessing the training feedback.

Keeping the Plan Up-to-date
Changes to most organizations occur all the time. Products and services change, and so does their method of delivery. The increase in technology-based processes over the past ten years, and particularly within the last five, has significantly increased the level of dependency upon the availability of systems and information for the business to function effectively. These changes are likely to continue, and probably the only certainty is that the pace of change will continue to increase. It is necessary for the BCP to keep pace with these changes in order for it to be of use in the event of a disruptive emergency. This chapter deals with updating the plan and the managed process which should be applied to this updating activity.

Maintaining the BCP
It is necessary for the BCP updating process to be properly structured and controlled. Whenever changes are made to the BCP they are to be fully tested, and appropriate amendments should be made to the training materials. This will involve the use of formalized change control procedures under the control of the BCP Team Leader.

Change Controls for Updating the Plan
It is recommended that formal change controls are implemented to cover any changes required to the BCP. This is necessary due to the level of complexity contained within the BCP. A Change Request Form / Change Order form is to be prepared and approved in respect of each proposed change to the BCP.
This section of the BCP will contain a Change Request Form / Change Order to be used for all such changes to the BCP.

Responsibilities for Maintenance of Each Part of the Plan
Each part of the plan will be allocated to a member of the BCP Team or a Senior Manager within the organization, who will be charged with responsibility for updating and maintaining the plan. The BCP Team Leader will remain in overall control of the BCP, but business unit heads will need to keep their own sections of the BCP up to date at all times. Similarly, the HRM Department will be responsible for ensuring that all emergency contact numbers for staff are kept up to date. It is important that the relevant BCP co-ordinator and the Business Recovery Team are kept fully informed regarding any approved changes to the plan.

Test All Changes to Plan
The BCP Team will nominate one or more persons who will be responsible for co-ordinating all the testing processes and for ensuring that all changes to the plan are properly tested. Whenever changes are made or proposed to the BCP, the BCP Testing Co-ordinator will be notified. The BCP Testing Co-ordinator will then be responsible for notifying all affected units and for arranging any further testing activities. This section of the BCP contains a draft communication from the BCP Co-ordinator to affected business units, containing information about the changes which require testing or re-testing.

Advise Person Responsible for BCP Training
A member of the BCP Team will be given responsibility for co-ordinating all training activities (BCP Training Co-ordinator). The BCP Team Leader will notify the BCP Training Co-ordinator of all approved changes to the BCP in order that the training materials can be updated. An assessment should be made on whether the change necessitates any retraining activities.

Problems which can be caused by Poor Test Data
Most testers are familiar with the problems that can be caused by poor data. The following list details the most common problems familiar to the author. Most projects experience these problems at some stage; recognizing them early can allow their effects to be mitigated.

Unreliable test results
Running the same test twice produces inconsistent results. This can be a symptom of an uncontrolled environment, unrecognized database corruption, or of a failure to recognize all the data that is influential on the system.

Degradation of test data over time
Program faults can introduce inconsistency or corruption into a database. If not spotted at the time of generation, they can cause hard-to-diagnose failures that may be apparently unrelated to the original fault. Restoring the data to a clean set gets rid of the symptom, but the original fault is undiagnosed and can carry on into live operation and perhaps future releases. Furthermore, as the data is restored, evidence of the fault is lost.

Increased test maintenance cost
If each test has its own data, the cost of test maintenance is correspondingly increased.
If that data is itself hard to understand or manipulate, the cost increases further.

Reduced flexibility in test execution
If datasets are large or hard to set up, some tests may be excluded from a test run. If the datasets are poorly constructed, it may not be time-effective to construct further data to support investigatory tests.

Obscure results and bug reports
Without clearly comprehensible data, testers stand a greater chance of missing important diagnostic features of a failure, or indeed of missing the failure entirely. Most reports make reference to the input data and the actual and expected results. Poor data can make these reports hard to understand.

Larger proportion of problems can be traced to poor data
A proportion of all failures logged will be found, after further analysis, not to be faults at all. Data can play a significant role in these failures. Poor data will cause more of these problems.

Less time spent hunting bugs
The more time spent doing unproductive testing or ineffective test maintenance, the less time spent testing.

Confusion between developers, testers and business
Each of these groups has different data requirements. A failure to understand each other's data can lead to ongoing confusion.

Requirements problems can be hidden in inadequate data
It is important to consider inputs and outputs of a process for requirements modeling. Inadequate data can lead to ambiguous or incomplete requirements.

Simpler to make test mistakes
Everybody makes mistakes. Confusing or over-large datasets can make data selection mistakes more common.

Unwieldy volumes of data
Small datasets can be manipulated more easily than large datasets. A few datasets are easier to manage than many datasets.

Business data not representatively tested
Test requirements, particularly in configuration data, often don't reflect the way the system will be used in practice. While this may arguably lead to broad testing for a variety of purposes, it can be hard for the business or the end users to feel confidence in the test effort if they feel distanced from it.

Inability to spot data corruption caused by bugs
A few well-known datasets can be more easily checked than a large number of complex datasets, and may lend themselves to automated testing / sanity checks. A readily understandable dataset can allow straightforward diagnosis; a complex dataset will positively hinder diagnosis.

Poor database/environment integrity
If a large number of testers, or tests, share the same dataset, they can influence and corrupt each other's results as they change the data in the system. This can not only cause false results, but can lead to database integrity problems and data corruption. This can make portions of the application untestable for many testers simultaneously.
Partitioning
Partitions allow data access to be controlled, reducing uncontrolled changes in the data. Partitions can be used independently; data use in one area will have no effect on the results of tests in another. Data can be safely and effectively partitioned by machine / database / application instance, although this partitioning can introduce configuration management problems in software version, machine setup, environmental data and data load/reload. A useful and basic way to start with partitions is to set up, not a single environment for each test or tester, but three environments shared by many users, allowing different kinds of data use. These three have the following characteristics:

Safe area
Used for enquiry tests, usability tests etc. No test changes the data, so the area can be trusted. Many testers can use it simultaneously.

Change area
Used for tests which update/change data. Data must be reset or reloaded after testing. Used by one test/tester at a time.

Scratch area
Used for investigative update tests and those which have unusual requirements. Existing data cannot be trusted. Used at the tester's own risk!

Testing rarely has the luxury of completely separate environments for each test and each tester. Controlling data, and access to data, in a system can be fraught. Many different stakeholders have different requirements of the data, but a common requirement is that of exclusive use. While the impact of this requirement should not be underestimated, a number of stakeholders may be able to work with the same environmental data and, to a lesser extent, setup data - and their work may not need to change the environmental or setup data. The test strategy can take advantage of this by disciplined use of text / value fields, allowing the use of 'soft' partitions. 'Soft' partitions allow the data to be split up conceptually, rather than physically. Although testers are able to interfere with each other's tests, the team can be educated to avoid each other's work. If, for instance, tester 1's tests may only use customers with Russian nationality and tester 2's tests only those with French nationality, the two sets of work can operate independently in the same dataset. A safe area could consist of London addresses, the change area Manchester addresses, and the scratch area Bristol addresses. Typically, values in free-text fields are used for soft partitioning; a minimal sketch follows below.

Data partitions help because they:
o Allow controlled and reliable data, reducing data corruption / change problems
o Can reduce the need for exclusive access to environments/machines

Clarity
Permutation techniques may make data easier to grasp by making the datasets small and commonly used, but we can make our data clearer still by describing each row in its own free-text fields, allowing testers to make a simple comparison between the free text (which is generally displayed on output) and actions based on fields which tend not to be directly displayed. Use of free-text fields with some correspondence to the internals of the record allows output to be checked more easily. Testers often talk about items of data, referring to them by anthropomorphic personification - that is to say, they give them names. This allows shorthand, but also acts as jargon, excluding those who are not in the know. Setting this data, early on in testing, to have some meaningful value can be very useful, allowing testers to sense-check input and output data, and choose appropriate input data for investigative tests. Reports, data extracts and sanity checks can also make use of these; sorting or selecting on a free-text field that should have some correspondence with a functional field can help spot problems or eliminate unaffected data.
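A minimal sketch of the soft partitioning idea described above, assuming customer records with a free-text city field. The city-to-area mapping follows the London/Manchester/Bristol example; the records themselves are invented, and the note field illustrates the use of free text for clarity.

# A sketch of 'soft' partitioning and free-text clarity. The record
# layout and data are assumptions for illustration.

AREAS = {"London": "safe", "Manchester": "change", "Bristol": "scratch"}

records = [
    {"customer": "C001", "city": "London",     "note": "enquiry-only golden record"},
    {"customer": "C002", "city": "Manchester", "note": "updated by payment tests"},
    {"customer": "C003", "city": "Bristol",    "note": "scratch - do not trust"},
]

def area_of(record):
    # A free-text field, not a functional one, decides the partition.
    return AREAS.get(record["city"], "scratch")

for r in records:
    print(r["customer"], "->", area_of(r), "area |", r["note"])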
Data is often used to communicate and illustrate problems to coders and to the business. However, there is generally no mandate for outside groups to understand the format or requirements of test data. Giving some meaning to the data that can be referred to directly can help to improve mutual understanding. Clarity helps because it:
o Improves communication within and outside the team
o Reduces test errors caused by using the wrong data
o Allows another way of doing sanity checks for corrupted or inconsistent data
o Helps when checking data after input
o Helps in selecting data for investigative tests
environmental or configuration scripts. Large volumes of setup data can often be generated from existing datasets and loaded using a data load tool, while small volumes of setup data often have an associated system maintenance function and can be input using the system. Fixed input data may be generated or migrated and is loaded using any and all of the methods above, while consumable input data is typically listed in test scripts or generated as an input to automation tools. When data is loaded, it can append itself to existing data, overwrite existing data, or delete existing data first. Each is appropriate in different circumstances, and due consideration should be given to the consequences.
12.6 Conclusion
Data can be influential on the quality of testing. Well-planned data can allow flexibility and help reduce the cost of test maintenance. Common data problems can be avoided or reduced with preparation and automation. Effective testing of setup data is a necessary part of system testing, and good data can be used as a tool to enable and improve communication throughout the project. The following points summarize the actions that can influence the quality of the data and the effectiveness of its usage:
o Plan the data for maintenance and flexibility
o Know your data, and make its structure and content transparent
o Use the data to improve understanding throughout testing and the business
o Test setup data as you would test functionality
Users/Customers served - The organization, individuals, or class of users/customers serviced by this activity.
Deficiencies noted - The status of the results of executing this activity and any appropriate interpretation of those facts.
The criterion is the user's statement of what is desired. It can be stated in either negative or positive terms. For example, it could indicate the need to reduce complaints or delays, as well as the desired processing turnaround time. A work paper is used to describe the problem and to document the statement of condition and the statement of criteria. For example, the following work paper provides the information for Test Log documentation:
Field Requirements (field - instructions for entering data):
Name of Software Tested: Put the name of the software or subsystem tested.
Problem Description: Write a brief narrative description of the variance uncovered from expectations.
Statement of Conditions: Put the results of the actual processing that occurred here.
Statement of Criteria: Put what the testers believe was the expected result from processing.
Effect of Deviation: If this can be estimated, the testers should indicate what they believe the impact or effect of the problem will be on computer processing.
Cause of Problem: The testers should indicate what they believe is the cause of the problem, if known. If the testers are unable to do this, the work paper will be given to the development team, and they should indicate the cause of the problem.
Location of the Problem: The testers should document where the problem occurred as closely as possible.
Recommended Action: The testers should indicate any recommended action they believe would be helpful to the project team. If not approved, the alternate action should be listed, or the reason for not following the recommended action should be documented.

The blank form thus carries the following fields: Name of the Software Tested, Problem Description, Statement of Condition, Statement of Criteria, Effect of Deviation, Cause of the Problem, Location of the Problem, Recommended Action.
Test Results Data. This data will include:
Test factors - The factors incorporated in the plan, the validation of which becomes the test objective.
Business objectives - The validation that specific business objectives have been met.
Interface objectives - Validation that data/objects can be correctly passed among software components.
Functions/sub-functions - Identifiable software components normally associated with the requirements of the software.
Units - The smallest identifiable software components.
Platform - The hardware and software environment in which the software system will operate.
Test Transactions, Test Suites, and Test Events. These are the test products produced by the test team to perform testing:
Test transactions/events - The types of tests that will be conducted during the execution of tests, which will be based on software requirements.
Inspections - A verification of process deliverables against deliverable specifications.
Reviews - Verification that the process deliverables / phases are meeting the users' true needs.
Defect
This category includes a description of the individual defects uncovered during the testing process. This description includes, but is not limited to:
o Date the defect was uncovered
o Name of the defect
o Location of the defect
o Severity of the defect
o Type of defect
o How the defect was uncovered (test data / test script)
The test logs should add to this information in the form of where the defect originated, when it was corrected, and when it was entered for retest.
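One way to keep such descriptions consistent is to record them in a fixed structure. The sketch below is an assumed record layout covering the fields listed above plus the tracking dates the test log adds; field names and types are illustrative only.

# An assumed record layout for a defect description.
from dataclasses import dataclass
from typing import Optional

@dataclass
class DefectRecord:
    date_uncovered: str
    name: str
    location: str
    severity: str
    defect_type: str
    how_uncovered: str                      # test data / test script
    originated_in: Optional[str] = None     # added by the test log
    date_corrected: Optional[str] = None
    date_entered_for_retest: Optional[str] = None

d = DefectRecord("2004-03-01", "Null total on empty order", "billing module",
                 "Major", "Logic", "Test script TS-17")
print(d.severity, "-", d.name)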
program or from the individual keyboard or keypad software at any time. Individual reports include all of the following information: status report, word processing tests or keypad tests, basic skills tests or data entry tests, progress graph, game scores, and a test report for each test.

Test Director:
o Facilitates a consistent and repeatable testing process. A central repository for all testing assets facilitates the adoption of a more consistent testing process, which can be repeated throughout the application life cycle.
o Provides analysis and decision support. Graphs and reports help analyze application readiness at any point in the testing process. Requirements coverage, run schedules, test execution progress and defect statistics can be used for production planning.
o Provides anytime, anywhere access to test assets. Using Test Director's web interface, testers, developers, business analysts and the client can participate in and contribute to the testing process.
o Traceability throughout the testing process. Test cases can be mapped to requirements, providing adequate visibility over the test coverage of requirements. Test Director links requirements to test cases and test cases to defects.
o Manages both manual and automated testing. Test Director can manage both manual and automated tests (WinRunner). Scheduling of automated tests can be effectively done using Test Director.
Test Report Standards - Defining the components that should be included in a test report.
Statistical Analysis - The ability to draw statistically valid conclusions from quantitative test results.
Testing data used for metrics
Testers are typically responsible for reporting their test status at regular intervals. The following measurements generated during testing are applicable:
o Total number of tests
o Number of tests executed to date
o Number of tests executed successfully to date
Data concerning software defects includes:
o Total number of defects corrected in each activity
o Total number of defects entered in each activity
o Average duration between defect detection and defect correction
o Average effort to correct a defect
o Total number of defects remaining at delivery
Software performance data is usually generated during system testing, once the software has been integrated and functional testing is complete:
o Average CPU utilization
o Average memory utilization
o Measured I/O transaction rate
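The defect measurements above can be derived mechanically from raw defect records. The sketch below assumes a simple record layout with invented dates and effort figures.

# Deriving defect metrics from assumed raw records.
from datetime import date

defects = [
    {"detected": date(2004, 3, 1), "corrected": date(2004, 3, 3), "effort_hours": 4},
    {"detected": date(2004, 3, 2), "corrected": date(2004, 3, 7), "effort_hours": 10},
]

durations = [(d["corrected"] - d["detected"]).days for d in defects]
print("average detection-to-correction (days):", sum(durations) / len(defects))
print("average effort to correct (hours):",
      sum(d["effort_hours"] for d in defects) / len(defects))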
Test Reporting
A final test report should be prepared at the conclusion of each test activity. This includes the following:
o Individual Project Test Report
o Integration Test Report
o System Test Report
o Acceptance Test Report
These test reports are designed to document the results of testing as defined in the test plan. The test report can be a combination of electronic data and hard copy. For example, if the function matrix is maintained electronically, there is no reason to print it, as the paper report will summarize the data, draw appropriate conclusions, and present recommendations.

Purpose of a Test Report: The test report has one immediate and three long-term purposes. The immediate purpose is to provide information to the customers of the software system so that they can determine whether the system is ready for production and, if so, to assess the potential consequences and initiate appropriate actions to minimize those consequences. The first of the three long-term uses is for the project to trace problems in the event the application malfunctions in production. Knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective actions. The second long-term purpose is to use the data to analyze the rework process for making changes to prevent defects from occurring in the future. These defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects. The third long-term purpose is to show what was accomplished in case of, for example, a Y2K lawsuit.

Individual Project Test Report
These reports focus on individual projects (software systems). When different testers test individual projects, they should each prepare a report on their results.

Integration Test Report
Integration testing tests the interfaces between individual projects. A good test plan will identify the interfaces and institute test conditions that will validate them. The format is the same as the Individual Project test report, except that the conditions tested are interfaces:
1. Scope of Test - This section indicates which functions were and were not tested.
2. Test Results - This section indicates the results of testing, including any variance between what is and what should be.
3. What works / What does not work - This section defines the functions and the interfaces that work and do not work.
4. Recommendations - This section recommends actions that should be taken to
fix functions/interfaces that do not work and make additional improvements.

System Test Report
A system test plan standard identifies the objectives of testing, what is to be tested, how it is to be tested, and when tests should occur. The System Test Report should present the results of executing that test plan. If these details are maintained electronically, they need only be referenced, not included in the report.

Acceptance Test Report
There are two primary objectives of the Acceptance Test Report. The first is to ensure that the system as implemented meets the real operating needs of the user/customer. If the defined requirements are those true needs, testing should have accomplished this objective. The second objective is to ensure that the software system can operate in the real-world user environment, which includes people skills and attitudes, time pressures, changing business conditions, and so forth. The Acceptance Test Report should cover these criteria for user acceptance.
13.2.2 Conclusion
The test logs obtained from test execution, and finally the test reports, should be designed to accomplish the following objectives:
Provide information to the customer on whether the system should be placed into production and, if so, the potential consequences and appropriate actions to minimize them.
Serve two long-term objectives, one for the project and the other for the information technology function. The project can use the test report to trace problems in the event the application malfunctions in production; knowing which functions have been correctly tested and which ones still contain defects can assist in taking corrective action. The data can also be used to analyze the development process to make changes that prevent defects from occurring in the future, since defect-prone components identify tasks/steps that, if improved, could eliminate or minimize the occurrence of high-frequency defects.
14 Test Report
A Test Report is a document that is prepared once the testing of a software product is complete and the delivery is to be made to the customer. This document would contain a summary of the entire project and would have to be presented in a way that any person who has not worked on the project would also get a good overview of the testing effort.
2. Test Details
This section would contain the Test Approach, Types of Testing Conducted, Test Environment and Tools Used.

Test Approach - This would discuss the strategy followed for executing the project. This could include information on how coordination was achieved between onsite and offshore teams, any innovative methods used for automation or for reducing repetitive workload on the testers, how information and daily/weekly deliverables were delivered to the client, etc.

Types of Testing Conducted - This section would mention any specific types of testing performed (e.g. Functional, Compatibility, Performance, Usability) along with related specifications.

Test Environment - This would contain information on the hardware and software requirements for the project (e.g. server configuration, client machine configuration, specific software installations required).

Tools Used - This section would include information on any tools that were used for testing the project. They could be functional or performance testing automation tools, defect management tools, project tracking tools or any other tools which made the testing work easier.

3. Metrics
This section would include details on the total number of test cases executed in the course of the project, number of defects found, etc. Calculations like defects found per test case or number of test cases executed per day per person would also be entered in this section. This can be used in calculating the efficiency of the testing effort (see the sketch after the Recommendations section below).

4. Test Results
This section is similar to the Metrics section, but is more for showcasing the salient features of the testing effort. In case many defects have been logged for the project, graphs can be generated and depicted in this section: defects per build, defects based on severity, defects based on status (i.e. how many were fixed and how many rejected), etc.
5. Test Deliverables
This section would include links to the various documents prepared in the course of the testing project, i.e. Test Plan, Test Procedures, Test Logs, Release Report, etc.
6. Recommendations
This section would include any recommendations from the QA team to the client on the product tested. It could also list the known defects which have been logged by QA but not yet fixed by the development team, so that they can be taken care of in the next release of the application.
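To make the Metrics section above concrete, here is a short Python sketch (illustrative only; the counts are hypothetical, not taken from any real project) computing the two efficiency figures mentioned there:

    # Illustrative sketch: the summary calculations a test report's
    # Metrics section typically carries. All numbers are hypothetical.
    total_test_cases = 480   # executed over the course of the project
    total_defects    = 120   # defects logged against those tests
    testers          = 4
    working_days     = 30

    defects_per_test_case    = total_defects / total_test_cases
    tests_per_day_per_person = total_test_cases / (working_days * testers)

    print(f"Defects per test case:    {defects_per_test_case:.2f}")
    print(f"Tests per day per person: {tests_per_day_per_person:.1f}")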
15 Defect Management
15.1 Defect
A mismatch in the application and its specification is a defect. A software error is present when the program does not do what its end user expects it to do.
The initial state of a defect will be New. The Project Lead of the development team will review the defect and set it to one of the following statuses:
Open - Accepts the bug and assigns it to a developer.
Invalid Bug - The reported bug is not a valid one as per the requirements/design.
As Designed - This is intended functionality as per the requirements/design.
Deferred - This will be an enhancement.
Duplicate - The bug has already been reported.
Document - If the defect is set to any of the above statuses apart from Open and the testing team does not agree with the development team, it is set to Document status.
Once the development team has started working on the defect, the status is set to WIP (Work in Progress); if the development team is waiting for a go-ahead or some technical feedback, they will set it to Dev Waiting. After the development team has fixed the defect, the status is set to Fixed, which means the defect is ready to re-test. If, on re-testing, the defect still exists, the status is set to Reopened, which follows the same cycle as an open defect. If the fixed defect satisfies the requirements/passes the test case, it is set to Closed.
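The lifecycle described above can be pictured as a small state machine. The following Python sketch is illustrative only - the transition map is inferred from the prose, and a real defect tracker may permit more transitions:

    # Defect lifecycle as a state machine (inferred from the text above).
    TRANSITIONS = {
        "New":         {"Open", "Invalid Bug", "As Designed", "Deferred", "Duplicate"},
        "Invalid Bug": {"Document"},            # testing team disagrees
        "As Designed": {"Document"},
        "Deferred":    {"Document"},
        "Duplicate":   {"Document"},
        "Open":        {"WIP", "Dev Waiting"},
        "WIP":         {"Dev Waiting", "Fixed"},
        "Dev Waiting": {"WIP"},
        "Fixed":       {"Reopened", "Closed"},  # set on re-test
        "Reopened":    {"WIP", "Dev Waiting"},  # follows the open-defect cycle
    }

    def move(status: str, new_status: str) -> str:
        """Validate and apply a status change."""
        if new_status not in TRANSITIONS.get(status, set()):
            raise ValueError(f"Illegal transition: {status} -> {new_status}")
        return new_status

    status = "New"
    for step in ("Open", "WIP", "Fixed", "Closed"):
        status = move(status, step)
    print(status)  # Closed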
Bug reports need to do more than just describe the bug. They have to give developers something to work with so that they can successfully reproduce the problem. In most cases, the more correct information given, the better. The report should explain exactly how to reproduce the problem and exactly what the problem is. The basic items in a report are as follows:

Version: This is very important. In most cases the product is not static; developers will have been working on it, and if they've found a bug it may already have been reported or even fixed. In either case, they need to know which version to use when testing out the bug.

Product: If you are developing more than one product, identify the product in question.

Data: Unless you are reporting something very simple, such as a cosmetic error on a screen, you should include a dataset that exhibits the error. If you're reporting a processing error, you should include two versions of the dataset, one before the process and one after. If the dataset from before the process is not included, developers will be forced to try to find the bug based on forensic evidence. With the data, developers can trace what is happening.

Steps: List the steps taken to recreate the bug. Include all proper menu names; don't abbreviate and don't assume anything. After you've finished writing down the steps, follow them - make sure you've included everything you type and do to get to the problem. If there are parameters, list them. If you have to enter any data, supply the exact data entered. Go through the process again and see if there are any steps that can be removed. When you report the steps, they should be the clearest steps to recreating the bug.
Description: Explain what is wrong. Try to weed out any extraneous information, but detail what is wrong, and include a list of what was expected. Remember to report one problem at a time; don't combine bugs in one report.

Supporting documentation: If available, supply documentation. If the process is a report, include a copy of the report with the problem areas highlighted, and include what you expected. If you have a report to compare against, include it and its source information (if it's a printout from a previous version, include the version number and the dataset used). This information should be stored in a centralized location so that developers and testers have access to it. The developers need it to reproduce the bug, identify it and fix it. Testers will need this information for later regression testing and verification.
15.5.1 Summary
A bug report is a case against a product. In order to work, it must supply all the information necessary not only to identify the problem but also what is needed to fix it. It is not enough to say that something is wrong; the report must also say what the system should be doing. The report should be written in clear, concise steps, so that someone who has never seen the system can follow them and reproduce the problem. It should include information about the product, including the version number and what data was used. The more organized the information provided, the better the report will be.
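As an illustration, the fields discussed above can be captured in a simple data structure. This Python sketch is not from any particular defect tracker; the field names and sample values are hypothetical:

    # Sketch: the fields of a well-formed bug report as a data structure.
    from dataclasses import dataclass, field

    @dataclass
    class BugReport:
        product: str                 # which product, if you develop several
        version: str                 # build the bug was found against
        description: str             # what is wrong, and what was expected
        steps: list                  # exact steps to recreate the bug
        data: str = ""               # dataset exhibiting the error (before/after)
        attachments: list = field(default_factory=list)  # supporting documentation

    report = BugReport(
        product="Payroll",
        version="2.3.1",
        description="Salary check prints wrong total; expected 1,250.00",
        steps=["Open Payroll > Reports > Salary Check",
               "Select employee 1042",
               "Click Print"],
    )
    print(report.product, report.version)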
16 Automation
What is Automation?
Automated testing is automating the manual testing process currently in use.
Many automated testing tools can replicate the activity of a large number of users (and their associated transactions) using a single computer. Therefore, load/stress testing using automated methods requires only a fraction of the computer hardware that would be necessary to complete a manual test. Imagine performing a load test on a typical distributed client/server application on which 50 concurrent users were planned. To do the testing manually, 50 application users employing 50 PCs with associated software, an available network, and a cadre of coordinators to relay instructions to the users would be required. With an automated scenario, the entire test operation could be created on a single machine with the ability to run and rerun the test as necessary, at night or on weekends, without having to assemble an army of end users. As another example, imagine the same application used by hundreds or thousands of users. It is easy to see why manual methods for load/stress testing are expensive and a logistical nightmare.
regulations as well as being required to document their quality assurance efforts for all parts of their systems.
disruptions in critical operations. Mission-critical processes are prime candidates for automated testing. Examples include financial month-end closings, production planning, sales order entry and other core activities. Any application with a high degree of risk associated with a failure is a good candidate for test automation.

Repetitive Testing - If a testing procedure can be reused many times, it is also a prime candidate for automation. For example, common outline files can be created to establish a testing session, close a testing session and apply testing values. These automated modules can be used again and again without having to rebuild the test scripts. This modular approach saves time and money when compared to creating a new end-to-end script for each and every test.

Applications with a Long Life Span - If an application is planned to be in production for a long period of time, the greater the benefits from automation.

What to Look For in a Testing Tool
Choosing an automated software testing tool is an important step, and one which often poses enterprise-wide implications. Here are several key issues which should be addressed when selecting an application testing solution.
Internet/Intranet Testing
A good tool will have the ability to support testing within the scope of a web browser. The tests created for testing Internet or intranet-based applications should be portable across browsers, and should automatically adjust for different load times and performance levels.
Performance Testing Process & Methodology 104 Proprietary & Confidential -
Ease of Use
Testing tools should be engineered to be usable by non-programmers and application end-users. With much of the testing responsibility shifting from the development staff to the departmental level, a testing tool that requires programming skills is unusable by most organizations. Even if programmers are responsible for testing, the testing tool itself should have a short learning curve.
A robust testing tool should support testing with a variety of user interfaces and create simple-to-manage, easy-to-modify tests. Test component reusability should be a cornerstone of the product architecture.
The selected testing solution should allow users to perform meaningful load and performance tests to accurately measure system performance. It should also provide test results in an easy-to-understand reporting format.
Test Planning
Careful planning is the key to any successful process. To guarantee the best possible result from an automated testing program, those evaluating test automation should consider these fundamental planning steps. The time invested in detailed planning significantly improves the benefits resulting from test automation.
Begin the automated testing process by defining exactly what tasks your application software should accomplish in terms of the actual business activities of the end-user. The definition of these tasks, or business requirements, defines the high-level, functional requirements of the software system in question. These business requirements should be defined in such a way as to make it abundantly clear that the software system correctly (or incorrectly) performs the necessary business functions. For example, a business requirement for a payroll application might be to calculate a salary, or to print a salary check.
A test case identifies the specific input values that will be sent to the application, the procedures for applying those inputs, and the expected application values for the procedure being tested. A proper test case will include the following key components:
Test Case Name(s) - Each test case must have a unique name, so that the results of these test elements can be traced and analyzed.
Test Case Prerequisites - Identify set-up or testing criteria that must be established before a test can be successfully executed.
Test Case Execution Order - Specify any relationships, run orders and dependencies that might exist between test cases.
Test Procedures - Identify the application steps necessary to complete the test case.
Input Values - This section of the test case identifies the values to be supplied to the application as input, including, if necessary, the action to be completed.
Expected Results - Document all screen identifier(s) and expected value(s) that must be verified as part of the test. These expected results will be used to measure the acceptance criteria, and therefore the ultimate success of the test.
Test Data Sources - Take note of the sources for extracting test data if it is not included in the test case.

Inputs to the Test Design and Construction Process:
Test Case Documentation Standards
Test Case Naming Standards
Approved Test Plan
Business Process Documentation
Business Process Flow
Test Data Sources

Outputs from the Test Design and Construction Process:
Revised Test Plan
Test Procedures for each Test Case
Test Case(s) for each application function described in the test plan
Procedures for test set-up, test execution and restoration
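By way of illustration, one such test case could be represented as follows. This Python sketch is hypothetical - the names, values and placeholder executor are not from the original process:

    # A sketch of one test case carrying the key components listed above.
    test_case = {
        "name": "TC_PAY_001_calculate_salary",        # unique, traceable name
        "prerequisites": ["employee 1042 exists", "pay period is open"],
        "execution_order": 1,                          # run-order dependency
        "procedures": ["open payroll screen", "enter hours", "submit"],
        "input_values": {"employee_id": 1042, "hours": 160},
        "expected_results": {"salary_field": "2,500.00"},
        "data_source": "testdata/payroll.csv",         # external test data
    }

    def run(case: dict) -> bool:
        """Placeholder executor: apply inputs, compare against expected."""
        actual = {"salary_field": "2,500.00"}          # stand-in for the app call
        return actual == case["expected_results"]

    print(run(test_case))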
Inputs to the Test Execution Process:
Approved Test Plan
Documented Test Cases
Stabilized, repeatable test execution environment
Standardized Test Logging Procedures

Outputs from the Test Execution Process:
Test Execution Log(s)
Restored test environment

The test execution phase of your software test process will control how the test gets applied to the application. This step of the process can range from very chaotic to very simple and schedule-driven. The problems experienced in test execution are usually attributed to not properly performing steps from earlier in the process. Additionally, there may be several test execution cycles necessary to complete all the types of testing required for your application. For example, one test execution cycle may be required for the functional testing of an application, and a separate cycle may be required for the stress/volume testing of the same application. A complete and thorough test plan will identify this need, and many of the test cases can be used for both test cycles. The secret to a controlled test execution is comprehensive planning. Without an adequate test plan in place to control your entire test process, you may inadvertently cause problems for subsequent testing.
[Figure: test execution flow - test-ready application, result analysis, defect management]
With client/server testing the target customer is usually well defined - you know what network operating system you will be using, the applications and so on - but on the web it is far different. A person may be connecting from the USA or Africa; they may be disabled; they may use various browsers; and the screen resolution on their computers will differ. They will speak different languages, have fast connections and slow connections, and connect using Mac, Linux or Windows, etc. So the cost to set up a test environment is usually greater than for a client/server test, where the environment is fairly well defined.
Can the tool interface with files, spreadsheets, etc. to create and extract data? Can you randomise the access to that data? Is the data access truly random? This functionality is normally more important than database tests, as databases will usually have their own interface for running queries. However, applications (except via manual input) do not usually provide facilities for bulk data input. The added benefit (as I have found) is that this functionality can be used for a production reason, e.g. the aforementioned bulk data input sometimes carried out in data migration or application upgrades. These functions are also very important as you move from the record/playback phase to data-driven and then framework testing. Data-driven tests are tests that replace hard-coded names, addresses, numbers, etc. with variables supplied from an external source, usually a CSV (comma-separated values) file, spreadsheet or database. Frameworks are usually the ultimate goal in deploying automation test tools. Frameworks provide an interface to all the applications under test by exposing a suitable list of functions, databases, etc. This allows an inexperienced tester/user to run tests by just running/providing the test framework with known commands/variables. A test framework has parallels to software frameworks, where you develop an encapsulation layer of software (the framework) around the applications, databases, etc. and expose functions, classes, methods, etc. that are used to call the underlying applications, return data, input data and so on. However, doing this requires a lot of time, skilled resources and money to facilitate the first two.
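A minimal sketch of a data-driven test, assuming a hypothetical orders.csv with name and quantity columns (the application call is a stand-in):

    # Data-driven test: hard-coded values replaced by rows from a CSV file.
    import csv

    def submit_order(name: str, quantity: int) -> bool:
        """Stand-in for driving the application under test."""
        return bool(name) and quantity > 0

    with open("testdata/orders.csv", newline="") as f:   # hypothetical file
        for row in csv.DictReader(f):                    # one test iteration per row
            ok = submit_order(row["name"], int(row["quantity"]))
            print(f"{row['name']}: {'PASS' if ok else 'FAIL'}")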
If you have a custom object that behaves like one of these standard controls, are you able to map it (tell the test tool that the custom control behaves like the standard control)? Does it support all the standard control's methods? Can you add the custom control to its own class of control?
17.11 Extensible Language
Here is a question that you will hear time and time again in automation forums: how do I get {insert test tool name here} to do such and such? There will be one of four answers:
I don't know
It can't do it
It can do it using function x, y or z
It can't in the standard language, but you can do it like this
What we are concerned with in this section is the last answer, e.g. if the standard test language does not support something, can I create a DLL or extend the language in some way to do it? This is usually an advanced topic and is not encountered until the trained tester has been using the tool for at least 6-12 months. However, when this is encountered, the tool should support language extension. If it is via DLLs, then the tester must have knowledge of a traditional development language, e.g. C, C++ or VB. For instance, if I wanted to extend a tool that could use DLLs created by VB, I would need to have Visual Basic, open say an ActiveX DLL project, and create a class containing various methods (similar to functions); then I would make a DLL file, register it on the machine, and reference that DLL from the test tool, calling the methods according to their specification. This will sound a lot clearer as you go on with the tools, and this document will be updated to include advanced topics like this in extending the tools' capabilities. Some tools provide extension by allowing you to create user-defined functions, methods, classes, etc., but these are normally a mixture of the already supported data types and functions rather than extending the tool beyond its released functionality. Because this is an advanced topic, I have not taken into account ease of use, as those people who have got to this level should have already exhausted the current capabilities of the tools, want to use external functions like Win32 API functions and so on, and should have a good grasp of programming.
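As a language-neutral illustration of the DLL idea (not the VB route described above), Python's ctypes can load a system DLL and call a Win32 API function directly; this sketch is Windows-only:

    # Extending beyond a language's built-ins by calling into a DLL.
    import ctypes

    user32 = ctypes.WinDLL("user32")          # load the system DLL (Windows only)
    SM_CXSCREEN, SM_CYSCREEN = 0, 1           # GetSystemMetrics constants

    width  = user32.GetSystemMetrics(SM_CXSCREEN)
    height = user32.GetSystemMetrics(SM_CYSCREEN)
    print(f"Primary screen: {width}x{height}")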
17.12 Environment Support
How many environments does the tool support out of the box? Does it support the latest Java release? What about Oracle, PowerBuilder, WAP, etc.? Most tools can interface to unsupported environments if the developers in that environment provide classes, DLLs, etc. that expose some of the application's details, but whether a developer will or has time to do this is another question. Ultimately this is the most important part of automation: environment support. If the tool does not support your environment/application then you are in trouble, and in most cases you will need to revert to manually testing the application (more shelfware).
17.13 Integration
How well does the tool integrate with other tools? This is becoming more and more important. Does the tool allow you to run it from various test management suites? Can you raise a bug directly from the tool and feed the information gathered from your test logs into it? Does it integrate with products like Word and Excel, or with requirements management tools? When managing large test projects with an automation team greater than five and testers totalling more than ten, the management aspect and the tool's integration move further up the importance ladder. An example: a major bank wants to redesign its workflow management system to allow faster processing of customer queries. The anticipated requirements for the new workflow software number in the thousands. To test these requirements, 40,000 test cases have been identified, 20,000 of which can be automated. How do I manage this? This is where a test management tool comes in really handy. Also, how do I manage the bugs raised as a result of automation testing? Integration becomes very important, rather than having separate systems that don't share data and that may require duplication of information. The companies that will score higher here are those that provide tools outside the testing arena, as they can build in integration to their other products; when it has come down to the wire on some projects, we have gone with the tool that integrated with the products we already had.
17.14 Cost
In my opinion cost is the least significant factor in this matrix. Why? Because all the tools are similar in price, except Visual Test, which is at least 5 times cheaper than the rest - but as you will see from the matrix there is a reason: although very functional, it does not provide the range of facilities that the other tools do. Price typically ranges from $2,900 - $5,000 (depending on quantity bought, packages, etc.) in the US, and around £2,900 - £5,000 in the UK, for the base tools included in this document. Since the tools all cost a similar price, it is usually a case of which one will do the job rather than which is the cheapest. Visual Test, I believe, will prove to be a bigger hit as it expands its functional range; it was not that long ago that it did not support web-based testing.
The prices are kept this high because they can be. All the tools are roughly the same price, and the volume of sales is low relative to, say, a fully-fledged programming language IDE like JBuilder or Visual C++, which are a lot more function-rich and flexible than any of the test tools. On top of the above prices you usually pay an additional maintenance fee of between 10 and 20%. There are not many applications I know of that cost this much per license - not even some very advanced operating systems. However, it is all a matter of supply: the bigger the supply, the lower the price, as you can spread the development costs more. I do not anticipate the prices moving upwards, as this seems to be the price the market will tolerate. Visual Test also provides a free runtime license.
17.15 Ease Of Use
This section is very subjective, but I have used testers (my guinea pigs) of various levels and got them from scratch to use each of the tools. In more cases than not they have agreed on which was the easiest to use (initially). Obviously this can change as the tester becomes more experienced and issues such as extensibility, script maintenance, integration and data-driven tests become important. However, this score is based on the productivity that can be gained in, say, the first three months, when those issues are not such a big concern. Ease of use includes out-of-the-box functions, debugging facilities, layout on screen, help files and user manuals.
17.16 Support
In the UK this can be a problem, as most of the test tool vendors are based in the USA with satellite branches in the UK. Just from my own experience and that of the testers I know in the UK, we have found Mercury to be the best for support, then Compuware, then Rational, and last Segue. Having said that, you can find a lot of resources for Segue on the Internet, including a forum at www.betasoft.com that can provide most of the answers rather than ringing the support line. On their websites, Segue and Mercury provide much useful user- and vendor-contributed material. I have also included various other criteria like the availability of skilled resources, online resources, validity of responses from the helpdesk, speed of responses and similar.
17.17 Object Tests
Now, presuming the tool of choice does work with the application you wish to test, what services does it provide for testing object properties? Can it validate several properties at once? Can it validate several objects at once? Can you set object properties to capture the application state? This should form the bulk of your verification as far as the automation process is concerned, so I have looked at the tools' facilities on client/server as well as web-based applications.
17.18 Matrix
What will follow after the matrix is a tool-by-tool comparison under the appropriate headings (as listed above) so that the user can get a feel for the tools' functionality side by side. Each category in the matrix is given a rating of 1 - 5:
1 = Excellent support for this functionality
2 = Good support, but lacking, or another tool provides more effective support
3 = Basic support only
4 = Only supported by use of an API call or third-party add-in, but not included in the general test tool / below average
5 = No support

[Matrix: per-tool ratings for WinRunner, QARun, SilkTest, Visual Test and Robot across the categories Test/Error Recovery, Image Testing, Record & Playback, Data Functions, Extensible Language, Object Mapping, Object Name Map, Database Tests, Object Identity, Environment Support, Object Tests, Ease of Use, Integration, Web Testing, Cost and Support - the table layout was lost in extraction.]
17.19 Matrix score
Win Runner = 24
QARun = 25
SilkTest = 24
Visual Test = 39
Robot = 24

(Lower totals are better, since 1 represents excellent support.)
Rational Robot - Facilitates functional and performance testing by automating record and playback of test scripts. Allows you to write, organize, and run tests, and to capture and analyze the results.
Rational Test Factory - Automates testing by combining automatic test generation with source-code coverage analysis. Tests an entire application, including all GUI features and all lines of source code.
Rational Load Test - During playback, Rational Load Test can emulate hundreds, even thousands, of users placing heavy loads and stress on your database and Web servers.
Rational Test categorizes test information within a repository by project. You can use the Rational Administrator to create and manage projects.
The tools to be discussed here are:
Rational Administrator
Rational Robot
Rational Test Manager
Open the Rational Administrator and go to File -> New Project. In the window that opens, enter the details such as project name and location, and click Next. In the next window, enter a password if you want to protect the project; the password is then required to connect to, configure or delete the project.
Click Finish. In the Configure Project window displayed, click the Create button. To manage requirements assets, connect to RequisitePro; to manage test assets, create an associated test datastore; and for defect management, connect to a ClearQuest database.
Once the Create button in the Configure Project window is chosen, the Create Test Data Store window will be displayed. Accept the default path and click the OK button.
A confirmation window indicates that the test datastore was successfully created; click OK to close it.
Click OK in the configure project window and now your first Rational project is ready to play with.
Create and edit scripts using the SQABasic, VB, and VU scripting environments. The Robot editor provides color-coded commands with keyword Help for powerful integrated programming during script development.
Test applications developed with IDEs such as Visual Basic, Oracle Forms, PowerBuilder, HTML, and Java. Test objects even if they are not visible in the application's interface.
Collect diagnostic information about an application during script playback. Robot is integrated with Rational Purify, Quantify, and PureCoverage; you can play back scripts under a diagnostic tool and see the results in the log.
The Object-Oriented Recording technology in Robot lets you generate scripts quickly by simply running and using the application-under-test. Robot uses Object-Oriented Recording to identify objects by their internal object names, not by screen coordinates. If objects change locations or their text changes, Robot still finds them on playback. The Object Testing technology in Robot lets you test any object in the application-under-test, including the object's properties and data. You can test standard Windows objects and IDE-specific objects, whether they are visible in the interface or hidden.
Once logged in, you will see the Robot window. Go to File -> New -> Script.
In the screen displayed, enter the name of the script, say First Script, by which the script will be referred to from now on, and an optional description. The type of the script is GUI for functional testing and VU for performance testing.
The GUI Script window (top pane) displays GUI scripts that you are currently recording, editing, or debugging. It has two panes:
Asset pane (left) - Lists the names of all verification points and low-level scripts for this script.
Script pane (right) - Displays the script.
The Output window (bottom pane) has two tabs:
Build - Displays compilation results for all scripts compiled in the last operation. Line numbers are enclosed in parentheses to indicate lines in the script with warnings and errors.
Console - Displays messages that you send with the SQAConsoleWrite command. Also displays certain system messages from Robot.
To display the Output window, click View -> Output.

How do you record and play back a script? To record a script, go to Record -> Insert at Cursor, perform the navigation in the application to be tested, and once recording is done, stop the recording with Record -> Stop.
In this window we can set options as follows:
General tab - General options like identification of lists and menus, and recording think time.
Web Browser tab - Mention the browser type, IE or Netscape.
Robot Window tab - How Robot should be displayed during recording, and hotkey details.
Object Recognition Order tab - The order in which recording is to happen; for example, select a preference in the Object order preference list.
If you will be testing C++ applications, change the object order preference to C++ Recognition Order.
18.6.1 Playback options
Go to Tools -> Playback Options to set the options needed while running the script.
This will help you handle unexpected windows during playback, set error recovery, specify the timeout period, and manage the log and log data.
A verification point is stored in the project and is always associated with a script. When you create a verification point, its name appears in the Asset (left) pane of the Script window. The verification point script command, which always begins with Result =, appears in the Script (right) pane. Because verification points are assets of a script, if you delete a script, Robot also deletes all of its associated verification points. You can easily copy verification points to other scripts if you want to reuse them.
The verification point types include:
Module Existence
Object Data
Object Properties
Region Image
Web Site Compare
Web Site Scan
Window Existence
Window Image
1. If editing, position the pointer in the script and click the Display GUI Insert Toolbar button on the Standard toolbar.
2. Click the Comment button on the GUI Insert toolbar.
3. Type the comment (60 characters maximum).
4. Click OK to continue recording or editing.
Robot inserts the comment into the script (in green by default) preceded by a single quotation mark. For example:
' This is a comment in the script

To change lines of text into comments or to uncomment text:
1. Highlight the text.
18.12 Debug menu
The Debug menu has the following commands:
Go
Go Until Cursor
Animate
Pause
Stop
Set or Clear Breakpoints
Clear All Breakpoints
Step Over
Step Into
Step Out

Note: The Debug menu commands are for use with GUI scripts only.
18.14 Compilation errors
After the script is created and compiled and any errors are fixed, it can be executed. The results then need to be analyzed in the Test Manager.
In the Results tab of the Test Manager, you can see the stored results. From Test Manager you can see the start time of the script and
20.2 Protocols
Oracle
SQL Server
HTTP
Sybase
Tuxedo
SAP
PeopleSoft
www.rational.com
21 Performance Testing
Performance testing is a measure of the performance characteristics of an application. The main objective of performance testing is to demonstrate that the system functions to specification with acceptable response times while processing the required transaction volumes against a real-time production database - that is, that the system meets requirements for transaction throughput and response times simultaneously. The main deliverables from such a test, prior to execution, are automated test scripts and an infrastructure to be used to execute automated tests for extended periods.
Typically, to debug applications, developers execute their applications using different execution streams (i.e., completely exercising the application) in an attempt to find errors. When looking for errors in the application, performance is a secondary issue to features;
Quantitative, relevant, measurable, realistic, achievable requirements
As a foundation for all tests, performance requirements should be agreed prior to the test. This helps in determining whether or not the system meets the stated requirements. The following attributes make for a meaningful performance comparison:
Quantitative - expressed in quantifiable terms, such that when response times are measured, a sensible comparison can be derived.
Relevant - a response time must be relevant to a business process.
Measurable - a response time should be defined such that it can be measured using a tool or stopwatch, and at reasonable cost.
Realistic - response time requirements should be justifiable when compared with the durations of the activities within the business process the system supports.
Achievable - response times should take some account of the cost of achieving them.
Stable system
A test team attempting to construct a performance test of a system whose software is of poor quality is unlikely to be successful. If the software crashes regularly, it will probably not withstand the relatively minor stress of repeated use. Testers will not be able to record scripts in the first instance, or may not be able to execute a test for a reasonable length of time before the software, middleware or operating system crashes.

Realistic test environment
The test environment should ideally be the production environment or a close simulation, and be dedicated to the performance test team for the duration of the test. Often this is not possible. However, for the results of the test to be realistic, the test environment should be comparable to the actual production environment. Even with an environment which is somewhat different from the production environment, it should still be possible to interpret the results obtained using a model of the system to predict, with some confidence, the behavior of the target environment. A test environment which bears no similarity to the actual production environment may be useful for finding obscure errors in the code but is useless for a performance test.
Response time requirements
When asked to specify performance requirements, users normally focus attention on response times, and often wish to define requirements in terms of generic response times. A single response time requirement for all transactions might be simple to define from the user's point of view, but is unreasonable. Some functions are critical and require short response times; others are less critical and their response time requirements can be less stringent.

Load profiles
The second component of performance requirements is a schedule of load profiles. A load profile is the level of system loading expected to occur during a specific business scenario. Business scenarios might cover different situations when the user's organization has different levels of activity, or involve a varying mix of activities which must be supported by the system.

Database volumes
Data volumes, defining the numbers of table rows which should be present in the database
after a specified period of live running, complete the load profile. Typically, data volumes estimated to exist after one year's use of the system are used, but two-year volumes or greater might be used in some circumstances, depending on the business application.
[Figure: performance testing process flow - activities Test Plan, Test Design, Scripting, Test Scripts, Test Execution, Test Analysis and Preliminary Report, each with an internal deliverable; if the performance goal is not reached (NO), the cycle repeats; once it is reached (YES), the final report is prepared.]
22.1.1 Deliverables
Deliverable: Requirement Collection - Sample: RequirementCollection.doc
22.2.1 Deliverables
Deliverable: Test Plan - Sample: TestPlan.doc
Script customization (delays, checkpoints, synchronization points)
Data generation
Parameterization / data pooling
Work items:
Hardware and software requirements, including the server components, the load generators used, etc.
Setting up the monitoring servers
Setting up the data
Preparing all the necessary folders for saving the results once the test is over
Pre-test and post-test procedures
22.3.1 Deliverables
Deliverable: Test Design - Sample: TestDesign.doc
22.4.1 Deliverables
Deliverable: Test Scripts - Sample: Sample Script.doc
22.5.1 Deliverables
Deliverable: Test Execution - Samples: Time Sheet.doc, Run Logs.doc
22.6.1 Deliverables
Deliverable: Test Analysis - Sample: Preliminary Report.doc
22.7.1 Deliverables
Deliverable: Final Report - Sample: Final Report.doc
Common mistakes in performance analysis include:
No goals - goals drive the choice of techniques, metrics and workload, and setting them is not trivial
Biased goals - to show that OUR system is better than THEIRS; analysts should act as a jury
Unsystematic approach
Analysis without understanding the problem
Incorrect performance metrics
Unrepresentative workload
Wrong evaluation technique
Overlooking important parameters
Ignoring significant factors
Inappropriate experimental design
Inappropriate level of detail
No analysis
Erroneous analysis
No sensitivity analysis
Ignoring errors in input
Improper treatment of outliers
Assuming no change in the future
Ignoring variability
Too complex analysis
Improper presentation of results
Ignoring social aspects
Omitting assumptions and limitations
Establish incremental performance goals throughout the product development cycle. All the members of the team should agree that a performance issue is not just a bug; it is a software architectural problem. Performance testing of Web services and applications is paramount to ensuring an excellent customer experience on the Internet. The Web Capacity Analysis (WebCAT) tool provides Web server performance analysis; the tool can also assess Internet Server Application Programming Interface and application server provider (ISAPI/ASP) applications.

Creating an automated test suite to measure performance is time-consuming and labor-intensive, so it is important to define concrete performance goals. Without defined performance goals or requirements, testers must guess, without a clear purpose, at how to instrument tests to best measure various response times.

The performance tests should not be used to find functionality-type bugs. Design the performance test suite to measure response times, not to identify bugs in the product. Design the build verification test (BVT) suite to ensure that no new bugs are injected into the build that would prevent the performance test suite from successfully completing.

The performance tests should be modified sparingly and consistently. Significant changes to the performance test suite skew or make obsolete all previous data; therefore, keep the performance test suite fairly static throughout the product development cycle. If the design or requirements change and you must modify a test, perturb only one variable at a time for each build.

Strive to achieve the majority of the performance goals early in the product development cycle, because most performance issues require architectural change, and performance is known to degrade slightly during the stabilization phase of the development cycle.
Achieving performance goals early also helps to ensure that the ship date is met, because a product rarely ships if it does not meet performance goals. You should reuse automated performance tests: they can often be reused in many other automated test suites. For example, incorporate the performance test suite into the stress test suite to validate stress scenarios and to identify potential performance issues under different stress conditions. Tests are capturing secondary metrics when the instrumented tests have nothing to do with measuring clear and established performance goals. Although secondary metrics look good on wall charts and in reports, if the data is not going to be used in a meaningful way to make improvements in the engineering cycle, it is probably wasted data. Ensure that you know what you are measuring and why.
Testing for most applications will be automated. The tools used for testing would be those specified in the requirement specification. The tools used for performance testing here are LoadRunner 6.5 and WebLoad 4.5x.
23 Tools
23.1 LoadRunner 6.5
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable and measurable load to exercise your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information that you need to evaluate the performance of your client/server system.
WebSizr, WebCorder (Technovations) - http://www.technovations.com/home.htm - The WebSizr load testing tool supports authentication, cookies and redirects. Notes: downloadable, 30-day evaluation period.
Hardware Benchmarking - Hardware benchmarking is performed to size the application on the planned hardware platform. It differs significantly from a capacity planning exercise in that it is done after development and before deployment.
Software Benchmarking - Defining the right placement and composition of software instances can help in vertical scalability of the system without the addition of hardware resources. This is achieved through a software benchmark test.
Purpose: Explains the value and focus of the test, along with some simple background information that might be helpful during testing.
Constraints: Details any constraints and values that should not be exceeded during testing.
Time estimate: A rough estimate of the amount of time that the test may take to complete.
Type of workload: In order to properly achieve the goals of the test, each test requires a certain type of workload. This methodology specification provides information on the appropriate script of pages or transactions for the user.
Methodology: A list of suggested steps to take in order to assess the system under test.
What to look for: Contains information on behaviors, issues and errors to pay attention to during and after the test.
24 Performance Metrics
The common metrics selected/used during performance testing are as below:
Response time
Turnaround time - the time between the submission of a batch job and the completion of its output.
Stretch factor - the ratio of the response time with concurrent users to that with a single user.
Throughput - rate (requests per unit of time). Examples: jobs per second, requests per second, Millions of Instructions Per Second (MIPS), Millions of Floating Point Operations Per Second (MFLOPS), Packets Per Second (PPS), bits per second (bps), Transactions Per Second (TPS).
Capacity:
Nominal capacity - maximum achievable throughput under ideal workload conditions (e.g., bandwidth in bits per second); the response time at maximum throughput is too high.
Usable capacity - maximum throughput achievable without exceeding a pre-specified response-time limit.
Efficiency - the ratio of usable capacity to nominal capacity; or, the ratio of the performance of an n-processor system to that of a one-processor system.
Utilization - the fraction of time the resource is busy servicing requests; average fraction used, for memory.

As tests are executed, metrics such as response times for transactions, HTTP requests per second and throughput should be collected. It is also important to monitor and collect statistics such as CPU utilization, memory, disk space and network usage on individual web, application and database servers, and to make sure those numbers recede as load decreases. Cognizant has built custom monitoring tools to collect these statistics. Third-party monitoring tools are also used based on the requirement.
Analysis of the results also covers:
Page component breakdown time
Page download time
Component size analysis
Error statistics
Errors per second
Total successful/failed transactions
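As an illustration of how these figures relate, here is a small Python sketch (not part of the original methodology; all sample numbers are hypothetical) deriving average response time, stretch factor, throughput and utilization from raw test samples:

    # Deriving the metrics defined above from hypothetical raw samples.
    response_single_user = 0.8              # seconds, single-user baseline
    responses = [1.2, 1.5, 0.9, 2.1, 1.8]   # seconds, under concurrent load
    test_duration = 60.0                    # seconds
    transactions = 1500
    busy_time = 42.0                        # seconds the resource was busy

    avg_response = sum(responses) / len(responses)
    stretch      = avg_response / response_single_user   # stretch factor
    throughput   = transactions / test_duration          # TPS
    utilization  = busy_time / test_duration             # fraction busy

    print(f"Avg response: {avg_response:.2f}s  Stretch: {stretch:.2f}")
    print(f"Throughput: {throughput:.1f} TPS  Utilization: {utilization:.0%}")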
24.4 Conclusion
Performance testing is an independent discipline and involves all the phases of the mainstream testing lifecycle: strategy, planning, design, execution, analysis and reporting. Without the rigor described in this paper, executing performance testing does not yield anything more than finding more defects in the system. However, if executed systematically with appropriate planning, performance testing can unearth issues that otherwise cannot be found through mainstream testing. It is very typical of a project manager to be overtaken by time and resource pressures, leading to not enough budget being allocated for performance testing, the consequences of which could be disastrous to the final system. There is an important point to be noted here: before testing the system for performance requirements, the system should have been architected and designed to meet the required performance goals. If not, it may be too late in the software development cycle to correct serious performance issues. Web-enabled applications and infrastructures must be able to execute evolving business processes with speed and precision while sustaining high volumes of changing and unpredictable user audiences. Load testing gives the greatest line of defense against poor performance and accommodates complementary strategies for performance management and monitoring of a production environment. The discipline helps businesses succeed in leveraging Web technologies to their best advantage, enabling new business opportunity, lowering transaction costs and strengthening profitability. Fortunately, robust and viable solutions exist to help fend off disasters that result from poor performance. Automated load
testing tools and services are available to meet the critical need of measuring and optimizing complex and dynamic application and infrastructure performance. Once these solutions are properly adopted and utilized, leveraging an ongoing, lifecycle-focused approach, businesses can begin to take charge and leverage information technology assets to their competitive advantage. By continuously testing and monitoring the performance of critical software applications, business can confidently and proactively execute strategic corporate initiatives for the benefit of shareholders and customers alike.
25 Load Testing
Load testing is the creation of a simulated load on a real computer system by using virtual users who submit work as real users would do at real client workstations, thus testing the system's ability to support such a workload. Testing of critical web applications during development and before deployment should include functional testing to confirm conformance to the specifications, performance testing to check whether the application offers an acceptable response time, and load testing to see what hardware or software configuration will be required to provide acceptable response times and handle the load that will be created by the real users of the system.
Finally, the test tool's support for load testing should also be taken into consideration: its multithreading capabilities and its ability to create a maximal number of virtual users with minimal resource consumption.
26.3 Settings
Run-time settings define the way the scripts should be run in order to accurately emulate real users. Settings can configure the number of concurrent connections, test run time, whether to follow HTTP redirects, etc. System response times can also vary based on the connection speed; hence, throttling bandwidth can emulate dial-up connections at varying modem speeds (28.8 Kbps, 56.6 Kbps, T1 (1.54 Mbps), etc.).
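As an illustration of bandwidth throttling, the following Python sketch paces a send so that a virtual user never exceeds a given line speed; the pacing logic is a simplification, and only the modem/T1 rates come from the text above:

    # Emulating slow connections by pacing sends to a target bit rate.
    import time

    def throttled_send(payload: bytes, bps: float) -> None:
        """Sleep long enough that `payload` appears to travel at `bps`."""
        time.sleep(len(payload) * 8 / bps)

    MODEM_28_8 = 28_800          # 28.8 Kbps
    MODEM_56_6 = 56_600          # 56.6 Kbps
    T1         = 1_540_000       # T1 (1.54 Mbps)

    start = time.time()
    throttled_send(b"x" * 7200, MODEM_56_6)   # ~1 second at 56.6 Kbps
    print(f"Elapsed: {time.time() - start:.2f}s")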
26.6 Conclusion
Load testing is the measure of an entire Web application's ability to sustain a number of simultaneous users and transactions, while maintaining adequate response times. It is the only way to accurately test the end-to-end performance of a Web site prior to going live. Two common methods for implementing this load testing process are manual and automated testing. Manual testing would involve
As load testing is iterative in nature, performance problems must be identified so that the system can be tuned and retested to check for bottlenecks. For this reason, manual testing is not a very practical option. Today, automated load testing is the preferred choice for load testing a Web application. The testing tools typically use three major components to execute a test:
A console, which organizes, drives and manages the load
Virtual users, performing a business process on a client application
Load servers, which are used to run the virtual users
With automated load testing tools, tests can be easily rerun any number of times and the results can be reported automatically. In this way, automated testing tools provide a more cost-effective and efficient solution than their manual counterparts. Plus, they minimize the risk of human error during testing.
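The console / virtual-user split can be illustrated with a few lines of Python; this is a toy sketch, not a real load testing tool - the transaction body is a stand-in:

    # 50 concurrent virtual users driven from a single controlling script.
    import threading, time, random

    results = []
    lock = threading.Lock()

    def virtual_user(user_id: int, iterations: int) -> None:
        for _ in range(iterations):
            start = time.time()
            time.sleep(random.uniform(0.05, 0.2))   # stand-in for a transaction
            with lock:
                results.append(time.time() - start)

    threads = [threading.Thread(target=virtual_user, args=(i, 10))
               for i in range(50)]                   # 50 concurrent users
    for t in threads: t.start()
    for t in threads: t.join()

    print(f"{len(results)} transactions, avg {sum(results)/len(results):.3f}s")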
27 Stress Testing
27.1 Introduction to Stress Testing
This testing is accomplished through reviews (product requirements, software functional requirements, software designs, code, test plans, etc.), unit testing, system testing (also known as functional testing), expert user testing (like beta testing but in-house), smoke tests, etc. All these testing activities are important, and each plays an essential role in the overall effort, but none of them specifically looks for problems like memory and resource management. Further, these testing activities do little to quantify the robustness of the application or determine what may happen under abnormal circumstances. We try to fill this gap in testing by using stress testing.

Stress testing can imply many different types of testing depending upon the audience. Even in the literature on software testing, stress testing is often confused with load testing and/or volume testing. For our purposes, we define stress testing as performing random operational sequences at larger than normal volumes, at faster than normal speeds and for longer than normal periods of time, as a method to accelerate the rate of finding defects and verify the robustness of our product.

Stress testing in its simplest form is any test that repeats a set of actions over and over with the purpose of breaking the product. The system is put through its paces to find where it may fail. As a first step, you can take a common set of actions for your system and keep repeating them in an attempt to break the system. Adding some randomization to these steps will help find more defects. How long can your application stay functioning doing this operation repeatedly? To help you reproduce your failures, one of the most important things to remember is to log everything as you proceed. You need to know exactly what was happening when the system failed. Did the system lock up with 100 attempts or 100,000 attempts? [1]

Note that there are many other types of testing which have not been mentioned above, for example risk-based testing, random testing, security testing, etc. We have found, and it seems others agree, that it is best to review what needs to be tested, pick multiple testing types that will provide the best coverage for the product to be tested, and then master these testing types, rather than trying to implement every testing type.

Some of the defects that we have been able to catch with stress testing that have not been found in any other way are memory leaks, deadlocks, software asserts, and configuration conflicts. For more details about these types of defects or how we were able to detect them, refer to the section Typical Defects Found by Stress Testing. Table 1 provides a summary of some of the strengths and weaknesses that we have found with stress testing.
Table 1 (fragment) - Strengths: helpful at finding memory leaks, deadlocks, software asserts, and configuration conflicts.
With automated stress testing, the stress test is performed under computer control. The stress test tool is implemented to determine the application's configuration, to execute all valid command sequences in a random order, and to perform data logging. Since the stress test is automated, it becomes easy to execute multiple stress tests simultaneously across more than one product at the same time. Depending on how the stress inputs are configured, stress can do both positive and negative testing. Positive testing is when only valid parameters are provided to the device under test, whereas negative testing provides both valid and invalid parameters to the device as a way of trying to break the system under abnormal circumstances. For example, if a valid input is in seconds, positive testing would test 0 to 59 and negative testing would try values such as -1 and 60. Even though there are clearly advantages to automated stress testing, it still has its disadvantages. For example, we have found that each time the product application changes, we most likely need to change the stress tool (or, more commonly, commands need to be added to or deleted from the input command set). Also, if the input command set changes, then the output command sequence also changes, given the pseudo-randomization. Table 2 provides a summary of some of these advantages and disadvantages that we have found with automated stress testing.
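The seconds-field example translates directly into a positive/negative input split; a small illustrative Python sketch (the validation routine is a stand-in for the device under test):

    # Positive inputs stay inside the valid range; negative inputs sit
    # just outside it.
    def validate_seconds(value: int) -> bool:
        return 0 <= value <= 59          # stand-in for the device's validation

    positive_inputs = range(0, 60)       # all valid values
    negative_inputs = (-1, 60)           # just outside the valid range

    assert all(validate_seconds(v) for v in positive_inputs)
    assert not any(validate_seconds(v) for v in negative_inputs)
    print("seconds-field boundary checks passed")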
In summary, automated stress testing overcomes the major disadvantages of manual stress testing and finds defects that no other testing type can find. Automated stress testing exercises various features of the system at a rate exceeding that of actual end-users, and for durations that exceed typical use. The automated stress test randomizes the order in which the product features are accessed. In this way, non-typical sequences of user interaction are tested against the system in an attempt to find latent defects not detectable with other techniques.
To take advantage of automated stress testing, our challenge was to create an automated stress test tool that would (a minimal sketch of such a driver follows this list):
1. Simulate user interaction for long periods of time (since it is computer controlled, we can exercise the product more than a user can).
2. Provide as much randomization of command sequences to the product as possible, to improve test coverage over the entire set of possible features/commands.
3. Continuously log the sequence of events so that issues can be reliably reproduced after a system failure.
4. Record the memory in use over time to allow memory management analysis.
5. Stress the resource and memory management features of the system.
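The Python sketch below shows how a driver along these lines might be organized. The COMMANDS list and the send() stub are hypothetical stand-ins for the real product interface, and the standard-library tracemalloc module is used only as one possible way to record memory use over time.

    import logging
    import random
    import time
    import tracemalloc

    logging.basicConfig(filename="stress.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    # Hypothetical command set; a real tool would derive this from the
    # application's configuration (requirements 1 and 2 above).
    COMMANDS = ["open", "close", "read", "write", "reset"]

    def send(command: str) -> None:
        """Stand-in for driving the real product interface."""
        time.sleep(0.01)

    def stress(duration_s: float, seed: int) -> None:
        rng = random.Random(seed)      # seeded so a failing run can be replayed
        tracemalloc.start()            # record memory use over time (req. 4)
        deadline = time.monotonic() + duration_s
        while time.monotonic() < deadline:
            cmd = rng.choice(COMMANDS)        # random command order (req. 2)
            logging.info("sending %s", cmd)   # continuous logging (req. 3)
            send(cmd)
            current, peak = tracemalloc.get_traced_memory()
            logging.info("memory current=%d peak=%d", current, peak)

    stress(duration_s=2.0, seed=1234)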
2) Graphical User Interfaces (GUIs): interfaces that use the Windows model to give the user direct control over the device; individual windows and controls may or may not be visible and/or active depending on the state of the device.
For additional complexity, other variations of the automated stress test can be performed. For example, the stress test can vary the rate at which commands are sent to the interface, send the commands across multiple interfaces simultaneously (if the product supports it), or send multiple commands at the same time. A small sketch of these variations follows.
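The Python sketch below illustrates two of these variations, a varying command rate and simultaneous interfaces, using one worker thread per interface. The interface names and the drive() helper are hypothetical; the commented-out send call stands in for real product I/O.

    import random
    import time
    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical interface names; substitute the product's real interfaces.
    INTERFACES = ["gui", "serial", "network"]

    def drive(interface: str, seed: int) -> None:
        """Send randomly timed commands to one interface."""
        rng = random.Random(seed)
        for _ in range(100):
            time.sleep(rng.uniform(0.0, 0.05))  # vary the command rate
            # send(interface, ...)              # stand-in for real I/O

    # One worker per interface sends commands simultaneously.
    with ThreadPoolExecutor(max_workers=len(INTERFACES)) as pool:
        for i, name in enumerate(INTERFACES):
            pool.submit(drive, name, seed=1000 + i)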
[Figure: automated stress test setup -- commands from an input file drive the device under test (DUT).]
Each time the failure occurs, continue to add additional data to the defect description. Eventually, over time, you will be able to detect a pattern, isolate the root cause, and resolve the defect. Some defects just seem to be irreproducible, especially those that reside around page faults; but overall, we know that the robustness of our applications increases in proportion to the amount of time that the stress test can run uninterrupted.
Test coverage analysis is the process of:
- Finding areas of a program not exercised by a set of test cases,
- Creating additional test cases to increase coverage, and
- Determining a quantitative measure of code coverage, which is an indirect measure of quality.
An optional aspect of test coverage analysis is identifying redundant test cases that do not increase coverage.
A test coverage analyzer automates this process. Test coverage analysis is sometimes called code coverage analysis; the two terms are synonymous. The academic world more often uses the term "test coverage", while practitioners more often use "code coverage". Test coverage analysis can be used to assure the quality of the set of tests, not the quality of the actual product. Coverage analysis requires access to the test program's source code and often requires recompiling it with a special command. Code coverage analysis is a structural testing technique (white box testing). Structural testing compares test program behavior against the apparent intention of the source code. This contrasts with functional testing (black box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic. Functional testing examines what the program accomplishes, without regard to how it works internally.
A large variety of coverage measures exist. Here is a description of some fundamental measures and their strengths and weaknesses.
28.3 Procedure-Level Test Coverage
Probably the most basic form of test coverage is to measure what procedures were and were not executed during the test suite. This simple statistic is typically available from execution profiling tools, whose job is really to measure performance bottlenecks. If the execution time in some procedures is zero, you need to write new tests that hit those procedures. But this measure of test coverage is so coarse-grained it's not very practical.
28.4 Line-Level Test Coverage
The basic measure of a dedicated test coverage tool is tracking which lines of code are executed, and which are not. This result is often presented in a summary at the procedure, file, or project level giving a percentage of the code that was executed. A large project that achieved 90% code coverage might be considered a well-tested product. Typically the line coverage information is also presented at the source code level, allowing you to see exactly which lines of code were executed and which were not. This, of course, is often the key to writing more tests that will increase coverage: By studying the unexecuted code, you can see exactly what functionality has not been tested.
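As one concrete illustration, Python's standard-library trace module can produce this kind of line-level report. The clamp function below is a toy example; under the stated assumptions, the generated .cover files annotate each source line with its execution count and mark the lines that never ran, pointing at the tests still to be written.

    import trace

    def clamp(x, lo, hi):
        if x < lo:
            return lo
        if x > hi:
            return hi
        return x

    # Count which lines execute while the "suite" runs.
    tracer = trace.Trace(count=True, trace=False)
    tracer.runfunc(clamp, 5, 0, 10)   # never hits the x < lo or x > hi branches
    results = tracer.results()

    # Writes .cover files annotating each source line with its execution
    # count; unexecuted lines are flagged when show_missing is set.
    results.write_results(show_missing=True, coverdir=".")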
28.5 Condition Coverage and Other Measures
It is easy to find cases where line coverage does not tell the whole story. For example, consider a block of code that is skipped under certain conditions (e.g., a statement in an if clause). If that code is shown as executed, you still do not know whether you have tested the case in which it is skipped; you need condition coverage to know. There are many other test coverage measures. However, most available code coverage tools do not provide much beyond basic line coverage. In theory you should measure more, but in practice, if you achieve 95%+ line coverage and still have time and budget to commit to further testing improvements, that is an enviable commitment to quality!
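A minimal Python illustration of the gap, using a hypothetical apply_discount function: a single test can execute every line, yet never exercise the case in which the if clause is skipped.

    def apply_discount(price, is_member):
        if is_member:
            price = price * 0.5   # member discount line
        return price

    # This one test executes every line (100% line coverage) ...
    assert apply_discount(100, True) == 50.0

    # ... yet the skipped-branch case was never exercised. Condition/branch
    # coverage would flag that; only a second test closes the gap.
    assert apply_discount(100, False) == 100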
28.6 Instrumentation

To monitor execution, test coverage tools generally "instrument" the program by inserting "probes". How and when this instrumentation happens can vary greatly between products. Adding probes to the program will make it bigger and slower; if the test suite is large and time-consuming, the performance cost may be significant.
28.6.1 Source-Level Instrumentation
Some products add probes at the source level. They analyze the source code as written and add additional code (such as calls to a code coverage runtime) that will record where the program reached. Such a tool may not actually generate new source files with the additional code; some products, for example, intercept the compiler after parsing but before code generation to insert the changes they need. One drawback of this technique is the need to modify the build process: a separate code coverage version needs to be maintained in addition to the other versions, such as debug (unoptimized) and release (optimized). Proponents claim this technique can provide higher levels of code coverage measurement (condition coverage, etc.) than other forms of instrumentation. This type of instrumentation is dependent on programming language -- the provider of the tool must explicitly choose which languages to support. But it can be somewhat independent of operating environment (processor, OS, or virtual machine).
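The sketch below shows conceptually what an instrumented function might look like after a source-level tool has done its work. The _probe helper and the point ids are hypothetical names standing in for calls into a coverage runtime; real tools generate this machinery automatically.

    # What a source-level instrumenter might emit (conceptual sketch).
    _hits = set()

    def _probe(point_id):
        _hits.add(point_id)          # the runtime records each point reached

    def sign(x):                     # instrumented version of a 3-line function
        _probe("sign:entry")
        if x < 0:
            _probe("sign:neg")
            return -1
        _probe("sign:pos")
        return 1

    sign(5)
    print(_hits)   # {'sign:entry', 'sign:pos'} -> the negative branch is untested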
28.6.2 Executable Instrumentation
Probes can also be added to a completed executable file. The tool will analyze the existing executable, and then create a new, instrumented one. This type of instrumentation is independent of programming language. However, it is dependent on operating environment -- the provider of the tool must explicitly choose which processors or virtual machines to support.
28.6.3 Runtime Instrumentation
Probes need not be added until the program is actually run. The probes exist only in the in-memory copy of the executable file; the file itself is not modified. The same executable file used for product release testing should be used for code coverage. Because the file is not modified in any way, just executing it will not automatically start code coverage (as it would with the other methods of instrumentation). Instead, the code coverage tool must start program execution directly or indirectly. Alternatively, the code coverage tool will add a tiny bit of instrumentation to the executable. This new code will wake up and connect to a waiting coverage tool
whenever the program executes. This added code does not affect the size or performance of the executable, and does nothing if the coverage tool is not waiting. Like Executable Instrumentation, Runtime Instrumentation is independent of programming language but dependent on operating environment.
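The snippet below sketches the runtime idea with Python's sys.settrace hook: the probe exists only in memory, the file on disk is untouched, and the "coverage tool" must itself start the program. This is an analogy for how runtime instrumentation behaves, not a description of any particular commercial tool.

    import sys

    executed = set()

    def probe(frame, event, arg):
        # In-memory probe: the executable file on disk is never modified.
        if event == "line":
            executed.add((frame.f_code.co_name, frame.f_lineno))
        return probe

    def demo(x):
        if x > 0:
            return "positive"
        return "non-positive"

    sys.settrace(probe)    # the "tool" starts execution with probes attached
    demo(1)
    sys.settrace(None)
    print(sorted(executed))   # (function, line) pairs that actually ran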
Some coverage tool vendors and their products include:
Rational (IBM) -- PurifyPlus
Software Research -- TCAT
Testwell -- CTC++
Paterson Technology -- LiveCoverage
Coverage analysis is a structural testing technique that helps eliminate gaps in a test suite. It helps most in the absence of a detailed, up-to-date requirements specification. Each project must choose a minimum percent coverage as a release criterion based on available testing resources and the importance of preventing post-release failures. Clearly, safety-critical software should have a high goal. We must set a higher coverage goal for unit testing than for system testing, since a failure in lower-level code may affect multiple high-level callers.
A Test Case Point (TCP) count is a measure for estimating the complexity of an application. It is also used as an estimation technique to calculate the size and effort of a testing project. TCP counting amounts to ranking the requirements, and the test cases to be written for those requirements, as simple, average, or complex, and quantifying that ranking into a measure of complexity. In this courseware we give an overview of Test Case Points and do not elaborate on using TCP as an estimation technique.
29.2 Calculating the Test Case Points:
Based on the Functional Requirement Document (FRD), the application is classified into various modules; for a web application, say, Login and Authentication can be one module. Each module is ranked Simple, Average, or Complex based on the number and complexity of the requirements for that module. A Simple requirement is one given a value on a scale of 1 to 3, an Average requirement is ranked between 4 and 7, and a Complex requirement is ranked between 8 and 10.

29.2.1.1.1 Complexity of Requirements

Requirement Classification | Simple (1-3) | Average (4-7) | Complex (8-10) | Total
The test cases for a particular requirement are classified into Simple, Average, and Complex based on the following four factors:
- Test case complexity for that requirement, OR
- Interfaces with other test cases, OR
- Number of verification points, OR
- Baseline test data
A sample guideline for classification of test cases is given below:
- Any verification point containing a calculation is considered Complex.
- Any verification point which interfaces with or interacts with another application is classified as Complex.
- Any verification point consisting of report verification is considered Complex.
- A verification point comprising Search functionality may be classified as Complex or Average depending on the complexity.
Depending on the respective project, complexity needs to be identified in a similar manner. Based on the test case type, an adjustment factor is assigned for simple, average, and complex test cases. This adjustment factor was calculated after a thorough study and analysis of many testing projects. The adjustment factors in the table below are pre-determined and must not be changed from project to project.
Test Case Type | Number | Result
Simple | No. of simple requirements in the project | Number * Adjustment factor A (R1)
Average | No. of average requirements in the project | Number * Adjustment factor B (R2)
Complex | No. of complex requirements in the project | Number * Adjustment factor C (R3)
Total Test Case Points | | R1 + R2 + R3
From the breakup of the complexity of requirements done in the first step, we get the number of simple, average, and complex test case types. By multiplying the number of requirements by its corresponding adjustment factor, we get the simple, average, and complex test case points. Summing the three results, we arrive at the total Test Case Point count.
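To make the arithmetic concrete, the short Python sketch below computes a total TCP count. The adjustment factors and requirement counts used here are illustrative placeholders only; the real pre-determined factors come from the estimation guideline, which this courseware does not reproduce.

    # Hypothetical adjustment factors A, B, C (placeholders, not the real values).
    ADJUSTMENT = {"simple": 1, "average": 2, "complex": 3}

    # Hypothetical counts from the requirement breakup in the first step.
    counts = {"simple": 20, "average": 10, "complex": 5}

    # R1, R2, R3: number of each test case type times its adjustment factor.
    points = {t: counts[t] * ADJUSTMENT[t] for t in counts}
    total_tcp = sum(points.values())          # R1 + R2 + R3

    print(points)      # {'simple': 20, 'average': 20, 'complex': 15}
    print(total_tcp)   # 55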