Software Testing: Testing is a process of executing a program with the intent of finding
errors.
How to test?
Testing can be done in the following ways:
Manually
Automation (by using tools like WinRunner, LoadRunner, TestDirector)
A combination of manual and automation.
Software Project: A problem solved by a group of people through a defined process is called a project.
Software Project
Page 1 of 132
Software Testing Material
Resources
Cost
Schedules
Size
Data Design: Transforms the information domain model into the data structures that
will be required to implement the software.
Architectural design: Relationship between major structural elements of the software.
Represents the structure of data and program components that are required to build a
computer based system.
Interface design: Creates an effective communication medium between a human and a
computer.
Component level Design: Transforms structural elements of the software architecture
into a procedural description of software components.
Testing: Testing is a process of executing a program with the intent of finding errors.
High Level Design Document (HLDD): Consists of the overall hierarchy of the system in
terms of modules.
Low Level Design Document (LLDD): Consists of every sub module in terms of structural
logic (ERD) and backend logic (DFD).
White Box Testing: A coding-level testing technique to verify completeness and correctness
of the programs with respect to design. Also called glass box testing or clear box testing.
Grey Box Testing: Combination of white box and black box testing.
Software Quality Assurance (SQA): SQA involves monitoring and measuring the
strength of the development process.
Ex: LCT (Life Cycle Testing)
Quality:
Meet customer requirements
Meet customer expectations (cost to use, speed in process or performance, security)
Possible cost
Time to market
LCD (Life Cycle Development): Development proceeds through multiple stages, and every stage is verified for
completeness.
V model:
Build: When coding-level testing is over and the modules are completely integration tested, the
resulting executable (.exe) is called a build. A build is produced after integration testing.
Test Management: Testers maintain documents related to every project. They refer to
these documents for future modifications.
Assessment of Development Plan
Prepare Test Plan
Requirements Phase Testing (during Information Gathering & Analysis)
Port Testing
Test Software Changes and Test Efficiency (during Maintenance)
Change Request: The request made by the customer to modify the software.
BBT, UAT and the test management process are where the independent testers or testing team will
be involved.
Refinement form of V-Model: From a cost and time point of view, the V-model is not practical
for small-scale and medium-scale companies. These organizations maintain a
refined form of the V-model.
[Figure: refinement form of V-model, from requirements analysis through coding to testing]
During requirements analysis, all requirements are analyzed. At the end of this phase the
S/wRS is prepared. It consists of the functional (customer) requirements plus the system
requirements (h/w + s/w). It is prepared by the system analyst.
During the design phase two types of designs are done. HLDD and LLDD. Tech Leads will
be involved.
During unit testing, they conduct program level testing with the help of WBT techniques.
During integration testing, the testers and programmers (or test programmers) integrate
the modules and test them with respect to the HLDD.
During system and functional testing, the actual testers are involved and conduct tests
based on the S/wRS.
During the UAT customer site people are also involved, and they perform tests based on the
BRS.
As the above model shows, small-scale and medium-scale organizations also conduct life
cycle testing, but they maintain a separate team only for functional and system testing.
After completion of the above design documents, the tech leads concentrate on reviewing
the documents for correctness and completeness. In this review they can apply the below
factors.
User Information
Invalid User
Unit Testing:
After the completion of design and design reviews, programmers concentrate on coding.
During this stage they conduct program-level testing with the help of WBT techniques.
This WBT is also known as glass box testing or clear box testing.
WBT is based on the code. The senior programmers conduct testing on the programs; WBT is
applied at the module level.
2. Operations Testing: Whether the software runs under the customer-expected
environment platforms (such as OS, compilers, browsers and other system software).
Integration Testing: After the completion of unit testing, development people concentrate on
integration testing once unit testing of the dependent modules is complete. During this test,
programmers verify the integration of modules with respect to the HLDD (which contains the
hierarchy of modules).
Top-down Approach
Bottom-up approach.
Stub: A called program. It sends control back to the main module instead of a sub module; it stands in for a missing sub module.
Driver: A calling program. It invokes a sub module instead of the main module; it stands in for a missing main module.
[Figure: Top-down - Main calls Sub Module1 and Sub Module2, with a stub in place of a missing sub module]
Bottom-Up: This approach starts testing from the lower-level modules. Drivers are used to
connect the sub modules. (Ex: for login, create a driver that supplies a default uid and pwd.)
[Figure: Bottom-up - a driver invokes Sub Module1 and Sub Module2 in place of Main]
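The driver and stub roles can be sketched in code. This is a minimal illustration; the login/report module names and the hard-coded credentials are hypothetical, not from the material.

```python
def login_submodule(uid, pwd):
    """Lower-level module under test (bottom-up)."""
    return uid == "admin" and pwd == "secret"

def driver():
    """Driver: a throwaway calling program standing in for the missing Main.
    As in the login example, it feeds a default uid and pwd to the sub module."""
    return login_submodule("admin", "secret")

def report_stub(data):
    """Stub: a throwaway called program standing in for a missing sub module.
    It immediately returns control (and a canned answer) to its caller."""
    return "OK"

def main_module():
    """Upper-level module under test (top-down), wired to the stub."""
    if login_submodule("admin", "secret"):
        return report_stub({"user": "admin"})
    return "DENIED"

print(driver())       # bottom-up: the driver exercises the sub module
print(main_module())  # top-down: Main runs with the report stub in place
```

Once the real Main or sub module is ready, the driver or stub is discarded and the real integration is retested.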
Sandwich: This approach combines the top-down and bottom-up approaches of
integration testing. In it, middle-level modules are tested using drivers and stubs.
[Figure: Sandwich - a driver exercises the middle-level modules from above while stubs replace Sub Module2 and Sub Module3 below]
System Testing:
Conducted by separate testing team
Follows Black Box testing techniques
Depends on S/wRS
Build-level testing to validate internal processing depending on external interface processing
This phase is divided into 4 divisions
After the completion of coding and the corresponding tests (unit & integration), the development team releases a
finally integrated set of all modules as a build. After receiving a stable build from the
development team, a separate testing team concentrates on functional and system testing with
the help of BBT.
Usability and system testing are called core testing, and performance and security testing
techniques are called advanced testing.
From the tester's point of view, functional and usability tests are the most important.
Help documentation is also called the user manual. In practice, user manuals are prepared
after the completion of all other system test techniques and after all bugs are resolved.
Functional testing: During this stage, the testing team concentrates on "meet
customer requirements": it verifies whether the functionality for which the system was
developed is met or not.
For every project functionality testing is most important. Most of the testing tools, which are
available in the market are of this type.
[Figure: system testing effort is roughly 80% functional testing]
Input Domain Testing: During this test, the test engineer validates size and type of every
input object. In this coverage, test engineer prepares boundary values and equivalence classes
for every input object.
Ex: A login process allows user id and password. User id allows alpha numeric from 4-16
characters long. Password allows alphabet from 4-8 characters long.
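For this login example, the boundary values and equivalence classes can be generated and checked as below. This is a sketch: the helper names are illustrative, while the 4-16 and 4-8 limits come from the example above.

```python
def boundary_values(min_len, max_len):
    """Classic boundary value analysis: values at and around each size limit."""
    return sorted({min_len - 1, min_len, min_len + 1,
                   max_len - 1, max_len, max_len + 1})

def valid_user_id(s):
    """Valid equivalence class: alphanumeric, 4-16 characters."""
    return s.isalnum() and 4 <= len(s) <= 16

def valid_password(s):
    """Valid equivalence class: alphabetic, 4-8 characters."""
    return s.isalpha() and 4 <= len(s) <= 8

print(boundary_values(4, 16))                          # user id length boundaries
print(valid_user_id("ab12"), valid_password("abcd"))   # values inside the classes
print(valid_user_id("a#1"), valid_password("pw123"))   # values outside the classes
```

Each boundary value and one representative per equivalence class would become a test case for the input object.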
Recovery Testing: This test is also known as reliability testing. During this test, test
engineers validate whether our application build can recover from abnormal situations
or not.
Ex: power failure during a process, network disconnect, server down, database disconnected,
etc.
[Figure: the application moves from an abnormal state back to a normal state]
Recovery Testing is an extension of Error Handling Testing.
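Recovery testing can be approximated in code by simulating an abnormal situation (a server that is temporarily down) and checking that the application returns to normal. The flaky-service class below is purely illustrative.

```python
class FlakyService:
    """Simulates a server that is down for the first few calls."""
    def __init__(self, failures):
        self.failures = failures

    def fetch(self):
        if self.failures > 0:
            self.failures -= 1
            raise ConnectionError("server down")
        return "data"

def fetch_with_recovery(service, retries=3):
    """Error handling plus recovery: retry until the service comes back."""
    for _ in range(retries + 1):
        try:
            return service.fetch()
        except ConnectionError:
            continue  # abnormal situation; attempt to recover
    return None  # could not recover within the retry budget

print(fetch_with_recovery(FlakyService(failures=2)))  # recovers within budget
print(fetch_with_recovery(FlakyService(failures=9)))  # cannot recover
```

A real recovery test would pull the network cable or kill the server process and observe the same behavior from outside.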
Compatibility Testing: This test is also known as portability testing. During this test, the test
engineer validates the continuity of our application's execution on customer-expected
platforms (OS, compilers, browsers, etc.).
During compatibility testing, two types of problems arise:
1. Forward compatibility
2. Backward compatibility
Forward compatibility:
The application which is developed is ready to run, but the project technology or environment
(like the OS) does not support running it.
Backward compatibility:
The application is not ready to run on the technology or environment.
Configuration Testing: This test is also known as hardware compatibility testing. During
this test, the test engineer validates whether our application build supports different
hardware devices or not.
Inter Systems Testing: This test is also known as end-to-end testing. During this test, the test
engineer validates whether our application build can coexist with other existing software
at the customer site to share resources (h/w or s/w).
[Figure: Water Bill (WBAS), Electricity Bill (EBAS), Telephone Bill (TPBAS) and Income Tax (ITBAS) automation systems share a local database server; a newly added component shares the resource through a new server]
Ex: Bank loans.
In the first example, one system is our application and the other is a sharable resource.
In the second example (bank loans), it is the same system but different components.
Installation Testing: The build (setup program), plus the required software components
needed to run the application, is installed from the build server into a customer-site-like
environment. The checks performed are:
1. Setup program
2. Easy interface
3. Occupied disk space
Sanitation Testing: This test is also known as garbage testing. During this test, the test
engineer finds extra features in the application build with respect to the S/wRS.
Most testers may not encounter this type of problem.
Parallel or Comparative Testing: During this test, the test engineer compares our application
build with similar applications, or with old versions of the same application, to assess
competitiveness.
Performance testing is divided into:
1. Load Testing
2. Stress Testing
3. Data Volume Testing
4. Storage Testing
Load Testing:
This test is also known as scalability testing. During this test, the test engineer
executes our application under the customer-expected configuration and load to estimate
performance.
Stress Testing:
During this test, test engineer executes our application build under customer
expected configuration and peak load to estimate performance.
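A load test of this kind can be sketched with a thread pool: run the customer-expected number of concurrent users and time the run; raising `users` to the peak value turns the same harness into a stress test. The `transaction` function below is a stand-in for a real operation against the build.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def transaction():
    """Stand-in for one user operation; a real test would hit the application."""
    time.sleep(0.01)
    return True

def run_load(users, requests_per_user):
    """Execute users * requests_per_user transactions with `users` workers.
    Load test: customer-expected `users`. Stress test: peak `users`."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=users) as pool:
        results = list(pool.map(lambda _: transaction(),
                                range(users * requests_per_user)))
    elapsed = time.perf_counter() - start
    return all(results), elapsed

ok, elapsed = run_load(users=10, requests_per_user=5)
print(f"all passed: {ok}, wall time: {elapsed:.2f}s")
```

Comparing the wall time at expected load versus peak load gives the performance estimate the text describes.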
Storage Testing:
Execution of our application under huge amounts of resources, to estimate the
storage limitations our application can handle, is called storage testing.
[Figure: performance increases with resources up to a limit; beyond it, thrashing sets in]
Security Testing: This is also an advanced testing technique and complex to apply.
Highly skilled persons with security domain knowledge are needed to conduct these tests.
Access Control: Also called as Privileges testing. The rights given to a user to do a system
task.
Encryption / Decryption:
Encryption- To convert actual data into a secret code which may not be understandable to
others.
Decryption- Converting the secret data into actual data.
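As an illustration of the encrypt/decrypt round trip, here is a toy XOR cipher. This is not a real security algorithm (production systems use vetted ciphers such as AES); it only shows actual data becoming an unreadable secret code and back.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR-ing twice with the same key round-trips.
    Illustrative only; never use this for real security."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

plain = b"transfer 500"
secret = xor_cipher(plain, b"k3y")     # encryption: actual data -> secret code
restored = xor_cipher(secret, b"k3y")  # decryption: secret code -> actual data

print(secret != plain, restored == plain)
```

A security test would verify that only the secret form travels between client and server, and that decryption succeeds only with the right key.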
[Figure: encrypted data exchanged between client and server]
User Acceptance Testing: After completion of all possible system test execution, our
organization concentrates on user acceptance testing to collect feedback.
To conduct user acceptance tests, two approaches are followed: Alpha (α) testing and
Beta (β) testing.
Note: In s/w development, projects are of two types: software application (also called a
project) and product.
Software Application (Project): Requirements come from the client and the project is
developed for only one company, so it has a specific customer. For this, alpha testing is
done.
Product: Requirements come from the market and the product is developed for potentially
more than one company, so it has no specific customer. For this, a β-version (trial version)
is released in the market for beta testing.
During port testing, the release team validates the below factors at the customer site.
The above tests are done by the release team. After their completion, the release team
gives training and application support at the customer site for a period.
While customer-site people use our application, they send Change Requests (CRs) to our
company. When a CR is received, the following steps are done.
Based on the type of CR there are two types,
1. Enhancement
2. Missed Defect
Change Request
Change Control Board: The team which handles customer requests for enhancements and
changes.
Testing Team:
Following the refinement form of the V-model, small-scale and medium-scale companies
maintain a separate testing team for some of the stages in LCT.
In these teams, the organisation maintains the below roles.
Quality Control
Quality Assurance
Testing Terminology:-
Monkey / Chimpanzee Testing: Covering only the main activities of the application
during testing is called monkey testing. (Less time.)
Guerrilla Testing: Covering a single functionality with multiple possibilities is called a
guerrilla ride or guerrilla testing. (No rules or regulations for testing an issue.)
Exploratory Testing: Level-by-level coverage of activities in the application
during testing is called exploratory testing. (Covering the main activities first and other
activities next.)
Sanity Testing: This test is also known as the Tester Acceptance Test (TAT). It checks
whether the build delivered by the development team is stable enough for complete testing or not.
Smoke Testing: An extra shakeup in sanity testing is called smoke testing. The testing team
rejects a build, with reasons, back to the development team before starting testing.
Bebugging: The development team releases a build with known bugs to test the testing team.
Bigbang Testing: A single stage of testing after the completion of all module development is
called bigbang testing. It is also known as informal testing.
Static Testing: Conduct a test without running an application is called as Static Testing.
Ex: User Interface Testing
Manual vs Automation: A tester conducting a test on the application without using any third-party
testing tool is performing Manual Testing.
A tester conducting a test with the help of a software testing tool is performing
Automation.
To verify the need for automation, they consider two factors: criticality and impact.
(Example application: take No1 and No2, multiply them, and show the Result.)
Criticality indicates how complex a test is to apply manually; impact indicates how often the test is repeated.
Retesting: Re-execution of our application to conduct the same test with multiple test data is
called retesting.
Regression Testing: Re-execution of tests on a modified build, to ensure the bug fix works
and to catch side effects, is called regression testing.
[Figure: of 11 tests, 10 pass and 1 fails; the failing test goes back to development]
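A concrete regression sketch: one test exposed a bug, development modified the build, and the whole suite is re-executed to confirm the fix and catch side effects. The discount rule here is a made-up example.

```python
def discount(total):
    """Modified build: the original version applied the 10% discount to every
    order; the fix (shown here) applies it only to orders above 100."""
    return total * 0.1 if total > 100 else 0.0

# Regression suite: the test that exposed the bug, plus the tests that already
# passed, all re-executed on the modified build to catch side effects.
tests = {
    "discount_above_threshold": discount(200) == 20.0,
    "no_discount_at_threshold": discount(100) == 0.0,   # the test that failed
    "no_discount_below_threshold": discount(50) == 0.0,
}

print(all(tests.values()))  # all must pass again on the modified build
```

If any previously passing test now fails, the fix introduced a side effect and the build goes back to development.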
Selection of Automation: Before a separate testing team starts project-level testing, the
corresponding project manager, test manager or quality analyst decides whether that project
needs test automation, depending on the below factors.
Test documentation hierarchy:
Company level:
Testing Policy - C.E.O.
Project level:
Test Strategy - Test Manager / QA / PM
Test Methodology - Test Manager / QA / PM
Test Cases, Test Procedure, Test Script, Test Log - Test Lead, Test Engineer
Defect Report, Test Summary Report - Test Lead
Testing Policy: A company-level document developed by QC people. It defines the testing
objectives for developing quality software, and carries the company address and the
CEO's signature.
Test Strategy:
1. Scope & Objective: Definition, need and purpose of testing in your
organization
2. Business Issues: Budget Controlling for testing
3. Test approach: defines the testing approach between development stages and testing
factors.
TRM: Test Responsibility Matrix or Test Matrix defines mapping between test factors
and development stages.
4. Test environment specifications: Required test documents developed by testing team
during testing.
5. Roles and Responsibilities: Defines names of jobs in testing team with required
responsibilities.
6. Communication & Status Reporting: Required negotiation between two consecutive
roles in testing.
7. Testing measurements and metrics: To estimate work completion in terms of Quality
Assessment, Test management process capability.
8. Test Automation: Possibilities to go test automation with respect to corresponding
project requirements and testing facilities / tools available (either complete
automation or selective automation)
9. Defect Tracking System: Required negotiation between the development and testing
teams to fix and resolve defects.
10. Change and Configuration Management: required strategies to handle change requests
of customer site.
11. Risk Analysis and Mitigations: Analysis of common problems that may appear during
testing and possible solutions for recovery.
12. Training plan: Need of training for testing to start/conduct/apply.
Test Factor: A test factor defines a testing issue. There are 15 common test factors in S/w
Testing.
Ex:
QC - Quality
PM/QA/TM - Test Factor
TL - Testing Technique
TE - Test Cases
Concretely: PM/QA/TM - Portable; TL - Compatibility Testing; TE - Run on different OS
Test Factors:
1. Authorization: Validation of users to connect to application
Security Testing
Functionality / Requirements Testing
2. Access Control: Permission to valid user to use specific service
Security Testing
Functionality / Requirements Testing
3. Audit Trail: Maintains metadata about operations
Error Handling Testing
Functionality / Requirements Testing
4. Correctness: Meet customer requirements in terms of functionality
All black box Testing Techniques
5. Continuity in Processing: Inter process communication
Execution Testing
Operations Testing
6. Coupling: Co existence with other application in customer site
Inter Systems Testing
7. Ease of Use: User friendliness
User Interface Testing
Manual Support Testing
8. Ease of Operate: Ease in operations
Installation testing
9. File Integrity: Creation of internal files or backup files
Recovery Testing
Functionality / Requirements Testing
10. Reliability: Recover from abnormal situations or not. Backup files using or not
Recovery Testing
Stress Testing
11. Portable: Run on customer expected platforms
Compatibility Testing
Configuration Testing
12. Performance: Speed of processing
Load Testing
Stress Testing
Data Volume Testing
Storage Testing
13. Service Levels: Order of functionalities
Stress Testing
Functionality / Requirements Testing
14. Methodology: Follows standard methodology during testing
Compliance Testing
15. Maintainable: Whether the application is serviceable to customers over a long period or not
Compliance Testing (mapping between quality factors and testing)
Quality Gap: A conceptual gap between Quality Factors and Testing process is called as
Quality Gap.
Test Methodology: The test strategy defines the overall approach. To convert the overall
approach into a project-level approach, the quality analyst / PM defines the test methodology.
Step 1: Collect the test strategy
Testing Process:
Test Initiation -> Test Planning -> Test Design -> Test Execution -> Test Closure
(Regression testing and defect reporting take place during test execution.)
PET (Process Experts Tools and Technology): An advanced testing process developed
by HCL, Chennai. This process is approved by the QA forum of India. It is a refinement form of
the V-model.
[Figure: PET flow: Analysis (S/wRS), then test automation and independent test execution; if a mismatch is found, the batch is suspended and a defect report goes for defect fixing; otherwise the flow proceeds to Test Closure and Sign Off]
Test Planning: After completion of test initiation, the test plan author concentrates on the test plan.
1. Team Formation
In general, the test planning process starts with testing team formation, which depends on the below factors.
Availability of Testers
Test Duration
Availability of test environment resources
The above three are dependent factors.
Test Duration:
Common market test durations for various types of projects:
Client/Server, Web, ERP projects (SAP, VB, Java): small, 3-5 months
System software (C, C++): medium, 7-9 months
Machine-critical (Prolog, LISP): big, 12-15 months
Format:
1) Test Plan id: Unique number or name
2) Introduction: About Project
3) Test items: Modules
4) Features to be tested: Responsible modules to test
5) Features not to be tested: Which ones and why not?
6) Feature pass/fail criteria: When is a feature considered pass or fail?
7) Suspension criteria: Abnormal situations during the above features' testing.
8) Test environment specifications: Required docs to prepare during testing
9) Test environment: Required H/w and S/w
10) Testing tasks: what are the necessary tasks to do before starting testing
11) Approach: List of Testing Techniques to apply
12) Staff and training needs: Names of selected testing Team
13) Responsibilities: Work allocation to above selected members
14) Schedule: Dates and timings
15) Risks and mitigations : Common non technical problems
16) Approvals: Signatures of PM/QA and test plan author
After completing the test plan, the test plan author concentrates on reviewing the document
for completeness and correctness. Selected testers are also involved in this review to give
feedback, and in the review meeting the testing team conducts coverage analysis.
Test Design:
After completion of the test plan and the required training days, every selected test
engineer concentrates on test design for their responsible modules. In this phase the test engineer
prepares a list of test cases to conduct the defined testing on those modules.
There are three basic methods to prepare test cases for core-level testing.
[Figure: test case sources: the BRS, the S/wRS (use cases + functional specifications), the HLDD, the LLDD and the code (.exe) all feed into test cases]
TestCase Format:
After selecting the test cases for their responsible modules, the test engineer documents
every test condition in an IEEE format.
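The material does not list the fields, so the record below assumes the usual IEEE 829-style test case layout (id, description, precondition, steps with inputs and expected results, priority, status); it is a sketch, not the document's official template.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Common IEEE 829-style test case fields (layout is illustrative)."""
    testcase_id: str
    description: str
    precondition: str
    steps: list = field(default_factory=list)  # (step, input, expected) tuples
    priority: str = "P1"                       # P0 / P1 / P2
    status: str = "New"

tc = TestCase(
    testcase_id="TC_LOGIN_001",
    description="Valid user id and password log the user in",
    precondition="Login page is displayed",
    steps=[("Enter uid/pwd and submit", "admin/secret", "Home page appears")],
    priority="P0",
)
print(tc.testcase_id, tc.priority)
```

During execution, the tester compares each step's actual result against the expected result and updates the status field.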
For preparing UI test cases, they do not study the S/wRS, LLDD, etc.
Functionality test case source: S/wRS. Input domain test case source: LLDD.
[Example: a computed balance of 66.666 is displayed as 66.7]
[Figure: a mail with a .gif attachment passes through image compression, the mail server and image decompression before import]
Testcase 7: Are the help messages meaningful or not? (The first six test cases are for UI testing; the seventh is for manual support testing.)
Review Testcases: After completing test case design, with the required IEEE documentation,
for their responsible modules, the testing team along with the test lead concentrates on reviewing
the test cases for completeness and correctness. In this review the testing team conducts coverage analysis.
Test Execution:
[Figure: the development site releases an initial build to the testing site. Level-0 (sanity / smoke / TAT) is followed by Level-1 (comprehensive) testing, repeated 8-9 times with defect reports going to development and fixed builds coming back, and finally Level-3 (final regression)]
Test Execution levels Vs Test Cases:
Level 0 P0
Level 1 P0, P1 and P2 testcases as batches
Level 2 Selected P0, P1 and P2 testcases with respect to modifications
Level 3 Selected P0, P1 and P2 testcases on the final build.
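The level-to-testcase mapping above can be written as a small selection function. The tuple layout and module names are illustrative, and level 3 is simplified here to P0 and P1.

```python
def select_cases(level, cases, modified_modules=()):
    """Pick test cases for an execution level; each case is (id, priority, module)."""
    if level == 0:                 # sanity/smoke/TAT: P0 only
        return [c for c in cases if c[1] == "P0"]
    if level == 1:                 # comprehensive: all P0, P1 and P2 as batches
        return list(cases)
    if level == 2:                 # regression: scoped to the modifications
        return [c for c in cases if c[2] in modified_modules]
    if level == 3:                 # final regression: sketched here as P0 + P1
        return [c for c in cases if c[1] in ("P0", "P1")]
    raise ValueError(level)

cases = [("TC1", "P0", "login"), ("TC2", "P1", "report"), ("TC3", "P2", "login")]
print([c[0] for c in select_cases(0, cases)])             # sanity pass
print([c[0] for c in select_cases(2, cases, {"login"})])  # regression on login
```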
[Figure: the build is placed on the server (softbase) and transferred via FTP into the test environment]
To maintain the original and modified builds, the development team uses version control
software.
[Figure: from the server, either (1) the complete modified build or (2) only the modified programs, embedded into the old build, reach the test environment]
During this testing, the testing team observes the below factors on the initial build.
This level-0 testing is also called testability or octangle testing (because it is based on 8 factors).
Test Automation: After receiving a stable build from development team, testing team
concentrate on test automation.
Test Automation two types: Complete and Selective.
[Figure: test automation splits into complete and selective; selective automation covers all P0 and carefully selected P1 test cases]
Test case result statuses: Passed, Failed, Partial, Skipped, Blocked.
Level-2 Regression Testing: This regression testing is actually part of Level-1 testing.
During comprehensive test execution, the testing team reports mismatches to the development team
as defects. The development team then modifies the code to resolve the accepted defects.
When it releases the modified build, the testing team concentrates on regression testing
before continuing the remaining comprehensive testing.
Severity: The seriousness of a defect, defined by the tester in terms of impact and
criticality; it determines how much regression testing is needed. Organizations typically use three
severity levels: High, Medium and Low.
High: Without resolving this mismatch tester is not able to continue remaining testing. (Show
stopper).
Medium: Able to continue testing, but resolve must.
Low: May or may not resolve.
[Figure: regression testing on the modified build, scoped by the severity of the resolved bug, ensures the fix works]
Case 1: If the development team resolved a bug of high severity, the testing team re-executes
all P0, all P1 and carefully selected P2 test cases with respect to that modification.
Case 2: If the bug's severity is medium, the testing team re-executes all P0, selected P1
(80-90%) and some P2 test cases with respect to that modification.
Case 3: If the bug's severity is low, the testing team re-executes some of the P0, P1 and P2
test cases with respect to that modification.
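The three cases condense into one selection rule. The percentages follow the text, but the sampling itself is a sketch: real teams pick the P2 cases by judgment with respect to the modification, not randomly.

```python
import random

def regression_scope(severity, p0, p1, p2, rng=random.Random(0)):
    """Return the test cases to re-execute after a fix, based on its severity."""
    if severity == "High":        # all P0, all P1, carefully selected P2
        return p0 + p1 + rng.sample(p2, max(1, len(p2) // 2))
    if severity == "Medium":      # all P0, 80-90% of P1, some P2
        return p0 + rng.sample(p1, max(1, int(len(p1) * 0.8))) + p2[:1]
    if severity == "Low":         # some of the P0, P1 and P2 cases
        return p0[:1] + p1[:1] + p2[:1]
    raise ValueError(severity)

p0, p1, p2 = ["A1", "A2"], ["B1", "B2", "B3", "B4", "B5"], ["C1", "C2"]
print(len(regression_scope("High", p0, p1, p2)))  # 2 + 5 + 1 = 8 cases
print(len(regression_scope("Low", p0, p1, p2)))   # 1 + 1 + 1 = 3 cases
```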
Defect Age: The time gap between when a defect was reported (reported on) and when it was resolved (resolved on).
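A quick worked example with made-up dates:

```python
from datetime import date

reported_on = date(2024, 3, 1)   # tester reports the defect
resolved_on = date(2024, 3, 8)   # development resolves it

defect_age = resolved_on - reported_on
print(defect_age.days)  # 7 days
```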
Defect Submission:
[Figure: in large-scale organizations, defects are submitted to development through QA, using transmittal reports]
Defect Submission:
[Figure: in small-scale organizations, defects are submitted through the project manager, using transmittal reports]
Defect Status Cycle:
[Figure: New, then Closed once fixed and verified, with Reopen if the defect resurfaces]
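The status cycle can be sketched as a table of allowed transitions. The source figure shows only New, Closed and Reopen; the intermediate Open/Fixed states are the usual ones and are an assumption here.

```python
# Allowed defect status transitions; a defect tracker would enforce these.
TRANSITIONS = {
    "New":    {"Open"},              # defect accepted by development
    "Open":   {"Fixed"},             # development resolves it
    "Fixed":  {"Closed", "Reopen"},  # tester verifies the fix
    "Reopen": {"Fixed"},
    "Closed": {"Reopen"},            # defect resurfaces later
}

def move(status, new_status):
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

s = "New"
for nxt in ("Open", "Fixed", "Closed", "Reopen", "Fixed", "Closed"):
    s = move(s, nxt)
print(s)
```

Trying to jump straight from New to Closed raises an error, which is the point of modelling the cycle explicitly.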
Detect Defect
Reproduce Defect
Report Defect
Fix Bug
Resolve Bug
Close Bug
Resolution Type:
[Figure: the testing team sends a defect report to the development team; development replies with a resolution type]
Version control bugs (medium severity): differences between two consecutive versions.
Test Closure:
After completing all possible test case execution, defect reporting and tracking, the
test lead conducts a test execution closure review along with the test engineers.
The testing team tries to execute the high-priority test cases once again to confirm the correctness
of the master build.
[Figure: final regression cycle: gather requirements, estimate effort, plan regression, execute regression, report]
Sign Off:
After completion of UAT and the resulting modifications, the test lead creates a Test Summary Report
(TSR). It is part of the s/w release note. The TSR consists of:
Common interview questions:
What are you doing currently?
What type of testing process is followed in your company?
What test documentation does your organization prepare?
What test documentation do you prepare, and what is your involvement in it?
What are the key components of your company's test plan?
What format do you use for test cases?
How does your PM select the types of tests needed for your project?
When do you go for automation?
What is regression testing, and when do you do it?
How do you report defects to the development team?
How do you know whether a defect was accepted or rejected?
What do you do when your defect is rejected?
How do you learn a project without documentation?
What is the difference between defect age and build interval period?
How do you test without documents?
What do you mean by green box testing?
Experience on WinRunner; exposure to TestDirector (e.g. WinRunner 8/10, LoadRunner 7/10).
Auditing:
During testing and maintenance, the testing team conducts audit meetings to estimate
status and required improvements. In this auditing process they use three types of
measurements and metrics.
Product Stability:
[Figure: number of bugs vs. test duration: the first 20% of testing finds 80% of the bugs; the remaining 80% of testing finds the last 20%]
Sufficiency:
Requirements Coverage
Type Trigger Analysis (Mapping between covered requirements and applied tests)
Test Status
Executed tests
In progress
Yet to execute
Delays in Delivery
Defect Arrival Rate
Defect Resolution Rate
Defect Aging
Test Effort
Cost of finding a defect (Ex: 4 defects / person day)
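The quoted figure of 4 defects per person-day works out as below (the team size and duration are invented for the example):

```python
def cost_of_finding_defect(defects_found, testers, days):
    """Defects per person-day: the test-effort metric quoted above."""
    person_days = testers * days
    return defects_found / person_days

# 120 defects found by 3 testers over 10 working days
rate = cost_of_finding_defect(defects_found=120, testers=3, days=10)
print(rate)  # 4.0 defects per person-day
```

A falling rate over successive audit periods is one signal that the product is stabilizing.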
Process Capability Measurements:
These measurements are used by the quality analyst and test management to improve the
capability of the testing process for upcoming projects. (They depend on feedback from the
maintenance of old projects.)
Test Efficiency
Type-Trigger Analysis
Requirements Coverage
Defect Escapes
Type-Phase Analysis
(What types of defects did my testing team miss, and in which phase of testing?)
Test Effort
Cost of finding a defect (Ex: 4 defects / person day)
This topic looks at Static Testing techniques. These techniques are referred to as "static"
because the software is not executed; rather the specifications, documentation and source
code that comprise the software are examined in varying degrees of detail.
There are two basic types of static testing. One of these is people-based and the other is tool-
based. People-based techniques are generally known as reviews but there are a variety of
different ways in which reviews can be performed. The tool-based techniques examine source
code and are known as "static analysis". Both of these basic types are described in separate
sections below.
technical document. Typically the author "walks" the group through the ideas to explain them
and so that the attendees understand the content. Inspection is the most formal of all the
formal review techniques. Its main focus during the process is to find faults, and it is the most
effective review technique in finding them (although the other types of review also find some
faults). Inspection is discussed in more detail below.
Reviews and the test process
Benefits of reviews
There are many benefits from reviews in general. They can improve software development
productivity and reduce development timescales. They can also reduce testing time and cost.
They can lead to lifetime cost reductions throughout the maintenance of a system over its
useful life. All this is achieved (where it is achieved) by finding and fixing faults in the
products of development phases before they are used in subsequent phases. In other words,
reviews find faults in specifications and other documents (including source code) which can
then be fixed before those specifications are used in the next phase of development.
Reviews generally reduce fault levels and lead to increased quality. This can also result in
improved customer relations.
Types of review
We have now established that reviews are an important part of software testing. Testers
should be involved in reviewing the development documents that tests are based on, and
should also review their own test documentation.
In this section, we will look at different types of reviews, and the activities that are done to a
greater or lesser extent in all of them. We will also look at the Inspection process in a bit
more detail, as it is the most effective of all review types.
The more formal review techniques include follow-up of the faults or issues found to ensure
that action has been taken on everything raised (Inspection does, as do some forms of
technical or peer review).
The more formal review techniques collect metrics on cost (time spent) and benefits
achieved.
Roles and responsibilities
For any of the formal reviews (i.e. not informal reviews), there is someone responsible for the
review of a document (the individual review cycle). This may be the author of the document
(walkthrough) or an independent Leader or moderator (formal reviews and Inspection). The
responsibility of the Leader is to ensure that the review process works. He or she may
distribute documents, choose reviewers, mentor the reviewers, call and lead the meeting,
perform follow-up and record relevant metrics.
The author of the document being reviewed or Inspected is generally included in the review,
although there are some variants that exclude the author. The author actually has the most to
gain from the review in terms of learning how to do their work better (if the review is
conducted in the right spirit!).
The reviewers or Inspectors are the people who bring the added value to the process by
helping the author to improve his or her document. In some types of review, individual
checkers are given specific types of fault to look for to make the process more effective.
Managers have an important role to play in reviews. Even if they are excluded from some
types of peer review, they can (and should) review management level documents with their
peers. They also need to understand the economics of reviews and the value that they bring.
They need to ensure that the reviews are done properly, i.e. that adequate time is allowed for
reviews in project schedules.
There may be other roles in addition to these, for example an organisation-wide co-ordinator
who would keep and monitor metrics, or someone to "own" the review process itself - this
person would be responsible for updating forms, checklists, etc.
Deliverables
The main deliverable from a review is the changes to the document that was reviewed. The
author of the document normally edits these. For Inspection, the changes would be limited to
faults found as violations of accepted rules. In other types of review, the reviewers suggest
improvements to the document itself. Generally the author can either accept or reject the
changes suggested.
If the author does not have the authority to change a related document (e.g. if the review
found that a correct design conflicted with an incorrect requirement specification), then a
change request may be raised to change the other document(s).
For Inspection and possibly other types of review, process improvement suggestions are a
deliverable. This includes improvements to the review or Inspection process itself and also
improvements to the development process that produced the document just reviewed. (Note
that these are improvements to processes, not to reviewed documents.)
The final deliverable (for the more formal types of review, including Inspection) is the
metrics about the costs, faults found, and benefits achieved by the review or Inspection
process.
Pitfalls
Reviews are not always successful. They are sometimes not very effective, so faults that
could have been found slip through the net. They are sometimes very inefficient, so that
people feel that they are wasting their time. Often insufficient thought has gone into the
definition of the review process itself - it just evolves over time.
One of the most common causes for poor quality in the review process is lack of training, and
this is more critical the more formal the review.
Another problem with reviews is having to deal with documents that are of poor quality.
Entry criteria to the review or Inspection process can ensure that reviewers' time is not wasted
on documents that are not worthy of the review effort.
A lack of management support is a frequent problem. If managers say that they want reviews
to take place but don't allow any time in the schedules for them, this is only "lip service", not
commitment to quality.
Long-term, it can be disheartening to become expert at detecting faults if the same faults keep
on being injected into all newly written documents. Process improvements are the key to
long-term effectiveness and efficiency.
Inspection
Typical reviews versus Inspection
There are a number of differences between the way most people practice reviews and the
Inspection process as described in Software Inspection by Gilb and Graham, Addison-Wesley,
1993.
In a typical review, the document is given out in advance, there are typically dozens of pages
to review, and the instructions are simply "Please review this."
In Inspection, it is not just the document under review that is given out in advance, but also
source or predecessor documents. The number of pages to focus the Inspection on is closely
controlled, so that Inspectors (checkers) check a limited area in depth - a chunk or sample of
the whole document. The instructions given to checkers are designed so that each individual
checker will find the maximum number of unique faults. Special defect-hunting roles are
defined, and Inspectors are trained in how to be most effective at finding faults.
In typical reviews, sometimes the reviewers have time to look through the document before
the meeting, and some do not. The meeting is often difficult to arrange and may last for
hours.
In Inspection, it is an entry criterion to the meeting that each checker has done the individual
checking. The meeting is highly focused and efficient, and is limited to two hours; if a
meeting would not be economic, it may not be held at all.
In a typical review, there is often a lot of discussion, some about technical issues but much
about trivia. Comments are often mainly subjective, along the lines of "I don't like the way
you did this" or "Why didn't you do it this way?"
In Inspection, the process is objective. The only thing that is permissible to raise as an issue is
a potential violation of an agreed Rule (the Rulesets are what the document should conform
to). Discussion is severely curtailed in an Inspection meeting or postponed until the end. The
Leader's role is very important to keep the meetings on track and focused and to keep pulling
people away from trivia and pointless discussion.
Many people keep on doing reviews even if they don't know whether it is worthwhile or not.
Every activity in the Inspection process is done only if its economic value is continuously
proven.
Inspection is more
Inspection contains many mechanisms that are additional to those found in other formal
reviews. These include the following:
Entry criteria, to ensure that we don't waste time Inspecting an unworthy document;
Training for maximum effectiveness and efficiency;
Optimum checking rate to get the greatest value out of the time spent by looking
deep;
Prioritising the words: Inspect the most important documents and their most important
parts;
Standards are used in the Inspection process; there are a number of Inspection
standards also;
Process improvement is built in to the Inspection process
Exit criteria ensure that the document is worthy and that the Inspection process was
carried out correctly.
One of the most powerful exit criteria is the quantified estimate of the remaining defects per
page. This may be say 3 per page initially, but can be brought down by orders of magnitude
over time.
Inspection is better
Typical reviews are probably only 10% to 20% effective at detecting existing faults. The
return on investment is usually not known because no one keeps track even of their cost.
When Inspection is still being learned, its effectiveness is around 30% to 40% (this is
demonstrated in Inspection training courses). Once Inspection is well established and mature,
the process can find up to 80% of faults in a single pass, and 95% in multiple passes. The
return on investment ranges from 6 to 30 hours returned for every hour spent.
The Inspection process
The diagram shows a product document infected with faults. The document must pass
through the entry gate before it is allowed to start the Inspection process. The Inspection
Leader performs the planning activities. A Kickoff meeting is held to "set the scene" about
the documents and the process.
The Individual Checking is where most of the benefits are gained. 80% or more of the faults
found will be found in this stage.
A meeting is held (if economic). The editing of the document is done by the author or the
person now responsible for the document. This involves redoing some of the activities that
produced the document initially, and it also may require Change Requests to documents not
under the control of the editor. Process improvement suggestions may be raised at any time,
for improvements either to the Inspection process or to the development process.
The document must pass through the Exit gate before it is allowed to leave the Inspection
process. There are two aspects to investigate here: is the product document now ready (e.g.
has some action been taken on all issues logged), and was the Inspection process carried out
properly? For example, if the checking rate was too fast, then the checking has not been done
properly.
A gleaming new improved document is the result of the process, but there is still a "blob" on
it. It is not economic to be 100% effective in Inspection. At least with Inspection you
consciously predict the levels of remaining faults rather than fallaciously assuming that we
have found them all!
How the checking rate enables deep checking in Inspection
There is a dramatic difference between Inspection and normal reviews, and that is in the depth
of checking. This is illustrated by the picture of a document. Initially there are no faults visible.
Typically in reviews, the time available and the size of the document determine the checking
rate. So for example if you have 2 hours available for a review and the document is 100 pages
long, then the checking rate will be 50 pages per hour. (Any two of these three factors
determine the third.)
This is equivalent to "skimming the surface" of the document. We will find some faults - in
this example we have found one major and two minor faults. Our typical reaction is now to
think: "This review was worthwhile wasn't it - it found a major fault. Now we can fix that and
the two other minor faults, and the document will now be OK." Think: are we missing
anything here?
Inspection is different. We do not take any more time; instead, the optimum checking rate for
the type of document determines the size of the sample that will be checked in detail. So if
the optimum rate is one page per hour and we have two hours, then the size of the sample or
chunk will be 2 pages.
(Note that the optimum rate needs to be established over time for different types of document
and will depend on a number of factors, and it is based on prioritised words (logical page
rather than physical page). Of course it doesn't take an hour just to read a single page, but the
checking done in Inspection includes comparing each paragraph or sentence on the target
page with all source documents, checking each paragraph or phrase against relevant rule sets,
both generic and specific, working through checklists for different role assignments, as well
as the time to read around the target page to set the context. If checking is done to this level
of thoroughness, it is not at all difficult to spend an hour on one page!)
How does this depth-oriented approach affect the faults found? On the picture, we have gone
deep in the Inspection on a limited number of pages. We have found the major one found in
the other review plus two (other) minors, but we have also found a deep-seated major fault,
which we would never have seen or even suspected if we had not spent the time to go deep.
There is no guarantee that the most dangerous faults are lying near the surface!
When the author comes to fix this deep-seated fault, he or she can look through the rest of the
document for similar faults, and all of them can then be corrected. So in this example we will
have corrected 5 major faults instead of one. This gives tremendous leverage to the Inspection
process - you can fix faults you didn't find!
Inspection surprises
To summarise the Inspection process, there are a number of things about Inspection which
surprise people. The fundamental importance of the Rules is what makes Inspection objective
rather than a subjective review. The Rules are democratically agreed as applying (this helps
to defuse author defensiveness), and by definition a fault is a Rule violation.
The slow checking rates are surprising, but the value to be gained by depth gives far greater
long-term gains than surface-skimming reviews that miss major deep-seated problems.
The strict entry and exit criteria help to ensure that Inspection gives value for money.
The logging rates are much faster than in typical reviews (one item every 30 to 60 seconds,
against one item every 3 to 10 minutes). This ensures that the meeting is very efficient. One reason
this works is that the final responsibility for all changes is fully given to the author, who has
total responsibility for final classification of faults as well as the content of all fixes.
More information on Inspection can be found in the book Software Inspection, Tom Gilb and
Dorothy Graham, Addison-Wesley, 1993, ISBN 0-201-63181-4.
Static analysis
What can static analysis do?
Static analysis is a form of automated testing. It can check for violations of standards and can
find things that may or may not be faults. Static analysis is descended from compiler
technology. In fact, many compilers may have static analysis facilities available for
developers to use if they wish. There are also a number of stand-alone static analysis tools for
various different computer programming languages. Like a compiler, the static analysis tool
analyses the code without executing it, and can alert the developer to various things such as
unreachable code, undeclared variables, etc.
Static analysis tools can also compute various metrics about code such as cyclomatic
complexity.
Data flow analysis
Data flow analysis is the study of program variables. A variable is basically a location in the
computer's memory that has a name so that the programmer can refer to it more conveniently
in the source code. When a value is put into this location, we say that the variable is
"defined". When that value is accessed, we say that it is "used".
For example, in the statement "x = y + z", the variables y and z are used because the values
that they contain are being accessed and added together. The result of this addition is then put
into the memory location called x, so x is defined.
The significance of this is that static analysis tools can perform a number of simple checks.
One of these checks is to ensure that every variable is defined before it is used. If a variable is
not defined before it is used, the value that it contains may be different every time the
program is executed and in any case is unlikely to contain the correct value. This is an
example of a data flow fault. Another check that a static analysis tool can make is to ensure
that every time a variable is defined it is used somewhere later on in the program. If it isn't,
then why was it defined in the first place? This is known as a data flow anomaly, and although
it can be perfectly harmless, it can also indicate that something more serious is wrong.
Control flow analysis
Control flow analysis can find infinite loops, inaccessible code, and many other suspicious
aspects. However, not all of the things found are necessarily faults; defensive programming
may result in code that is technically unreachable.
Cyclomatic complexity
Cyclomatic complexity is related to the number of decisions in a program or control flow
graph. The easiest way to compute it is to count the number of decisions (diamond-shaped
boxes) on a control flow graph and add 1. Working from code, count the total number of IF's
and any loop constructs (DO, FOR, WHILE, REPEAT) and add 1. The cyclomatic
complexity does reflect to some extent how complex a code fragment is, but it is not the
whole story.
Other static metrics
Lines of code (LOC or KLOC for 1000s of LOC) is a measure of the size of a code module.
Operands and operators make up a very detailed measurement devised by Halstead, but it is
not much used now. Fan-in is related to the number of modules that call (in to) a given module.
Modules with high fan-in are found at the bottom of hierarchies, or in libraries where they are
frequently called. Modules with high fan-out are typically at the top of hierarchies, because
they call out to many modules (e.g. the main menu). Any module with both high fan-in and
high fan-out probably needs re-designing.
Nesting levels relate to how deeply nested statements are within other IF statements. This is a
good metric to have in addition to cyclomatic complexity, since highly nested code is harder
to understand than linear code, but cyclomatic complexity does not distinguish them.
Other metrics include the number of function calls and a number of metrics specific to object-
oriented code.
Limitations and advantages
Static analysis has its limitations. It cannot distinguish "fail-safe" code from real faults or
anomalies, and may generate a lot of spurious warning messages. Static analysis tools do not
execute the code, so they are not a substitute for dynamic testing, and they are not related to
real operating conditions.
However, static analysis tools can find faults that are difficult to see and they give objective
quality information about the code. We feel that all developers should use static analysis
tools, since the information they can give can find faults very early when they are very cheap
to fix.
WinRunner 7.0
Developed by Mercury Interactive
Functionality testing tool (not suitable for performance, usability or security testing)
Supports client/server and web technologies (VB, VC++, Java, D2K, PowerBuilder, Delphi,
HTML, etc.)
WinRunner does not support .NET, XML, SAP, PeopleSoft, Maya, Flash, Oracle
Applications, etc.
To support .NET, XML, SAP, PeopleSoft, Maya, Flash, Oracle Applications, etc., we can
use QTP (Quick Test Professional).
QTP is an extension of WinRunner.
Learning
Recording
Edit Script
Run Script
Learning: Recognition of the objects and windows in your application by the testing tool is
called learning.
Edit Script: The test engineer inserts the required check points into the recorded test script.
Run Script: The test engineer executes the automated test script to get results.
Analyze Results: The test engineer analyzes the test results to concentrate on defect tracking.
User Id: *****
Password: *****
Ok
Note: WinRunner 7.0 provides an auto-learning facility to recognize objects and windows in
your project without your interaction.
Test Script: A test script consists of navigational statements and check points. In WinRunner,
the scripting language is called TSL (Test Script Language); it is similar to C.
Add-in Manager: This window provides a list of WinRunner supported technologies with
respect to our purchased license.
Note: If all options in the Add-in Manager are off, WinRunner by default supports the VB
and VC++ interfaces (Win32 API).
Recording Modes: To record our business operations (navigations) in WinRunner, we can use
two recording modes.
Analog Mode: To record mouse pointer movements on the desktop, we can use this mode. In
analog mode the tester maintains a constant monitor resolution and application position
during recording and running.
Application areas: digital signatures, graph drawing, image movements.
Note:
1. In analog mode, WinRunner records mouse pointer movements with respect to
desktop co-ordinates. For this reason, the test engineer keeps the corresponding
context sensitive mode window in its default position during recording and running.
2. If you want to use analog mode for recording, maintain a constant monitor
resolution during recording and running.
mtype(): WinRunner uses this function to record mouse pointer operations on the desktop.
Syntax: mtype(<T track number> <k key on the mouse used> +/-);
Track no: the desktop co-ordinates in which you operate the mouse. It stores the mouse
co-ordinates; it is actually a memory location.
type(): We can use this function to record keyboard operations in analog mode.
Context Sensitive Mode: To record mouse and keyboard operations on our application build,
we can use this mode; it is the default mode. In general, the functionality test engineer creates
automation test scripts in context sensitive mode with the required check points. In this mode,
WinRunner records our application operations with respect to objects and windows.
Ex:
Focus to window: set_window("Window Name", time);
Text box: edit_set("Edit Name", "Typed Characters");
Password text box: password_edit_set("Pwd Object", "Encrypted Pwd");
Push button: button_press("Button Name");
Radio button: button_set("Button Name", ON); or button_set("Button Name", OFF);
Check box: button_set("Button Name", ON); or button_set("Button Name", OFF);
List/combo box: list_select_item("List Name", "Selected Item");
Menu: menu_select_item("Menu Name;Option Name");
Check points: WinRunner is a functionality testing tool; it provides a set of facilities to cover
the sub-tests below.
GUI Check point: To automate testing the behavior of objects, we can use this check point. It
consists of sub-options.
For Single Property: To test a single property of an object, we can use this option.
Navigation: select a position in the script, Create menu, GUI Checkpoint, For Single
Property, select the testable object (double click), select the required property with its
expected value, click Paste.
Update Order
Focus to window: Disabled
Open a record: Disabled
Perform change: Enabled
If the checkpoint is for a numeric value, there is no need for double quotes.
If the checkpoint is for a string value, place the data between double quotes.
By default, WinRunner takes any value as a string with double quotes.
Problem:
NagaRaju Shopping
Item No
Quantity
Ok
Expected: The number of items in Fly To equals the number of items in Fly From minus 1
when you select an item in Fly From.
NagaRaju Journey
Fly From
Fly To
Ok
Ex: if you select an item in a list box, then the number of items in the next list boxes is decreased by 1.
Problem:
On focus to the window, the Ok button should be disabled.
(Window: NagaRaju Shopping, with a Quantity field, list boxes List1, List2 and List3, and Ok buttons.)
switch (x)
{
case "A": edit_check_info("Age", "focused", 1);
break;
}
List
Text
Ok
Exp: The selected item in the list box appears in the text box after clicking the Ok button.
Exp: The selected item in the list box appears in the Sample 2 text object after clicking the Display button.
Sample1 Sample2
List1 Display
Text
Ok
NagaRaju Employee
Emp No
Dept No
Ok
B Sal Comm
Problem:
If basic salary >= 10000, then commission = 10% of basic salary.
Else if basic salary is between 5000 and 10000, then commission = 5% of basic salary.
Else (basic salary < 5000), commission = Rs. 200.
Problem:
If Total >= 800, then Grade = A.
Else if Total is between 700 and 800, then Grade = B.
Else Grade = C.
Roll No
Ok
Grade
For Object/Window: To test more than one property of a single object, we can use this
option.
Syntax: obj_check_gui(<object name>, <checklist file>.ckl, <expected values file>.txt,
<time to create>);
In the above syntax, the checklist file specifies the list of properties of a single object to test;
its extension is .ckl. The expected values file specifies the list of expected values for the
selected (testable) properties; its extension is .txt.
For Multiple Objects: To test more than one property of more than one object in a single
checkpoint, we can use this option. To create this checkpoint, the tester selects multiple
objects in a single window.
Ex:
Insert Order Update Order Delete Order
Focus to Window Disable Disable Disable
Open a Record Disable Disable Enable
Perform Change Disable Enable & Focused Enable
Navigation: select a position in the script, Create menu, GUI Checkpoint, For Multiple
Objects, click Add, select the testable objects, right-click to release, specify the expected
values for the required properties of every selected object, click OK.
Syntax: win_check_gui("Window Name", "Check List File.ckl", "Expected Values File",
time to create);
Case Study: What type of properties you check for what objects?
List box: Count (number of items in the list box), Value (currently selected value)
Table grid: Rows, Columns, Table Content
Text/edit box: Enabled, Focused, Value, Range, Regular Expression, Date Format, Time
Format
WinRunner allows us to perform changes in existing check points. There are two reasons for
changing an existing checkpoint: sudden project changes, or a tester mistake.
Navigation: Create menu, Edit GUI Checklist, select the checklist file name, click OK, select
the new properties to test, click OK to overwrite, change the run mode to Update, click Run
(the default values are selected as expected values), then click Run in Verify mode to get
results; perform changes in the results if required.
Enabled: ON / OFF; Focused; Value: Default Value
Running Modes in WinRunner:
Verify mode: In this mode, WinRunner compares our expected values with the actual values.
Update mode: In this run mode, the default values are selected as the expected values.
Debug mode: Used to run our test scripts line by line.
During GUI check point creation, WinRunner creates the checklist files and expected values
files on the hard disk. WinRunner maintains test scripts by default in the tmp folder.
Navigation: Create menu, GUI Checkpoint, For Object/Window, select the object, select the
Range property, enter the From and To values, click OK.
Syntax: obj_check_gui(<object name>, <checklist file>.ckl, <expected values file>.txt,
<time to create>);
As before, the checklist file (.ckl) holds the property under test (here, Range with its From
and To values) and the .txt file holds the expected values.
NagaRaju Sample
Age
Navigation: Create menu, GUI Checkpoint, For Object/Window, select the object, select the
Regular Expression property, enter the expected expression as []*, click OK.
Syntax: obj_check_gui(<object name>, <checklist file>.ckl, <expected values file>.txt,
<time to create>);
Problem: The Name text box should allow only lower-case characters.
NagaRaju Sample
Name
Bitmap Check Point: This is an optional checkpoint in the functionality testing tool. The
tester can use this checkpoint to compare images, logos, graphs and other graphical objects
(such as signatures).
This option supports testing static images only. WinRunner does not support dynamic images
developed using Flash or Maya.
For Object/Window: To compare our expected image with actual image in your application
build, we can use this option.
Navigation: select a position in the script, Create menu, Bitmap Checkpoint, For
Object/Window, select the image object.
Syntax: obj_check_bitmap(<image object name>, <expected image file>.bmp, <time to
create>);
For Screen Area (part-of-image testing): To compare an expected image part with the actual
image in your application build, we can use this option.
Navigation: select a position in the script, Create menu, Bitmap Checkpoint, For Screen
Area, select the required region of the testable image, right-click to release.
Syntax: obj_check_bitmap(<image object name>, <image file>.bmp, <time to create>,
x, y, width, height);
Database Check Point: To conduct backend testing using WinRunner we can use this
option.
Back End Testing: Validating the completeness and correctness of the impact of front-end
operations on the back-end tables. This process is also known as database testing; in general,
back-end testing is also described as validating the data and the integrity of the data.
(Diagram: Application connected to the Database through a DSN.)
Default Check: To check data validation and data integrity in the database, depending on
content, we can use this option.
DSN: Data Source Name. It is a connection string between the front end and the back end; it
maintains the connection.
Steps:
1. Connect to the database
2. Execute the select statement
3. Return results in Excel Sheet
4. Analyze the results manually
(Diagram: the front end (1) connects through a DSN to the back-end database, and the
database checkpoint wizard (2) issues the SELECT statement.)
To conduct this testing, the test engineer collects some information from the development team.
Navigation: As with GUI and bitmap checkpoints, we start by selecting the position in the
script. Create menu, Database Checkpoint, Default Checkpoint, specify the connection to the
database (ODBC / Data Junction), select the SQL statement
(c:\\PF\MI\WR\temp\testname\msqr1.sql), click Next, click Create to select the DSN, write
the select statement (select * from orders), click Finish.
Expected mapping: front-end object X corresponds to column A, and front-end object Y to
column B (records 1, 2, 3 in each).
The front-end object names should be understandable to the end user (WYSIWYG).
Runtime Record Checkpoint: The test engineer sometimes uses this option to find the
mapping between front-end objects and back-end columns; it is an optional checkpoint.
Navigation: Create menu, Database Checkpoint, Runtime Record Check, specify the SQL
statement, click Next, click Create to select the DSN, write a select statement with the
doubtful columns (select orders.order_number, orders.customername from orders), select the
doubtful front-end objects for those columns, click Next, select any of the options below:
Exactly one match
One or more match
No match record
Click finish.
Note: For custom and default check points you have to give a ";" at the end of the SQL
statement, but in a runtime record check point you do not need to give it.
In the above syntax, the checklist specifies the expected mapping to test and the variable
specifies the number of records matched. If the mapping is correct, the same values will be
presented.
The runtime record checkpoint allows you to perform changes in an existing mapping,
through the navigation below.
Create menu, Edit Runtime Record List, select the checklist file name, click Next, change the
query (if you want to test new columns), click Next, change the object selection to test new
objects, click Finish.
Synchronization:
To define the time mapping between the testing tool and the application, we can use
synchronization point concepts.
wait(): To define a fixed waiting time during test execution, the test engineer uses this function.
Drawback: This function defines a fixed waiting time, but our applications take variable
times to complete, depending on the test environment.
Navigation: Settings, General Options, Run tab, change the delay and timeout depending on
the requirement, click Apply, click OK.
1. set_window( , 6);   time = 11 sec
2. button_press(ok);   time = 10 sec
3. button_check_info(ok, enabled, 1);   time = 10 sec
Drawbacks in changing settings: Once you change the settings, they are applied to each and
every test, without per-test control. For this reason, the change-runtime-settings option is
mostly not used; nowadays most test engineers use the For Object/Window Property
synchronization point to avoid time-mismatch problems.
Navigation: Select a position in the script, Create menu, Synchronization Point, For
Object/Window Property, select the object, specify the property with its expected value
(e.g. a status/progress bar 100% completed and enabled), specify the maximum time to wait,
click OK.
Sometimes the test engineer defines the time mapping between the tool and the project
depending on images in the application.
Navigation: Select a position in the script, Create menu, Synchronization Point, For
Object/Window Bitmap, select the required image.
Navigation: Select a position in the script, Create menu, Synchronization Point, For Screen
Area Bitmap, select the required image region, right-click to release.
Get Text: To create text-capture checkpoints, we can use the Get Text option from the
Create menu.
From Object/Window: To capture object values into variables, we can use this option.
Navigation: Create menu, Get Text, From Object/Window, select the required object (double-click).
Ex: obj_get_info("ThunderTextBox_3","value",v1);
From Screen Area: To capture static text from the application build screen, we can use this
option.
Navigation: Create menu, Get Text, From Screen Area, select the required region to
capture the value [+ sign], right-click to release.
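The screen-area capture pastes a statement of roughly this shape (the window name "Sample" and the coordinates here are assumptions for illustration):

```tsl
# Capture the static text inside a rectangular region of the "Sample" window
# into the variable txt; the four numbers are x1, y1, x2, y2 coordinates.
win_get_text("Sample", txt, 10, 10, 200, 50);
printf(txt);
```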
[Figure: sample form - inputs Item No and Quantity, OK button, outputs Price $ and Total]
Retesting: Re-execution of a test on the same application build with multiple test data is
called retesting. In WinRunner, retesting is also called Data Driven Testing (DDT).
During test execution of the first type, the tester supplies values, and test execution
completes based on them (like scanf() in C).
The remaining three types can be done without tester interaction.
Dynamic test data submission: To conduct retesting and validate functionality, the test engineer
submits the required test data to the tool dynamically.
To read keyboard values during test execution, test engineers use the TSL statement below.
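A minimal sketch of dynamic data submission (the window name "Multiply" and the field names are assumptions for illustration):

```tsl
# create_input_dialog() pops up a dialog at run time and returns
# whatever the tester types as a string.
no1 = create_input_dialog("Enter first number");
no2 = create_input_dialog("Enter second number");

set_window("Multiply", 5);   # hypothetical window name
edit_set("No1", no1);
edit_set("No2", no2);
button_press("OK");
```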
[Figure: keyboard values fed into the test script at run time - sample Multiply form (No1, No2, Result) and order form (Item No, Quantity, OK, Price $, Total $)]
tl_step(): tl stands for test log; the test log is the test result. We can use this function to
define a user-defined pass or fail message.
Status 0 is reported as Pass (green); a non-zero status such as 1 is reported as Fail (red).
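As a sketch (the variables total, price and quantity are assumptions, standing for values captured earlier with obj_get_info()):

```tsl
# Report a custom pass/fail entry in the test results window.
# tl_step(step_name, status, description); status 0 = pass, non-zero = fail.
if (total == price * quantity)
    tl_step("verify_total", 0, "Total matches price * quantity");
else
    tl_step("verify_total", 1, "Total does not match price * quantity");
```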
Ex: password_edit_set("pwd", password_encrypt("y"));
[Figure: login form (User Id, Password, Login, Next) and sample windows (Sample 1, Sample 2 with Display, Text1, Text2, OK)]
Problem:
First enter EmpNo and click the OK button. It then displays bsal, comm and gsal.
Expected: gsal = bsal + comm
If bsal >= 15000, comm is 15%.
If bsal is between 8000 and 15000, comm is 5%.
If bsal < 8000, comm is 200.
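A hedged solution sketch (the window and object names "Employee", "EmpNo", "bsal", "comm" and "gsal" are assumptions; adapt them to the actual GUI Map entries):

```tsl
set_window("Employee", 5);
edit_set("EmpNo", "1001");
button_press("OK");

# Capture the displayed values.
obj_get_info("bsal", "value", bsal);
obj_get_info("comm", "value", comm);
obj_get_info("gsal", "value", gsal);

# Expected commission from the rules above.
if (bsal >= 15000)
    ecomm = bsal * 0.15;
else if (bsal >= 8000)
    ecomm = bsal * 0.05;
else
    ecomm = 200;

if (comm == ecomm && gsal == bsal + comm)
    tl_step("gsal_check", 0, "gsal and comm are as expected");
else
    tl_step("gsal_check", 1, "gsal or comm mismatch");
```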
To manipulate file data for testing, test engineers use the TSL functions below.
file_open(): To load a required flat file into RAM with the specified permissions, we can use this
function.
Syntax: file_open(path of the file, FO_MODE_READ / FO_MODE_WRITE /
FO_MODE_APPEND);
file_getline(): We can use this function to read a line from an opened file.
Syntax: file_getline(path of the file, variable);
As in C, the file pointer is incremented automatically.
file_close(): We can use this function to swap an opened file out of RAM.
Syntax: file_close(path of the file);
file_printf(): We can use this function to write specified text into a file opened in WRITE
or APPEND mode.
Syntax: file_printf(path of the file, format, the values or variables you want to write);
substr(): We can use this function to separate a substring from a given string.
Syntax: substr(main string, start position, length of substring);
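A sketch tying these together (the file path and the line layout are assumptions for illustration; E_OK signals a successful read):

```tsl
# Read "c:\testdata.txt" line by line; each line is assumed to hold
# a 4-character item number followed by a quantity, e.g. "1001 25".
f = "c:\\testdata.txt";
file_open(f, FO_MODE_READ);

while (file_getline(f, line) == E_OK)
{
    item = substr(line, 1, 4);   # first four characters
    printf("item no: " & item);
}

file_close(f);
```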
[Figure: file values (.txt) fed into the test script - sample Multiply form (No1, No2, Result), order form (Item No, Quantity, OK, Price $, Total $) and login form (User Id, Password, Login, Next)]
list_get_item(): We can use this function to capture a specified list box item through its item
number.
Syntax: list_get_item(listbox name, item number, variable);
list_select_item(): We can use this function to select a specified list box item through a given
variable.
Syntax: list_select_item(listbox name, variable);
list_get_info(): We can use this function to get information about a specified property (like
enabled, focused, count) of a list box into a given variable.
Syntax: list_get_info(listbox name, property, variable);
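A sketch iterating over all items (the list name "Fly From:" is an assumption; item numbering is taken to start at 0):

```tsl
# Count the items, then print each one.
list_get_info("Fly From:", "count", n);

for (i = 0; i < n; i++)
{
    list_get_item("Fly From:", i, item);
    printf(item);
}
```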
[Figure: test data driving the build through the test script - Flight journey form (Fly From, Fly To, OK) and sample windows (Sample 1, Sample 2 with Display, Text1, Text2, List1, Type, OK)]
Data Driven Testing: In general, test engineers create data driven tests that depend on
excel sheet data.
[Figure: loop reading excel sheet data into the test script, which drives the build]
Navigation:
Create a test script for one input, Tools menu, Data Driven Wizard, click Next, browse the
path of the excel sheet, specify a variable name to hold the path of the excel sheet (by
default, "table"), select Add statements to create DDT, select Import data from
database, optimize text (1. line by line, 2. automatically), click Next, specify the database
connection (ODBC / Data Junction), select Specify SQL statement (mssql1.sql), click Next,
click Create to select the DSN (machine data source: flight32), write a select statement to
capture database content for testing into the excel sheet, specify the positions to replace with
excel sheet columns in your test script, select Show data table now, click Finish.
[Figure: test script looping over excel sheet columns Col1, Col2, Col3 where C3 = C1 + C2]
Problems:
1. Prepare a data driven program to find factorial of given number. Write result into
same excel sheet.
2. Prepare a TSL script to write a list box item into excel sheet one by one.
ddt_open(): We can use this function to open an excel sheet into RAM in a specified mode.
This function returns E_FILE_OPEN when the file is opened into RAM; otherwise it returns
E_FILE_NOT_OPEN.
In the above syntax, the variable specifies how many rows were newly altered.
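A sketch of a data-driven loop using the ddt_* family (the sheet path "default.xls" and the column name "Col1" are assumptions for illustration):

```tsl
table = "default.xls";

if (ddt_open(table, DDT_MODE_READ) != E_FILE_OPEN)
    tl_step("open_sheet", 1, "could not open data table");

ddt_get_row_count(table, rows);

for (i = 1; i <= rows; i++)
{
    ddt_set_row(table, i);        # move to row i
    v = ddt_val(table, "Col1");   # read column "Col1" in that row
    printf(v);
}

ddt_close(table);
```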
Write a program to write list box items into an excel sheet one by one.
Test Suite / Test Batch: Arranging all tests in a proper order based on their functionality.
It defines which test's output is used as input to other tests.
Batch Testing: In general, test engineers execute their scripts as batches. Every batch
consists of a set of dependent tests.
In every batch, the end state of one test is the base state of the next test. When you execute tests
as batches, you increase the probability of defect detection.
Parameter passing: WinRunner allows you to pass arguments from a calling test to a called
test (main test to subtest).
Navigation: Open the subtest, File menu, Test Properties, select the Parameters tab, click Add to
create parameters, click Apply, click OK, use those parameters in the required places in the test
script.
In this model, the main test passes values to the subtest. To receive those values, the subtest
maintains a set of parameter variables.
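In the calling test, the subtest is invoked with a call statement; a sketch (the test name "subtest1" is an assumption, and the called test is assumed to declare two parameters):

```tsl
# Pass two values to a called test that declares matching parameters.
x = 10;
y = 20;
call "subtest1"(x, y);
```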
Data Driven Batch Test: WinRunner allows you to execute batches with multiple test
data.
texit(): Sometimes test engineers use this statement in a test script to stop test execution in
the middle of the process.
treturn(): We can use this statement to return a value from a called test to a calling test.
Syntax: treturn(variable or value);
Ex: treturn(10);
Silent Mode: In general, WinRunner returns a pause message when any standard checkpoint fails
during test execution. If you want to execute test scripts without any interruption when a
checkpoint fails, we can follow the navigation below to define silent mode.
Navigation: Settings, General Options, Run tab, select the Run in batch mode option, click Apply,
click OK.
win_exists(): We can use this function to find the existence of a window on the desktop, in a
minimized, maximized or hidden position.
Ex: if (win_exists("sample") == E_OK)   # window appears
Homework:
Shopping:
Prepare the above batch test for ten users whose information is available in an excel sheet;
during batch execution the tester passes item no & quantity as parameters.
User Defined Functions: Like programming languages, WinRunner also provides a facility
to create user defined functions. In TSL, user defined functions are created by the test engineer to
initiate repeatable navigations.
In the example above, the test engineer creates four automation test scripts to test four different
functionalities, depending on functionality dependency. Test engineers call this login
process the base state.
If you want a user defined function in which the end state of one execution is the base
state of the next execution, we can use static variables.
Static variables maintain constant locations in memory during the current test execution, so the
output of one invocation is input to the next.
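A sketch of a function whose internal state persists across calls within one WinRunner session (the function name is an assumption for illustration):

```tsl
# The static variable keeps its value between calls, so each call
# continues from where the previous one ended.
public function next_run_id()
{
    static id = 0;   # initialized once, then retained
    id++;
    return id;
}
```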
[Figure: static variable retaining its value (a = 100) from one test execution to the next, instead of resetting to a = 0]
Note 1: User defined functions allow only context sensitive statements and control
statements; they do not allow checkpoints and analog statements.
Note 2: In batch testing, one test calls another test through its saved test name; a test invokes
a function by function name. To call a function in a test, that function's compiled module must
reside in RAM.
public function add(in a, in b, out c)
{
    c = a + b;
}
Calling Test:
x = 6;
y = 6;
add(x, y, z);
printf(z);
Calling Test (with a return value):
x = 6;
y = 6;
z = add(x, y);
printf(z);
Calling Test (with an inout parameter):
x = 6;
y = 6;
add(x, y);
printf(y);
in - input arguments (general)
out - return values
inout - both
Compiled Module: Open WinRunner and the build, click New in WinRunner, record repeatable
navigations as user defined functions, save that test in the dat folder, File menu, Test Properties,
General tab, change the test type to Compiled Module, click Apply, click OK, write a load() statement
for that compiled module in the startup script of WinRunner.
Note: WinRunner maintains a default program as a startup script. This script is executed
automatically when you launch WinRunner. In this script we can write a load() statement to
load your function.
unload(): We can use this function to unload unwanted functions from RAM.
Predefined Functions: These functions are also known as built-in functions or system
defined functions. WinRunner provides a facility to search for a required TSL function in a library
called the function generator.
To search for a required function in the function generator, we can follow the navigation below.
Create menu, Insert Function, From Function Generator, select the required category, select the
required function depending on its description, enter arguments, click Paste.
Working directory: At run time, temporary files are stored in this directory. If
you don't specify any directory, by default it takes the c:\windows\temp folder.
db_connect(): We can use this function to connect to a database using an existing DSN or
connection.
Ex: db_connect("Query1","DSN=Flight32");
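A sketch of a full backend check using the db_* family (the query text is an assumption; "#0" addresses row 0, column 0 of the result set):

```tsl
# Connect through an existing DSN, run a query, and inspect the results.
db_connect("Query1", "DSN=Flight32");

# rec receives the number of records the query returned.
db_execute_query("Query1", "select * from orders", rec);

# Field at row 0, column 0 of the result set.
v = db_get_field_value("Query1", "#0", "#0");
printf("records: " & rec & ", first field: " & v);

db_disconnect("Query1");
```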
db_write_records(): We can use this function to write query results into a specified file.
Ex: db_write_records("Query1","nrdbc1.xls",TRUE,NO_LIMIT);
generator_add_function(): We can use this function to add your user defined function name to
the All Functions category.
generator_add_function_to_category(): Adds a function to a specific category.
Syntax: generator_add_function_to_category(category name, function name);
Note:
We can execute the third function above only after the second function has executed.
We can write the above three statements in the startup script of WinRunner.
Exercises on TSL functions:
1. Prepare TSL to execute the prepared query below: (select * from orders where order_number
<= x and order_number >= y)
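A sketch for exercise 1 (values for x and y are supplied at run time; the "Query1" session name is reused from the earlier examples):

```tsl
# Build the prepared query text with run-time values, then execute it.
x = create_input_dialog("Enter upper order number");
y = create_input_dialog("Enter lower order number");

db_connect("Query1", "DSN=Flight32");
db_execute_query("Query1",
    "select * from orders where order_number <= " & x &
    " and order_number >= " & y, rec);
printf("matched records: " & rec);
db_disconnect("Query1");
```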
4. Print system date:
get_time() (only time, not date)
time_str()
Useful system-category functions:
1. dos_system(): To execute DOS commands.
2. time_str(): To capture the system date with time.
3. get_time(): To capture the system time value.
4. getvar(): To capture system variable values, ex: timeout, delay.
5. setvar(): To change system variable values.
6. getenv(): To capture environment information, ex: m_home, m_root.
7. system(): To open an application using the title of the software.
8. invoke_application(): To open an application using the .exe path.
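A sketch of two of these in use (the notepad path and the timeout value are assumptions for illustration):

```tsl
# invoke_application(file, command_option, working_dir, show_mode)
invoke_application("c:\\windows\\notepad.exe", "", "c:\\", SW_SHOW);

# Read and change the global timeout system variable.
t = getvar("timeout");
setvar("timeout", 20000);
```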
Clipboard testing: A test conducted on selected content of an object is called clipboard
testing. Testing some part of the application is called clipboard testing; testing the entire
application is called general testing.
Open Application: WinRunner provides a facility to open your project automatically (a system
category function).
3. win_exists()
4. Open the application, execute the prepared query, then db_disconnect().
Learning: In general, a test automation process starts with learning. Learning means
recognition of the objects and windows in your application by the testing tool.
Auto Learning: During recording, WinRunner recognizes objects and windows with respect to
tester operations. This type of automatic recognition is called auto learning.
Steps:
Start recording
Recognize object
Script generation
Catch entries
Catch objects
[Figure: recording flow between WinRunner and the build - the recorded statement button_press("Ok"); refers to this GUI Map entry:]
Ok
{
    class: push_button,
    label: "Ok"
}
After recording, save the script and save the GUI Map.
Note: If the GUI Map is empty, existing test scripts are not able to execute. To maintain these
entries long-term along with our test scripts, we can follow two possible administrations.
Global GUI Map File: In this model, the test engineer creates a global GUI Map file
and maintains it explicitly on the hard disk. By default, WinRunner allows you to create a global
GUI Map file.
[Figure: several tests sharing one global GUI Map file, saved and opened explicitly using the File menu in the GUI Map Editor]
Per Test Mode: This is a new option in WinRunner 7.0. In this mode, WinRunner implicitly
handles entries in the GUI Map.
In this model, WinRunner automatically saves and opens entries with respect to each test. Due to
this, WinRunner increases entry redundancy (repetition) when an object/window participates in
more than one test. By default WinRunner follows the global GUI Map; if you want to change to
per test mode, we can follow the navigation below.
Navigation: Settings, General Options, Environment tab, select GUI Map file per test, click
Apply, click OK.
[Figure: per test mode - each test (Test1, Test2, Test3) implicitly saving and opening its own .gui file]
Pre Learning: Sometimes WinRunner 7.0 testers also follow the pre learning concept before
starting recording. Pre learning is only suitable for the global GUI Map.
Navigation: Open the project, Create menu in WinRunner, Rapid Test Script Wizard, click Next,
show the application main window, click Next, select No Tests, click Next, specify sub menu
symbols (.., >>, ->), click Next, specify the learning mode (Express or Comprehensive), click Learn,
after learning say Yes or No to open the project automatically, click Next, remember the paths of
the startup script and GUI Map file, click OK.
In general, test engineers follow the auto learning concept with a global GUI Map file; they do not
regularly use auto learning with per test mode, or pre learning.
Wild Card Characters: Sometimes window or object labels vary with respect to
inputs in your application. To create a data driven test on such windows and objects, we
can change the corresponding entries in the GUI Map.
Wild card characters (!, *) can be used to generalize entries in WinRunner.
Regular Expressions: Sometimes object/window labels in your application build vary
depending on events. We change the physical description in the GUI Map entry to a regular
expression; at run time WinRunner matches the entry against the actual label.
Start
{
    class: push_button,
    label: "![S][t][ao][a-z]*"
}
for (i = 1; i <= 5; i++)
{
    set_window("Personal Web Manager", 3);
    button_press("Start");
    printf("Button pressed: " & i);
}
GUI Map Configuration: Sometimes more than one object in your application has the
same physical description with respect to WinRunner defaults (class and label). To
recognize such objects individually, we can perform changes in the GUI Map configuration.
This feature is used when the tool cannot distinguish an object using its default properties.
Navigation: Tools, GUI Map Configuration, select the object type, click Configure, select
distinguishable properties into obligatory and optional (in general, test engineers
maintain MSW_id as optional), click OK.
If class and label are the same, we select MSW_id (Microsoft Window ID).
If the applicable and obligatory properties are the same, we use the optional property (MSW_id).
Command1
{
    class: push_button,
    label: Command1,
    MSW_id: 1
}
Note: Here we can maintain MSW_id as assistive, because any two objects have
different MSW_ids.
Mapped to Standard Class: Sometimes test engineers do not get the required properties of
an object. This is used when an object is recognized but the required properties are not
available on it; we then map the object to the closest matching standard class and get the
required properties.
Navigation: Tools, GUI Map Configuration, select the non-testable object, click OK, click
Virtual Object Wizard: To forcibly recognize non-recognized objects, we can use this
option.
Navigation: Tools, Virtual Object Wizard, click Next, select the expected type depending on the
nature of the object, click Next, mark that non-recognized object area, right-click to release,
click Next, enter a logical name for the new entry, say Yes/No to create more, click Finish.
Selective Recording:
This is a new concept in WinRunner 7.0. If you have more than one application on the
desktop at the time of recording, WinRunner may also record unnecessary application details
in the TSL if you didn't specify exactly which application you need. For such situations, in
WinRunner we specify it explicitly using this path:
Settings -> General Options -> Record tab, click Selective Recording, select Record only on
selected applications (off by default), Record on Start menu and Windows Explorer, browse the
required application path, click OK.
Note: Selective recording is not applicable to analog mode, because WinRunner records
operations with respect to desktop coordinates in analog mode.
User Interface Testing: WinRunner is a functionality testing tool, but it provides a facility to
conduct user interface testing. In this automated user interface testing, WinRunner depends
on Microsoft's six UI rules.
load_os_api(): WinRunner uses this function to maintain the path between Windows OS system
calls and the application programming interface, to apply those six rules.
Syntax: load_os_api();
configure_chkui(): To specify which of the six rules the tester is interested in testing.
Syntax: configure_chkui(TRUE/FALSE, TRUE/FALSE, TRUE/FALSE, TRUE/FALSE,
TRUE/FALSE, TRUE/FALSE);
check_ui(): WinRunner uses this function to apply the configured rules on a specified window.
Syntax: check_ui(window name);
The above three functions are not built-in functions; they were developed by Mercury Interactive
as a system defined compiled module. A compiled module is a permanent executable form of
user defined functions.
Navigation: Open the application build on the desktop, Create menu, Rapid Test Script Wizard, click
Next, show the application main window, click Next, select User Interface Test, click Next, specify
sub menu symbols (>>, <<, ), click Next, specify the learning mode (Express /
Comprehensive), click Learn, after learning say Yes/No to open your application during
WinRunner launch, click Next, remember the paths of the startup script and GUI Map file, click
Next, remember the path of the UI test, click OK, specify TRUE for the required rules, click Run,
analyze the results manually.
Regression Testing: Receive the modified build from the development team, then perform GUI
regression, bitmap regression and real (functionality) regression to ensure bug fixing and resolving.
In this process, the test engineer performs GUI regression and bitmap regression before
performing functionality-level regression. To perform this preliminary verification we can
use WinRunner's RTSW (Rapid Test Script Wizard) concepts.
GUI Regression Testing: To find object property differences between the old build and the new
build, we can use this option in the RTSW.
Bitmap Regression Testing: To find image/object-level differences between the old build and the
new build, we can use this option in the RTSW.
Navigation: Open the old build on the desktop, Create menu, Rapid Test Script Wizard, click
Next, show the application main window, click Next, select Use existing information, click Next,
select Bitmap Regression Test, click Next, remember the path of the test script, click Next, click OK,
close the old build, open the new build, click Run, analyze results manually.
Note: After receiving a modified build, the testing team plans functionality regression after
completing GUI regression and bitmap regression. In this scenario, GUI regression is
mandatory and bitmap regression is optional.
TSL Exceptions: These exceptions are raised when a specified TSL statement returns a specified
error code.
To create TSL exceptions we can follow the navigation below.
Tools, Exception Handling, select exception type TSL, click New, enter the exception name,
select the expected TSL function, select the expected return code, enter the handler function name, click
OK, click Paste, click OK after reading the suggestion, click Close, record the navigation required to
recover from the expected situation as the function body, make it a compiled module, write a load()
statement in the startup script of WinRunner.
[Figure: exception handler recovering the test script when the build goes down or an object becomes disabled]
Navigation: Start, Programs, SilkTest, File menu, click New, click logo, click Next, browse the
manual test path, click Next, select a new test frame or existing test frame, click Next, read the
suggestions, click Next, open window by window manually, click Return to Wizard, click Next,
read the suggestions for recording, click Next, record our business operations, set the mouse pointer
on the required object and press Ctrl+Alt to create a checkpoint (property, method, bitmap), click
OK, continue recording, insert checkpoints as above, click Done to stop recording, click
Next, set the application base state, click Run Test, click Close, analyze results manually.
URLs testing:
Enter the base URL, specify the depth to walk, click Press, analyze results manually (red color: not
working; black color: working).
Some professionals also call it Quick Test Pro. The present version is QTP
6.5. WinRunner does not support ERP and .Net.
Learning: Automation starts with learning. Like WinRunner, QTP supports auto learning
only. During recording, QTP creates recognition entries for objects and windows.
(In WinRunner every entry is maintained in the GUI Map Editor; every entry consists of a logical
name and a physical description. In WinRunner, entries are maintained in two ways:
global GUI Map file and per test mode.)
1. Wild card characters can be used to generalize entries in QTP (like WinRunner's !, *).
3. Virtual Object Wizard: This is used when an object is not recognized by the tool; it is then
recognized by using this feature. In WinRunner this task takes more time, whereas in QTP it is the
same process but with smaller navigations.
4. Mapped to Standard Class: This is used when an object is recognized but the required
properties are not available on it; we then map the object to the closest matching standard class
and get the required properties.
5. GUI Map Configuration: Sometimes two objects have the same logical and physical
names. To differentiate one object from the other, WinRunner internally uses the
MSW_id. In QTP we follow this path:
Tools -> Object Identification -> select object type -> select distinguishable properties into
mandatory and assistive -> click OK.
Note: Here we can maintain MSW_id as assistive, because any two objects have
different MSW_ids.
6. Selective Recording:
If you have more than one application on the desktop at the time of recording,
WinRunner may also record unnecessary application details in the TSL if you didn't
specify exactly which application you need. For such situations, in WinRunner we
specify it explicitly using the path given earlier.
Recording: QTP records our business operations in VBScript. By default, this tool starts
recording in general mode. If you want to record mouse pointer movements, we can use
1. Standard Checkpoint:
To test the behavior and input domains of objects we can use this
checkpoint. This checkpoint allows one object at a time.
Select position in script -> Insert menu -> Checkpoint -> Standard Checkpoint -> select the
testable object -> click OK after confirmation -> select the required properties with expected values
-> click OK.
In QTP, for one property you can give two kinds of values: a constant expected value or a
parameterized expected value.
2. Bitmap Checkpoint:
QTP supports comparing static and dynamic images. The maximum
timeout for picture elements is 10 seconds.
3. Database Checkpoint:
QTP provides a backend testing facility through this checkpoint, like WinRunner's default
check.
Insert -> Checkpoint -> Database Checkpoint -> specify SQL statement -> click Create to
select the DSN -> write the select statement -> click Finish.
4. Text Checkpoint:
To capture object values into variables we can use this option. VBScript supports variable
declaration.
5. TextArea Checkpoint:
To capture static text from screens we can use this option.
var = InputBox("Message")
Insert -> Step -> Method -> select the required object -> click OK after confirmation -> click
Next -> enter arguments -> click Next.
3. Excel Sheet:
Create a test script for one input -> insert test data into excel sheet
columns -> Tools menu -> Data Driver -> select the position to use or replace excel sheet
columns -> click Parameterize -> click Next -> select the required column name -> click
Finish.
Batch Testing:
Like WinRunner, QTP also allows batch testing. To form batches, QTP
supports WinRunner tests as well. Batch testing can be done in two ways:
1. QTP test to QTP test: Insert -> Call to Action -> browse the subtest -> specify parameter
data using excel sheet columns -> click OK.
2. QTP test to WinRunner test: Insert -> Call to WinRunner Test -> browse the path of the
test -> click OK.
Note: QTP supports WinRunner 7.0 and higher versions only, because QTP supports auto
learning and auto learning is possible from WinRunner 7.0 onwards.
Synchronization points: To define the time mapping between QTP and the project we can follow
the navigation below.
Insert -> Step -> Synchronization Point (this is exactly equal to "for object/window property" in
WinRunner) -> select the indicator object -> click OK after confirmation -> specify the expected
property with value -> specify the maximum time to wait -> click OK.
Recovery Scenario: Tools -> Recovery Scenario Manager -> click New -> click Next -> select the trigger type
(pop up, object state, application crash, test run error) -> define the situation with a handler
-> click OK.
[Figure: Test Director parts (Project Administrator, Test Director) over a project database in MS Access, SQL Server or Oracle]
Project Administrator: This part is used by the test lead to create new database areas to store
new projects' testing documents, and to estimate the test status of an ongoing project.
Create Database: Start, Programs, TD 6.0, Project Administrator, login as test lead,
Project menu, New Project, specify the location of the database (private, common), click Create, click
OK.
For one project's data, the Test Director tool maintains tables and views in the database.
Estimate Test Status: Start, Programs, TD 6.0, Project Administrator, login as test lead,
select the project name in the list, click Connect, click the extension symbol
in front of the project name, select the required table in the list, extend the query if required, click Run SQL,
analyze the results manually to estimate the test status.
Test Director: This part is used by the test engineer to store the corresponding test documents
into the database created by the test lead.
Start, Programs, TD 6.0, Test Director, select the project name, login as test engineer.
Plan Tests
Run Tests
Track Defects.
Plan Tests: During test case writing for their responsible modules, test engineers use this part
to store their test cases into the database for future reference.
Create Subject: Plan Tests, click Folder New, enter the responsible module name as the
subject, click OK.
Create Sub Subject: Plan Tests, select the subject name, click Folder New, enter the sub subject
name, click OK.
Create Test Case: Plan Tests, select the subject name, select the sub subject, click Test New, select
the test type, enter the test name, click OK.
Details: After completing test case creation, the test engineer maintains the details below for
that test case:
Test case ID, test suite ID, priority, test environment, test duration, test effort, test
setup and test case pass/fail criteria.
Design Steps: After entering the required details for the test case, we can prepare a step-by-step
procedure for executing it.
Design Steps, click New, enter the step description with the expected result, click New to create more
steps, click Close.
Test Script: For automation test scripts, Test Director provides a Launch button to open
WinRunner.
Click Launch, set the application base state for that test, record the required navigation, insert the
required checkpoints, click Stop Recording, click Save.
Attachments: To maintain extra information for test cases, test engineers use this part. It
is optional.
Attachment, click File/Web, browse the required file path to attach, click Open.
Run Tests: After receiving a stable build from the development team, concentrate on
test execution. TD provides a facility to create an automated test log during test case
execution.
Create Batch: Run Tests, click Test Set Builder, click New, enter the suite ID, click OK, select the
required tests and add them into the batch, click Close.
Execute Automated Test: Select the automated test in the batch, click Automated, set the application in the
base state per that test, click Run, Tools menu, Test Results, File menu, Open, browse the executed
test, analyze results manually, close WinRunner, change the test status to Passed/Failed depending
on the results analysis.
Manual Test Execution: Select the manual testing batch, click Manual, click Start Run, set the
application in the base state, run every step manually, specify a status for every step, click Close
after executing the last step.
Track Defects: During test execution, test engineers use this part to report defects to the development team.
Track Defects, click Add, fill the fields in the defect report, click Create, click Close, click Mail,
enter the To mail ID, click OK.
Filter: To select required tests or defects from an existing list, we can use the filters concept.
Sort: To arrange defects in a specified order in a list, we can use the Sort icon.
Navigation: Click the Sort icon, select the required field, specify the sort direction
(ascending/descending), click OK.
Report: To create printouts (hard copies) of defects, we can use the Report icon.
Navigation: Click the Report icon, specify the report type (info or table), specify the printout type,
click OK, click Print for every page.
Test Grid: This option lists all test cases under all subjects and sub subjects in a single window.
Quick Test Professional
Developed by Mercury Interactive
Also known as Quick Test Pro
A functionality testing tool like WinRunner
An extension of WinRunner
Supports client/server and web technologies
Supports client/server applications, web applications, ERP and multimedia technologies
(dynamic images such as Maya and Flash) for functionality testing
Records our business operations in VBScript
Supports launching WinRunner to execute TSL scripts
Note: QTP supports WinRunner 7.0 and higher versions only, because QTP supports auto
learning, and auto learning is possible only from WinRunner 7.0 onwards.
26. User Defined Functions (WinRunner) vs. User Defined Actions (QTP):
WinRunner: repeatable navigations in the application are recorded as functions; to make a
function permanent we can use the compiled module concept.
QTP: repeatable navigations in the application are recorded as actions, to create one
reusable action. We can follow the navigation: Insert, New Action, enter the action name
with a description, select reusable action, click OK, record the repeatable navigation in
your application.
Note: To call that reusable action in a required test, we can use Insert, Call to Action.
27. Synchronization Point:
WinRunner: Wait; Change runtime settings; For object/window; For object/window bitmap;
For screen area.
QTP: Insert -> Step -> Synchronization Point [this is exactly equal to 'For object/window
property' in WinRunner] -> select indicator object -> click OK after confirmation -> specify
In the year 1947, non-government organizations joined together and formed the ISO. There
are 145 member countries in the ISO, and India is among them. 'ISO' is derived from the
Greek word 'isos', which means equal: it is equal for all countries in the world, whether
India or the USA.
If you want to get certification, first approach any one of the above companies; they will
tell you to implement the 20 clauses. Then they will come to audit, and finally they certify.
If you don't know how to implement the 20 clauses, they conduct training through the
company: a 3-month External Auditor course, and an Internal Auditor course for Rs 25,000
conducted within 4-5 days.
The difference between an External (or Lead) Auditor and an Internal Auditor is that the
former can work in two or three companies in a day, while the latter works in only one
company.
Format: The structure is studied; they visit all the departments and prepare this.
Check list: What the requirements are.
Procedure: Work based on the 20 clauses.
Procedure Manual: Prepare the procedures, distribute them to all departments, and inform
them to implement them to get the certification.
Whatever work you are doing, you have to prepare documents. The reasons are:
1. Future reference
2. Employees may leave the organization
Procedure Manual
Procedure
Check List
Format
SEI-CMM levels:
This is given to software companies only.
There are five levels in CMM: levels 1, 2, 3, 4 and 5.
There are different CMMs, such as SEI-CMM (also called Software CMM), PCMM, and
CMMI (CMM for Integration).
In the year 1987, Mark Paulk and Bill Curtis (faculty at Carnegie Mellon University,
Pittsburgh, USA) came together and released CMM version 1.0 from the SEI. They had
observed that under ISO, software organizations do not get any special treatment; so they
formed the SEI and released the CMM.
In CMM, auditors are called Assessors. Anybody can become an Assessor, but you have to
attend training classes in Chennai or Mumbai; institutes such as KPMG conduct this
course.
There are five levels of CMM, and each level has a number of processes. For example,
level 2 has processes such as project management. Each process is called a KPA (Key
Process Area).
If an organization implements all the KPAs, then based on them it is given a level.
Infosys was assessed at level 4 in Dec 1997 and at level 5 in Dec 1999.
PCMM: People CMM. It also has 5 levels. It mainly deals with HR principles: it provides a
structure for selecting and recruiting people.
CMMI: CMM for Integration. It combines SEI-CMM, systems engineering principles, and
IPD-CMM (Integrated Product Development CMM).
A small company can get up to ISO, CMM Level 3, PCMM Level 3 and CMMI.
CMMI is the latest model, and most companies are trying to get it.
6σ (Six Sigma)
This is given to all companies.
The name is derived from the Greek letter sigma (σ), which denotes standard deviation.
6σ is a metric which measures process variation in standard deviations.
The greater the number before the σ, the less the defects in the process variation, and the
more the quality and customer satisfaction.
ISO, CMM and 6σ are all for customer satisfaction.
At 5σ the errors may be about 233 in 1 million LOC.
At 6σ the errors may be about 3.4 in 1 million LOC.
PPM: Parts Per Million.
DMAIC: Define, Measure, Analyze, Improve and Control.
Generally a company first does DMAIC and then goes for 6σ.
DFSS: Design For Six Sigma. This is for software organizations.
In 6σ there are belts: Champion, Master Black Belt, Black Belt, Green Belt, White Belt,
Orange Belt.
6σ companies: Satyam, Motorola, Wipro, TCS etc. But the first company in Hyderabad to
get it was GE.
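The sigma-to-defect figures above follow the standard published Six Sigma table (which assumes the conventional 1.5-sigma shift). As a rough sketch, assuming those standard values, the relationship can be expressed as:

```python
# Standard published sigma-to-DPMO table (defects per million opportunities),
# assuming the conventional 1.5-sigma shift; values are standard, not from this material.
SIGMA_TO_DPMO = {3: 66807, 4: 6210, 5: 233, 6: 3.4}

def defects_expected(sigma_level, opportunities):
    """Rough number of defects expected at a given sigma level."""
    return SIGMA_TO_DPMO[sigma_level] * (opportunities / 1_000_000)

print(defects_expected(6, 1_000_000))  # -> 3.4
print(defects_expected(5, 1_000_000))  # -> 233.0
```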
CMM Levels:
What is CMM:
It defines how software organizations mature or improve in their ability to develop
software. This model was developed by the SEI of Carnegie Mellon University in the late
80s.
Infosys was assessed at level 4 in Dec 1997 and at level 5 in Dec 1999.
Why CMM:
CMM is a software-specific model. CMM describes how software organizations can take
the path of continuing improvement, which is so required in this highly competitive world.
'Keep improving' is the CMM mantra.
Level 1: Initial or Ad-hoc. There are no KPAs in this level.
Level 2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project
planning and execution.
Level 3: Defined. There are 7 KPAs in this level. Organizational process is the focus area
here.
Level 4: Managed. There are 2 KPAs in this level. The focus here is quantitative
understanding of the process through data.
Level 5: Optimizing. There are 3 KPAs in this level. The focus here is continual
improvement.
As we move from level 1 to level 5, the project risk decreases and quality and productivity
increase.
At level 1, the processes are highly unstable and unpredictable. The projects are purely
person dependent, i.e., when the persons involved leave the project or the company, things
come to a halt. Also, the performance depends on the capabilities of the individuals rather
than on the organizational capability.
Level2: Repeatable. There are 6 KPAs in this level. KPAs at this level look at project
planning and execution.
Repeatable, as the word reveals, means that processes employed in the project are repeatable.
Basic project management principles are established to track cost, schedule, and functionality.
The necessary process discipline is in place to repeat earlier success on projects with similar
applications using best practices from past projects.
Requirements Management:
To establish a common understanding between the customer and the project team
It involves establishing and maintaining an agreement with the customer on the requirements
for the software project.
Goal: software plans, products, and activities are kept consistent with the system
requirements allocated to software.
Goal: software estimates are documented for use in planning and tracking the software
project.
Goal: Actual results and performances are tracked against the software plans. A
documented project plan is used for tracking.
A software baseline library is established containing the software baselines as they are
developed. Changes to baselines and the release of software products built from the software
baseline library are systematically controlled via the change control and configuration
auditing functions of Software Configuration Management.
Goal: Software Configuration Management activities are planned. Selected work products are
identified and controlled. Changes to work products are controlled.
Level 2 concentrates on project-level processes; Level 3 looks from the organizational
viewpoint.
Level3: Defined.
The software process for both management and engineering activities is documented,
standardized, and integrated into a standard software process for the organization (e.g. the
Software Configuration Management process). All projects use approved and tailored
versions of the organization's standard software process for developing and maintaining
software. Data and information from projects are regularly and systematically collected
and organized so that they can be reused by other projects.
There are 7 KPAs in this level. Organizational process is the focus area here.
The important goal of this KPA is software process development and improvement activities
are coordinated across the organization.
To do an effective job of identifying and using the best practices, organizations must
establish a group with that responsibility and build a plan for how the organization will
improve its process. Such a plan should include periodic assessments of the organization's
process maturity, leading to plans for improvement in capability. This process engineering
is done by the SEPG (Software Engineering Process Group), which looks out for the
interest of every project in the organization.
Training Program:
The purpose of this KPA is to develop the skills and knowledge of individuals so they can
perform their roles effectively and efficiently.
Training Program involves first identifying the training needed by the organization,
projects, and individuals, then developing or procuring training to address the identified
needs. Each software project evaluates its current and future skills needs and determines
how these skills will be obtained. Some skills are effectively and efficiently imparted
through informal methods, whereas other skills need more formal training methods.
2. Integrate the application development and testing life cycles. You'll get better results
and you won't have to mediate between two armed camps in your IT shop.
3. Formalize a testing methodology; you'll test everything the same way and you'll get
uniform results.
4. Develop a comprehensive test plan; it forms the basis for the testing methodology.
7. Understand the business reason behind the application. You'll write a better application
and better testing scripts.
8. Use multiple levels and types of testing (regression, systems, integration, stress and
load).
10. Don't let your programmers check their own work; they'll miss their own errors.
Configuration Management
What is configuration management?
Our systems are made up of a number of items (or things). Configuration Management is
all about effective and efficient management and control of these items.
During the lifetime of the system many of the items will change. They will change for a
number of reasons: new features, fault fixes, environment changes, etc. We might also have
different items for different customers, such as version A contains modules 1, 2, 3, 4 & 5
and version B contains modules 1, 2, 3, 6 & 7. We may need different modules depending
on the environments they run under (such as Windows NT and Windows 2000).
An indication of a good Configuration Management system is to ask ourselves whether we
can go back two releases of our software and perform some specific tests with relative ease.
Problems resulting from poor configuration management
Often organisations do not appreciate the need for good configuration management until
they experience one or more of the problems that can occur without it. Some problems that
commonly occur as a result of poor configuration management systems include:
the inability to reproduce a fault reported by a customer;
two programmers have the same module out for update and one overwrites the other's
change;
unable to match object code with source code;
do not know which fixes belong to which versions of the software;
faults that have been fixed reappear in a later release;
a fault fix to an old version needs testing urgently, but tests have been updated.
Definition of configuration management
A good definition of configuration management is given in the ANSI/IEEE Standard 729-
1983, Software Engineering Terminology. This says that configuration management is:
the process of identifying and defining Configuration Items in a system,
controlling the release and change of these items throughout the system life cycle,
recording and reporting the status of configuration items and change requests, and
verifying the completeness and correctness of configuration items.
This definition neatly breaks down configuration management into four key areas:
configuration identification;
configuration control;
configuration status accounting; and
configuration audit.
Configuration identification is the process of identifying and defining Configuration Items
in a system. Configuration Items are those items that have their own version number such
that when an item is changed, a new version is created with a different version number. So
configuration identification is about identifying what are to be the configuration items in a
system, how these will be structured (where they will be stored in relation to each other),
the version numbering system, selection criteria, naming conventions, and baselines. A
baseline is a set of different configuration items (one version of each) that has a version
number itself. Thus, if program X comprises modules A and B, we could define a baseline
for version 1.1 of program X that comprises version 1.1 of module A and version 1.1 of
module B. If module B changes, a new version (say 1.2) of module B is created. We may
then have a new version of program X, say baseline 2.0, that comprises version 1.1 of
module A and version 1.2 of module B.
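The program X example above can be sketched as a small data structure (the module and baseline versions are from the example; the helper function and its name are illustrative):

```python
# Illustrative sketch: a baseline is one version of each configuration item,
# and the baseline itself carries a version number.
program_x_baselines = {
    "1.1": {"module_A": "1.1", "module_B": "1.1"},
    # module B changed to 1.2, so a new baseline 2.0 picks it up:
    "2.0": {"module_A": "1.1", "module_B": "1.2"},
}

def items_changed(baselines, old, new):
    """List configuration items whose version differs between two baselines."""
    return [item for item, ver in baselines[new].items()
            if baselines[old].get(item) != ver]

print(items_changed(program_x_baselines, "1.1", "2.0"))  # -> ['module_B']
```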
Status accounting enables traceability and impact analysis. A database holds all the
information relating to the current and past states of all configuration items. For example,
this would be able to tell us which configuration items are being updated, who has them
and for what purpose.
Just about everything used in testing can reasonably be placed under the control of a
configuration management system. That is not to say that everything should. For example,
actual test results may not be, though in some industries (e.g. pharmaceutical) it can be a
legal requirement to do so.
1. Static V&V
2. Dynamic V & V
Static V&V:
1. Technical Review
2. Inspection
3. Code Walk through.
Dynamic V&V:
In Dynamic V&V we test the application in real time, with executables; that is why it is
called Dynamic V&V.
SOFTWARE TESTING
Definition 1: Software Testing is the process of executing a program with the intent of
finding bugs.
The basic goal of the software development process is to produce a software that has
no errors. In an effort to detect errors, each phase ends with V & V activity such as Technical
review. But most of the V & V (review) is based on human evaluation and can't detect all
errors.
As testing is the last phase in the SDLC (Software Development Life Cycle) before
the final software is delivered, it has the enormous responsibility of detecting any type of
error.
The combination of white box and black box testing is called 'gray box testing'.
White box testing techniques:
1. Path testing
2. Condition testing
3. Data flow testing
4. Loop testing
White box testing enables the engineer to:
1. Guarantee that all 'independent paths' within a module have been exercised at least once.
2. Exercise all logical decisions on their true and false sides.
3. Execute all loops at their boundaries and within their operational bounds.
4. Exercise internal data structures to assure their validity.
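As a minimal sketch of point 2 above, the following hypothetical function has two decisions, and the white-box tests exercise each decision on both its true and false sides:

```python
# Hypothetical module under test: two decisions, each exercised on both sides.
def classify(age):
    if age < 0:                                # decision 1
        raise ValueError("age cannot be negative")
    return "minor" if age < 18 else "adult"    # decision 2

# Decision 1, true side:
try:
    classify(-1)
except ValueError:
    pass
# Decision 1 false side, and both sides of decision 2:
assert classify(10) == "minor"
assert classify(30) == "adult"
print("all decisions exercised on true and false sides")
```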
Usually all organizations go for black box testing, because in black box testing we check
the functionality of the application; the structure of the program is not considered.
For better customer satisfaction, we have to do white box testing first, and then conduct
black box testing.
In Black box testing, testers attempt to find errors in the following categories:
Testing is usually relied on to detect these faults, in addition to the faults introduced in coding
phase.
Due to this, different levels of testing are used in the testing process.
From the service provider's point of view, the following are to be done:
1. Unit Testing
2. Integration Testing
3. System Testing
UNIT TESTING
In Unit Testing, different modules are tested against the specifications produced for them
during design.
Unit testing is essentially for verification of the code produced during the coding phase
and hence the goal is to test the internal logic of the modules.
Module interface is tested to ensure that information properly flows into and out of the
program unit under test.
Here we check a particular field in a screen or module, to see whether the field accepts:
1. Null characters
2. Unique characters
3. Length
4. Number
5. Date
6. Negative values
7. Default values
For Example, consider a Course Registration form that contains the following fields.
Based on the above screen, we have to prepare an internal test plan. Based on the internal
test plan, we can prepare test cases.
FC --> Functionality Check; we have to test the functionality of the screen.
Option           Null  Unique  Length  Number  Date  -ve  Default
Type of course   FC
Student name     Y     N       Y       Y       N     N    N
Address          Y     N       Y       N       N     N    N
Phone number     N     N       Y       Y       N     Y    N
Date             Y     N       Y       Y       Y     Y    N
Time             FC
Timing           FC
Student ID       FC
Batch code       FC
Save button      FC
Exit button      FC
(The Y/N columns appear to correspond to the seven checks listed above: null, unique,
length, number, date, negative and default values.)
We have to write test cases only for the 'Y' options; it is not necessary to write them for the
'N' options. This internal test plan is mainly to reduce the number of test cases.
For the 'Student name' field, for example, test cases are written based on the internal test
plan, and only for the 'Y' checks, i.e. for the applicable ones.
Date range check: we have to check whether the application accepts a date greater than the
system date or not. If we have a date field in a screen, we have to write test cases to check
it as a date field.
Test cases for the date field:
UTC/001: Enter blank space or skip the field (null check). Expected result: should display
an error message and set focus back to the date field, because the date field should not
accept a blank value.
UTC/002: Enter a date in DD/MM/YYYY format (date check). Expected result: should
accept and proceed.
UTC/003: Enter a date in mm/dd/yyyy format (date check). Expected result: should display
an error message and set focus back to the field, because the expected format is
DD/MM/YYYY.
UTC/004: Enter the number '1234567' (number check). Expected result: should display an
error message, because the field should not accept plain numbers.
UTC/005: Enter '-23232324' and proceed (-ve check). Expected result: should display an
error message, because the field should not accept negative numbers.
UTC/006: Enter a date greater than the system date. Expected result: should display an
error message, because the field should not accept a date later than the system date.
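The checks in the test cases above could be implemented by a validator along these lines (a sketch; the function name and error messages are invented, not part of the material):

```python
# Illustrative validator for the date-field checks: null check,
# DD/MM/YYYY format check, and the system-date range check.
from datetime import datetime

def validate_date_field(text):
    if not text or not text.strip():                          # UTC/001: null check
        return "error: date is required"
    try:
        value = datetime.strptime(text.strip(), "%d/%m/%Y")   # UTC/002-005: format
    except ValueError:
        return "error: expected DD/MM/YYYY"
    if value > datetime.now():                                # UTC/006: range check
        return "error: date cannot be after the system date"
    return "ok"

print(validate_date_field("25/12/1999"))  # -> ok
print(validate_date_field("1234567"))     # -> error: expected DD/MM/YYYY
```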
In the boundary value check we have to check whether a particular field withstands its
boundaries. For example, if a number field has a range of 0 to 99, we have to check whether
the field accepts -1, 0, 1 (i.e. <, =, > the lower boundary) and 98, 99, 100 (i.e. <, =, > the
upper boundary).
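For the 0-99 range above, the boundary cases can be sketched as follows (the `in_range` function is a hypothetical stand-in for the field's validation):

```python
# Boundary value analysis for a field accepting 0-99: test values
# just below, at, and just above each boundary.
def in_range(n):
    return 0 <= n <= 99

lower = [(-1, False), (0, True), (1, True)]     # <, =, > lower boundary
upper = [(98, True), (99, True), (100, False)]  # <, =, > upper boundary
for value, expected in lower + upper:
    assert in_range(value) == expected
print("all boundary cases pass")
```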
In the User Interface check we have to check how user-friendly the application is.
Functionality checks
ADD or MODIFY or DELETE or VIEW and SAVE and EXIT and other main functions
in a screen.
INTEGRATION TESTING
Many Unit Tested Modules are combined into subsystems, which are then tested.
The goal is to see if the modules can be integrated properly. This testing activity can be
considered testing the design.
Integration Testing refers to the testing in which the software units of an application are
combined and tested to evaluate the interaction between them.
In the big bang approach, all modules are combined and integrated in advance, and the
entire program is tested as a whole. If a set of bugs is encountered, correction is difficult:
when one error is corrected a new bug appears, and the process continues.
Advantage:
Disadvantage:
Unit testing of lower modules can be complicated by the complexity of upper
modules.
BOTTOM UP APPROACH
Begins construction & testing with atomic modules (i.e. Modules of lowest levels in
the program structure)
Program is merged and tested from bottom to top.
The terminal module is tested in isolation first, and then the next set of the higher level
modules are tested with the previously tested lower level modules.
Here we have to write 'drivers'.
A driver is nothing more than a program that accepts the test case data, passes such data
to the module to be tested, and prints the relevant results.
Disadvantage: Test drivers have to be generated for modules at all levels, except for the
top controlling module.
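A driver of the kind described above can be sketched as follows (both function names are hypothetical; `module_under_test` stands in for a lower-level module):

```python
# Stand-in for a lower-level module being tested in isolation.
def module_under_test(a, b):
    return a + b

def driver(test_cases):
    """Accept test-case data, pass it to the module, and print the results."""
    statuses = []
    for args, expected in test_cases:
        actual = module_under_test(*args)
        status = "PASS" if actual == expected else "FAIL"
        print(f"{status}: module_under_test{args} -> {actual} (expected {expected})")
        statuses.append(status)
    return statuses

driver([((1, 2), 3), ((0, 0), 0)])
```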
SYSTEM TESTING
VOLUME TESTING: To find the weakness in the system with respect to its handling of
large amount of data, during short time period. ( focus is amount of data)
STRESS TESTING: The purpose of stress testing is to test the system capacity: whether it
handles a large number of transactions during peak periods. (focus is the peak moment)
System performance is generally assessed in terms of response time and throughput rates,
under different processing and configuration condition.
REGRESSION TESTING: The re-execution of a subset of test cases that have already
been executed, to ensure that changes (after a defect fix) have not propagated unintended
side effects.
Regression Testing is the activity that helps to ensure that changes do not introduce
unintended behavior or additional bugs.
SECURITY TESTING: Attempts to verify that protection mechanisms built into a system
will in fact protect it from improper penetration.
System is protected in accordance with importance to organization, with respect to
security levels.
RECOVERY TESTING: Forcing the system to fail in different ways and checking how
fast it recovers from the failure.
SERVER TESTING:
Here we have to check volume, stress, performance, data recovery, backup and restore,
error trapping, and data security as a whole.
Here we have to check the PAIN (an e-business concept):
PAIN: P-Privacy
A- Authentication of parties
I- Integrity of transactions
N - Non repudiation.
ACCEPTANCE TESTING: Performed with realistic data of the client to demonstrate that
the software is working satisfactorily. Testing here focuses on the external behavior of the
system.
ALPHA TESTING: Alpha testing is conducted at the developer's site by the customer.
The software is tested in a natural setting with the developer 'looking over the shoulder'
of the user (i.e. the customer) and recording errors and usage problems.
Alpha tests are conducted in a controlled environment.
BETA TESTING: Beta testing is conducted at one or more customer sites by the end user
of the software; the developer is not present during testing.
Here the client tests the software or system in his own environment, records defects, and
sends his comments to the development team.
TEST PLAN:
A test plan is a general document for the entire project that defines the scope, approach
to be taken, and the schedules of intended testing activities. It identifies test items, the
features to be tested, the testing tasks, who will do each task and any risks requiring
contingency planning.
The test planning can be done, well before the actual testing commences and can be
done in parallel with the coding and design phase.
The inputs for forming the test plan are:
1. Project plan
2. Requirement specification document
3. Architecture and design document
Requirements document and Design document are the basic documents used for selecting
the test units and deciding the approaches to be used during testing.
Test Unit: Test unit is a set of one or more modules together with associated data, that are
from a single computer program and that are the object of testing. Test unit may be a module
or few modules or a complete system.
Features to be tested: include all software features and combinations of features that
should be tested. A software feature is a software characteristic specified or implied by the
requirements or design documents.
Approach for Testing: specifies the overall approach to be followed in the current project.
The technique that will be used to judge the testing effort should also be specified.
Test Deliverables: Should be specified in the test plan before the actual testing begins
Deliverables could be
Test cases that were used
Detailed results of testing
Test summary report
In general
Test case specification report
Test summary report and
Test Log report. Should be specified as deliverables.
Test summary Report: It defines the items tested, environment in which testing was done,
and any variations from the specification observed during testing.
Test Log Report: Provides chronological record of relevant details about the executions of
the test cases.
Schedule: Specifies the amount of time and effort to be spent on different activities of testing
and testing of different units that have been identified.
Personnel Allocation: Identifies the persons responsible for performing the different
activities.
Steps to be performed to execute the test cases are specified in a separate document
called the 'test procedure specification'. This document specifies any special requirements
that exist for setting up the test environment and describes the methods and formats for
reporting the results of testing.
Output of the test case execution is: Test log report, Test summary report, and bug report.
Test summary report: Gives total number of test cases executed, the number and nature of
bugs found, and summary of any metrics data.
DEFECT CATEGORIES
1. Defects from specifications: the product built varies from the product specified.
2. Defects in capturing user requirements: the variance is something that the user wanted
that is not in the built product, but which was also not specified for the product.
Writing test cases for all possible checks is impractical, so we can reduce the number of
test cases by avoiding some unwanted checks.
To reduce the number of test cases, the following methods can be used.
ECP (Equivalence Class Partitioning) is a black box testing method that divides the input
domain of a program into classes of data, from which test cases can be derived. It uncovers
classes of errors, thereby reducing the total number of test cases that must be developed.
BVA (Boundary Value Analysis): rather than selecting arbitrary elements of an
equivalence class, BVA leads to the selection of test cases at the 'edges' of the class.
2. If an input condition specifies a number of values, test cases should be developed that
exercise the minimum and maximum numbers. Values just above and just below the
maximum and minimum should also be tested.
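Combining the two methods for a hypothetical field that accepts 0-99: ECP picks one representative value per class, and BVA adds the edge values (the `accept` function is an illustrative stand-in for the field's validation):

```python
# Hypothetical field accepting 0-99; the validation function is a stand-in.
def accept(n):
    return 0 <= n <= 99

# ECP: one representative per equivalence class, instead of every input.
partitions = {
    "valid (0-99)":   (50, True),
    "invalid (< 0)":  (-5, False),
    "invalid (> 99)": (150, False),
}
for name, (representative, expected) in partitions.items():
    assert accept(representative) == expected, name

# BVA: add test cases at the 'edges' of the valid class.
for edge in (-1, 0, 1, 98, 99, 100):
    assert accept(edge) == (0 <= edge <= 99)
print("ECP + BVA cases pass")
```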
Testing is the phase where the errors remaining from all the previous phases (i.e. SDLC)
must be detected. Hence testing performs a very critical role for quality assurance and for
ensuring the reliability of software.
Error: It refers to the discrepancy between computed or measured value and theoretically
correct value. i.e. Difference between actual output and correct output of the software.
Fault: Fault is the basic reason for software malfunction. i.e. Fault is a condition that
causes a system to fail in performing its required function.
Presence of an error implies that a failure must have occurred, and the observance of a
failure implies that a fault must be present in the system.
During the testing process only failures are observed, from which the presence of faults is
deduced. The actual faults are identified by separate activities commonly referred to as
'debugging'.
In other words, after testing has revealed the presence of faults, the expensive task of
debugging has to be performed to identify them. This is the reason why testing is
expensive.
Reason for Testing System separately( Unit, Integration and System Testing):
The reason for testing parts separately is that if a test case detects an error in a large
program, it will be extremely difficult to pinpoint the source of the error. Also, it is
difficult to construct test cases so that all the modules will be executed; this may increase
the chance of a module's error going undetected.
Sometimes error occurs because the programmer did not understand the
specification clearly. Testing of a program by its programmer will not detect such
errors, but independent testing may succeed in finding them.
Time concern
If the customer want the third party testing
Non-availability of testing resources
It is not easy for someone to test their own program with the proper frame of
mind for testing
Criteria for stopping testing:
Full execution of all test cases with internal acceptance and customer acceptance
When Beta or Alpha Testing period ends
Bug rate falls below certain level
Test budget depleted
Test cases completed with certain % passed
Once the software is 100% bug free, just to check the efficiency of the tester, we can
'insert a certain number of bugs' (bug seeding) at various points in the project and give it to
the tester to test.
DEFECT CLASSIFICATION
As per the ANSI/IEEE standard 729, the levels of defect classification are as follows:
1. Critical: The defect results in the failure of the complete software system, of a subsystem,
or of a software unit (program or module) with the system.
2. Major: The defect results in the failure of the complete software system, of a subsystem,
or of a software unit (program or module) within the system. There is no way to make the
failed component work; however, there are acceptable processing alternatives which will
yield the desired result.
3. Average: The defect does not result in a failure, but causes the system to produce
incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
4. Minor: The defect does not cause failure, does not impair usability, and the desired
processing results are easily obtained by working around the defect.
In addition to the defect severity levels defined above, a defect priority level can be used
together with the severity categories to determine the immediacy of repair. A five-level
repair priority scale is also used in common testing practice:
Resolve Immediately: Further development and /or testing cannot occur until the defect
has been repaired. The system cannot be used until the repair has been effected
Give High Attention: The defect must be resolved as soon as possible because it is
impairing development and/or testing activities. System use will be severely affected
until the defect is fixed.
Normal Queue: The defect should be resolved in the normal course of development
activities. It can wait until a new build or version is created.
Low Priority: The defect is an irritant that should be repaired, but the repair can wait
until more serious defects have been fixed.
Defer: The defect repair can be put off indefinitely. It can be resolved in a future major
system revision or not resolved at all.
Defect Closure Rate: how much time it takes to close a defect.
Defect Density = No. of defects / (KLOC or FP)
KLOC - Kilo (thousand) Lines Of Code
FP - Function Point analysis
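The defect density metric above, as a small sketch (the example figures are invented):

```python
def defect_density(defects, size):
    """Defects per KLOC (thousand lines of code) or per function point."""
    return defects / size

# e.g. 45 defects found in a 15 KLOC program:
print(defect_density(45, 15))  # -> 3.0
```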
Useful testing web sites:
www.softwareqatest.com
www.rstcorp.com
www.mmsindia.com
www.facilita.co.uk
www.autotestco.com
www.kaner.com
www.badsoftware.com
www.model-based-testing.com
www.soft.com
www.jrothman.com
www.webservepro.com
www.testworks.com
www.ftech.com
www.geocities.com
www.aptest.com
www.testing.com
www.stqemagazine.com
www.sqe.com
www.io.com
www.testingstuff.com
www.stickyminds.com