
SOFTWARE
TESTING
By
VIKRAM
Contents

Software Quality
Fish Model
V-Model
• Reviews in Analysis
• Reviews in Design
• Unit Testing
• Integration Testing
• System Testing
1. Usability Testing
2. Functional Testing
3. Non Functional Testing
• User Acceptance Testing
• Release Testing
• Testing During Maintenance
• Risks and Ad-Hoc Testing
System Testing Process
• Test Initiation
• Test Planning
• Test Design
• Test Execution
1. Formal Meeting
2. Build Version Control
3. Levels of Test Execution
4. Levels of Test Execution VS Test Cases
5. Level-0(Sanity Testing)
6. Level-1(Comprehensive Testing)
7. Level-2(Regression Testing)
• Test Reporting
• Test Closure
• User Acceptance Testing
• Sign Off
Case Study
Manual Testing VS Automation Testing
WinRunner
Automation Test creation in WinRunner
• Recording Modes
1. Context Sensitive mode
2. Analog mode
• Check Points
1. GUI check point
for single point
for object or window
for multiple objects
2. Bitmap check point
for object or window
for screen area
3. Database check point
default check
custom check
runtime record check
4. Text check point
from object or window
from screen area
• Data Driven Testing
1. From Key Board
2. From Flat Files
3. From Front end Objects
4. From XL Sheets
• Silent mode
• Synchronization point
1. wait
2. for object/window property
3. for object/window bitmap
4. for screen area bitmap
5. Change Runtime settings
• Function Generator
• Administration of WinRunner
1. WinRunner Frame Work
Global GUI Map file
Per Test mode
2. Changes in references
3. GUI Map configuration
4. Virtual object wizard
5. Description Programming
6. Start script
7. Selected applications
• User defined functions
• Compiled Module
• Exception Handling or Recovery Manager
1. TSL exceptions
2. Object exceptions
3. Popup exceptions
• Web Test Option
1. Links coverage
2. Content coverage
3. Web functions
• Batch Testing
• Parameter Passing
• Data Driven Batch Testing
• Transaction Point
• Debugging
• Short/Soft Key Configuration
• Rapid Test Script Wizard(RTSW)
• GUI Spy
QuickTest Professional (QTP)
• Recording modes
1. General recording
2. Analog recording
3. Low level recording
• Check Points
1. Standard check point
2. Bitmap check point
3. Text check point
4. Textarea check point
5. Database check point
6. Accessibility check point
7. XML check point
8. XML check point(File)
VBScript
• Step Generator
• Data Driven Testing
• DDT through Key Board
• DDT through Front end objects
• DDT through XL sheet
• DDT through flat file
Multiple Actions
• Reusable actions
• Parameters
• Synchronization Point
• QTP FrameWork
• Per Test mode
• Export references
• Regular Expressions
• Object Identification
• Smart Identification
• Virtual Object Wizard
Web Test Option
• Links Coverage
• Content coverage
Recovery Scenario Manager
Batch testing
Parameter Passing
With statement
Active Screen
Output value
Call to WinRunner
Advanced Testing Process

MANUAL TESTING

Software Quality

*Meet customer requirements (EX: an ATM).
*Meet customer expectations (EX: speed).
*Reasonable cost to purchase.
*Reasonable time to release.

Quality Assurance and Quality Control

"Monitoring and measuring the strength of the development process is called Software Quality Assurance." It is a process-based concept.
"Testing a deliverable after the completion of a process is called Software Quality Control."
QA is specified as Verification and QC is specified as Validation.

Fish Model(Development VS Testing)

*In this process model, the upper angle indicates the SDLC (Software Development Life Cycle) stages and the lower angle indicates the STLC (Software Testing Life Cycle).
*During the SDLC, organizations follow standards, internal auditing and strategies; this is SQA.
*After completion of every development stage, organizations conduct testing; this is QC.
*QA indicates defect prevention; QC indicates defect detection and correction.
*Finally, the STLC indicates Quality Control.

BRS: The Business Requirements Specification defines the requirements of the customer to be developed as
software.

SRS: The Software Requirements Specification defines the functional requirements to be developed and the
system requirements to be used.

Walk Through: A static testing technique in which the responsible people study a document to estimate its completeness and correctness.

Inspection: Also a static testing technique, used to check a specific factor in the corresponding document.

Peer Review: The comparison of similar documents, also called a point-to-point review.

HLD: The High Level Design document represents the overall view of a software from the root functionality to the leaf functionalities. The HLD is also known as the External or Architectural design.

LLD: The Low Level Design document represents the internal logic of every functionality. This design is also known as the Internal or Detailed design.

Program: A set of executable statements. A software is a set of modules/functionalities/features. One module is a set of dependent programs.

White Box Testing: A program-based testing technique used to estimate the completeness and correctness of the internal program structure.

Black Box Testing: A software-level testing technique used to estimate the completeness and correctness of the external functionality.

NOTE: White Box Testing is also known as Clear Box or Open Box Testing. "The combination of WBT and BBT is called Grey Box Testing."

V-Model(Verification and Validation Model)


This model defines a conceptual mapping between development stages and testing stages (verifying the process and validating the product).

The V-Model defines multiple stages of development with multiple stages of testing. Maintaining separate teams for all stages is expensive for small and medium-scale companies.
*For this reason, small and medium-scale organizations maintain a separate testing team only for System Testing, because this is the bottleneck stage in the software process.

A) Reviews in Analysis
After completion of requirements gathering, the business analysts develop the SRS with the required functional requirements and system requirements. The same people then conduct reviews on those documents to estimate completeness and correctness. In these review meetings they follow walk-throughs, inspections and peer reviews to estimate the factors below.
*Are the requirements complete?
*Are the requirements correct?
*Are the requirements achievable (practically)?
*Are the requirements reasonable (budget and time)?
*Are the requirements testable?

B) Reviews in Design
After completion of analysis and its reviews, the designers prepare the HLDs and LLDs. After completion of designing, the same people conduct a review meeting to estimate completeness and correctness through the factors below.
*Are the designs understandable?
*Are they complete?
*Are they correct?
*Are they followable?
*Do they handle errors?

C) Unit Testing
After completion of analysis and design, the programmers start coding. "The analysis- and design-level reviews are also known as Verification Testing." After completion of verification testing, the programmers write the code and verify every program's internal structure using the WBT techniques below.

1. Basis Path Testing (whether the program is executing or not): The programmers check all executable areas in a program to estimate "whether that program is running or not?". To conduct this testing, the programmers follow the approach below.
*Write the program w.r.t. the design logic (HLDs and LLDs).
*Prepare the flow graph.
*Count the individual paths in that flow graph; this count is called the Cyclomatic Complexity.
*Run that program more than once to cover all the individual paths.
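The path-counting step above can be sketched in code; the function, flow graph and node names below are my own illustration (not from the document), using the common formula V(G) = E - N + 2.

```python
# Sketch of basis path testing for a small hypothetical function.

def grade(score):
    # Program under test: two decisions, so 3 independent paths.
    if score < 0:
        return "invalid"
    if score >= 50:
        return "pass"
    return "fail"

# Flow graph of grade(): decisions and returns as nodes, control flow as
# edges, with a single common exit node.
nodes = ["d1", "ret_invalid", "d2", "ret_pass", "ret_fail", "exit"]
edges = [("d1", "ret_invalid"), ("d1", "d2"),
         ("d2", "ret_pass"), ("d2", "ret_fail"),
         ("ret_invalid", "exit"), ("ret_pass", "exit"), ("ret_fail", "exit")]

v_g = len(edges) - len(nodes) + 2    # Cyclomatic Complexity V(G) = E - N + 2
print(v_g)                           # -> 3

# Run the program once per independent path.
assert grade(-1) == "invalid"
assert grade(70) == "pass"
assert grade(30) == "fail"
```

Running the program three times, once per independent path, is exactly the "run more than once" step in the approach above.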

After completion of basis path testing, the programmers concentrate on the correctness of inputs and outputs using control structure testing.

2. Control Structure Testing: The programmers verify every statement, condition and loop in terms of completeness and correctness of I/O (example: debugging).

3. Program Technique Testing: The programmer calculates the execution time of the program. If the execution time is not reasonable, the programmer changes the structure of the program without disturbing its functionality.

*4. Mutation Testing: A mutation is a change in a program. Programmers make changes in an already tested program to estimate the completeness and correctness of that program's testing.
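A minimal sketch of the mutation idea, with a hypothetical function and a hand-made mutant: if the existing tests still pass on the mutant, the original testing was incomplete.

```python
# Sketch of mutation testing (function and suites are my own illustration).

def original(a, b):
    return a + b

def mutant(a, b):            # mutation: '+' changed to '-'
    return a - b

def weak_suite(f):
    # Passes for both versions, so it does not detect ("kill") the mutant.
    return f(0, 0) == 0

def strong_suite(f):
    # Fails for the mutant, so it kills it: the tests check real behavior.
    return f(2, 3) == 5

print(weak_suite(original), weak_suite(mutant))      # True True  (mutant survives)
print(strong_suite(original), strong_suite(mutant))  # True False (mutant killed)
```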

D) Integration Testing
After completion of dependent-program development and unit testing, the programmers interconnect the programs to form a complete system and then check the completeness and correctness of those interconnections. Integration testing is also known as Interface Testing. There are 4 approaches to interconnect programs and test the interconnections.

1. Top-Down Approach: The programmers interconnect the main module and the completed sub-modules without using the under-construction sub-modules. In place of an under-construction sub-module, the programmers use a temporary or alternative program called a Stub. Stubs are also known as Called Programs, because the main module calls them.

2. Bottom-Up Approach: The programmers interconnect the completed sub-modules without the main module, which is under construction. In place of the main module, the programmers use a temporary program called a Driver or Calling Program.

3. Hybrid Approach: A combination of the top-down and bottom-up approaches, also called the Sandwich Approach.

4. System Approach: The programmers integrate all modules in one go, after all development and unit testing is complete. This approach is known as the Big Bang approach.
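The stub and driver ideas above can be sketched as follows; all module names and values are hypothetical.

```python
# --- Top-down: the main module is ready, a sub-module is not. ---
def tax_stub(amount):
    # Stub: stands in for the under-construction tax sub-module,
    # returning a fixed, predictable value.
    return 10.0

def billing_main(amount, tax_fn):
    # Completed main module; calls whatever tax function it is given.
    return amount + tax_fn(amount)

assert billing_main(100.0, tax_stub) == 110.0   # main tested via the stub

# --- Bottom-up: a sub-module is ready, the main module is not. ---
def real_tax(amount):
    # Completed sub-module under test.
    return amount * 0.1

def driver():
    # Driver: temporary calling program replacing the missing main module.
    return real_tax(200.0)

assert driver() == 20.0                          # sub-module tested via the driver
```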

E) System Testing
After completion of all required module integration, the development team releases a software build to the separate testing team in the organization. The software build is also known as the Application Under Test (AUT). System testing is classified into 3 levels: Usability Testing, Functional Testing (using Black Box Testing techniques) and Non-Functional Testing (an expensive level of testing).

1. Usability Testing
(Appearance of a bike.) In general, the separate testing team starts test execution with usability testing to estimate the user-friendliness of the software build. Test engineers apply the sub-tests below.

a) User Interface Testing: Checks whether every screen in the application build provides
*Ease of use (understandability).
*Look and feel (attractiveness).
*Speed of interface (short navigations).

b) Manual Support Testing: Along with the software, the organization releases user manuals. Before the software release, the separate testing team validates those user manuals in terms of completeness and correctness.

Case study

2. Functional Testing
After completion of user interface testing on the responsible screens of the application build, the separate testing team concentrates on the completeness and correctness of the requirements in that build. In this testing, the team uses a set of Black Box Testing techniques such as Boundary Value Analysis, Equivalence Class Partitioning and Error Guessing. Functional testing is classified into 2 sub-tests.

a) Functionality Testing: Test engineers validate the completeness and correctness of every functionality. This is also known as Requirements Testing. The team validates the correctness of every functionality through the coverages below.
*GUI coverage or behavioral coverage (valid changes in the properties of objects and windows in the application build).
*Error-handling coverage (prevention of wrong operations with meaningful error messages, like displaying a message before closing a file without saving it).
*Input domain coverage (the validity of input values in terms of size and type, like giving alphabets to an age field).
*Manipulations coverage (the correctness of outputs or outcomes).
*Order of functionalities (the existence and ordering of functionality w.r.t. the customer requirements).
*Back-end coverage (the impact of front-end screen operations on back-end table content for the corresponding functionality).

NOTE: The above coverages are applied to every functionality in the application build with the help of Black Box Testing techniques.
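As an illustration of the input domain coverage above, here is a sketch of Boundary Value Analysis and Equivalence Class Partitioning for a hypothetical age field accepting 18 to 60 inclusive (the field and its range are my own assumption).

```python
def accepts_age(value):
    # Hypothetical validation logic under test.
    return isinstance(value, int) and 18 <= value <= 60

# BVA: test at and just beyond each boundary.
bva_cases = {17: False, 18: True, 19: True, 59: True, 60: True, 61: False}

# ECP: one representative per class (valid class, two invalid classes,
# plus a wrong-type input).
ecp_cases = {40: True, 5: False, 99: False, "abc": False}

for value, expected in {**bva_cases, **ecp_cases}.items():
    assert accepts_age(value) == expected
print("all boundary and partition cases pass")
```

Six boundary values plus one value per partition cover the field far more cheaply than testing every possible age.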

b) Sanitation Testing: Also known as Garbage Testing. The separate testing team detects extra functionalities in the software build w.r.t. the customer requirements (like an extra sign-in link inside the sign-in page itself).

NOTE: Defects in software are of 3 types: Mistakes, Missing and Extra.

3) Non-Functional Testing
This is also a mandatory testing level in the system testing phase, but it is expensive and complex to conduct. In this testing, the team concentrates on the characteristics of the software.

*a) Recovery/Reliability Testing: Test engineers validate whether the software build changes from an abnormal state back to a normal state or not.

b) Compatibility Testing: Also known as Portability Testing (like a friend's game CD not working in your system; "adjust anywhere"). Test engineers validate whether the software build is able to run on the customer's expected platforms or not. A platform means the OS, compilers, browsers and other system software.

c) Configuration Testing: Also known as hardware compatibility testing. The testing team validates whether the software build supports devices of different technologies or not (for example, different types of printers, different types of networks, etc.).

d) Inter-System Testing: Also known as End-to-End Testing. The testing team validates whether the software build co-exists with other software or not (to share common resources).
EX: E-Server

e) Installation Testing: Validates the setup of the software build in the customer environment. The order of checks is important: INITIATION, DURING and AFTER installation.

f) Data Volume Testing: Also known as Storage Testing, Memory Testing or Mass Testing. The testing team calculates the peak limit of data handled by the software build (EX: hospital software).
EX: MS Access technology-oriented software builds support 2 GB of data as a maximum.

g) Load Testing: Also known as Performance or Scalability Testing. Load or scale means the number of concurrent users (at the same time) operating the software. Executing the software build under the customer-expected configuration and the customer-expected load to estimate performance is load testing (the inputs are the customer-expected configuration and load; the output is the performance). Performance means the speed of processing.
h) Stress Testing: Executing the software build under the customer-expected configuration and various load levels to estimate stability or continuity is called Stress Testing or Endurance Testing.

i) Security Testing: Also known as Penetration Testing. The testing team validates authorization, access control and encryption/decryption. Authorization indicates the validity of users to the software, like a student entering a class. Access control indicates the authorities of valid users to use features, functionalities or modules in the software, like a student having only limited resources to use after entering. Encryption and decryption procedures prevent third-party access.

NOTE: In general, the separate testing team covers authorization and access-control checking; the development people themselves cover encryption/decryption checking.

F) User Acceptance Testing

After completion of all reasonable tests, project management concentrates on UAT to gather feedback from real customers or model customers. Developers and testers are also involved in this testing. There are 2 ways to conduct UAT (the purpose of both is to gather customer feedback):
*Alpha Testing
*Beta Testing

Alpha Testing                              Beta Testing
At the development site (like a tailor)    At a model customer site (like MS OS s/w)
By real customers                          By model customers
Suitable for applications                  Suitable for products
Developers and testers are involved        Developers and testers are involved

G) Release Testing
After completion of UAT and the resulting modifications, the project manager defines a release or delivery team with a few developers, a few testers and a few hardware engineers. This release team goes to the responsible customer site and conducts Release Testing, also called Port Testing or Green Box Testing. The release team observes the factors below at the customer site.
*Compact installation (fully installed or not).
*Overall functionality.
*Input device handling (keyboard, mouse, etc.).
*Output device handling (monitor, printer, etc.).
*Secondary storage device handling (CD drive, hard disk, floppy, etc.).
*OS error handling (reliability).
*Co-existence with other software applications.
After completion of port testing, the release team conducts TRAINING SESSIONS for the end users or customer-site people.

H) Testing During Maintenance

After completion of release testing, the customer-site people use the software for its required purposes. During this utilization, they send change requests to the company. The team responsible for handling such changes is called the CCB (Change Control Board). It consists of the project manager, a few developers, a few testers and a few hardware engineers. The team receives 2 types of change requests: enhancements and missed/latent defects.

Case study:

Testing phase/level/stage        Responsible            Testing technique
In analysis                      Business analyst       Walk-throughs, inspections and peer reviews
In design                        Designer               Walk-throughs, inspections and peer reviews
Unit testing                     Programmer             White box testing
Integration/interface testing    Programmer             Top-down, bottom-up, hybrid and system approaches
System testing (usability,       Testing team           Black box testing
 functional and non-functional)
UAT                              Real/model customers   Alpha and beta testing
Release testing                  Release team           Port testing factors
Testing during maintenance       CCB                    Test software changes (regression testing)

Risks and Ad-Hoc Testing

Sometimes organizations are not able to conduct planned testing. Due to certain risks, testing teams conduct ad-hoc testing instead of planned testing. There are different styles of ad-hoc testing.
*Monkey or Chimpanzee Testing: Due to lack of time, the testing team covers only the main activities of the software's functionalities.
*Buddy Testing: Due to lack of time, developers and testers are grouped as buddies. Every buddy pair consists of a developer and a tester so that development and testing continue in parallel.
*Exploratory Testing: In general, the testing team tests w.r.t. the available documents. Due to lack of documentation, the team instead depends on past experience, discussions with others, similar projects and internet browsing. This style of testing is exploratory testing.
*Pair Testing: Due to lack of skills, junior test engineers are grouped with senior test engineers to share knowledge.
*Debugging Testing: To estimate the efficiency of the testing people, the development team releases a build to the testing team with known defects seeded in it.
The above ad-hoc testing styles are also known as INFORMAL TESTING TECHNIQUES.

System Testing Process

In general, small and medium-scale organizations maintain a separate testing team only for the system testing stage, because this stage is the bottleneck stage in software development.

Development process VS System Testing process

Test Initiation
In general, the system testing process starts with test initiation or test commencement. In this stage, the project manager or test manager selects a reasonable approach or methodology to be followed by the separate testing team. This approach or methodology is called the Test Strategy.
The Test Strategy document consists of below components.
1. Scope and Objective: The importance of testing in this project.
2. Business Issues: The cost and time allocation for testing (100% cost = development & maintenance + 36% for testing).
3. Test Approach: Selected list of reasonable testing factors or issues w.r.t the requirements in project, scope
of requirements and risks involved in testing.
4. Roles and Responsibilities: The names of jobs in testing team and their responsibilities.
5. Communication & Status reporting: Required negotiations in b/w every 2 consecutive jobs in testing team
6. Test Automation and Tools: The importance of Test Automation in this project testing and the names of
available testing tools in our organization.
7. Defect Reporting and Tracking: The required negotiation in b/w developers and testers to report and to
resolve defects.
8. Testing Measurements and Metrics: Selected lists of measures and metrics to estimate testing process.
9. Risks and Assumptions: A list of expected future risks and the solutions to overcome them.
10. Change and Configuration management: The management of deliverables related to s/w development and
testing.
11. Training Plan: The required number of training sessions before starting current project testing process by
testing team.

Test Factors or Issues:

To define a quality software, there are 15 factors or issues.
1. Authorization: The validity of users to connect to that s/w. (Security testing)
EX: login with password, digital signatures etc.
2. Access Control: The permissions of users to use functionalities in a s/w.(Security testing)
EX: Admin user performs all functionalities and general users perform some of functionalities.
3. Audit Trail: The correctness of data about data (metadata). (Functionality testing)
4. Data Integrity: Correctness of taking inputs (Functionality testing).
EX: Testing AGE object inputs
5. Correctness: The correctness of Output or Outcome.(Functionality testing)
EX: Successful Mailbox opens after Login
6. Ease of use: User friendliness (Usability testing and manual support testing)
EX: Color, font, alignment, etc.
7. Ease of Operation: Easy to maintain in the customer environment (Installation testing)
EX: Installation, Uninstallation, Downloading, etc
8. Portable: Run on different platforms (Compatibility testing and configuration testing)
EX: Java products are running on Windows and UNIX.
9. Performance: Speed of processing (Load, Volume and Stress testing)
EX: 3 seconds is a performance of a website for link operation.
10. Reliability: Recover from Abnormal situation (Recovery and Stress testing)
EX: Backup of database in a s/w
11. Coupling: Co-Existence with other s/w application to share common resources (Inter System testing)
EX: The Bank account s/w database is shareable to loan system s/w in a bank.
12. Maintainable: Whether the software is long-time serviceable to customer-site people or not (Compliance testing)
EX: Complan food
13. Methodology: Whether the project team is following specified standards or not(Compliance testing)
14. Service Levels: Order of functionalities or features or services (Functionality and stress testing)
EX: After login the s/w is providing mailing facility to users
15. Continuity of processing: Means Inter process communication (Integration testing by developers)
Case Study:
  15   test factors for a quality s/w
 - 4   (not related to the requirements)
 ----
  11
 + 2   (scope of requirements)
 ----
  13
 - 4   (risks)
 ----
   9   (finalized factors to be applied)

In the above example, 9 test factors or issues are finalized by the project manager, to be applied by the testing team in the current project's system testing.

Test Planning
After preparation of the test strategy document with the required details, the test-lead category people define the test plan in terms of what to test, how to test, when to test and who is to test.
In this stage, the test lead prepares the system test plan and then divides that plan into module test plans (a master test plan into detailed test plans). The test lead follows the approach below to prepare the test plans.

a) Testing Team Formation: In general, the test planning process starts with testing team formation by the test lead. In this formation the test lead depends on the factors below.
*Project size (EX: number of function points).
*Availability of test engineers.
*Available test duration.
*Availability of test environment resources (EX: testing tools).

b) Identify Tactical Risks: After completion of testing team formation, the test lead analyzes the possible risks w.r.t. the team. Example risks are
*Lack of knowledge on project requirement domain
*Lack of time
*Lack of resources
*Delays in Delivery
*Lack of Documentation
*Lack of development process seriousness
*Lack of communication

c) Prepare Test Plans: After completion of testing team formation and risk analysis, the test lead concentrates on developing the master test plan and the detailed test plans. Every test plan document follows a fixed format, IEEE 829 (Institute of Electrical and Electronics Engineers). This IEEE 829 standard is specially designed for test documentation. The format is
1. Test Plan Id: Unique number or name for future reference.
2. Introduction: About project.
3. Test Items: Names of all modules or features
4. Features to be tested: The names of modules or features to test.
5. Features not to be tested: The names of modules or features which are already tested.
Items 3, 4 and 5 indicate what to test.
6. Tests to be applied: The selected list of testing techniques to be applied (From Test Strategy of
Project manager)
7. Test Environment: Required h/w and s/w including testing tools.
8. Entry Criteria: When the test engineers can start test execution to detect defects in the software build.
*prepared all valid test cases
*Establishment of test environment
*received stable build from developers
9. Suspension Criteria: When the test engineers are interrupting test execution
*Test environment is not working
*High severe bug or show stopper problem detected
*Pending defects are not serious but more (called quality gap)
10. Exit Criteria: When the test engineers are stopping test execution
*All major bugs are resolved
*All modules or features tested
*crossed scheduled time
11. Test Deliverables: The names of testing documents to be prepared by test engineers
*test scenarios
*Test case documents
*test logs
*Defect logs
*Summary reports
6 to 9 indicates How to test
12. Staff & Training needs: The selected names of test engineers and required no of training sessions
13. Responsibilities: The work allocation in terms of test engineers VS requirements or test
engineers
VS testing techniques.
Items 12 and 13 indicate who is to test.
14. Schedule: Dates and Time. It indicates When to Test
15. Risks and Assumptions: Previously analyzed list of risks and their assumptions
16. Approvals: The signatures of test lead and project manager or test manager.

d) Review Test Plan: After completion of the master and detailed test plans, the test lead reviews the documentation for completeness and correctness. In that review meeting the test lead depends on the following factors:
*Requirements-oriented plan review.
*Testing-techniques-oriented plan review.
*Risks-oriented plan review.
After completion of this review, project management conducts training sessions for the selected test engineers. In this training period, management invites subject experts or domain experts to share their knowledge with the engineers.

Test Design

After completion of the required training, the responsible test engineers concentrate on test case preparation. Every test case defines a unique test condition to be applied to the software build. There are 3 methods to prepare test cases:
*Functional and system specification-based test case design.
*Use-case-based test case design.
*User interface or application-based test case design.

1. Functional and System Specification-Based: In general, most test engineers prepare test cases depending on the functional and system specifications in the SRS.

From the above model, test engineers study all responsible functional and system specifications to prepare test cases.
Approach:
Step 1: Gather all responsible functional and system specifications from the SRS (available in the configuration repository).
Step 2: Select one specification and its dependencies.
Step 3: Study that specification and identify the base state, inputs required, outputs or outcomes, normal flow, end state, alternative flows and exceptions.
Step 4: Prepare test case titles or scenarios.
Step 5: Review those titles and then prepare the test case documents.
Step 6: Go to Step 2 until all specifications are studied and all test cases prepared.

Specification1:
A login process takes a user id and password to authorize users. The user id accepts lowercase alphanumerics, 4 to 16 characters long. The password object accepts lowercase alphabets, 4 to 8 characters long. Prepare test case scenarios or titles.

Test case title1: check userid

Test case title2: check password


Test case title3: check login operation
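The titles above can be turned into concrete boundary-value data; the validator functions below are my own reading of the stated rules, not part of the specification.

```python
import re

def valid_userid(s):
    # Lowercase alphanumerics, 4 to 16 characters.
    return re.fullmatch(r"[a-z0-9]{4,16}", s) is not None

def valid_password(s):
    # Lowercase alphabets, 4 to 8 characters.
    return re.fullmatch(r"[a-z]{4,8}", s) is not None

# User id length boundaries: 3 (reject), 4 and 16 (accept), 17 (reject).
assert not valid_userid("ab1")
assert valid_userid("abc1")
assert valid_userid("a" * 16)
assert not valid_userid("a" * 17)
assert not valid_userid("ABCD")        # wrong type: uppercase

# Password length boundaries: 3, 4, 8, 9.
assert not valid_password("abc")
assert valid_password("abcd")
assert valid_password("a" * 8)
assert not valid_password("abcd1234")  # wrong type: digits not allowed
print("all login boundary cases pass")
```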

Specification2:
In an insurance application, users can apply for different types of insurance policies. When a user applies for Type A insurance, the system asks for the user's age. The age value should be greater than 16 years and less than 70 years. Prepare test case titles or scenarios.
Test case title1: check Type A selection as the insurance type.
Test case title2: check focus moves to age after selection of Type A.
Test case title3: check the age value.
Specification3:
In a shopping application, users can place purchase orders for different types of items. Every purchase order allows the user to select an item number and enter a quantity up to 10. Every purchase order returns the single-item price and the total amount. Prepare test case titles or scenarios.
Test case title1: check item number selection.
Test case title2: check quantity value.

Test case title3: check return values using Total amount = Price * Quantity.
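Title 3's manipulation check can be sketched as follows; the price table and order function are my own illustration.

```python
PRICES = {101: 25.0, 102: 40.0}       # hypothetical item number -> unit price

def place_order(item_number, quantity):
    if item_number not in PRICES:
        raise ValueError("unknown item")
    if not 1 <= quantity <= 10:
        raise ValueError("quantity must be 1..10")
    price = PRICES[item_number]
    return price, price * quantity    # (single-item price, total amount)

# Title 3: Total amount = Price * Quantity.
price, total = place_order(101, 4)
assert total == price * 4 == 100.0

# Title 2: boundary values for quantity (10 accepted, 11 rejected).
assert place_order(102, 10)[1] == 400.0
try:
    place_order(102, 11)
    assert False, "quantity 11 should be rejected"
except ValueError:
    pass
print("purchase order cases pass")
```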

Specification4:
A door opens when a person comes in front of it and closes when that person goes inside. Prepare test case titles or scenarios.
Test case title1: check door open.
Test case title2: check door closed.
Test case title3: check the door operation when a person is standing in the middle of the doorway.

*Specification5:
In an e-banking application, users connect to the bank server using an internet connection. Users fill in the fields below to log in to the bank server.
Password - 6-digit number.
Area code - 3-digit number, optional.
Prefix - 3-digit number that does not start with 0 or 1.
Suffix - 6-character alphanumeric.
Commands - cheque deposit, money transfer, mini statement and bills pay.
Prepare test case titles or scenarios.
Test case title1: check password.

Test case title2: check area code.

Test case title3: check prefix.

Test case title4: check suffix.


Test case title5: check command selection.
Test case title6: check login operation to connect to bank server.
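The field rules above can be expressed as validators; the regular expressions below are my own reading of the stated rules, not part of the specification.

```python
import re

RULES = {
    "password": r"\d{6}",          # 6-digit number
    "area_code": r"(\d{3})?",      # 3-digit number, optional (may be empty)
    "prefix": r"[2-9]\d{2}",       # 3 digits, not starting with 0 or 1
    "suffix": r"[a-z0-9]{6}",      # 6-character alphanumeric
}

def valid(field, value):
    return re.fullmatch(RULES[field], value) is not None

assert valid("password", "123456") and not valid("password", "12345")
assert valid("area_code", "") and valid("area_code", "040")
assert valid("prefix", "200") and not valid("prefix", "100")
assert valid("suffix", "ab12cd") and not valid("suffix", "ab12c")
print("all field cases pass")
```

Each rule then yields its own boundary cases (length, leading digit, optionality) for titles 1 to 4.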

Specification6:
Prepare test case titles or scenarios for a computer shutdown operation.
Test case title1: check Shutdown option selection using the Start menu.
Test case title2: check Shutdown option selection using Alt+F4.
Test case title3: check Shutdown operation using the command prompt.
Test case title4: check Shutdown operation using the Shutdown option in the Start menu.
Test case title5: check Shutdown operation while a process is running.
Test case title6: check Shutdown operation using the power-off button.

Specification7:
Prepare test case titles for washing machine operations.
Test case title1: check power supply to the washing machine.
Test case title2: check door open.
Test case title3: check water filling.
Test case title4: check clothes filling.
Test case title5: check door closed.
Test case title6: check door closed when clothes overflow.
Test case title7: check selection of the washing setting.
Test case title8: check washing operation.
Test case title9: check washing operation with improper power supply.
Test case title10: check operation when the door is opened in the middle of the process (Security testing).
Test case title11: check operation when water leaks from the door (Security testing).
Test case title12: check operation with clothes overload (Stress testing).
Test case title13: check with improper settings.
Test case title14: check with any machinery problem.

Specification8:
Money withdrawal from an ATM, with all rules and regulations.
Test case title1: check ATM card insertion.
Test case title2: check operation when the card is inserted the wrong way.
Test case title3: check operation with an invalid card (e.g. another bank's card, an expired card, a scratched card).
Test case title4: check PIN entry.
Test case title5: check operation when the wrong PIN is entered 3 times consecutively.
Test case title6: check language selection.
Test case title7: check account type selection.
Test case title8: check operation when the wrong account type is selected for the inserted card.
Test case title9: check withdrawal option selection.
Test case title10: check amount entry.
Test case title11: check operation when the amount is entered in wrong denominations (EX: withdrawal of Rs 999).
Test case title12: check withdrawal operation success (correct amount received, right receipt printed and card
returned).
Test case title13: check withdrawal operation when the requested amount is greater than the available balance.
Test case title14: check withdrawal operation when the ATM is short of cash.
Test case title15: check withdrawal operation when the ATM has a machinery or network problem.
Test case title16: check withdrawal operation when the requested amount is greater than the card's daily limit.
Test case title17: check withdrawal operation when the current transaction exceeds the number of transactions
allowed per day.
Test case title18: check withdrawal operation when Cancel is clicked after card insertion.
Test case title19: check withdrawal operation when Cancel is clicked after PIN entry.
Test case title20: check withdrawal operation when Cancel is clicked after language selection.
Test case title21: check withdrawal operation when Cancel is clicked after account type selection.
Test case title22: check withdrawal operation when Cancel is clicked after withdrawal option selection.
Test case title23: check withdrawal operation when Cancel is clicked after amount entry.

NOTE: After completing the required test case titles or scenarios, test engineers prepare test case documents
with all the required details.

Test case document format IEEE829:


1) Test case id: a unique number or name for future reference.
2) Test case title or name: the previously selected test case title or scenario.
3) Feature to be tested: the name of the module, function or service.
4) Test suite id: the name of the test batch, which consists of a set of dependent test cases including the current one.
5) Priority: the importance of the test case.
*p0 for functional test cases.
*p1 for non-functional test cases.
*p2 for usability test cases.
6) Test environment: the h/w and s/w required to run this test case on the build.
7) Test effort: the expected time to run this test case on the build (person-hours; approximately 20 min).
8) Test duration: the expected date and time schedule for this test case.
9) Test setup or precondition: the tasks necessary before starting this case's execution on the application build
(EX: first register in order to login).
10) Test procedure or data matrix: test procedure is

Data matrix format is

11) Test case pass or fail criteria: the final result of the test case after execution on the build or AUT.

NOTE: In the above test case format, test engineers prepare a test procedure when the test case covers an
operation, and a data matrix when the test case covers an object (one taking inputs).

NOTE: In general, test engineers do not fill in every field of this lengthy format. To save time, they fill in some
of the fields and keep the remaining values in mind.

NOTE: In general, test engineers prepare test case documents in MS Excel or in an available test management
tool (like TestDirector).
Specification9:
A login process authorizes users using a userid and password. The userid object allows alphanumerics
in lower case, 4 to 16 characters long. The password object allows alphabets in lower case, 4 to 8
characters long. Prepare test case documents.
Document1:
*test case id: TC_Login_Sri_14_11_06_1.
*test case name: check userid.
*test suit id: TS_Login.
*priority: p0.
*test setup: userid object is taking inputs.
*data matrix:

Document2:
*test case id: TC_Login_Sri_14_11_06_2.
*test case name: check password.
*test suit id: TS_Login.
*priority: p0.
*test setup: password object is taking inputs.
*data matrix:

Document3:
*test case id: TC_Login_Sri_15_11_06_3.
*test case name: check login operation.
*test suit id: TS_Login.
*priority: p0.
*test setup: valid and invalid userid and password object values given.
*test procedure:
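The empty data matrices of Documents 1 and 2 can be filled mechanically with boundary value analysis on Specification9's rules. A minimal sketch (the helper names are hypothetical, not part of the application):

```python
import re

def userid_ok(value):
    # Specification9: 4-16 lowercase alphanumerics.
    return re.fullmatch(r"[a-z0-9]{4,16}", value) is not None

def password_ok(value):
    # Specification9: 4-8 lowercase alphabets.
    return re.fullmatch(r"[a-z]{4,8}", value) is not None

# Boundary rows for the data matrix: min-1, min, max, max+1, wrong type.
assert not userid_ok("abc")       # 3 chars: invalid
assert userid_ok("abcd")          # 4 chars: valid
assert userid_ok("a" * 16)        # 16 chars: valid
assert not userid_ok("a" * 17)    # 17 chars: invalid
assert not password_ok("PASS")    # upper case: invalid
```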

Specification10:
In a bank application, bank employees create fixed deposit forms from data given by customers. The fixed
deposit form takes the values below from bank employees.
Depositor name: alphabets in lower case with the initial letter capital.
Amount: 1500-100000.
Tenure: up to 12 months.
Interest: numeric with one decimal.
In this fixed deposit operation, if the tenure > 10 months then the interest must also be greater than 10%.
Prepare test case documents.
Document1:
*test case id: TC_FD_Sri_15_11_06_1.
*test case name: check depositor name.
*test suit id: TS_FD.
*priority: p0.
*test setup: depositor name object is taking inputs.
*data matrix:

Document2:
*test case id: TC_FD_Sri_15_11_06_2.
*test case name: check amount.
*test suit id: TS_FD.
*priority: p0.
*test setup: amount is taking inputs.
*data matrix:

Document3:
*test case id: TC_FD_Sri_15_11_06_3.
*test case name: check tenure.
*test suit id: TS_FD.
*priority: p0.
*test setup: tenure is taking inputs.
*data matrix:

Document4:
*test case id: TC_FD_Sri_15_11_06_4.
*test case name: check interest.
*test suit id: TS_FD.
*priority: p0.
*test setup: interest is taking inputs.
*data matrix:

Document5:
*test case id: TC_FD_Sri_15_11_06_5.
*test case name: check fixed deposit operation.
*test suit id: TS_FD.
*priority: p0.
*test setup: valid and invalid values are available in hand.
*test procedure:

Document6:
*test case id:TC_FD_Sri_15_11_06_6.
*test case name:check tenure and interest rule.
*test suit id:TS_FD.
*priority:p0.
*test setup:valid and invalid values are available.
*test procedure:
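The tenure/interest business rule checked by Document6 can be expressed directly; a minimal sketch (the function name is an assumption):

```python
def fd_rule_ok(tenure_months, interest_pct):
    """Specification10's rule: if tenure > 10 months,
    the interest must also be greater than 10%."""
    if tenure_months > 10:
        return interest_pct > 10.0
    return True  # rule places no constraint for tenures of 10 months or less
```

A pass/fail row in the test procedure corresponds to one call: valid pairs return True, rule violations return False.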

Specification11:
Readers Paradise is a library management system. This s/w allows new users through registration. During
registration, the s/w takes details from the user and then returns (o/p) a person identity number like
RP_Date_XXXX (EX: RP_15_11_06_1111). Fields in the registration form:
User name: alphabets in capitals.
Address: street name (alphabets), city name (alphabets) and pin code (numerics).
DOB: day, month and year, valid as a date (the / separator is inserted automatically in this project).
e-mail id: valid ids, optional (userid@sitename.sitetype).
userid: 1-256 characters and 0-9.
sitename: 1-256 characters and numbers 0-9.
sitetype: 1-3 characters.
Prepare test case documents.
Document1:
*test case id:TC_RP_Sri_15_11_06_1.
*test case name:check user name.
*test suit id:TS_RP.
*priority:p0.
*test setup:user name object is taking inputs.
*data matrix:

Document2:
*test case id:TC_RP_Sri_15_11_06_2.
*test case name:check street name.
*test suit id:TS_RP.
*priority:p0.
*test setup:street object is taking inputs.
*data matrix:

Document3:
*test case id:TC_RP_Sri_15_11_06_3.
*test case name:check city name.
*test suit id:TS_RP.
*priority:p0.
*test setup:city name object is taking inputs.
*data matrix:

Document4:
*test case id:TC_RP_Sri_15_11_06_4.
*test case name:check pincode.
*test suit id:TS_RP.
*priority:p0.
*test setup:pincode object is taking inputs.
*data matrix:
Document5:
*test case id:TC_RP_Sri_15_11_06_5.
*test case name:check date.
*test suit id:TS_RP.
*priority:p0.
*test setup:date object is taking inputs.
*data matrix:

Decision table:
Day   | Month                | Year
01-31 | 01,03,05,07,08,10,12 | 00-99
01-30 | 04,06,09,11          | 00-99
01-28 | 02                   | 00-99
01-29 | 02                   | leap years within 00-99
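The decision table can be executed directly as a validity check. A sketch, assuming the two-digit years 00-99 map to 2000-2099 (so leap years are exactly those divisible by 4):

```python
def day_valid(day, month, year):
    """Validate a day against the date decision table.
    `year` is the two-digit value 00-99; leap years are assumed
    to be those divisible by 4 within 2000-2099."""
    if month in (1, 3, 5, 7, 8, 10, 12):
        return 1 <= day <= 31
    if month in (4, 6, 9, 11):
        return 1 <= day <= 30
    if month == 2:
        limit = 29 if year % 4 == 0 else 28
        return 1 <= day <= limit
    return False  # invalid month
```

Each decision table row becomes one branch, so data matrix rows can be checked against the function.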
Document6:
*test case id:TC_RP_Sri_16_11_06_6.
*test case name:check e-mail id.
*test suit id:TS_RP.
*priority:p0.
*test setup:e-mail object is taking inputs.
*data matrix:
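The e-mail id rule (userid@sitename.sitetype) can be sketched as a pattern check. The pattern below is derived only from Specification11's character counts, not from any general e-mail standard:

```python
import re

# userid: 1-256 chars incl. digits; sitename: 1-256 chars incl. digits;
# sitetype: 1-3 characters. An assumption drawn from Specification11 only.
EMAIL = re.compile(r"[a-z0-9]{1,256}@[a-z0-9]{1,256}\.[a-z]{1,3}", re.IGNORECASE)

def email_ok(value):
    return EMAIL.fullmatch(value) is not None
```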

Document7:
*test case id:TC_RP_Sri_16_11_06_7.
*test case name:check registration.
*test suit id:TS_RP.
*priority:p0.
*test setup:all valid and invalid values available in hand.
*test procedure:

2.Usecase based test case design: Usecases are more elaborate than the functional and system specifications
in the SRS. In usecase-oriented test case design, test engineers do not rely on their own assumptions.

From the above model, a usecase defines how to use a functionality and every test case defines how to test a
functionality; every test case is derived from a usecase. Depending on the agreement, the responsible development
team management or the responsible testing team management develops the usecases from the functional and
system specifications in the SRS.

Usecase format:
1)usecase id: a unique number or name.
2)usecase description: the summary of the requirement.
3)actors: the types of users who access this requirement in the application build.
4)preconditions: tasks necessary before starting this requirement's functionality.
5)event list: a step by step procedure with required inputs and expected outputs.
6)post conditions: tasks necessary after completion of this requirement's functionality.
7)flow diagram: a pictorial presentation of the requirement's functionality.
8)prototype: a sample screen to illustrate the requirement's functionality.
9)business rules: a list of rules and regulations, if applicable.
10)alternative flows: a list of alternative events that achieve the same requirement functionality, if any.
11)dependent usecases: a list of usecases related to this usecase.
Depending on usecases formatted as above, test engineers prepare test cases without any assumptions of their
own, because the usecases provide all the details about the corresponding requirement functionality.

Usecase1:
*usecase id: UC_Login.
*usecase desc: a login process allows user id and password to authorize users.
*actors: registered users (they have a valid id and password).
*pre conditions: every user registers before going to login.
*event list: activate the login window.
enter the user id as alphanumerics in lower case, 4 to 16 characters long.
enter the password as alphabets in lower case, 4 to 8 characters long.
click the SUBMIT button.
*post conditions: the mail box opens after a successful login; an error message appears for an unsuccessful login.
*flow diagram:

*prototype:

*business rules: none.


*alternative flows: none.
*dependent usecase: new user registration and mailbox open.

Prepare test case documents.


Document1:
*test case id: TC_Login_Sri_16_11_06_1.
*test case name: check userid.
*test suit id: TS_Login.
*priority: p0.
*test setup: userid object is taking inputs.
*data matrix:

Document2:
*test case id: TC_Login_Sri_16_11_06_2.
*test case name: check password.
*test suit id: TS_Login.
*priority: p0.
*test setup: password object is taking inputs.
*data matrix:

Document3:
*test case id: TC_Login_Sri_16_11_06_3.
*test case name: check login.
*test suit id: TS_Login.
*priority: p0.
*test setup: login form is verified.

*test procedure:
Step no | Task/Event            | Required input  | Expected output
1       | Activate login window | None            | Userid and pwd are empty by default
2       | Enter userid and pwd  | Userid and pwd  | SUBMIT button enabled
3       | Click SUBMIT          | valid & valid   | Mail box opened
        |                       | valid & invalid | Error message
        |                       | invalid & valid | Error message
        |                       | valid & blank   | Error message
        |                       | blank & value   | Error message

Usecase2:
*use case id: UC_Book_Issue.
*usecase desc: the administrator opens the book issue form and enters a book id to check availability; if
available, issues the book to a valid user.
*actors: administrators and valid users.
*pre conditions: the administrator and the user should be registered.
*event list: check for the book by entering the book id in the bookid field and clicking GO (EX: RP_XXXX).
Check the availability in the message window displayed on clicking GO.
For a given user id, verify whether the user is valid or not (EX: RP_Date_XXXX).
If the user is valid, a message window is displayed on clicking GO.
Issue the book by clicking the Issue button.
If the book is not available or the user is not valid, click Cancel.
*post condition: issue the book.
*flow diagram:

*prototype:

*business rules: none.


*alternative flows: none.
*dependent usecase: user registration.

Prepare test case documents.


Document1:
*test case id: TC_BookIssue _Sri_16_11_06_1.
*test case name: check book id format.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: book id object is taking inputs.
*data matrix:

Document2:
*test case id: TC_BookIssue _Sri_17_11_06_2.
*test case name: check GO for availability verification.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid and invalid book ids are available in hand.
*test procedure:
Step no | Task/Event                | Required input     | Expected output
1       | activate BookIssue window | none               | bookid object focused
2       | enter bookid and click GO | available bookid   | message as Available
        |                           | unavailable bookid | message as Unavailable
Document3:
*test case id: TC_BookIssue _Sri_17_11_06_3.
*test case name: check user id value.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: user id object takes some value.
*test matrix:

Document4:
*test case id: TC_BookIssue _Sri_17_11_06_4.
*test case name: check user id validation by GO.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid and invalid user ids are available in hand.
*test procedure:
Step no | Task/Event                 | Required input   | Expected output
1       | activate BookIssue window  | none             | bookid object focused
2       | enter bookid and click GO  | available bookid | message as Available and focus to user id
3       | enter user id and click GO | valid id         | message as Issue book permitted
        |                            | invalid id       | message as Not permitted; Cancel
Document5:
*test case id: TC_BookIssue _Sri_17_11_06_5.
*test case name: check BookIssue operation.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: valid bookid and valid user id available in hand.
*test procedure:
Step no | Task/Event                 | Required input   | Expected output
1       | activate BookIssue window  | none             | bookid object focused
2       | enter bookid and click GO  | available bookid | message as Available and focus to user id
3       | enter user id and click GO | valid id         | message as Valid user and Issue button enabled
4       | click Issue                | none             | acknowledgement message
Document6:
*test case id: TC_BookIssue _Sri_17_11_06_6.
*test case name: check Cancel operation.
*test suit id: TS_BookIssue.
*priority: p0.
*test setup: invalid bookid and invalid user id available in hand.
*test procedure:
Step no | Task/Event                 | Required input     | Expected output
1       | activate BookIssue window  | none               | bookid object focused
2       | enter bookid and click GO  | unavailable bookid | focus to Cancel button
3       | enter user id and click GO | invalid id         | focus to Cancel button

NOTE: In general, most testing teams follow functional and system specification based test case design
depending on the SRS. In this method, test engineers apply their knowledge of the SRS, previous experience,
discussions with others, browsing of similar s/w, internet surfing, etc.

3.User Interface test case design: In general, test engineers prepare test cases for functional and
non-functional tests using either of the two previous methods. To prepare test cases for usability testing, test
engineers depend on user interface based test case design.
In this method, test engineers identify the interests of customer-site people and the user interface conventions
in the market.
Example test cases:
Test case title1: check spelling.
Test case title2: check font uniqueness in every screen.
Test case title3: check style uniqueness in every screen.
Test case title4: check labels initial letters as capitals.
Test case title5: check alignment of object in every screen.
Test case title6: check color contrast in every screen.
Test case title7: check name spacing uniqueness in every screen.
Test case title8: check spacing uniqueness in b/w label and object.
Test case title9: check spacing in b/w objects.
Test case title10: check dependent objects grouping.
Test case title11: check borders of object groups.
Test case title12: check tool tips of icons in all screens.
Test case title13: check abbreviations or full forms.
Test case title14: check multiple data objects positions in every screen (Ex: Dropdown list box, Menus
(always at top), Tables and data windows).
Test case title15: check scroll bars in every screen.
Test case title16: check short cut keys in keyboards to operate our build.
Test case title17: check visibility of all icons in every screen.
Test case title18: check help documents (Manual support testing).
Test case title19: check identity controls (EX: title of s/w, version of s/w, logo of company, copyright
window).

NOTE: The above usability test cases are applicable to any GUI application for usability testing. Testers give
these test cases priority p2.

NOTE: Most of the above usability test cases are STATIC, because they are applicable to the build without
operating it.

Review Test cases


After completion of all reasonable test cases, the testing team conducts a review meeting for completeness
and correctness. In this review meeting the test lead depends on the factors below.
*requirements oriented coverage.
*testing techniques oriented coverage.

Case Study
Project – Flight Reservation.
Feature to be tested – login.
Tests to be conducted – usability, functional and non functional (compatibility and performance) testing.
Test case titles:
1. Functional testing:
*check agent name.
*check password.
*check login operation.
*check Cancel operation.
*check help button.
2. Non functional testing:
*compatibility testing: check login in windows 2000, xp, win NT. (these are customer expected
platforms)
*load testing: check login performance under customer expected load.
*stress testing: check login reliability under various load levels.
3. Usability testing:
*refer user interface test cases examples given.

Test Execution

After completion of test case design and review, the testing people concentrate on test execution. In
this stage the testing people communicate with the development team for feature negotiations.

a)Formal Meeting: The test execution process starts with a small formal meeting involving the
PM, project leads, developers, test leads and test engineers. In this meeting the members confirm
the architecture of the required environment.

SCM: Software Configuration or Change Management.


TCDB: Test Cases Database
DR: Data Repository.
SCM repository consists of:
*Development documents (project plan, BRS, SRS, HLD and LLD).
*Environment files (required s/w used in this project).
*Unit and integration test cases.
*S/w coding (build or AUT).
TCDB consists of:
*Test case titles or scenarios.
*Test case and defects reference.
*Test log (test results).
*Test case documents with reference.
DR consists of:
*Defect details (as reported to developers).
*Defect fix details (whether accepted or rejected by developers).
*Defect communication details.
*Defect test details (when a fix is accepted, side effects may appear).

b)Build version control: After confirming the required environment, the formal meeting members
concentrate on build version control. Under this concept the development people assign a unique version
number to every modified build after solving defects. This version numbering system is understandable to the
testing team.
c)Levels of test execution: After completion of the formal meeting, the testing people concentrate
on finalizing the test execution levels.
*Level-0 testing on the initial build.
*Level-1 testing on a stable or working build.
*Level-2 testing on a modified build.
*Level-3 testing on the master build.
*UAT on the release build.
*Finally, the golden build is released to the customer site.

d)Levels of Test execution VS Test cases:


Level-0: Selected P0 test cases (mainly functionality area)
Level-1: All P0, P1, P2 test cases (entire build)
Level-2: Selected P0, P1 and P2 test cases (modified functionalities in build)
Level-3: Selected P0, P1 and P2 test cases (high defect density areas in build)

e)Level-0 (Sanity testing): Practically, the test execution process starts with sanity test execution to
estimate the stability of the build. In sanity testing, test engineers concentrate on the factors below through
coverage of the basic functionality of the build.
*Understandable (on seeing the project).
*Operable (no hanging during operation).
*Observable (its flow can be followed).
*Controllable (operations can be done and undone).
*Consistent (in functionality).
*Simple (less navigation required).
*Maintainable (on the tester's system).
*Automatable (whether test tools are applicable or not).
Level-0 sanity testing as above estimates the testability of the build. This testing is also known as
smoke testing, testability testing, build acceptance testing, build verification testing, tester acceptance
testing or octangle testing (after the 8 factors above).

f)Level-1 (Comprehensive testing): After completion of sanity testing, test engineers conduct level-1
real testing to detect defects in the build. At this level, test engineers execute all test cases, either manually or in
automation, as test batches. Every test batch consists of a set of defined test cases. A test batch is also known as a
test suite, test set, test chain or test build.
While executing test cases as batches on the build, test engineers prepare test log documents with
3 types of entries.
*Passed: all the test case's expected values are equal to the build's actual values.
*Failed: any one expected value is not equal to the build's actual value.
*Blocked: the test case's execution is postponed due to incorrect parent functionality.
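The three test log entries can be tallied mechanically to track level-1 progress; a minimal sketch (the helper name and log shape are assumptions):

```python
from collections import Counter

def summarize(test_log):
    """Tally a level-1 test log. Entries are (test_case_id, status)
    pairs where status is 'passed', 'failed' or 'blocked'."""
    counts = Counter(status for _, status in test_log)
    return {s: counts.get(s, 0) for s in ("passed", "failed", "blocked")}

log = [("TC1", "passed"), ("TC2", "failed"), ("TC3", "blocked"), ("TC4", "passed")]
```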
In this level-1 test execution as test batches, test engineers follow the approach below.

Following the above approach, test engineers skip some test cases due to lack of time for test
execution. The final status of every test case is CLOSED or SKIPPED.

g)Level-2 (Regression testing): During level-1 comprehensive testing, test engineers report
mismatches between test case expected values and build actual values as defect reports. After receiving defect
reports from testers, the developers conduct a review meeting to fix the defects. If a defect is accepted by the
developers, they make changes in the coding and then release a modified build with a release
note. The release note of a modified build describes the changes made in that build to resolve the reported
defects.
Test engineers then plan regression testing to conduct on the modified build w.r.t the release note.
Approach to regression testing:
*receive the modified build along with the release note from the developers.
*apply a sanity or smoke test on the modified build.
*select the test cases to be executed on the modified build w.r.t the modifications specified in the release note.
*run the selected test cases on the modified build to ensure the correctness of the modifications, without side
effects in the build.
In the regular regression testing approach above, the selection of test cases w.r.t the modifications is the critical
task. For this reason, test engineers follow some standardized process models for regression testing.

Case1: If the severity of the defect resolved by the development team is high, test engineers re-execute all
functional, all non-functional and maximum usability test cases on the modified build to ensure the correctness of
the modifications without side effects.
Case2: If the severity of the resolved defect is medium, test engineers re-execute all functional, maximum
non-functional and some usability test cases on the modified build to ensure the correctness of the modifications
without side effects.
Case3: If the severity of the resolved defect is low, test engineers re-execute some functional, some
non-functional and some usability test cases on the modified build to ensure the correctness of the modifications
without side effects.
Case4: If the development team released the modified build due to sudden changes in customer requirements,
test engineers change the corresponding test cases and then re-execute those test cases on the modified build to
ensure the correctness of the modifications w.r.t the changes in requirements.
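Cases 1-3 above amount to a severity-driven selection table (Case4 is requirement-driven instead). A sketch encoding the document's qualitative guidance as rough labels; the function and keys are assumptions:

```python
def regression_suite(severity):
    """Map a resolved defect's severity to the portion of each
    test category re-executed on the modified build (Cases 1-3)."""
    table = {
        "high":   {"functional": "all",  "non_functional": "all",     "usability": "maximum"},
        "medium": {"functional": "all",  "non_functional": "maximum", "usability": "some"},
        "low":    {"functional": "some", "non_functional": "some",    "usability": "some"},
    }
    return table[severity]
```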
After completion of the required level of regression testing, test engineers continue the remaining
level-1 test execution.
Test Reporting
During level-1 and level-2 test execution, test engineers report mismatches to the development team as
defects. A defect is also known as an error, issue or bug.
A problem detected by a programmer in a program is called an ERROR.
A problem detected by a tester in a build is called a DEFECT or ISSUE.
A reported defect or issue accepted for resolution is called a BUG.
In reporting defects to developers, test engineers follow a standard defect report format
(IEEE 829).

a)Defect Report:
*defect id: a unique name or number.
*description: the summary of the defect.
*build version id: the version number of the build in which the test engineer detected the defect.
*feature: the name of the module or function in which the test engineer found the defect.
*test case title: the title of the failed test case.
*detected by: the name of the test engineer.
*detected on: the date of defect detection and submission.
*status: New (reported for the first time) or Re-Open (re-reported).
*severity: the seriousness of the defect in terms of functionality. If it is high (show stopper), testing cannot
continue without resolving the defect. If it is medium (major), testing can continue but the defect is mandatory to
resolve. If it is low (minor), testing can continue and the defect may or may not be resolved.
*priority: the importance of resolving the defect in terms of the customer (high, medium or low).
*reproducible: Yes or No. Yes means the defect appears every time in test execution (then attach the test
procedure). No means the defect appears rarely in test execution (then attach a snapshot and the test procedure;
the snapshot is taken with the Print Screen button when the defect occurs).
*assigned to: the name of the person responsible for receiving this defect at the development site.
*suggested fix: a suggestion to accept or reject the defect. It is optional.

NOTE: The defect priority is also modifiable by the PM and the project lead.

NOTE: In general, test engineers report a defect to the development team after getting permission from the test
lead.

NOTE: In application-oriented s/w development, test engineers report defects to the customer site as well.

b)Defect submission process:



c)Defect life cycle:

New->Open->Closed
New->Open->Reopen->Closed
New->Reject->Closed
New->Reject->Reopen->Closed
New->Deferred
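The life cycle paths above define a small state machine; a sketch that accepts exactly those transitions (the transition set is read directly off the listed paths):

```python
VALID_TRANSITIONS = {
    "new":      {"open", "reject", "deferred"},
    "open":     {"closed", "reopen"},
    "reject":   {"closed", "reopen"},
    "reopen":   {"closed"},
    "deferred": set(),  # terminal until a future release
    "closed":   set(),  # terminal
}

def is_valid_path(path):
    """Check that each consecutive pair of states is an allowed transition."""
    return all(b in VALID_TRANSITIONS[a] for a, b in zip(path, path[1:]))
```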

NOTE: The final status of every defect is CLOSED or DEFERRED.

d)Defect age: The time gap between defect reporting and defect closing or deferral is called the defect age.

e)Defect density: The average number of defects detected by the testing team per module of the application
build.
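Both metrics are simple computations; a sketch (the module names in the example are hypothetical):

```python
from datetime import date

def defect_age(reported_on, closed_or_deferred_on):
    """Defect age: days between reporting and closing/deferring."""
    return (closed_or_deferred_on - reported_on).days

def defect_density(defects_per_module):
    """Average number of defects detected per module of the build."""
    return sum(defects_per_module.values()) / len(defects_per_module)
```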

f)Defect resolution types: After receiving a defect report from the testing team, the development team
conducts a review meeting to fix the defect and then sends a resolution type to the testing team. There are 12
types:
*duplicate: the reported defect is rejected due to similarity with a previously reported defect.
*enhancement: the reported defect is rejected because it relates to future requirements of the
customer.
*s/w limitation: rejected because it relates to a limitation of the s/w technology (EX: 2 GB -> 999 records only).
*h/w limitation: rejected because it relates to a limitation of the h/w technology.
*not applicable: rejected due to wrong test case execution.
*functions as designed: rejected because the coding is correct w.r.t the design documents.
*need more information: the defect is neither accepted nor rejected; the developers require more information to
understand the defect properly.
*not reproducible: the defect is neither accepted nor rejected; the developers require the correct procedure to
reproduce the defect.
*no plan to fix it: the defect is neither accepted nor rejected; the developers require some extra time to fix the
defect.
*fixed or open: the defect is accepted and the developers are ready to resolve it and release a modified build
along with a release note.
*fixed indirectly or deferred: the report is accepted but postponed to a future release due to low severity and low
priority.
*user direction: the defect is accepted but the developers produce a message in the build about the defect
instead of resolving it (e.g. showing the user a message as an error).

g)Types of defects: During usability, functional and non-functional test execution on the application build or
UAT, test engineers detect the categories below.
*user interface defects (low severity):
Spelling mistakes (high priority)
Invalid label of object w.r.t functionality (medium priority)
Improper right alignment (low priority)
*error handling defects (medium severity)
Error message not coming for wrong operation (high priority)
Wrong error message is coming for wrong operation (medium)
Correct error message but incomplete (low)
*input domain defects (medium severity)
Does not take valid input (high)
Taking valid and invalid also (medium)
Taking valid type and valid size values but the range is exceeded (low)
*manipulations defects (high severity)
Wrong output (high)
Valid output without decimal points (medium)
Valid output with rounded decimal points (low)
EX: actual answer is 10.96
High (13), medium (10) and low (10.9)
*race conditions defects (high)
Hang or dead lock (show stopper and high priority)
Invalid order of functionalities (medium)
Application build is running on some of platforms only (low)
*h/w related defects (high)
Device is not connecting (high)
Device is connecting but returning wrong output (medium)
Device is connecting and returning correct output but incomplete (low)
*load condition defects (high)
Does not allow customer expected load (high)
Allow customer expected load on some of the functionalities (medium)
Allowing customer expected load on all functionalities w.r.t benchmarks (low)
*source defects (medium)
Wrong help document (high)
Incomplete help document (medium)
Correct and complete help but complex to understand (low)
*version control defects (medium)
Unwanted differences in b/w old build and modified build
*id control defects (medium)
Logo missing, wrong logo, version number missing, copyright window missing, team member
names missing.

Test Closure(UAT)
After completion of all reasonable test cycles, the test lead conducts a review meeting to
estimate the completeness and correctness of test execution. If the test execution status meets the EXIT
CRITERIA, the testing team stops testing; otherwise the team continues the remaining test execution
w.r.t the available time. In this test closure review meeting the test lead depends on the factors below.

a)Coverage analysis:
*requirements oriented coverage
*testing techniques oriented coverage
b)Defect density:
*modules or functionalities
*need for regression testing

c)Analysis of deferred defects:
*whether all the deferred defects are postponable or not

NOTE: In general the project management is deferring low severity and low priority defects only.
After completion of the above test closure review meeting, the testing team is concentrating on level-3 test
execution. This level of testing is also known as Post-mortem testing or Final regression testing or Pre-acceptance
testing. In this test execution the test engineers are following the below approach.

In the above Final regression testing, the test engineers are concentrating on high defect density modules or
functionalities only. “If they get any defect in this level, it is called a Golden defect or Lucky defect”. After
resolving all the golden defects, the testing team is concentrating on UAT along with developers.

UAT(User Acceptance Testing)


In this level, the PM is concentrating on the feedback of the real customer site people or model customer site
people. There are 2 ways in UAT as follows
*Alpha testing.
*Beta testing.

Sign Off
After completion of UAT and their modifications, the test lead is conducting a sign off review. In this review
the test lead is gathering all testing documents from test engineers, such as the Test Strategy, System test plan and
detailed test plans, Test scenarios or titles, Test case documents, Test log, Defect reports and
Final defects summary reports (defect id, description, severity, detected by and status (closed or deferred)).
Requirements Traceability Matrix (RTM) (req id, test case, defect id, status).
*RTM is the mapping in b/w requirements and defects via test cases.
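The RTM mapping just described can be sketched as a small table in code; all ids below are hypothetical:

```python
# Requirements Traceability Matrix rows: requirement -> test case -> defect.
# Every id here is invented for illustration.
rtm = [
    {"req_id": "REQ-1", "test_case": "TC_01", "defect_id": "D-101", "status": "closed"},
    {"req_id": "REQ-1", "test_case": "TC_02", "defect_id": None,    "status": "passed"},
    {"req_id": "REQ-2", "test_case": "TC_03", "defect_id": "D-102", "status": "deferred"},
]

# The RTM maps requirements to defects via test cases:
defects_by_req = {}
for row in rtm:
    if row["defect_id"]:
        defects_by_req.setdefault(row["req_id"], []).append(row["defect_id"])
print(defects_by_req)
```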

Case Study
1. Test Initiation
Done by Project manager.
Deliver test strategy document.
Test Responsibility Matrix (TRM) defines reasonable tests to be applied (part of test strategy).
2. Test Planning
Done by Test lead or senior test engineer.
Deliver System test plan and detailed test plans.
Follows IEEE 829 document standards.
3. Test Design
Done by test engineer.
Deliver test scenarios or titles and test case documents.
4. Test Execution
Done by test engineer.
Prepare automation programs (if possible).
Deliver test logs or test results.
5. Test Reporting
Done by test engineer and test lead.
Send defect reports.
Receive modified build with release note.
6. Test Closure
Done by test lead and test engineer.
Plan post-mortem testing.
Initiate UAT (User Acceptance testing).
7. Sign Off
Done by test lead
Garner all test documents
Finalize RTM (Requirements traceability Matrix).

Case Study (3 to 5 months test plan)


Test Deliverable                          Responsibility                               Completion Time
Test strategy                             Project or test manager                      5-10 days
Test plan                                 Test lead                                    5-10 days
Training sessions to test engineers       Business analyst or subject/domain experts   10-15 days
Test scenarios or title selection         Test engineer                                5-10 days
Test case documents                       Test engineer                                5-10 days
Review test cases                         Test lead and Test engineer                  1-2 days
Receive initial build and Sanity testing  Test engineer                                1-2 days
Test automation (if possible)             Test engineer                                10-15 days
Test execution (level-1 and level-2)      Test engineer                                30-40 days
Test reporting                            Test engineer and test lead                  Ongoing
Status reporting                          Test lead                                    Twice weekly
Test closure and post-mortem              Test lead and Test engineer                  5-7 days
UAT (User Acceptance Testing)             Customer/model customers with developers     5-7 days
                                          & testers involved
Sign off                                  Test lead                                    1-2 days

Manual Testing VS Automation Testing


In general, the test engineers are executing test cases manually. To save test execution time and to decrease
complexity in manual testing, the engineers are using test automation. Test automation is possible for two
manual tests.
* Functional testing.
* Performance testing (of non functional testing).
“WinRunner, QTP (QuickTest Professional), Rational Robot and SilkTest are Functional testing tools”.
“LoadRunner, Rational LoadTest, SilkPerformer and JMeter are Performance testing tools to automate load and
stress testing”.
The organizations are using tools for test management also.
Ex: TestDirector, Quality Center and Rational TestManager.
WinRunner8.0

* Released in January 2005.


* Developed by Mercury Interactive (taken over by HP).
* Functional Testing Tool.
* Supports VB, Java, VB.NET, HTML, Power Builder (PB), Delphi, D2K, VC++ and Siebel technology builds
for Functional testing.
* To support the above technologies plus XML, SAP, PeopleSoft, Oracle Apps and Multimedia we
can use QTP.
* WinRunner runs on Windows platforms.
* XRunner runs on Linux and UNIX.
* WinRunner converts our manual test cases into TSL (Test Script Language) programs, a C-like automation
language.

Objective
* Study Functional and system specification or Use Cases.
* Prepare Functional test cases in English.
* Convert to TSL programs (automation).

WinRunner Test Approach


* Select manual Functional test cases to be automated.
* Receive Stable build from developers (after Sanity Testing).
* Create TSL program for that selected test cases on that Stable build.
* Make those programs as Test batches.
* Run Test batches on that build to detect mismatching.
* Analyze results and report mismatch (if required).

Add-In Manager
This window lists out all WinRunner supported technologies w.r.t license. The test engineers are selecting the
current application build technology.

Icons in WinRunner Screen


Automation Test creation in WinRunner
WinRunner is a functionality testing tool. Test engineers are automating corresponding manual functional
test cases into automation programs in two steps.
1) Recording or describing build actions (i.e. operating the build) 2) Inserting “Check Points”

1) Recording Modes: To generate the automation program, test engineers are recording build actions or operations
in 2 types of modes
* Context Sensitive mode
* Analog mode

Context Sensitive mode: In this mode, WinRunner is recording all mouse and keyboard operations w.r.t
Objects and Windows in our application build. To select this mode we can use the below options
* Click “start record” icon
* Test menu -> Record - Context Sensitive option
Analog mode: In this mode, WinRunner is recording all mouse pointer movements w.r.t desktop coordinates.
To select analog mode we can use the below options
* Click Start Record icon twice
* Test menu -> Record - Analog option (Example: recording digital signatures, graph drawing and image
movements)

NOTE: To change from one mode to another mode, the test engineers are using F2 as a shortcut key.

NOTE: In Analog mode, the WinRunner is recording mouse pointer movements w.r.t desktop coordinates instead
of windows and objects. Due to this reason, the test engineers are maintaining corresponding window position on
the desktop and monitor resolution as constant.

2) Check Points: After completion of required action or operations recording, the test engineers are inserting
required check points into that recorded script. The WinRunner8.0 is supporting 4 types of check points.
*GUI check point.
*Bitmap check point.
*Database check point.
*Text check point.
Every check point compares the test engineer's given expected value with the build's actual value. The above 4
check points are automating all functional test coverages on our application build.
*GUI or Behavioral coverage.
*Error handling coverage.
*Input domain coverage.
*Manipulation coverage.
*Backend coverage.
*Order of functionality coverage.
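The idea common to all four check points, comparing a tester-given expected value against the build's actual value, can be sketched in Python (the function name `check_point` is ours, not a WinRunner API):

```python
def check_point(name, expected, actual):
    """Compare an expected value against the build's actual value
    and record a pass/fail step, in the spirit of tl_step in TSL."""
    status = "pass" if expected == actual else "fail"
    return (name, status)

# e.g. verifying a button's "enabled" property (values are illustrative)
print(check_point("Delete Order enabled", 1, 1))
print(check_point("Tickets value", "10", "9"))
```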

GUI check point: To check behavior of objects in our application windows, we can use this check point. This
check point consists of three sub options.
*For single property.
*For object or window.
*For multiple objects.

a) For single property: to verify one property of one object we can use this option (like starting a Mile with one
step)
EX1: Manual test case
Test case id: TC_EX_SRI_24NOV_1
Test case name: check Delete Order button
Test suit id: TS_EX
Priority: P0
Test set up: already one record is inserted to delete
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation window None “Delete Order” button disabled
2 Open an order Valid order number “Delete order” button enabled
Build -> Flight Reservation window
Automation program:
set_window (“Flight Reservation”, 1);
button_check_info (“Delete Order”,” enabled”, 0);
#check point on Delete order button
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 1);
button_set (“Order no”, ON);
edit_set (“Edit”,”1”);
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info (“Delete Order”,” enabled”, 1);
#check point on Delete order button
EX2: Manual test case
Test case id: TC_EX_SRI_24NOV_2
Test case name: check Update Order button
Test suit id: TS_EX
Priority: P0
Test set up: already one valid record is inserted to update
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation window None “Update Order” button disabled
2 Open an order Valid order number “Update Order” button disabled
3 Perform a change Valid change is required “Update Order” enabled
Build -> Flight Reservation window
Automation program:
set_window (“Flight Reservation”, 2);
button_check_info (“Update Order”, “enabled”, 0);
#check point on Update order button
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 4);
button_set (“Order no”, ON);
edit_set (“Edit”,”1”);
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info (“Update Order”, “enabled”, 0);
#check point on Update order button
set_window (“Flight Reservation”, 2);
button_set (“First”, ON);
button_check_info (“Update Order”, “enabled”, 1);
#check point on Update order button

Context Sensitive statements in TSL


*Focus to a window: set_window (“Window name”, time to focus);
*Click BUTTON: button_press (“Button name”);
*Select a Radio button: button_set (“Radio button name”, ON);
*Select a Check box: button_set (“Check box name”, ON);
*Fill a Text box: edit_set (“Text box name”, “input”);
*Fill a Password text box: edit_set (“Password text box name”, “given text Encrypted value”);
*Select an item in a List box: list_select_item (“list box name”, “selected item name”);
*Select a Menu option: menu_select_item (“Menu name; Option name”);
*texit(): We can use this statement to quit or terminate test execution, similar to pause.
EX3: Manual test case
Test case id: TC_EX_SRI_28NOV_3
Test case name: check OK button
Test suit id: TS_EX
Priority: P0
Test set up: all input objects are taking values
Test procedure:
Step no Event Input required Expected output
1 Focus to Sample window None OK button disabled
2 Enter Name Valid OK button enabled
Build -> Sample

Automation program:
set_window (“Sample”, 5);
button_check_info (“OK”,” enabled”, 0);
edit_set (“Name”, “Sri”);
button_check_info (“OK”,” enabled”, 1);
EX4: Manual test case
Test case id: TC_EX_SRI_28NOV_4
Test case name: check SUBMIT button
Test suit id: TS_EX
Priority: P0
Test set up: all input objects are taking values
Test procedure:
Step no Event Input required Expected output
1 Focus to Registration window None SUBMIT button disabled
2 Enter Name Valid SUBMIT button disabled
3 Select Gender as M or F None SUBMIT button disabled
4 Say Y/N for Passport availability None SUBMIT button disabled
5 Select Country None SUBMIT button enabled
Build->Registration form

Automation program:
set_window (“Registration”, 5);
button_check_info (“SUBMIT”,” enabled”, 0);
edit_set (“Name”, “Sri”);
button_check_info (“SUBMIT”,” enabled”, 0);
button_set (“Male”, ON);
button_check_info (“SUBMIT”,” enabled”, 0);
button_set (“YES”, ON);
button_check_info (“SUBMIT”,” enabled”, 0);
list_select_item (“COUNTRY”,”INDIA”);
button_check_info (“SUBMIT”,” enabled”, 1);

Case Study
Object Type Testable Properties
Push button Enabled (0 or 1), Focus
Radio button Enabled (0 or 1), Status (ON or OFF)
Check box Enabled (0 or 1), Status (ON or OFF)
List or Combo box Enabled (0 or 1), Count, Value (of selected item)
Menu Enabled (0 or 1), Count
Edit or Text box Enabled (0 or 1), Focused, Value, Range, Regular expression (text or pwd), Data format, Time format...
Table grid Rows count, Columns count, Cell count
EX5: Manual test case
Test case id: TC_EX_SRI_28NOV_5
Test case name: check Flight to count
Test suit id: TS_EX
Priority: P0
Test set up: Fly From and Fly To consists of valid city name
Test procedure:
Step no Event Input required Expected output
1 Focus to Journey window and select one None Fly To count decreased by one
city name in Fly From
Build -> Journey

Automation program:
set_window (“Journey”, 5);
list_get_info (“Fly To”, “count”, x);
list_select_item (“Fly From”, “VIZ”);
list_check_info (“Fly To”, “count”, x-1);
EX6: Manual test case
Test case id: TC_EX_SRI_28NOV_6
Test case name: check Message value
Test suit id: TS_EX
Priority: P0
Test set up: all valid names are available for Messages
Test procedure:
Step no Event Input required Expected output
1 Focus to Display window None OK button disabled
2 Select a Name None OK button enabled
3 Click OK None Coming message is equal to selected message
Build->Display form

Automation program:
set_window (“Display”, 5);
button_check_info (“OK”,” enabled”, 0);
list_select_item (“Name”, “Sri”);
list_get_info (“Name”, “value”, x);
button_check_info (“OK”,” enabled”, 1);
button_press (“OK”);
edit_check_info (“Message”, “value”, x);
EX7: Manual test case
Test case id: TC_EX_SRI_28NOV_7
Test case name: check SUM button
Test suit id: TS_EX
Priority: P0
Test set up: input objects consists of numeric values
Test procedure:
Step no Event Input required Expected output
1 Focus to Addition window None OK button disabled
2 Select input one None OK button disabled
3 Select input two None OK button enabled
4 OK click none Coming output is equal to addition of 2 inputs
Build->Addition form

Automation program:
set_window (“Addition”, 5);
button_check_info (“OK”,” enabled”, 0);
list_select_item (“INPUT1”, “20”);
list_get_info (“INPUT1”, “value”, x);
button_check_info (“OK”,” enabled”, 0);
list_select_item (“INPUT2”, “4”);
list_get_info (“INPUT2”, “value”, y);
button_check_info (“OK”,” enabled”, 1);
button_press (“OK”);
edit_check_info (“SUM”, “value”, x+y);
EX8: Manual test case
Test case id: TC_EX_SRI_29NOV_8
Test case name: check Age, Gender and Qualification objects
Test suit id: TS_EX
Priority: P0
Test set up: all insurance policy types are available
Test procedure:
Step no Event Input required Expected output
1 Focus to Insurance window and None If type is A then Age is focused.
select type of insurance policy If type is B then Gender is focused.
If other then Qualification is focused

Build->Insurance form

Automation program:
set_window (“Insurance”, 5);
list_select_item (“Type”, “xx”);
list_get_info (“Type”, “value”, x);
if (x == “A”)
edit_check_info (“Age”, “focused”, 1);
else if (x == “B”)
list_check_info (“Gender”, “focused”, 1);
else
list_check_info (“Qualification”, “focused”, 1);
EX9: Manual test case
Test case id: TC_EX_SRI_29NOV_9
Test case name: check Student grade
Test suit id: TS_EX
Priority: P0
Test set up: all valid students’ mark’s already feeded
Test procedure:
Step no Event Input required Expected output
1 Focus to Student window None OK button disabled
2 Select a Student roll number None OK button enabled
3 OK click none Returns total marks and grade
If total >= 800 then grade is A.
If total >= 700 and <800 then grade is B.
If total >= 600 and <700 then grade is C.
If total <600 then grade is D.
Build->Student form

Automation program:
set_window (“Student”, 5);
button_check_info (“OK”,” enabled”, 0);
list_select_item (“Roll no”, “xx”);
button_check_info (“OK”,” enabled”, 1);
button_press (“OK”);
edit_get_info (“Total”, “value”, x);
if (x >= 800)
edit_check_info (“Grade”, “value”, “A”);
else if (x < 800 && x >= 700)
edit_check_info (“Grade”, “value”, “B”);
else if (x < 700 && x >= 600)
edit_check_info (“Grade”, “value”, “C”);
else
edit_check_info (“Grade”, “value”, “D”);
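The grade rules in EX9's test procedure translate directly into an if/else chain; a Python mirror of the TSL logic, useful for desk-checking the boundaries:

```python
def grade(total):
    # Grade boundaries taken from the EX9 test procedure.
    if total >= 800:
        return "A"
    elif total >= 700:
        return "B"
    elif total >= 600:
        return "C"
    else:
        return "D"

# One value in each band, including the edges.
print(grade(812), grade(700), grade(650), grade(599))
```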
EX10: Manual test case
Test case id: TC_EX_SRI_29NOV_10
Test case name: check Gross salary of an Employee
Test suit id: TS_EX
Priority: P0
Test set up: all valid employees Basic salaries are feeded.
Test procedure:

Step no Event Input required Expected output


1 Focus to Employee window None OK button disabled
2 Select a Employee number None OK button enabled
3 OK click None Returns Basic and Gross salaries
If Basic >= 15000 then Gross = Basic + 10% of Basic
If Basic < 15000 and >= 8000 then Gross = Basic + 5% of Basic
If Basic < 8000 then Gross = Basic + 200
Build->Employee Form

Automation program:
set_window (“Employee”, 5);
button_check_info (“OK”,” enabled”, 0);
list_select_item (“Empno”, “xxx”);
button_check_info (“OK”,” enabled”, 1);
button_press (“OK”);
edit_get_info (“Basic”, “value”, x);
if (x >= 15000)
edit_check_info (“Gross”, “value”, x+ (10/100)*x);
else if (x < 15000 && x >= 8000)
edit_check_info (“Gross”, “value”, x+ (5/100)*x);
else
edit_check_info (“Gross”, “value”, x+200);
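The gross salary rules of EX10 can be desk-checked the same way in Python (note the middle band is 8000 up to 15000):

```python
def gross(basic):
    # Gross salary rules from the EX10 test procedure.
    if basic >= 15000:
        return basic + 0.10 * basic
    elif basic >= 8000:
        return basic + 0.05 * basic
    else:
        return basic + 200

# One value in each salary band (illustrative figures).
print(gross(20000), gross(10000), gross(5000))
```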
b) For Object or Window: To verify more than one property of one object, we can use this option.
EXAMPLE:
*Update order button is disabled after focus to window
*Update order button disabled after open a record
*Update order button enabled and focused after perform a change. (Here one object with TWO properties)
Build -> Flight Reservation window
Automation program:
set_window (“Flight Reservation”, 5);
button_check_info (“Update Order”,” enabled”, 0);
#check point on Update order button
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 4);
button_set (“Order no”, ON);
edit_set (“Edit”,”1”);
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info (“Update Order”,” enabled”, 0);
#check point on Update order button
set_window (“Flight Reservation”, 2);
button_set (“First”, ON);
obj_check_gui (“Update Order”, “list1.ckl”, “gui1”, 1);
#check point for MULTIPLE properties
SYNTAX for Multiple properties:
obj_check_gui (“Object name”, “CheckListfile.ckl”, “Expected values file (GUI)”, Time);
In above syntax
CHECKLIST FILE specifies the selected list of properties
EXPECTED VALUES FILE specifies the selected values for that properties

c) For Multiple Objects: To check more than one property of more than one object, we can use this option. (The
objects must be in the same WINDOW).
EXAMPLE
*Insert, Delete and Update order buttons are disabled after focus to window.
*Insert and Update order buttons are disabled, Delete order button is enabled after open a record.
*Insert order button is disabled; Update order button enabled and focused and Delete button is enabled after
perform a change.
Automation program:
set_window (“Flight Reservation”, 5);
win_check_gui (“Flight Reservation”, “list1.ckl”, “gui1”, 1);
#check point
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 4);
button_set (“Order no”, ON);
edit_set (“Edit”,”1”);
button_press (“OK”);
win_check_gui (“Flight Reservation”, “list2.ckl”, “gui2”, 1);
#check point
set_window (“Flight Reservation”, 1);
button_set (“First”, ON);
win_check_gui (“Flight Reservation”, “list3.ckl”, “gui3”, 1);
#check point
Syntax for Multiple Objects:
win_check_gui (“Window name”, “CheckListfile.ckl”, “Expected values file (GUI)”, Time);
NOTE: This check point is applicable on more than one object in a same window
Navigation to insert Check Point:
*Select a position in Script
*Choose Insert Menu option
*In it, choose GUI Check point
*Then select sub option as For Multiple objects
*Click Add button and select Testable objects
*Now right click to release from selection
*Select required properties with expected values
*Click OK
EX11: Manual test case
Test case id: TC_EX_SRI_30NOV_11
Test case name: check value of tickets
Test suit id: TS_EX
Priority: P0
Test set up: all valid records feeded.
Test procedure:
Step no Event Input required Expected output
1 Focus to Flight Reservation Valid Order no No of Tickets value is numeric up to 10
window and Open an order
NOTE: Testing the TYPE OF VALUE of an object is called REGULAR EXPRESSION testing.
Build -> Flight Reservation window
Automation program:
set_window (“Flight Reservation”, 2);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 4);
button_set (“Order no”, ON);
edit_set (“Edit”,”1”);
button_press (“OK”);
set_window (“Flight Reservation”, 1);
obj_check_gui (“Tickets”, “list1.ckl”, “gui”, 1);
#Check point for Range and Regular Expression with 0 to 10 and [0-9]* (* is for multiple positions)
EX12: Prepare Regular expression for Alpha numeric
[a-zA-Z0-9]*
EX13: Prepare Regular expression for Alpha numeric in lower case with initial as capital.
[A-Z][a-z0-9]*
EX14: Prepare Regular expression for Alpha numeric in lower case but start with capital and end with lower case.
[A-Z][a-z0-9]*[a-z]
EX15: Prepare Regular expression for Alpha numeric in lower case with underscore, which does not start with _
[a-z0-9][a-z0-9_]*[a-z0-9]
EX16: Prepare Regular expression for Yahoo mail user id
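The patterns in EX12 to EX15 can be spot-checked with Python's `re` module, whose character-class syntax behaves the same way for these examples (EX16 is left as an exercise):

```python
import re

def matches(pattern, text):
    # fullmatch, because the check point tests the whole field value.
    return re.fullmatch(pattern, text) is not None

assert matches(r"[a-zA-Z0-9]*", "Abc123")                 # EX12: alphanumeric
assert matches(r"[A-Z][a-z0-9]*", "Hello9")               # EX13: initial capital
assert matches(r"[A-Z][a-z0-9]*[a-z]", "Ab3c")            # EX14: ends lower case
assert not matches(r"[a-z0-9][a-z0-9_]*[a-z0-9]", "_ab")  # EX15: no leading _
assert matches(r"[a-z0-9][a-z0-9_]*[a-z0-9]", "a_b9")
print("all patterns behave as described")
```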
Changes in Check points: the most irritating part in s/w testing
Due to sudden changes in customer requirements or mistakes in test creation, the test engineers are
performing changes in existing check points.
a) Changes in expected values:
*Run our test
*Open result
*Perform changes in expected values
*Click OK
*Close results
*Re-execute that test
b) Add new properties:
*Insert Menu
*edit GUI checklist
*select Checklist file name
*click OK
*select new properties for testing
*click OK
*click OK to over write checklist file
*click OK after reading suggestion to update
*change run mode to Update mode and run from top (the modified checklist is taking default values
as expected)
*run our test script in verify mode to get results
*analyze that results manually
If the defaults expected are not correct, then test engineers are changing that expected values and re run the
test.
Bitmap check point: (binary presentation of an image). It is an optional check point in functional testing. Test
engineers are using this option to compare images. This check point is supporting static images only. This check
point consists of 2 sub options
1) For object or window bitmap: To compare our expected image with our application build actual image, we
can use this option.
EX: logo testing

EX2: graphs comparison

Navigation to create Bitmap check point:


*open expected image
*select Insert menu
*select Bitmap check point option
*select for object/window sub option
*show the Expected image with double click on it
*now close the Expected image and open Actual image
*run check point
*analyze results manually
Syntax: obj_check_bitmap (“Image object name”, “Image file”, time to focus);
2) For screen area: we can use this option to compare Expected image area with actual
Navigation to create Bitmap check point:
*open expected image
*select Insert menu
*select Bitmap check point option
*select for screen area sub option
*select the required expected image area
*now close the Expected image and open Actual image
*run check point
*analyze results manually
Syntax: obj_check_bitmap (“Image object name”, “Image file”, time to focus, x, y, width, height);

NOTE: TSL is not supporting Function Overloading. TSL is supporting a Variable Number of Arguments or
Parameters for functions.
NOTE: The GUI check point is mandatory, but the Bitmap check point is optional because all windows do not
contain images.
Database Check point: The GUI and Bitmap check points are applicable on our application build front end screens
only. This Database check point is applicable on our application build back end tables to estimate the impact of
front end screen operation on back end tables content. This checking is called DATABASE OR BACK END
TESTING.

To automate Database or Back end testing, the database check point in WinRunner is following below approach

* Database check point wizard is connecting to our application build Database.


* execute a Select statement on that connected database
* Retrieve selected data from database into an XL sheet.
* The test engineer is analyzing that selected data to estimate the completeness and correctness of the front
end operation impact on that database.
To follow above approach using WinRunner for database testing, test engineers are collecting some
information from development team.
* The name of connectivity in b/w application build front end windows and back end database
* The names of Tables including columns
* Front end screens VS Back end tables
The above information is available in Database Design Documents (DDD)
The Database checkpoint in WinRunner consists of 3 sub options
a) Default Check point: To conduct database testing depending on the content of database tables we use
this option
EX:
* create database check point (current database content is selected as Expected)
* perform testable operation on the front end screen
* run database check point (current database content is selected as Actual)
If the Expected content equals the Actual content then the test is FAIL; otherwise the test is PASS, but only when
those differences are valid w.r.t the operation performed on the front end screen.
Navigation to create Database check point:
* open WinRunner
* select Insert menu
* select database check point
* select default check sub option
* specify connect to database using ODBC or Data Junction (ODBC for local host database and Data
Junction for remote host database)
* Select specify SQL statement option
* click next
* click Create to select connectivity name
* write select statement to retrieve requires DB content
* click Finish
* open our application build
* perform testable operation on our front end screen
* run database check point
* analyze differences in actual database manually

Syntax: db_check (“Checklist file.cdl”, “Expected results file.xls”);


In above syntax, the first argument file specifies that the Content as testable property. The second argument file
consists of the selected content of database
EX: select * from orders
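The default check approach above (snapshot the table content, perform the front end operation, snapshot again, then judge the differences) can be sketched with an in-memory SQLite table standing in for the build's database; the table and data here are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_no INTEGER, customer TEXT)")
conn.execute("INSERT INTO orders VALUES (1, 'Sri')")

def snapshot():
    # Equivalent of the check point's "select * from orders" retrieval.
    return conn.execute("SELECT * FROM orders ORDER BY order_no").fetchall()

expected = snapshot()                                    # create check point
conn.execute("INSERT INTO orders VALUES (2, 'Vikram')")  # front end operation
actual = snapshot()                                      # run check point

# The contents SHOULD differ, and the difference should match the operation.
diff = set(actual) - set(expected)
print(diff)
```

If the two snapshots were identical, the insert would have had no back-end effect and the test would fail.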
b) Custom check point: To conduct database testing depending on the Content, Rows count and Columns
count we can use this definition
* In general the test engineers are using Default check point option for DB testing, because the content of
DB is measurable in terms of no of rows and no of columns. Due to this reason the test engineers are not using
Custom check point option for DB testing.
NOTE: In this custom check, the WinRunner is providing a facility to select required database properties by test
engineer.

Syntax: db_check (“Checklist file.cdl”, “Expected results file.xls”);


In above syntax, the first argument file specifies that the Content as testable property. The second argument file
consists of the selected content of database
EX: select * from orders
c) Run Time record check: We can use this option to verify mapping in b/w front end report objects and
back end database table columns.

From the above model every application build report screens are retrieving data from database tables. To
estimate completeness and correctness of that retrieving process, test engineers are using this check point in
WinRunner.

Expected: x<- a and y<- b


Navigation:
* Open Build and WinRunner
* select Insert menu and select Database check point option
* In that select Runtime record check sub option
* Select specify SQL statement option
* click next
* click Create to select our database connectivity
* write Select statement with doubtful columns
(Select orders.customer_name, orders.order_number from orders)
* click next
* select doubtful front end objects for those columns
* click next
* select one or more than matching option
* click Finish
* open record by record in front end report screen and then run that check point
* analyze results finally to confirm that mapping in b/w back end table columns and front end report objects
Syntax: db_record_check (“Checklist file.cvr”, flag, variable);
In above syntax, check list file (checklist verification at runtime) specifies your selected mapping in b/w
back end table columns and front end report objects. Flag is indicating type of matching like
DVR_ONE_MATCH or DVR_ONE_OR_MORE_MATCH or DVR_NO_MATCH
DVR: Data Verification at run time
And Variable specifies the number of records matched
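The matching idea behind db_record_check, confirming that front end report records exist in the back end table columns, can be sketched as follows (rows are invented; this mirrors the DVR_ONE_MATCH style of counting, not the real API):

```python
# Back end rows (customer_name, order_number) and the records shown
# one by one in the front end report screen. All data is hypothetical.
db_rows = [("Sri", 1), ("Vikram", 2), ("Anil", 3)]
report_records = [("Sri", 1), ("Vikram", 2)]

# Count how many report records find a matching database row;
# this count plays the role of the "variable" in db_record_check.
matched = sum(1 for rec in report_records if rec in db_rows)
print(matched)
```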
Text check point: We can use this option to verify all manipulations in our application build (like Calculations).
* This check point is a user defined one in WinRunner. To create this check point, the test engineer is using the Get
Text option along with manual “if” conditions. We can use the Get Text option to capture front end object values into
variables. Test engineers are using if conditions to compare those variable values w.r.t customer requirements.

Expected output is input * 100.


From the previous diagram, the test engineers are using Get Text option to capture front end objects values
into variables. This option consists of two sub options
* From object or window.
* From screen area.
a) From Object or window: We can use this option to capture front end objects values.
Navigation: Insert menu, Get Text, from object or window, select required object.
Syntax: obj_get_text (“Object name”, variable); (by default TEXT is variable)
* The above function is equal to edit_get_info (“Object name”, “value”, variable);
Automation Program:
set_window (“Sample”, 2);
obj_get_text (“Input”, x);
obj_get_text (“Output”, y);
if (y == x*100)
printf (“Test is pass”);
else
printf (“Test is Fail”);
b) From screen area: We can use this option to capture selected screen area values.
Navigation: Insert menu, Get Text, from screen area, select required value region, right click to release.
Syntax: obj_get_text (“Screen area name”, variable, x1, y1, x2, y2);
EX1: Manual expected is the total tickets sold in graph incremented by one after inserting a new order.
Build: Flight Reservation and Graph window.
Automation program:
set_window ("Flight Reservation", 3);
menu_select_item ("Analysis; Graphs...");
set_window("Graph", 16);
obj_get_text("GS_Drawing", x, 239, 222, 267, 237);
win_close ("Graph");
set_window ("Flight Reservation", 7);
menu_select_item ("File;New Order");
obj_type ("MSMaskWndClass","070707");
list_select_item ("Fly From:", "Denver");
list_select_item ("Fly To:", "London");
obj_mouse_click ("FLIGHT", 28, 36, LEFT);
set_window ("Flights Table", 1);
list_select_item ("Flight", "20259 DEN 07:12 AM LON 02:23 PM AA $112.20");
button_press ("OK");
set_window ("Flight Reservation", 7);
edit_set ("Name:", "srikanth");
button_press ("Insert Order");
set_window ("Flight Reservation", 29);
menu_select_item ("Analysis;Graphs...");
set_window("Graph", 5);
obj_get_text("GS_Drawing", y , 237, 221, 263, 235);
if(y == x+1)
tl_step("s1",0,"Test is pass");
else
tl_step("s1",1,"Test is fail");
EX2: Manual expected is Total = Price*No of tickets
Build: Flight Reservation Window
Automation program:
set_window ("Flight Reservation", 2);
menu_select_item ("File;Open Order...");
set_window ("Open Order", 1);
button_set ("Order No.", ON);
edit_set ("Edit", "1");
button_press ("OK");
set_window("Flight Reservation", 8);
obj_get_text("Tickets:", x); # 1
obj_get_text("Price:", y); # $312.00
y=substr(y,2,length(y)-1);
obj_get_text("Total:", z); # $312.00
z=substr(z,2,length(z)-1);
if(z==y*x)
tl_step("s1",0,"test is pass");
else
tl_step("s1",1,"test is fail");
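The substr calls in EX2 drop the leading “$” before multiplying; the same slice-and-compare can be sketched in Python (values are illustrative):

```python
def strip_dollar(s):
    # TSL: substr(y, 2, length(y)-1) drops the first character, the "$".
    return float(s[1:])

# Illustrative field values as they appear on the screen.
tickets = 2
price = "$156.00"
total = "$312.00"

if strip_dollar(total) == strip_dollar(price) * tickets:
    print("test is pass")
else:
    print("test is fail")
```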
EX3: Manual expected is Sum = File1 + File2 sizes.
Build:

Automation program:
set_window (“Audit”, 2);
obj_get_text (“File1”, x);
x = substr (x, 1, length(x)-2);
obj_get_text (“File2”, y);
y = substr (y, 1, length(y)-2);
obj_get_text (“Sum”, s);
s = substr (s, 1, length(s)-2);
if (s == x+y)
printf (“Test is pass”);
else
printf (“Test is Fail”);
EX4: Manual expected is Total = price * quantity
Build:

Automation program:
set_window (“Shopping”, 2);
obj_get_text (“Quantity”, q);
obj_get_text (“Price”, p);
p = substr (p, 4, length(p)-5);
obj_get_text (“Total”, t);
t = substr (t, 4, length(t)-5);
if (t == p*q)
printf (“Test is pass”);
else
printf (“Test is Fail”);
* tl_step (); we can use this to create tester defined Pass or Fail message in test results.
Syntax: tl_step (“Step name”, 0/1, “message”);
0 for pass
1 for fail
*Data Driven Testing: The re-execution of a test with multiple test data is called DDT or Iterative testing or Re-
testing. WinRunner8.0 is supporting 4 types of DDT.
*DDT with test data from Key Board.
*DDT with test data from a Flat File. (.txt files)
*DDT with test data from Front end Objects.
*DDT with test data from XL Sheets.
1. From Key Board: Sometimes the test engineers are re-executing their test cases with multiple test data
through Dynamic submission using Keyboard.

To get required Test data from keyboard, the test engineers are using below TSL statement in corresponding
automation program.
create_input_dialog (“Message”);
EX1: Manual expected is Delete order button is enabled after open an order.
Build: Flight Reservation
Test data: Ten unique order numbers
Automation program:
for (i=1; i<=10; i++)
{
x=create_input_dialog (“Enter the order number”);
set_window (“Flight Reservation”, 1);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 1);
button_set (“Order no”, ON);
edit_set (“Edit”, x); #Parameterization.
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info ("Delete Order", "enabled", 1);
}
EX2: Manual expected is Result = input1 * input2.
Build:Multiply

Test data: Ten pairs of inputs


Automation program:
for (i=1; i<=10; i++)
{
x=create_input_dialog (“Enter input1”);
y=create_input_dialog (“Enter input2”);
set_window (“Multiply”, 1);
edit_set (“Input1”, x);
edit_set (“Input2”, y);
# x and y are parameterized: the sample input is replaced by multiple inputs.
button_press (“OK”);
obj_get_text (“Result”, z);
if (z == x*y)
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
EX3: Manual expected is Total = price * quantity.
Build:

Test data: Ten pairs of item numbers and quantities.


Automation program:
for (i=1; i<=10; i++)
{
x=create_input_dialog (“Enter item no”);
y=create_input_dialog (“Enter quantity”);
set_window (“Shopping”, 1);
edit_set (“Itemno”, x);
edit_set (“Quantity”, y);
# x and y are parameterized: the sample input is replaced by multiple inputs.
button_press (“OK”);
obj_get_text (“Price”, p);
p=substr (p, 4, length (p)-3);
obj_get_text (“Total”, t);
t=substr (t, 4, length (t)-3);
if (t == p*y)
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
EX4: Manual expected is OK button is enabled after entering userid and password.
Build:Login

Test data: Ten pairs of userid and password.


Automation program:
for (i=1; i<=10; i++)
{
x=create_input_dialog (“Enter userid”);
y=create_input_dialog (“Enter password”);
set_window (“Login”, 1);
edit_set (“Userid”, x);
password_edit_set (“Password”, password_encrypt(y));
button_check_info (“OK”, “enabled”, 1);
button_press (“Clear”);
}
2. From Flat files: Sometimes the test engineers re-execute tests depending on multiple test data in a
flat file. In this method, the test engineer does not interact with the tool while the test is running.

In the above approach, the required test data comes from a flat file without test engineer interaction, so this
method is known as 24/7 testing.
a) file_open(); We can use this function to open a specified Flat file into RAM.
file_open ("Path of file", FO_MODE_READ or FO_MODE_WRITE or FO_MODE_APPEND);
b) file_getline(); We can use this function to read a line of text from opened file.
file_getline ("Path of file", variable);
In above syntax, the file pointer is automatically incremented.
c) file_close(); We can use this function to swap an opened file out of RAM.
file_close(“Path of file”);
EX1: Manual expected is Insert order button disabled after open an existing order.
Build: Flight Reservation window
Test data: C:\\Documents and Settings\\Administration\\My Documents\\b8amdynamic.txt

Automation program:
f= “C:\\Documents and Settings\\Administration\\My Documents\\b8amdynamic.txt”;
file_open(f,FO_MODE_READ);
while(file_getline(f,x)!=E_FILE_EOF)
{
set_window (“Flight Reservation”, 1);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 1);
button_set (“Order no”, ON);
edit_set (“Edit”, x); #Parameterization.
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info ("Insert Order", "enabled", 0);
}
file_close(f);
EX2: Manual expected is Result = Input1 * Input2.
Build:

Test data: C:\\Documents and Settings\\Administration\\My Documents\\result.txt

Automation program:
f= “C:\\Documents and Settings\\Administration\\My Documents\\result.txt”;
file_open(f,FO_MODE_READ);
while(file_getline(f,x)!=E_FILE_EOF)
{
split(x, y, " ");
set_window (“Multiply”, 1);
edit_set (“Input1”, y[1]);
edit_set (“Input2”, y[2]);
button_press (“OK”);
obj_get_text (“Result”, z);
if (z == y[1]*y[2])
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
file_close(f);
EX3: Manual test expected is Total = Price * Quantity.
Build:Shopping

Test data: C:\\Documents and Settings\\Administration\\My Documents\\price.txt


Automation program:
f= “C:\\Documents and Settings\\Administration\\My Documents\\price.txt”;
file_open(f,FO_MODE_READ);
while(file_getline(f,x)!=E_FILE_EOF)
{
split(x, y, " ");
set_window (“Shopping”, 1);
edit_set (“Itemno”, y[3]);
edit_set (“Quantity”, y[6]);
button_press (“OK”);
obj_get_text (“Price”, p);
p=substr (p, 2, length (p)-1);
obj_get_text (“Total”, t);
t=substr (t, 2, length (t)-1);
if (t == p*y[6])
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
file_close(f);
EX4: Manual expected is OK button enabled after entering Userid and Password.
Build:

Test data: C:\\Documents and Settings\\Administration\\My Documents\\login.txt

Automation program:
f= “C:\\Documents and Settings\\Administration\\My Documents\\login.txt”;
file_open(f,FO_MODE_READ);
while(file_getline(f,x)!=E_FILE_EOF)
{
split(x, y, " ");
split(y[1],z,”@”);
set_window (“Login”, 1);
edit_set (“Userid”, z[1]);
password_edit_set (“Password”, password_encrypt(y[2]));
button_check_info (“OK”, “enabled”, 1);
button_press (“Clear”);
}
file_close(f);
d) file_compare(); WinRunner provides this function to compare the content of two files.
file_compare("Path of file1", "Path of file2", "Folder name");
In the above syntax, the Folder name is optional; it is used for file comparison and file concatenation in a new folder.
WinRunner creates a new folder with that name to store both compared files.
e) file_printf(); We can use this function to write a line of text into an opened file in WRITE or APPEND
mode.
file_printf("Path of file", "Format", values or variables);
In the above syntax, the format specifies the type of each value to write in the specified fields.
EX: To write a=xxxx and b=xxxx into a file:
file_printf("Path of file", "a=%d and b=%d", a, b);
%d for int, %f for real/float, %c for char, %s for string.
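A combined sketch of file_open(), file_printf() and file_close() in append mode (the path is a placeholder, not from the original material):
f = "C:\\qa\\results.txt"; # hypothetical log file
file_open(f, FO_MODE_APPEND); # open for appending
a = 10;
b = 20;
file_printf(f, "a=%d and b=%d", a, b); # writes: a=10 and b=20
file_close(f);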
3. From Front end objects: Sometimes the test engineers re-execute tests depending on the values of front-end data
objects such as list boxes, table grids, menus, ActiveX controls and data windows.

EX1: Manual expected is selected City name in Flyfrom does not appear in Flyto list.
Build:Journey

Test data: All existing city names in Flyfrom and Flyto.


Automation program:
set_window (“Journey”, 5);
list_get_info ("Flyfrom", "count", n);
for(i=0;i<n;i++)
{
list_get_item(“Flyfrom”,i,x);
list_select_item(“Flyfrom”,x);
if(list_select_item("Flyto",x) == E_OK)
tl_step("S1",1,"City appears");
else
tl_step("S1",0,"City does not appear");
}
EX2: Manual expected is the selected name appears in the Output message.
Build:Sample
Test data: All existing names .
Automation program:
set_window (“Sample”, 5);
list_get_info (“Name”, “count”, n);
for(i=0;i<n;i++)
{
list_get_item(“Name”,i,x);
list_select_item(“Name”,x);
button_press(“Display”);
obj_get_text(“Message”,y);
split(y, z, " ");
if(x == z[4])
tl_step("S1",0,"PASS");
else
tl_step("S1",1,"FAIL");
}
EX3: Manual expected is: if the Life insurance type is "A", the Age object is focused; if the type is "B", the
Gender object is focused; otherwise the Qualification object is focused.
Build:Insurance

Test data: All existing Insurance types.


Automation program:
set_window (“Insurance”, 5);
list_get_info (“Type”, “count”, n);
for(i=0;i<n;i++)
{
list_get_item(“Type”,i,x);
list_select_item(“Type”,x);
if(x == "A")
edit_check_info("Age","focused",1);
else if(x == "B")
list_check_info("Gender","focused",1);
else
list_check_info("Qualification","focused",1);
}
EX4: Manual expected is: if Basic >= 15000 then Commission is 10% of basic; if Basic < 15000 and >= 8000 then
Commission is 5% of basic; if Basic < 8000 then Commission is 100/-.
Build:Employee
Test data: All existing employee numbers
Test setup: All valid employees' basic salaries are already fed into the database.
Automation program:
set_window (“Employee”, 5);
list_get_info (“Empno”, “count”, n);
for(i=0;i<n;i++)
{
list_get_item(“Empno”,i,x);
list_select_item(“Empno”,x);
button_press (“OK”);
obj_get_text (“Basic”, y);
y=substr (y, 4, length (y)-3);
obj_get_text (“Commission”, z);
z=substr (z, 4, length (z)-3);
if (y>=15000 && z==y*10/100)
tl_step("s1",0,"Pass");
else if (y<15000 && y>=8000 && z==y*5/100)
tl_step("s1",0,"Pass");
else if (y<8000 && z==100)
tl_step("s1",0,"Pass");
else
tl_step("s1",1,"Fail");
}
EX5: Manual expected is Total = price * quantity
Build:Shopping

Test data: All existing rows in bill table


Automation program:
set_window (“Shopping”, 5);
tbl_get_rows_count(“Bill”,n);
for(i=1;i<=n;i++)
{
tbl_get_cell_data(“Bill”,”#”&i,”#1”,q);
tbl_get_cell_data(“Bill”,”#”&i,”#2”,p);
p=substr(p,2,length(p)-1);
tbl_get_cell_data(“Bill”,”#”&i,”#3”,tot);
tot=substr(tot,2,length(tot)-1);
if(tot == p*q)
tl_step("s1",0,"Pass");
else
tl_step("s1",1,"Fail");
}
EX6: Manual expected is Sum = sum of all values in the Size column.
Build:Audit

Test data: All existing Size values


Automation program:
set_window (“Audit”, 5);
tbl_get_rows_count(“Files”,n);
s=0;
for(i=1;i<=n;i++)
{
tbl_get_cell_data(“Files”,”#”&i,”#3”,x);
x=substr(x,1,length(x)-2);
s=s+x;
}
obj_get_text(“Sum”,y);
y=substr(y,1,length(y)-2);
if(y == s)
tl_step("s1",0,"Pass");
else
tl_step("s1",1,"Fail");
*4. From XL Sheets: In general, most test engineers execute automation programs with
multiple test data on the application build. In this scenario, the test engineers maintain the required test data in an
XL sheet. There are 2 ways to fill the XL sheet with test data:
*import data from database.
*manual entry of data.

From the above model, the test engineers use the XL sheet content as test data in their automation
programs. To manipulate XL sheet content as test data, test engineers use the below TSL functions.
a)ddt_open();We can use this function to open an XL sheet into RAM.
ddt_open(“Path of the XL sheet”,DDT_MODE_READ/READWRITE);
b) ddt_get_row_count(): We can use this function to find the number of rows in an XL sheet.
ddt_get_row_count("Path of XL sheet", variable);
In the above syntax, the variable receives the number of rows in the XL sheet excluding the header.
c)ddt_set_row():We can use this function to point a row in an XL sheet.
ddt_set_row(“Path of XL sheet”,rownumber);
d)ddt_val():We can use this function to capture specified XL sheet column value.
ddt_val(“Path of XL sheet”,columnname);
e) ddt_close(): to swap an opened XL sheet out of RAM, we can use this function.
ddt_close(“Path of XL sheet”);
NOTE: In this method, the WinRunner is generating DDT script implicitly.
Navigation:
*open WinRunner and application build.
*create an automation program for corresponding manual test case.
*select Table menu.
*And select Data Driver Wizard option.
*In wizard click Next.
*browse the path of XL sheet(Default is given by tool).
*specify variable name to store path of XL sheet.
*select Import data from database option.
*specify connect to database using ODBC or data junction
*select Specify sql statement option
*click Next
*click create to select connectivity.
*write select statement to import required data from database into XL sheet.
*click Next.
*parameterize that imported data in required place of automation program.
*click Next.
*say Yes or No to show XL sheet.
*now run the project and analyze the results manually.
EX1:Manual expected is Delete order button is enabled after open an existing record.
Build:Flight Reservation window.
Test data: Imported existing order numbers from database, which are available in an XL sheet.
Automation program:
table=”default.xls”;
rc=ddt_open(table,DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“Cannot open the table”);
ddt_update_from_db(table,”msqr1.sql”,count);
ddt_save(table);
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
set_window (“Flight Reservation”, 1);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 1);
button_set (“Order no”, ON);
edit_set (“Edit”,ddt_val(table,”order_number”));
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info ("Delete Order", "enabled", 1);
}
ddt_close(table);
EX2:Manual expected is Insert order button is disabled after open an existing record.
Build:Flight Reservation window.
Test data:Manually entered valid Order numbers, which are available in XL sheet.
Automation program:
table=”default.xls”;
rc=ddt_open(table,DDT_MODE_READ);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“Cannot open the table”);
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
set_window (“Flight Reservation”, 1);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 1);
button_set (“Order no”, ON);
edit_set (“Edit”,ddt_val(table,”input”));
button_press (“OK”);
set_window (“Flight Reservation”, 1);
button_check_info ("Insert Order", "enabled", 0);
}
ddt_close(table);
EX3:Manual expected is Result=input1*Input2.
Build:Multiply

Test data:Manually entered values


default.xls
Data1    Data2
x        x
x        x
x        x
...      ...
Automation program:
table=”default.xls”;
rc=ddt_open(table,DDT_MODE_READ);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“Cannot open the data table”);
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
set_window (“Multiply”, 1);
edit_set (“Input1”,ddt_val(table,”Data1”));
edit_set (“Input2”,ddt_val(table,”Data2”));
button_press (“OK”);
obj_get_text (“Result”,r);
if (r == ddt_val(table,"Data1")*ddt_val(table,"Data2"))
tl_step ("s1", 0, "Pass");
else
tl_step ("s1", 1, "Fail");
}
ddt_close(table);
EX4:Manual expected is Result=input1+input2.
Data Table:
Input1   Input2   Result
x        x
x        x
x        x
...      ...
Automation program:
table=”default.xls”;
rc=ddt_open(table,DDT_MODE_READWRITE);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause(“Cannot open the data table”);
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
x=ddt_val(table,"Input1");
y=ddt_val(table,"Input2");
z=x+y;
ddt_set_val(table,"Result",z);
ddt_save(table);
}
ddt_close(table);
f)ddt_update_from_db();We can use this function to perform changes in XL sheet w.r.t changes in
database content.
ddt_update_from_db(“path of XL sheet”,”query file”,variable);
In the above syntax the variable specifies the number of modifications.
g)ddt_set_val();We can use this function to write a value to XL sheet.
ddt_set_val(“path of XL sheet”,”column name”,value/variable);
h)ddt_save ();We can use this function to save existing modifications in XL sheet.
ddt_save(“path of XL sheet”);
Silent mode: It is a run-time setting. During test execution, the test engineers use this setting to continue test
execution without any interruption, whether a checkpoint passes or fails.
Navigation:
*Tools menu.
*General Options option.
*select Run tab.
*select Run in batch mode option.
NOTE: The test engineers use the Silent mode concept for DDT through flat files, front-end objects and XL
sheets. It is not used for DDT from the keyboard, which requires user interaction.
Synchronization point: Sometimes application build operations take a valid delay to return an output or
outcome. When testing this type of operation with WinRunner, test engineers insert a synchronization point
to define the time mapping between the build and the tool.
a)wait();This function defines a fixed waiting time.
wait(time in seconds);
b)for object/window property:We can use this option to synchronize tool and build depending on
properties of indicator objects like Status bar,Progress bar etc.
100% color filled then process is completed and it is in Enabled position.
Less than 100% color filling then process is incomplete and it is in Disabled position.
Navigation:
*select position in script.
*select Insert menu.
*select Synchronization point option.
*select for object/window property.
*select process completion indicator object.
*select Enabled property as 1.
*specify maximum time to wait.
*click Paste.
obj_wait_info("progress bar name","enabled",1,maximum time to wait);
c)for object/window bitmap:Sometimes the test engineers are synchronizing tool and build depending on
process completion indicator images.
Navigation:
*select position in script.
*select Insert menu.
*select Synchronization point option.
*select for object/window bitmap sub option.
*select indicator image object(from project which indicates process completion).
*specify maximum time to wait in script.
obj_wait_bitmap(“Image object name”,”Image file name”,time to wait);
d)for screen area bitmap:Sometimes test engineers are using a part of image to synchronize tool and build
while running test.
Navigation:
*select position in script.
*select Insert menu.
*select Synchronization point option.
*select for screen area bitmap sub option.
*select indicator image area(from project which indicates process completion).
*right-click to release.
*specify maximum time to wait in script.
obj_wait_bitmap(“Image object name”,”Image file name”,time to wait,x,y,width,height);
*e)Change Runtime settings:Sometimes our application build is not providing process completion
indicator objects and images.To synchronize this type of build and tool,the test engineers are depending on Runtime
settings.
Navigation:
*select Tools menu.
*general options.
*select Run tab.
*in it select Settings option.
*increase Timeout to reasonable time in milliseconds.
*click Apply and Ok.
NOTE: By default, WinRunner 8.0 maintains 10000 ms as the timeout. If application operations take
more than 10 seconds, test engineers use any one of the above 5 concepts to synchronize the tool and the
build.
Function Generator:It is a library of TSL functions.Test engineers are using this library to search unknown TSL
functions.
Navigation:
*select position in script.
*select Insert menu.
*Function option is selected.
*from function generator sub option.
*select category(like File functions).
*select TSL functions depending of description.
*fill arguments.
*click Paste.
EX1:Search TSL function to capture selected part of object value(selected text in textbox input).
edit_get_selection(“Object name”,variable);
EX2:Search TSL function to verify the existence of window on the desktop.
win_exists(“Window name”,time);
In the above syntax, time specifies the delay before the existence check. By default this time is zero seconds. If the
specified window exists on the desktop this function returns E_OK; otherwise it returns E_NOT_FOUND.
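For instance, win_exists() can guard a step that depends on a window being open (a sketch reusing the Flight Reservation window from the earlier examples):
if (win_exists("Flight Reservation", 5) == E_OK)
set_window("Flight Reservation", 1);
else
tl_step("s1", 1, "Flight Reservation window not found");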
EX3:Search TSL function to open project while test.
EX4:Search TSL function to print current test name.
EX5:Search TSL function to print system date.
EX6:Search TSL function to print parent path of WinRunner.
Administration of WinRunner
While using the WinRunner tool for functional test automation, the test engineers administer
WinRunner to achieve completeness and correctness in automation test creation.
1.WinRunner Frame Work:

Step1:start recording to record build operations and insert check points.


Step2: identification or recognition of the operated object in the build.
Step3: script generation w.r.t. that object operation in TSL.
Step4: search for the required reference of the recognized object in the GUI Map.
Step5: catch the corresponding object in the build depending on the identified reference in the GUI Map.
The steps 1,2 and 3 are during recording and the steps 4,5 are during running the test.
To open GUI Map the test engineers are following this way
*Tools menu.
*GUI Map editor.
If the GUI Map is empty, WinRunner cannot run even existing programs. For this
reason, test engineers protect their automation programs together with the corresponding GUI Map
references. There are 2 modes to save GUI Map references.
a) Global GUI Map file: In this mode, WinRunner maintains the identification or recognition
references of objects and windows in a common file called the Global GUI Map file.

In this mode, the WinRunner tool maintains a fixed buffer to store unsaved GUI Map references.
*Tool menu.
*GUI Map editor.
*View sub menu.
*GUI files.
*L0<temporary>, which is a local buffer.
b)Per Test mode:In this mode,the WinRunner is creating references to objects and windows as separate for
every test.

By default WinRunner8.0 is maintaining Global GUI Map file mode to prevent repetition in references of
objects and windows.To select Per test mode,the test engineers are following below navigation.
*Tools menu.
*General options.
*General tab.
*change GUI Map file mode to GUI Map file per test.
*click Apply and Ok.
2. Changes in references: Sometimes the test engineers change the references of dynamic objects and
windows. Dynamic objects and windows change their label names during operation. To recognize this type of
object and window, test engineers modify the GUI Map references of the corresponding objects and
windows.
Navigation:Tools menu->GUI Map editor->select Dynamic object or window reference->click Modify->in Modify
window->add wild card characters or regular expressions(like ! and *) to label of that object in Physical
description->click OK.
EX1:logical name:Fax order no.1
{
class: window,
label: ”Fax order no.1”
}
This is actual reference in GUI Map editor.
logical name:Fax order no.1
{
class: window,
label: ”!Fax order no.*”
}
This is modified one by placing ! and *.
EX2: A button whose label toggles between Start and Stop is called a toggle object.
logical name: Start logical name: Start
{ {
class: push-button, modified to -> class: push-button,
label: “Start” label: “![S][t][ao][rp][a-z]*”
} }
3. GUI Map configuration: Sometimes application build screens maintain more than one
similar object. To distinguish these similar objects, the test engineers enhance the physical description of those
similar objects.

logical name: Ok logical name: Ok_1


{ {
class: push-button, class: push-button,
label: “Ok” label: “Ok”
} }
In the above example, the two objects' labels are the same and their physical descriptions are the same. To distinguish these
objects while running tests, the test engineers enhance the physical description of the objects using a 3rd property
value such as MSW_id (Microsoft Windows identity).
Navigation: Tools menu->GUI Map configuration->select object type->double click it->select obligatory and
optional properties->click Ok.
NOTE: WinRunner can catch an object in the application build in two ways depending on the physical description of
the reference, such as Location and Index.
logical name: Ok logical name: Ok_1
{ {
class: push-button, class: push-button,
label: “Ok”, label: “Ok”,
msw_id: xxxx msw_id: xxxyz
} }
4. Virtual object wizard: To convert non-standard objects into standard objects, test engineers use this wizard.
Navigation: Tools menu->Virtual Object Wizard->click Next->select the standard class or type depending on the nature
of the non-standard object->Next->mark the region of the non-standard object->right-click to release->Next->specify a logical
name for that new reference->Next->say Yes or No to create more virtual objects (select No if the project does not have
more than one).
In the project, obj_mouse_click("Flight",40,23,LEFT); is changed to button_press("Flight");
5. Selected applications: To restrict test creation and execution to specific applications, we can use the Selected
applications option in settings (for example, so that chatting with a lead in another window is not recorded along with the test).
Navigation: Tools menu->General options->Record tab->Selected applications sub option->select "Record only on
selected applications" option->say Yes or No to conduct recording on the Start menu and Windows Explorer, which is
optional->click the blue shade to browse the path of the selected application->click Apply and OK.
6. Start script: During WinRunner launch, a default program runs automatically. This program is
called the Start Up Script or Start Up Program. The start-up script creates the required setup before test creation
and execution begin.
Navigation: Tools menu->General options->General tab->"Start up" option->browse the path of the start-up test file->click Apply and OK.
The browsed file's script runs automatically after WinRunner launches.
7. Description Programming: WinRunner 8.0 allows us to generate script without recording. This manual
generation of script is called Description Programming.
Record or run time:

Descriptive program:

EX:TSL statement:button_press(“OK”);
GUI Map reference:logical name:OK
{
Class:push-button,
Label:”OK”
}
It is converted into Description programming as
TSL statement: button_press("{class: push-button, label: \"OK\"}");
User defined functions
For code re-usability, test engineers use the user-defined functions concept. Every user-defined function
represents a re-usable operation in the application build w.r.t. testing.
EX: All are automation programs.
*A function contains only operations or recorded statements.
*Tests contain operations plus the checkpoints to test.
Syntax: public function functionname(in/out arg1,…….)
{
Body of function re-usable for other tests
}
EX1:public function add(in a,in b,out c)
{
c=a+b;
}
calling test:
x=10;
y=20;
add(x,y,z); #x to a,y to b and z from c.
printf(z);
EX2:public function add(in a,inout b)
{
b=a+b;
}
calling test:
x=10;
y=20;
add(x,y); #x to a and y is sent to b and get back to y.
printf(y);
EX3:public function login(in x,in y)
{
set_window(“Login”,2);
edit_set(“Agent name”,x);
password_edit_set(“Password”,password_encrypt(y));
button_press(“OK”);
}
calling test:
login(“sri”,”mercury”);
Case Study:
User Defined Functions.
public function login(in x,in y)
{
set_window(“Login”,2);
edit_set(“Agent name”,x);
password_edit_set(“Password”,password_encrypt(y));
button_press(“OK”);
}
public function open(in x)
{
set_window (“Flight Reservation”, 2);
menu_select_item (“File; Open order ….”);
set_window (“Open Order”, 4);
button_set (“Order no”, ON);
edit_set (“Edit”,x);
button_press (“OK”);
}
Calling test:Check Update order button
login(“sri”,”mercury”);
set_window(“Flight Reservation”,2);
button_check_info(“Update order”,”enabled”,0);
open(5);
set_window(“Flight Reservation”,1);
button_check_info(“Update order”,”enabled”,0);
button_set(“Business”,ON);
button_check_info(“Update order”,”enabled”,1);
Calling test2: Check Delete order button
login(“sri”,”mercury”);
set_window(“Flight Reservation”,2);
button_check_info(“Delete order”,”enabled”,0);
open(2);
set_window(“Flight Reservation”,1);
button_check_info(“Delete order”,”enabled”,1);
Compiled Module
After creating UDFs in TSL, test engineers make those UDFs into executable form. An executable form
of a UDF file is called a Compiled Module. To create Compiled Modules we can follow the below navigation.
*open WinRunner and Build.
*record repeatable operations once.
*make that repeatable operations as user defined functions with unique function names.
*save those functions.
*open File menu.
*select Test properties option.
*change test type to Compiled Module.
*click Apply and OK.
*and then execute once that saved file.(this file may contain more functions)
*write load("filename",0,1); in the start-up script or program of WinRunner.
The load(); function is used to load a Compiled Module file into RAM.
load("Compiled Module filename", 0 or 1, 0 or 1);
In the above syntax, in the second argument 0 indicates a user-defined module and 1 indicates a system-defined module loaded
automatically. In the third argument, 0 indicates the function body appears while running and 1 indicates the function body is
hidden while running.
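A minimal start-up script entry might look like the following sketch (the module path and function names are placeholders, not from the original material):
# load a user-defined compiled module (2nd argument 0) and hide its body while running (3rd argument 1)
load("C:\\qa\\modules\\flight_functions", 0, 1);
# functions defined in that module, such as login() and open(), are now callable from any test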
Exception Handling or Recovery Manager
To recover from abnormal situations while running test,test engineers are using 3 types of recovery
techniques.
*TSL exceptions.
*Object exceptions.
*Popup exceptions.
1. TSL exceptions: These exceptions are raised when a TSL statement returns a specified return code.

Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as TSL.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select TSL function name with return code to define problem.
*click Next.
*enter Recovery function name.
*click Define recovery function button.
*select Paste to paste it to current test option.
*click OK and Next and Finish.
*define function body to recover from problem.
*make that function as Compiled Module and write load statement of that function in Start up script.
EX: TSL Function:<<any function>> option in that wizard is selected.
Error code:E_ANY_ERROR.
Handler function:abc.
public function abc(in func,in rc)
{
printf(func&”returns”&rc); #the & represents concatenation.
}
2. Object exceptions: These exceptions are raised when a specified object property is equal to our expected value.

Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as Object.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select object property with value to define problem.
*click Next.
*enter Recovery function name.
*click Define recovery function button.
*select Paste to paste it to current test option.
*click OK and Next and Finish.
*define recovery function body.
*make that function as Compiled Module and write load statement of that function in Start up script.
EX: Window:Flight Reservation.
Object: Insert order button.
Property:enabled.
Value:1
Handler function:pqr
public function pqr(in win,in obj,in attr,in val)
{
printf(“enabled”);
}
3. Popup Window exception: These exceptions are raised when a specified unwanted window appears in the build.
Navigation:
*Tools menu.
*select Recovery Manager.
*click New button.
*select Exception event type as Popup.
*click Next.
*enter exception name with description for future reference.
*click Next.
*select that un wanted window.
*click Next.
*specify Recovery operation to skip that window.
*click button,close window and execute a function.
*click Next and Finish.
NOTE:If our operation is a function,then test engineers are creating that recovery function as a Compiled Module
and write load statement in Start up script.
EX: Window:Flight Reservation
Handler action:fname
public function fname(in window)
{
set_window(“Flight Reservation”,2);
button_press("OK");
set_window(“Open order”,2);
edit_set(“Edit”,”1”); #default order no(1) is given by tester
button_press(“OK”);
}
This exception occurs when the order is not there while running Open order n times.
NOTE: WinRunner 8.0 allows you to administrate existing exceptions.
*exception_off("exception name"); is used to disable an exception.
*exception_on("exception name"); is used to enable an exception.
*exception_off_all(); is used to disable all exceptions.
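For example, an exception can be disabled only around the step that is expected to trigger it, then re-enabled afterwards (the exception name here is hypothetical):
exception_off("open_order_popup"); # hypothetical exception name
set_window("Flight Reservation", 1);
menu_select_item("File; Open order ....");
exception_on("open_order_popup"); # re-enable after the risky step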
NOTE: In this exception handling and recovery management concept, the test engineers predict possible abnormal
situations and the solutions to overcome them, depending on available documents, past experience, discussions with others
and browsing the application build more than once.
Web Test Option
WinRunner 8.0 also supports functional test automation on web applications such as HTML pages. It does
not support XML web pages. To conduct testing on XML web pages, the test engineers follow manual test
execution or use the QTP tool.
During functional testing on a website, test engineers concentrate on the below manual and automation
coverages.
*behavioral coverage(changes in properties of web objects).
*input domain coverage(the type and size of web input objects).
*error handling coverage(the prevention of wrong operation).
*manipulations coverage(the correctness of output or outcome).
*order of functionalities coverage(the arrangement of functionalities or modules in a web site).
*backend coverage(the impact of web page operation on backend tables).
*links coverage or URL coverage(the execution and existence of every link in a web page).
*content coverage(the completeness and correctness of existing text in a web page).
NOTE:The last 2 coverages are applicable for Web applications functionality testing.
NOTE:One Web site consists of more than one interlinked web pages.
NOTE:To open web applications or sites,the browser is manditory.
NOTE:During web applications development and testing,the development and testing team are using
Off-Line mode.There are two types of Off-Line modes such as Local host and local network.
1. Links coverage: It is a new coverage in web functionality testing. During this test, the test engineers validate every link in every web page of a website in terms of link execution and link existence.
"Link execution" means the correctness of the next page after clicking a particular link.
"Link existence" means the place of the link in a web page, i.e., whether it is in the correct position and order or not.
To automate this links coverage, test engineers use the Web Test option in the Add-In manager of WinRunner.
After launching WinRunner with the WebTest option, test engineers use the GUI check point to automate every link object.
a) Text link: Insert menu->GUI check point->for object/window->select the testable text link->select the URL and BrokenLink properties with expected values->click OK.
obj_check_gui("Link text","checklist name.ckl","expected values file",time);
b) Image link: Insert menu->GUI check point->for object/window->select the testable image link->select the URL and BrokenLink properties with expected values->click OK.
obj_check_gui("Image file name","checklist name.ckl","expected values file",time);
c) Page/Frame: WinRunner 8.0 allows us to create a check point on all page-level links. The flow is Insert menu->GUI check point->for object/window->select one link object in the testable web page->change your selection from the link object to Page->specify expected values for the URL and BrokenLink properties->click OK.
win_check_gui("Web page name","checklist name.ckl","expected values file",time);
NOTE: In general, the test engineers use the GUI check point at page level for links coverage.
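The two checks themselves are tool-independent. As a minimal sketch in plain Python (hypothetical helper names, not WinRunner syntax), link execution is approximated by each link's target URL and link existence by its position in the page's link list:

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects (link text, href) pairs in page order."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:       # only collect text inside <a>...</a>
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append(("".join(self._text).strip(), self._href))
            self._href = None

def check_links(html, expected):
    """Link execution: each link's URL matches; link existence: order matches."""
    parser = LinkCollector()
    parser.feed(html)
    return parser.links == expected

page = '<a href="/home">Home</a> <a href="/orders">Orders</a>'
print(check_links(page, [("Home", "/home"), ("Orders", "/orders")]))  # True
print(check_links(page, [("Orders", "/orders"), ("Home", "/home")]))  # False (wrong order)
```

The second call fails because the links exist but are not in the expected position, which is exactly the "link existence" check described above.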
2. Content coverage: It is also a new coverage in web functionality testing. During this test, the test engineers check spelling, grammar, missing words, missing lines, etc. in the content of a web page. To automate this coverage using WinRunner, test engineers use the Text check point. This check point consists of 4 sub-options when you select the Web test option in the Add-In manager.
*from object/window.
*from screen area.
*from selection (web only).
*web text check point.
a) from object/window: It is used to capture a specified web object value (like capturing a value from a text box).
Navigation: Insert menu->get text check point->from object/window->select the testable object in the web page to capture that object's value into a variable.
web_obj_get_text("Object name","#line row no","#line column no",variable,"text before","text after",time);
In the above syntax, the row and column numbers indicate the line number within a text area object's content. A text box or edit box consists of only one line of text, i.e., #0 and #0. "Text before" and "Text after" indicate the unwanted content around the required text in a text box or text area.
EX: Expected: Amount = Sum of Totals.
Automation program:
s=0;
set_window("Shopping",5);
tbl_get_rows_count("Bill",n);
for(i=1;i<n;i++)
{
tbl_get_cell_data("Bill","#"&i,"#4",tot);
tot=substr(tot,2,length(tot)-1);
s=s+tot;
}
web_obj_get_text("Amount","#0","#0",amt,"$","",1);
if(amt==s)
tl_step("s1",0,"Pass");
else
tl_step("s1",1,"Fail");
b) from screen area: We can use this option to capture a selected area value, but this option is not applicable to web pages because web pages are not coordinate-based.
c) from selection (web only): We can use this option to capture a specific area value of a web page.
Navigation: Insert menu->get text check point->from selection->select the web page content and right-click to capture it->specify Text before and Text after to mark the required text->click OK.
web_obj_get_text("Object name","#line row no","#line column no",variable,"text before","text after",time);
d) web text check point: We can use this option to compare expected text with web page content.
Navigation: Insert menu->get text check point->web text check point->mark the testable text in the web page using Text before and Text after.
web_frame_text_exists("Web page name",expected text,"text before","text after");
EX: set_window("Text.txt",1);
obj_get_text("Edit",x); # x is the expected value.
web_frame_text_exists("Google Accounts",x,"learn more","after");
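The idea behind the web text check point — does the expected text occur between a "text before" and a "text after" marker — can be sketched in plain Python (an illustration of the check as described above, not WinRunner syntax):

```python
def text_exists(content, expected, before, after):
    """Return True if `expected` appears between the `before` and `after`
    markers in `content` (a rough analogue of web_frame_text_exists)."""
    start = content.find(before)
    if start == -1:
        return False                      # "text before" marker missing
    start += len(before)
    end = content.find(after, start)
    if end == -1:
        return False                      # "text after" marker missing
    return expected in content[start:end]

page = "Sign in. New user? learn more Create an account after signup."
print(text_exists(page, "Create an account", "learn more", "after"))  # True
print(text_exists(page, "Delete account", "learn more", "after"))     # False
```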
Web functions
a) web_link_click(); WinRunner uses this function to record a text link operation.
web_link_click("Link Text");
b) web_image_click(); WinRunner uses this function to record an image link operation.
web_image_click("Image file name",x,y);
c) web_browser_invoke(); We can use this function to open a web page while running tests in WinRunner.
web_browser_invoke(IE/NETSCAPE,"URL");
Batch Testing
"The sequential execution of more than one test case is called batch testing." Every test batch consists of a set of dependent test cases. In every test batch, the end state of one test case is the base state of the next test case. Test batches are also known as a Test Suite, Test Chain, Test Belt, Test Set, or Test Group.
EX: *each of the scenarios is dependent on the others.
Test scenario 1: check Login operation.
Test scenario 2: check Order open.
Test scenario 3: check Insert, Update, and Delete order buttons after opening an order.
To create test batches in WinRunner, test engineers use the call statement.
call testname(); for tests in the same folder, or call "path of test"(); for tests in a different folder.
Parameter Passing: Like programming languages, WinRunner also allows you to pass parameters between tests.
test1: test2:
------ ------
------ edit_set("Edit",x);
call test2(***); ------
In the above example, x is the parameter whose value is passed by the call statement. test1 is calling test2 with a parameter. *The number of parameters depends on the number of inputs required by test2. To create parameters for a called test like test2, we can use the following navigation.
*open the called test (like test2) in WinRunner.
*in the File menu,
*select the Test properties option.
*in it, select the Parameters tab.
*now click + (the add icon) to enter the parameter name and description.
*specify a Default value to use if the expected value is not supplied.
*click OK.
*now use those parameters in the required places of sample inputs, like edit_set("Edit",x);
*save that called test.
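The parameter-with-default behavior described above maps directly onto a function default in any language. A Python sketch (hypothetical names):

```python
def open_order(order_no=1):
    """Called test: `order_no` plays the role of the parameter x;
    the default value (1) is used when the caller passes nothing."""
    return f"opened order {order_no}"

# test1 calls test2 with a parameter, as in `call test2(...)`:
print(open_order(5))   # opened order 5
print(open_order())    # opened order 1  (default value used)
```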
Data Driven Batch Testing
The re-execution of more than one test in a batch with multiple test data is called DDBT. In this testing, the end state of the last test is the base state of the first test.
EX: Opening an order with order numbers from an XL sheet.
Data Table (one column, Input, holding the order numbers):
Input
x
x
x
Automation program:
test1: # check Login operation.
table="default.xls";
rc=ddt_open(table,DDT_MODE_READ);
if(rc!=E_OK && rc!=E_FILE_OPEN)
pause("Cannot open the data table");
ddt_get_row_count(table,n);
for(i=1;i<=n;i++)
{
ddt_set_row(table,i);
invoke_application("flight.exe","","",SW_SHOW);
set_window("Login",2);
edit_set("Agent name","sri");
password_edit_set("Password","encrypted value");
button_press("OK");
if(win_exists("Flight Reservation",0)==E_OK) # E_OK if the window is found, else E_NOT_FOUND.
call test2(ddt_val(table,"Input"));
else
pause("window not found");
}
ddt_close(table);
test2: # check Open an order
set_window("Flight Reservation",2);
menu_select_item("File;Open order...");
set_window("Open Order",4);
button_set("Order no",ON);
edit_set("Edit",x);
button_press("OK");
set_window("Flight Reservation",2);
obj_get_text("Name",t);
if(t=="")
pause("order not opened");
else
call test3();
test3: #check Insert,Update and Delete order buttons.
set_window (“Flight Reservation”, 2);
button_check_info(“Delete order”,”enabled”,1);
button_check_info(“Update order”,”enabled”,0);
button_check_info(“Insert order”,”enabled”,0);
win_close(“Flight Reservation”);
Transaction Point
We can use this option to calculate the execution time of a specific part of a program.
Navigation:
*select a position at the top of the required code.
*select the Insert menu.
*in it, select the Transaction option.
*select the Start transaction sub-option.
*enter a transaction name and click OK.
*now select a position at the bottom of the required code.
*again select the Insert menu and select the Transaction option.
*now select the End transaction option.
*enter the same transaction name declared above.
*click OK.
EX: In this automation program we place the transaction as
--------
- - - - - - -- -
start_transaction("name"); # start of the transaction
set_window("Login",2);
edit_set("Agent name","sri");
password_edit_set("Password","encrypted value");
button_press("OK");
end_transaction("name",PASS); # end of the transaction
-------
In the above automation program, with some check points, we calculate the time taken by WinRunner to execute the Login operation.
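The same measurement is just a timer around the section of interest. A plain Python sketch (the `login` operation here is a stand-in for the recorded steps, not real tool code):

```python
import time

def timed_transaction(name, operation):
    """Measure how long `operation` takes, like a start/end transaction pair."""
    start = time.perf_counter()            # start of the transaction
    result = operation()
    elapsed = time.perf_counter() - start  # end of the transaction
    print(f"transaction {name!r} took {elapsed:.4f}s")
    return result

def login():
    time.sleep(0.05)   # stands in for the recorded Login steps
    return "ok"

print(timed_transaction("login", login))  # ok (after printing the elapsed time)
```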
Debugging
To run our automation programs line by line, the test engineers use the Step option in the Debug menu, with F6 as the short key. To trace runtime errors in TSL programs, test engineers use this debugging concept.
Short/Soft Key Configuration
Sometimes the short keys of WinRunner also indicate operations in the build. For this reason, test engineers configure new short keys.
Navigation:
*select Start->Programs->WinRunner->Soft Key Configuration->select a new key to configure an operation in WinRunner->save it and exit.
Rapid Test Script Wizard (RTSW)
Sometimes test engineers use this option to create references to objects and windows before starting test creation. In general, WinRunner 8.0 also creates references for objects and windows through recording while creating automation tests.
Navigation:
*select the Insert menu.
*in it, select the RTS wizard option.
*click Next.
*show the application's main window.
*click Learn.
*click Next after completion of learning.
*click OK.
GUI Spy
Test engineers use this option to identify whether an object is recognizable or not.
Navigation:
*select the Tools menu.
*choose the GUI Spy option.
*click the Spy button and put the mouse on the doubtful object in the build.
*check the recognition.
*press left Ctrl+F3 to quit.
Quick Test Professional (QTP) 8.2
It is a functional testing tool developed by Mercury Interactive. It is derived from WinRunner and Astra QuickTest. WinRunner is a Mercury Interactive testing tool scripted in TSL, whereas Astra QuickTest was scripted in VBScript. From both the WinRunner and Astra QuickTest concepts, a new testing tool called QTP emerged, scripted in VBScript.
It converts our manual test cases into VBScript programs. QTP supports all technologies that WinRunner 8.0 supports, plus XML, Multimedia, SAP, PeopleSoft, and Oracle Apps.
Mercury Interactive was later taken over by HP.
Test process
*select the manual functional test cases to be automated.
*receive a stable build from developers, i.e., the build after sanity testing.
*convert the selected manual test cases into VBScript programs.
*make those programs into test batches.
*run the test batches.
*analyze the results manually for defect reporting if required.
Add-In Manager
This window lists out all QTP-supported technologies w.r.t. the license.
The Welcome Screen window provides 4 options, unlike WinRunner.
*Tutorial -> used for help documents.
*Start recording -> used to open a new test in recording mode.
*Open existing -> used to open a previously created test.
*Blank test -> used to open a new test.
NOTE:
*Like WinRunner 8.0, QTP 8.2 also maintains one global XL sheet for every test. But in WinRunner the XL sheet is opened explicitly, whereas in QTP it is opened implicitly.
*The QTP Stop icon is used to stop both recording and running.
*QTP 8.2 takes the required application path before starting every automation program creation. This "Selected Application" option is optional in WinRunner, but in QTP it is mandatory.
*Unlike WinRunner 8.0, QTP 8.2 maintains the recorded script in 2 views: Expert view and Keyword view. The Expert view maintains the script in the VBScript language, whereas the Keyword view maintains the script as English documentation.
*VBScript is not case sensitive, and it does not use a delimiter (;) at the end of every statement.
Recording modes
QTP 8.2 allows us to record our manual test case actions in 3 modes.
*General recording.
*Analog recording.
*Low-level recording.
a) General recording: In this mode, QTP records mouse and keyboard operations w.r.t. objects and windows in our application build, the same as Context Sensitive mode in WinRunner. This mode is the default one in QTP. To select this mode, we can use the options below.
*click the Start record icon, or
*Test menu -> Record option, or
*F3 as the short key.
b) Analog recording: To record mouse pointer movements on the desktop, we can use this mode. To select this mode, test engineers use the options below.
*Test menu -> Analog recording option, or
*Ctrl+Shift+F3 as the short key.
EX: digital signature recording, graph drawing recording, image movement recording (not available in WinRunner), etc.
NOTE:
*Unlike in WinRunner, recording always starts in General recording in QTP. To change to the other modes, test engineers use the available options.
*Unlike in WinRunner, QTP 8.2 provides a facility to record mouse pointer movements relative to the desktop (this option also exists in WinRunner) or relative to a specific window (this is a new option in QTP).
*In QTP, the test engineers use the same options or short keys to start and to stop the Analog and Low-level recording modes.
c) Low-level recording: Test engineers use this mode to record operations on non-recognized or advanced-technology objects in our application build.
EX: advanced technology objects, time-based operations on objects, non-recognized objects in a supported technology, etc.
To select this mode, test engineers use the process below.
*Test menu -> Low-level recording option, or
*Ctrl+Shift+F3 as the short key.
NOTE: QTP 8.2 does not maintain one common key for the above 3 modes, unlike WinRunner, which maintains F2 as the short key for both of its modes.
Check Points
QTP 8.2 is a functional testing tool, and it provides facilities to automate functional testing coverages such as
*GUI or behavioral coverage
*input domain coverage
*error handling coverage
*manipulations coverage
*order of functionalities
*backend coverage
*links coverage or URL coverage (for websites only)
*content coverage (for websites only)
To automate the above coverages, the test engineers use the 8 check points below in QTP.
1) Standard check point -> to check the properties of objects and windows, like the GUI check point in WinRunner.
2) Bitmap check point -> to compare static images and dynamic images (new in QTP).
3) Text check point -> to check a selected object's value.
4) Textarea check point -> to check a selected area's value.
5) Database check point -> to check the completeness and correctness of changes in database tables.
6) Accessibility check point -> to check hidden properties, for website testing only.
7) XML check point (from application) -> to check the properties of XML objects in our web pages.
8) XML check point (from file) -> to check XML code tags.
*QTP allows us to insert check points in recording mode only, except the Database check point and the XML check point (file). But in WinRunner we can insert check points both while recording and after completing recording.
1) Standard check point: To check the properties of objects, we use this check point.
Navigation:
*select a position in the script to insert the check point.
*select the Insert menu.
*choose the Check point option.
*select the Standard check point sub-option.
*now select the testable object in the build.
*click OK after the confirmation message.
*select the testable properties with expected values.
*click OK.
Syntax: Window("Window name").WinObject("Object name").Check CheckPoint("name of the check point")
NOTE:
*QTP allows check point insertion while recording operations only.
*object confirmation is mandatory while inserting check points.
*the QTP check points allow one object at a time.
*VBScript maintains a similar syntax for all types of check point statements.
*the QTP check points take 2 types of expected values: Constant and Parameter (for a parameter, the XL sheet column name).
*If our expected value is a Constant in a check point, then QTP runs that automation program once by default. If our expected value is a Parameter in a check point, then QTP runs that automation program more than once, depending on the number of rows in the XL sheet column.
*the silent-mode concept is optional in WinRunner, but it is implicit or mandatory in QTP, because QTP continues test execution even when a check point fails.
2) Bitmap check point: We can use this option to compare an expected image and an actual image. Unlike WinRunner, this check point also supports dynamic image comparison. To create this check point on dynamic images, test engineers select the Multimedia option in the Add-In Manager.
EX: logo testing
EX2: graph comparison
Navigation:
*open the expected image in the build.
*select the Insert menu.
*select the Bitmap check point option.
*select that expected image.
*click OK after the confirmation message.
*select an area of the image if required and click OK.
*now close the expected image and open the actual image.
*run the check point.
*analyze the results manually.
NOTE: The Bitmap check point in QTP supports static and dynamic image comparison, but this check point does not provide the differences between the images.
3) Database check point: We can use this check point to conduct database testing. During database testing, the testing team concentrates on the impact of front-end screen operations on database table content in terms of data validity and data integrity.
Data validity means the correctness of new data stored into the database. Data integrity means the correctness of changes to existing values.
To automate observations like the above on our application build's database, test engineers use this check point in QTP. This check point depends on the content of database tables, like the default check in the WinRunner database check point.
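The check itself — perform a front-end operation, then compare the table content against what that operation should have produced — can be sketched with Python's sqlite3 module (the table, column, and function names here are invented for illustration):

```python
import sqlite3

# In-memory stand-in for the application's backend table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_no INTEGER, customer TEXT)")

def insert_order(order_no, customer):
    """Stands in for the front-end 'Insert Order' operation."""
    con.execute("INSERT INTO orders VALUES (?, ?)", (order_no, customer))
    con.commit()

insert_order(1, "sri")

# Database check: data validity = the new row is stored correctly.
rows = con.execute("SELECT order_no, customer FROM orders").fetchall()
print(rows)  # [(1, 'sri')]
```

A data-integrity check would follow the same shape: run an update through the front end, then re-select and compare the changed values against the expected ones.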
Navigation:
*open QTP and select the Database check point in the Insert menu (no need to insert this check point while recording).
*specify connecting to the database using ODBC or Data Junction.
*select the "Specify SQL statement manually" option.
*click Next.
*click Create to select the database connectivity name of our application build, provided by the development team.
*write a select statement on the impacted database tables.
*click Finish, and click OK after confirming that the database content is as expected.
*open our application build.
*perform the front-end operation in the build and run the database check point.
*analyze the results manually.
Syntax: DbTable("Table name").Check CheckPoint("Check point name")
NOTE: The above QTP database check point verifies the impact of front-end screen or form operations on backend table content, but it does not validate the mapping between backend table columns and front-end report objects, unlike the Runtime record check point option in the WinRunner database check point.
4) Text check point: In WinRunner 8.0, this was a tester-defined check point using the tester's own "if" conditions. But in QTP, the Text check point is built in, to compare the test engineer's expected value of an object with the build object's actual value. This check point is applicable to front-end object values.
To apply testing to screen area values, we can use the Textarea check point.
In this Text check point, the test engineers compare the expected value and the build's actual value in 4 ways: Match Case, Exact Match, Ignore Spaces, and Text Not Displayed.
Navigation:
*select a position in the script.
*select the Insert menu.
*choose the Check point option.
*select the Text check point sub-option.
*now select the testable object in the build.
*click OK after the confirmation message.
*specify the expected value as Constant or Parameter.
*select the type of comparison between the given expected value and the build's actual value (Match case, Exact match, Ignore spaces, or Text not displayed).
*click OK.
Syntax: Window("Window name").WinEdit("textbox name").Check CheckPoint("Check point name")
a) Match Case: In this comparison, the tester's expected value must be equal to, or a substring of, the build object's actual value, with matching case.
b) Exact Match: In this comparison, the tester's expected value must be exactly equal to the build object's actual value.
c) Ignore Spaces: In this comparison, the tester's expected value must be equal to the build object's actual value without considering spaces.
d) Text Not Displayed: In this comparison, the check passes when the expected text does not appear in the build object's actual value.
The above comparisons are also applicable to a selected part of an object's value, using the "Text Before" and "Text After" options in the configuration of the Text check point.
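The four comparison modes can be sketched as plain predicates (my interpretation of the modes described above, not QTP code):

```python
def match_case(expected, actual):
    """Case-sensitive: expected must occur in the actual value as-is."""
    return expected in actual

def exact_match(expected, actual):
    """Expected must equal the actual value exactly."""
    return expected == actual

def ignore_spaces(expected, actual):
    """Equal after dropping all spaces from both sides."""
    return expected.replace(" ", "") == actual.replace(" ", "")

def text_not_displayed(expected, actual):
    """Pass only when the expected text does NOT appear."""
    return expected not in actual

actual = "Flight Reservation"
print(match_case("Flight", actual))                 # True
print(exact_match("Flight", actual))                # False
print(ignore_spaces("FlightReservation", actual))   # True
print(text_not_displayed("Cancelled", actual))      # True
```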
5) Textarea check point: To apply comparisons like the above to screen area values instead of object values, we can use this check point in QTP.
Navigation:
*select a position in the script.
*select the Insert menu.
*choose the Check point option.
*select the Textarea check point sub-option.
*now select the testable screen area in the build.
*click OK after the confirmation message.
*specify the expected value as Constant or Parameter.
*select the type of comparison between the given expected value and the build's actual value (Match case, Exact match, Ignore spaces, or Text not displayed).
*click OK.
Changes in check points:
QTP 8.2 allows us to perform changes in existing check points, due to sudden changes in customer requirements or mistakes in check point creation.
Navigation:
*place the cursor on the check point statement.
*select the Step menu.
*select the Check point properties sub-option.
*perform valid changes in the check point definition.
*click OK.
*re-execute the program to get the final result.
NOTE: All the previous check points in QTP support one object at a time. To overcome this, we use VBScript in depth.
VBScript
QTP 8.2 records mouse and keyboard operations w.r.t. objects and windows in our application build using VBScript. It is an object-based scripting language like JavaScript and JScript, but it is not a case-sensitive scripting language. To enhance the strength of automation programs, the test engineers use the programming facilities in VBScript.
a) Variable declaration: variables are declared as
option explicit
dim (variable names to be used)
b) Type casting: Sometimes the variables in our automation programs change from their initial type to another type. In these situations, the test engineers use the type-casting functions in VBScript.
Cint -> to convert to integer.
Cdbl -> to convert to float or double.
Cstr -> to convert to string.
c)IF condition:
if condition then
-------
-------
end if
d) Multiple if:
if condition then
-------
-------
elseif condition then
-------
-------
else
-------
end if
e)while statement:
while condition
-------
-------
wend
f) for loop statement:
for i=1 to n step 1
-------
-------
next
g)Read input from keyboard:
option explicit
dim x
x=inputbox(“Message”)
h)Display a message:
msgbox(“Message”)
i)Tester defined pass/fail:
reporter.reportevent 0/1,”step name”,”message”
0->pass
1->fail
j)Capture object value:
option explicit
dim x
x=Window(“Window name”).WinEdit(“Textbox name”).GetVisibleText
Step Generator
To search for unknown statements in VBScript, we can use the Step Generator. It classifies VBScript statements into 3 categories:
*Functions.
*Test Objects.
*Utility Objects.
The Functions category consists of mathematical, string, date, and currency functions, etc. The Test Objects category consists of VBScript statements related to text boxes, push buttons, radio buttons, list or combo boxes, check boxes, menus, text links, image links, pages or frames, and other standard objects. The Utility Objects category consists of XL sheet functions, encryption/decryption functions, environment functions, reporter functions, etc.
Navigation:
*select a position in the script.
*select the Insert menu.
*the Step option.
*and the Step Generator sub-option.
*select a Category (Functions, Test Objects, or Utility Objects).
*select the operation depending on its description.
*fill in arguments, if required.
*click OK.
EX1: Manual expected is Output = Input * 100
Build:
Automation program:
option explicit
dim x,y
x=Window("Sample").WinEdit("Input").GetVisibleText
y=Window("Sample").WinEdit("Output").GetVisibleText
if cint(y)=cint(x)*100 then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
EX2: Manual expected is Total = Price * Quantity
Build:
Automation program:
option explicit
dim q,p,t
q=Window("Shopping").WinEdit("Quantity").GetVisibleText
p=Window("Shopping").WinEdit("Price").GetVisibleText
p=mid(p,4,len(p)-3)
t=Window("Shopping").WinEdit("Total").GetVisibleText
t=mid(t,4,len(t)-3)
if cdbl(t)=cdbl(p)*q then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
EX3: Manual expected is Total = Price * No. of tickets
Build: Flight Reservation window
Automation program:
option explicit
dim t,p,tot
Window("Flight Reservation").WinMenu("Menu").Select ("File;Open Order")
Window("Flight Reservation").Dialog("Open Order").WinCheckBox("Order no").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set "1"
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
t=Window("Flight Reservation").WinEdit("Tickets").GetVisibleText
p=Window("Flight Reservation").WinEdit("Price").GetVisibleText
p=mid(p,2,len(p)-1)
tot=Window("Flight Reservation").WinEdit("Total").GetVisibleText
tot=mid(tot,2,len(tot)-1)
if cdbl(tot)=cdbl(p)*t then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
*Data Driven Testing
The re-execution of a test on the same application with multiple test data is called DDT, iterative testing, or re-testing. In DDT, the test engineer concentrates on validating a functionality with the possible input values. There are four types of DDT:
*DDT through Keyboard.
*DDT through Flat File (.txt files).
*DDT through Front-end Objects.
*DDT through XL Sheets.
1. DDT through Keyboard: Sometimes the test engineers re-execute their test cases with multiple test data submitted dynamically through the keyboard.
To read data from the keyboard dynamically, we can use the statements below in VBScript:
option explicit
dim x
x=inputbox("Message")
EX1: Manual expected is Result = Input1 * Input2.
Build: Multiply
Test data: ten pairs of inputs are available.
Automation program:
option explicit
dim i,x,y,r
for i=1 to 10 step 1
x=inputbox("Enter input1")
y=inputbox("Enter input2")
Window("Multiply").WinEdit("Input1").Set x
Window("Multiply").WinEdit("Input2").Set y
Window("Multiply").WinButton("OK").Click
r=Window("Multiply").WinEdit("Result").GetVisibleText
if cdbl(r)=cdbl(x)*cdbl(y) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
next
EX2: Manual expected is the OK button is enabled after entering a userid and password.
Build:
Test data: ten pairs of userid and password.
Automation program:
option explicit
dim i,x,y
for i=1 to 10 step 1
x=inputbox("Enter userid")
y=inputbox("Enter password")
Window("Login").WinEdit("Userid").Set x
Window("Login").WinEdit("Password").SetSecure Crypt.Encrypt(y)
Window("Login").WinButton("OK").Check CheckPoint("OK")
Window("Login").WinButton("Clear").Click
next
2. DDT through Front-end objects: Sometimes the test engineers re-execute tests depending on the values of multiple-data objects like list boxes, table grids, menus, ActiveX controls, and data windows.
EX1: Manual expected is: if the life insurance type is "A" then the Age object is focused; if the type is "B" then the Gender object is focused; otherwise the Qualification object is focused.
Build:
Test data: all existing insurance types.
Automation program:
option explicit
dim i,n,x
n=Window("Insurance").WinComboBox("Type").GetItemsCount
for i=0 to n-1 step 1
x=Window("Insurance").WinComboBox("Type").GetItem(i)
Window("Insurance").WinComboBox("Type").Select x
if x="A" then
Window("Insurance").WinEdit("Age").Check CheckPoint("Age")
elseif x="B" then
Window("Insurance").WinComboBox("Gender").Check CheckPoint("Gender")
else
Window("Insurance").WinComboBox("Qualification").Check CheckPoint("Qualification")
end if
next
EX2: Manual expected is: if total >= 700 then grade is A; if total >= 600 and < 700 then grade is B; if total >= 500 and < 600 then grade is C; if total < 500 then grade is D.
Build:
Test data: all existing students' roll numbers.
Automation program:
option explicit
dim i,n,x,m,g
n=Window("Student").WinComboBox("Rollno").GetItemsCount
for i=0 to n-1 step 1
x=Window("Student").WinComboBox("Rollno").GetItem(i)
Window("Student").WinComboBox("Rollno").Select x
m=cint(Window("Student").WinEdit("Marks").GetVisibleText)
g=Window("Student").WinEdit("Grade").GetVisibleText
if m>=700 and g="A" then
reporter.reportevent 0,"S1","PASS"
elseif m>=600 and m<700 and g="B" then
reporter.reportevent 0,"S1","PASS"
elseif m>=500 and m<600 and g="C" then
reporter.reportevent 0,"S1","PASS"
elseif m<500 and g="D" then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
next
EX3: Manual expected is the selected name appears in the output message.
Build:
Test data: all existing names.
Automation program:
option explicit
dim i,n,x,m,y
n=Window("Display").WinComboBox("Name").GetItemsCount
for i=0 to n-1 step 1
x=Window("Display").WinComboBox("Name").GetItem(i)
Window("Display").WinComboBox("Name").Select x
Window("Display").WinButton("OK").Click
m=Window("Display").WinEdit("Message").GetVisibleText
y=split(m," ")
if x=y(3) then
reporter.reportevent 0,"S1","PASS"
else
reporter.reportevent 1,"S1","FAIL"
end if
next
NOTE: In the WinRunner TSL language, the array index starts from 1 and fields are represented with [] brackets, whereas in the QTP VBScript language, the array index starts from 0 and each field is represented with () parentheses.
3. DDT through XL sheet: Implicitly, every QTP test provides a default global XL sheet, and QTP also provides a DDT wizard to extend our automation program to multiple inputs automatically.
Import data from a database:
*open a new test in QTP.
*place the mouse pointer on the XL sheet or data table.
*right-click on it.
*select the Sheet option.
*choose the Import option.
*select the From Database option.
*specify connecting to the database using ODBC or Data Junction.
*select the "Specify SQL statement manually" option.
*click Next, and click Create to select the database connectivity provided by the developers.
*write a select statement to retrieve the required data into the XL sheet.
*click Finish.
Manually entered data:
*open a new test in QTP.
*click on the default column header.
*enter our own column name and click OK.
*enter the required test data under that column using the keyboard.
Create DDT:
*open the application build after filling the XL sheet with the required test data.
*convert the corresponding manual test case with the required check points.
*select the Tools menu.
*choose the Data Driver option and select a sample input in the automation program.
*click Parameterize, and click Next after confirming the selected input.
*click the parameter options icon to select the XL sheet column name.
*click OK and click Finish.
*follow the above navigation to parameterize all sample inputs in that automation program.
4. DDT through flat file: Sometimes the test engineers re-execute an automation program more than once on our application build with the help of multiple data in a flat file.
To use file content as test data in an automation program, test engineers add the VBScript statements below to that program.
option explicit
dim fso,f
set fso=createobject("scripting.filesystemobject")
set f=fso.opentextfile("file path",1,TRUE)
Here 1 is used for READ mode, 2 for WRITE mode, and 8 for APPEND mode.
EX1: Manual expected is the Delete Order button is enabled after opening an existing order.
Build: Flight Reservation window
Test data: C:\Documents and Settings\Administration\My Documents\b8amdynamic.txt
Automation program:
option explicit
dim fso,f,p,x
set fso=createobject("scripting.filesystemobject")
p="C:\Documents and Settings\Administration\My Documents\b8amdynamic.txt"
set f=fso.opentextfile(p,1,TRUE)
while f.atendofstream<>true
x=f.readline
Window("Flight Reservation").WinMenu("Menu").Select("File;Open Order")
Window("Flight Reservation").Dialog("Open Order").WinCheckBox("Order no").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set x
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
Window("Flight Reservation").WinButton("Delete order").Check CheckPoint("Delete order")
wend
EX2: Manual expected value: Result = Input1 * Input2.
Build:

Test data: C:\Documents and Settings\Administration\My Documents\result.txt

Automation program:
Option Explicit
Dim fso,f,p,x,y,r
Set fso=CreateObject("Scripting.FileSystemObject")
p="C:\Documents and Settings\Administration\My Documents\result.txt"
Set f=fso.OpenTextFile(p,1,True)
While f.AtEndOfStream<>True
x=f.ReadLine
y=Split(x," ")
Window("Multiply").WinEdit("Input1").Set y(0)
Window("Multiply").WinEdit("Input2").Set y(1)
Window("Multiply").WinButton("OK").Click
r=Window("Multiply").WinEdit("Result").GetVisibleText
If CDbl(r)=CDbl(y(0))*CDbl(y(1)) Then
Reporter.ReportEvent 0,"S1","PASS"
Else
Reporter.ReportEvent 1,"S1","FAIL"
End If
Wend
EX3: Manual expected value: Total = Price * Quantity.
Build: Shopping

Test data: C:\Documents and Settings\Administration\My Documents\price.txt

Automation program:
Option Explicit
Dim fso,f,p,x,y,price,tot
Set fso=CreateObject("Scripting.FileSystemObject")
p="C:\Documents and Settings\Administration\My Documents\price.txt"
Set f=fso.OpenTextFile(p,1,True)
While f.AtEndOfStream<>True
x=f.ReadLine
y=Split(x," ")
Window("Shopping").WinEdit("Itemno").Set y(2)
Window("Shopping").WinEdit("Quantity").Set y(5)
Window("Shopping").WinButton("OK").Click
price=Window("Shopping").WinEdit("Price").GetVisibleText
price=Mid(price,2,Len(price)-1)
tot=Window("Shopping").WinEdit("Total").GetVisibleText
tot=Mid(tot,2,Len(tot)-1)
If CDbl(tot)=CDbl(y(5))*CDbl(price) Then
Reporter.ReportEvent 0,"S1","PASS"
Else
Reporter.ReportEvent 1,"S1","FAIL"
End If
Wend
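The flat-file-driven loop in the examples above can also be sketched outside QTP. Below is a hedged Python analogue of EX2; the data lines and the check_multiply helper are illustrative stand-ins, not part of the build or the QTP API.

```python
def check_multiply(line):
    """Parse a flat-file line 'x y' and verify Result = x * y, as in EX2."""
    x, y = line.split(" ")                 # like Split(x, " ") in VBScript
    expected = float(x) * float(y)
    # In QTP the actual value would be read from the Result field;
    # here we simulate a build that multiplies correctly.
    actual = float(x) * float(y)
    return "PASS" if actual == expected else "FAIL"

# Driving the check from in-memory lines instead of result.txt:
test_data = ["2 3", "10 4", "7 0"]
statuses = [check_multiply(line) for line in test_data]   # one status per line
```

Reading the lines from a real file would only change the loop source, not the check itself.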
Case study

Data Driven Testing method   | Tester involvement in scripting       | Tester involvement in execution
Through keyboard             | Yes (by using the InputBox("") option) | Yes
Through front end objects    | Yes (using VBScript statements)       | No (this is 24/7 testing)
Through XL sheet             | No (using the navigations)            | No
Through flat files           | Yes (FileSystemObject statements)     | No
Multiple Actions
QTP 8.2 allows the creation of multiple actions in an automation program.

To create multiple actions in an automation program,test engineers are following below navigation.
*select Insert menu.
*choose Call to new action option.
*specify Action name and click Ok.
Reusable actions
To improve modularity in an automation program, test engineers use the reusability concept. An action of one program that is invoked in another automation program is called a Reusable action.

In the above example, Action1 of Test1 is invoked in Action1 of Test2 for code reusability.
To create reusable actions test engineers are following below navigation.
*record a reusable operation in our application build in a test as separate Action.
*select Step menu and choose Action properties.
*select Reusable Action checkbox and click OK.
*now open other test and select position in required place to insert reusable action.
*select Insert menu.
*choose Call to existing action option.
*browse previous test path and reusable action name.
*click OK.
Here one test reusable action is added to another test.
Parameters
*open reusable action in script.
*select Step menu and choose Action properties option.
*now select Parameters tab.
*click Add icon(+) to add parameter.
*now enter details.
*click add icon again to add more parameters and click OK.
*use that parameter in required place of sample inputs.
*save modifications in reusable action.
EX:
Test1
Action1(this is set as Reusable action)
--------
--------
Window("login").WinEdit("Agent name").Set Parameter("x")
Window("login").WinEdit("Password").SetSecure Crypt.Encrypt(Parameter("y"))
--------
--------
Action2
- - - - - - - - some check points
--------
Test2
Action1
RunAction "Action1[Test1]",oneIteration,"sri","mercury"
--------
--------
The x and y parameters receive their values when Action1 of Test1 is called from Action1 of Test2.
NOTE: In WinRunner TSL, the sample input values are replaced with parameter names. But in QTP VBScript, the sample input values are replaced by parameter statements like Parameter("parameter name").
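The parameter mechanism above behaves like an ordinary function call. Here is a hedged Python analogue; login_action is an illustrative stand-in for the reusable Action1 of Test1, not QTP API.

```python
# A reusable action with parameters behaves like a function: the sample
# inputs "sri"/"mercury" become the arguments passed by the caller.
def login_action(agent_name, password):
    # In QTP these would be the WinEdit Set / SetSecure calls on the
    # Login dialog, with Parameter("x") / Parameter("y") as the values.
    return {"agent": agent_name, "password": password}

# Calling the reusable action from another test, like
# RunAction "Action1[Test1]", oneIteration, "sri", "mercury":
state = login_action("sri", "mercury")
```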
Synchronization Point
For successful test execution, the test engineers maintain synchronization points to define the time mapping between the tool and the application build.
a)wait(): this function defines a fixed waiting time in test execution.
wait(time) or wait time
b)for object or window status: the wait function above defines a fixed waiting time, but operations in the application build take variable times to complete. For this reason test engineers synchronize the tool and the build depending on properties of objects such as a status bar or progress bar.
Navigation:
*select a position in script.
*select Insert menu.
*choose Step option.
*select synchronization point option.
*select process completion indicator object(like status bar,progress bar).
*click OK after confirmation message.
*select enabled property as true.
*specify maximum time to wait in milliseconds and click OK.
Syntax: Window("Window name").WinObject("Object name").WaitProperty "enabled",True,100000
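The WaitProperty idea (poll a property until it reaches the expected value or a timeout expires) can be sketched as follows. This is an illustrative Python analogue under stated assumptions, not the QTP API.

```python
import time

def wait_property(get_value, expected, timeout_ms, poll_s=0.01):
    """Poll get_value() until it equals expected or timeout_ms elapses."""
    deadline = time.monotonic() + timeout_ms / 1000.0
    while time.monotonic() < deadline:
        if get_value() == expected:
            return True          # synchronized: property reached expected value
        time.sleep(poll_s)
    return get_value() == expected   # one last check after the timeout

# A property that is already "enabled" synchronizes immediately:
ok = wait_property(lambda: "enabled", "enabled", timeout_ms=100)
```

This is why WaitProperty returns quickly when the build is fast but still tolerates slow operations up to the maximum time.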
c)Change Run Settings: sometimes our application build does not maintain progress completion indicator objects. In this situation test engineers change the Run settings of QTP to synchronize with the build.
Navigation:
*Test menu.
*select Settings option.
*choose Run tab.
*increase Timeout in milliseconds.
*click Apply and click Ok.
NOTE: By default, the timeout in WinRunner 8.0 is 10000 ms, but in QTP 8.2 it is 20000 ms. In QTP, the timeout settings apply only to the specified test.
NOTE: The Run settings in WinRunner are tool level, whereas in QTP they are test level; for every new test the settings must be changed again.
Administration of QTP
a)QTP FrameWork:

Step1:start recording to record build operations and insert check points.


Step2:identification or recognization of operated object in build.
Step3:script generation w.r.t that object operation in VBScript.
The 3 steps are during recording the test.
Step4:search required reference of recognized object in Object Repository by QTP.
Step5:catch corresponding object in build depending on identified reference in Object repository.
The above 2 steps are during execution of test.
b)Per Test mode: unlike WinRunner, QTP maintains references to objects and windows in per test mode only.

c)Export references: sometimes test engineers execute a VBScript automation program at different locations. In this situation the test engineers follow the steps below.
System1: create the test (recording + inserting check points)
save the test
export the references
System2: download the test and references files
open the build and run that test
To export references to external file,test engineers are following below navigation.
*Tools menu.
*Object Repository.
*click Export button.
*save file name with .tsr as extension(Test Script Resource).
*click Save.
To use these references in other system,test engineers are following as
*Test menu.
*Settings.
*Resources tab.
*check Shared object option and browse the path of .tsr file.
*click Apply and click Ok.
d)Regular Expressions: sometimes object or window labels in our application build change dynamically. To let QTP recognize these objects and windows, test engineers use regular expressions in the corresponding dynamic object or window reference.
Navigation:
*Tools menu.
*Object Repository.
*select Corresponding reference.
*click Constant value options icon.
*insert Regular expression into constant label.
*select regular expression checkbox.
*click OK.
EX: logical name: Fax Orderno:1
{
class: Window
label: "Fax Orderno:1"
}
changes to
logical name: Fax Orderno:1
{
class: Window
label: "Fax Orderno:[0-9]*"
}
after checking the Regular Expression option.
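The pattern in the example can be tried with any regular-expression engine. A quick Python illustration of what "Fax Orderno:[0-9]*" accepts (the sample labels are made up):

```python
import re

# The dynamic-label pattern from the example above.
pattern = re.compile(r"Fax Orderno:[0-9]*")

# Any order number after the fixed prefix is accepted; an unrelated
# window label is not.
matches = [bool(pattern.fullmatch(s))
           for s in ("Fax Orderno:1", "Fax Orderno:42", "Open Order")]
```

This is exactly why the changed reference keeps matching the window as the order number grows.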
e)Object Identification: sometimes a window in our application build contains more than one similar object. To distinguish these objects, QTP test engineers use the object identification concept, similar to GUI map configuration in WinRunner.
*select Tools menu.
*choose Object Identification option.
*select Environment(means the technology used to construct our application build).
*select similar objects type.
*add MSW ID as an assistive property and finally click OK.
EX:
logical name: OK          logical name: OK_1
{                         {
class: pushbutton         class: Window
label: "OK"               label: "OK"
msw_id: xxx               msw_id: yyy
}                         }
In these 2 references, the msw_id (an assistive property) is different; class and label are mandatory properties.
f)Smart Identification: to recognize unrecognized objects forcibly, we use different possibilities in QTP.

Sometimes test engineers use the Smart Identification concept to recognize unrecognized or non-standard objects instead of low level recording.
Navigation:
*select Tools menu.
*choose Object Identification option.
*select Environment.
*select Object type.
*check enable Smart Identification option.
*click Configure.
*select our required properties as base and optional.
*click OK.
g)Virtual Object Wizard: instead of low level recording and smart identification, test engineers use the Virtual Object Wizard (VOW) to recognize an unrecognized object forcibly.
Navigation:
*select Tools menu.
*Virtual objects option.
*new virtual object.
*click Next.
*select expected standard type.
*click Next.
*mark non recognized object area.
*click Next.
*confirmation of non standard object area selection.
*click Next.
*say Yes or No to create more virtual objects and click Finish.
NOTE: In general, QTP 8.2 maintains references for objects and windows in per test mode using the object repository, but it maintains virtual object references in global mode using the Virtual Object Manager.
Web Test Option
Like WinRunner 8.0, QTP also allows us to automate functional test cases on web sites. During functional testing of web applications, test engineers additionally concentrate on links coverage and content coverage. To automate these coverages, test engineers use the standard check point (for HTML) or the XML check point, and the text check point, respectively.
During web site testing, test engineers maintain an offline mode such as localhost or a local network. To automate functional testing on web sites, test engineers select the Web option in the Add-In manager.
a)Links Coverage: it is a coverage specific to web functionality testing. During this test, test engineers verify the existence and execution of every link. To automate links coverage, testers use the standard check point for HTML links and the XML check point for XML links. Links coverage automation is possible at 2 levels: link level and page level.
1)Text link or Image link:
*select Insert menu.
*choose Check point option.
*select Standard check point option.
*select testable Text or Image link.
*select HREF property with expected path of next page.
*click OK.
Browser("Browser name").Page("Page name").Link("Link text").Check CheckPoint("Checkpoint name")
Browser("Browser name").Page("Page name").Image("Image link text").Check CheckPoint("Checkpoint name")
2) Page/Frame:
*select Insert menu.
*choose Check point option.
*select Standard Check Point.
*select testable link.
*now select page level selection in confirmation message and click OK.
*click Filter link check.
*specify expected href/url for all page level links.
*click OK.
*say Yes (or) No to verify broken links.
*click Ok.
Browser("browser name").Page("page name").Check CheckPoint("check point name")
NOTE:The QTP8.2 is allowing you to automate broken links testing at page level only.
NOTE: WinRunner 8.0 does not support XML, but QTP 8.2 provides the XML check point (Web Page) and the XML check point (File) to verify XML web page properties and XML code respectively.
b)Content coverage: during this test, the test engineers validate the correctness of the text present in the corresponding web pages in terms of spelling and grammar. To automate content coverage with QTP, test engineers use the Text check point in the Insert menu. This check point compares the expected text given by the test engineer with the web page's actual text in terms of Match Case, Exact Match, Ignore Space and Text Not Displayed.

Navigation:
*select Insert menu.
*choose Check Point option.
*select Text Check point option.
*select web page content to test.
*specify expected text provided by customer as Constant or Parameter.
*select required type of comparison(like 4 match types).
*click OK.
Browser("Browser name").Page("Page name").Check CheckPoint("Checkpoint name")
NOTE: QTP 8.2 also allows us to apply this test to part of the web page content. Marking part of the content depends on the "Text Before" and "Text After" buttons.
Recovery Scenario Manager
In general, test engineers plan 24/7 testing in test automation. In this style of test execution, the testing tool recovers from an abnormal state to a normal state using existing recovery scenarios.
Approach:
Step1: specify details about the abnormal state (popup (for windows), object, test run error or application crash).
Step2: specify details about the recovery (mouse/keyboard operation, function call, close window or restart MS Windows).
Step3: specify the post recovery option to continue testing.
Step4: save the above details for future reference.
Navigation:
*select Tools menu.
*choose Recovery scenario manager option.
*click new scenario icon.
*click Next.
*select type of problem(popup/object/test run error/application crash).
*specify required details to define that problem.
*click Next.
*select type of recovery(keyboard or mouse operations/function call/close application process/restart MS
windows).
*click Next.
*specify details to define recovery.
*click Next.
*specify the post recovery option (repeat current step and continue/proceed to next step/proceed to next action/proceed to next test iteration/restart current test execution/stop test execution).
*specify scenario name with description for future reference.
*click Next.
*select add to current test or add to default test settings.
*click Finish.
Batch testing
The sequential execution of more than one automation program is called Batch Testing. Every test batch consists of a set of dependent test cases. The end state of one test is the base state of the next test, so that test execution continues without missing or repeating test cases. Every test batch is also known as a Test Suite/Test Belt/Test Chain/Test Build. To create test batches in QTP, test engineers use 2 concepts:
*Test Batch Runner option.
*RunAction command.
a)Test Batch Runner: it is a new feature in QTP 8.2. It provides a facility for test engineers to make dependent tests into batches. To create a batch using Test Batch Runner, we follow the navigation below.
*Start menu.
*Programs.
*QTP.
*Tools.
*Test Batch Runner.
*click add icon.
*browse the path of test in order.
*click Run icon after base state creation in build for first test in that batch.
*analyze results of batch manually.
b)RunAction command: like the CALL statement in WinRunner, test engineers use the RunAction command to interconnect dependent automation programs.

Parameter Passing
*make dependent tests as a batch using RunAction command.
*provide Reusable action permission to all Sub/Called tests.
*open required Called/Sub test.
*select Step menu.
*choose Action properties.
*select Parameters tab.
*click Add icon to create parameters with required details.
*replace that parameter in the place of sample input using parameter statement.
*open corresponding main test and add constant or parameter value to RunAction statement.
Ex: for a constant, give the value directly, e.g. 3.
for a parameter, give DataTable.Value("Input")
NOTE:
*whenever a check point fails, QTP 8.2 provides screen shots in the results.
*a Test Batch Runner-like option is not available in the WinRunner tool.
*the Test Batch Runner concept does not allow parameter passing between tests.
*in Data Driven Testing, the end state of the last test is the base state for the first test.
EX: Test1 (Login)
SystemUtil.Run "flight.exe","","","open"
Dialog("Login").WinEdit("Agent name").Set "sri"
Dialog("Login").WinEdit("Password").SetSecure "554ds"
Dialog("Login").WinButton("OK").Click
RunAction "Action1[Test2]",oneIteration,DataTable.Value("Input")
Test2 (Open order operation, with x as parameter)
Window("Flight Reservation").WinMenu("Menu").Select "File;Open Order"
Window("Flight Reservation").Dialog("Open Order").WinCheckBox("Order No").Set "ON"
Window("Flight Reservation").Dialog("Open Order").WinEdit("Edit").Set Parameter("x")
Window("Flight Reservation").Dialog("Open Order").WinButton("OK").Click
RunAction "Action1[Test3]",oneIteration
Test3 (check point insertion)
Window("Flight Reservation").WinButton("Delete order").Check CheckPoint("Delete order")
Window("Flight Reservation").WinButton("Insert order").Check CheckPoint("Insert order")
Window("Flight Reservation").WinButton("Update order").Check CheckPoint("Update order")
Window("Flight Reservation").Close
With statement
To decrease the size of a program in VBScript, the test engineers use the With statement.
*select Edit menu.
*choose Apply "With" to script option.
We can use the Remove "With" statement option in the same menu to get the original VBScript back.
With Window("Flight Reservation")
    .WinMenu("Menu").Select "File;Open Order"
    With .Dialog("Open Order")
        .WinCheckBox("Order No").Set "ON"
        .WinEdit("Edit").Set Parameter("x")
        .WinButton("OK").Click
    End With
End With
Active Screen
QTP 8.2 provides a facility to view a snapshot of the build alongside the existing VBScript automation program.
*select View menu.
*choose Active Screen option.
NOTE: Like WinRunner 8.0, QTP 8.2 also allows us to decrease the memory space of a VBScript automation program. To decrease the space, test engineers use:
*in File menu.
*select Export test to zip file option.
And to get the original VBScript code back, test engineers use:
*in File menu.
*select Import test from zip file option.
Accessibility check point: this check point is applicable only on web pages. Test engineers use it to verify hidden properties of web objects.
EX: Alternative Image testing, Page/Frame Titles check, Server Side Image check, Tables check.
Navigation:
*in Insert menu
*select Check point option.
*now choose Accessibility check point.
*select a Page/Object in build.
*select Accessibility setting.
*click OK.
The syntax is the same as for the other browser check points.
Q: How to conduct testing on web site development rules and regulations?
A: Using the Accessibility check point we can conduct testing on the properties of web objects.
XML check point (File): this check point is applicable on XML source code.
A black box tester trying to know the internal code of a build is doing Gray Box Testing.
Navigation:
*in Insert menu
*select Check point.
*choose XML check point(File).
*browse XML file path.
*click OK.
*select testable statement in that XML code.
*specify required properties with expected values using Attributes button.
*click OK.
NOTE: Unlike WinRunner, QTP 8.2 allows us to automate our testing on XML web pages and XML program files.
Syntax: XMLFile(".xml").Check CheckPoint(".xml")
Output values: this concept provides a facility to get values from the build, like the get_info statement in WinRunner.
Syntax: for a browser: Browser("browser name").Page("page name").Link("link text").Output CheckPoint("name")
for a window: Window("window name").WinObject("object name").Output CheckPoint("name")
Navigation:
*select Insert menu.
*choose Output value option.
*select type of output.
*provide details.
*select required properties to get value of that property.
*click OK.
Call to WinRunner
In companies, version one of a software product may have been tested in WinRunner; when testing version two, they test the main module in VBScript (QTP) and the sub modules in TSL (WinRunner).
QTP allows us to call TSL programs and functions at the required place in a VBScript program.
a)Test Call:
*select position in QTP.
*and now select Insert menu.
*choose Call to WinRunner option.
*in it choose Test option.
*browse the path of test.
*specify parameters if required.
*say Yes/No to run WinRunner minimized.
*say Yes/No to close WinRunner after running the test.
*click OK.
Syntax: TSLTest.RunTestEx "path of the WinRunner test file",parameters,T/F,T/F
b)Function Call:
*select position in QTP.
*and now select Insert menu.
*choose Call to WinRunner option.
*in it choose Function option.
*browse the path of the Compiled Module file.
*enter function name used in that module.
*specify parameters if required.
*say Yes/No to run WinRunner minimized.
*say Yes/No to close WinRunner after running the test.
*click OK.
Syntax: TSLTest.CallFuncEx "path of the WinRunner compiled module file","function name",parameters,T/F,T/F
NOTE:
*like WinRunner 8.0, QTP 8.2 also allows Transactions to calculate the execution time of a selected part of our program.
*like WinRunner 8.0, QTP 8.2 also supports Run From Step to run a part of a program, and Update Run to update the expected values of our check points.
*in WinRunner, test engineers use F6 as the shortcut key to debug a program line by line, but in QTP they use F11 to debug a VBScript program line by line.
*like GUI Spy in WinRunner, test engineers use Object Spy in QTP to know whether an object in the build is identifiable by the tool or not.
*like WinRunner, QTP also allows the creation of user defined functions for repeatable operations in our application build.
Syntax for User Defined functions:
Function functionname(Parameters)
-------
-------
End Function
*Save this file with (.vbs) as extension.
*open a test after saving that UDF.
*select Test menu.
*choose Setting option.
*select Resources tab.
*browse path of that function file.

Advanced Testing Process

Load Testing
The execution of our application build under the customer expected configuration and customer expected load to estimate the speed of processing is called Load Testing.
Load means the number of concurrent users working on our application build at the same time.
Stress Testing
The execution of our application build under the customer expected configuration and various levels of load to estimate continuity is called Stress Testing or Endurance Testing.
Manual Vs Automation Performance testing

Manual load and stress testing is expensive and complex to conduct. For this reason, testing teams concentrate on test automation to create virtual load.
Examples: LoadRunner, Silk Performer, JMeter, Rational Load Test etc.

LoadRunner8.0

It is also developed by Mercury Interactive. It is a load and stress testing tool. It supports Client/Server, Web, ERP and legacy applications.
It creates virtual users instead of a real network of users.
Virtual Environment

RCL: the Remote Command Launcher converts a local request into a remote request using loopback addressing (the source computer address and the destination computer address are the same).
VUGen: the Virtual User Generator makes one real remote request into multiple virtual requests, but the responses generated by these virtual requests are real.
Port Mapper: it submits all virtual user requests to a single server process port.
Controller Scenario (CS): the CS returns the performance results.
NOTE: The Remote Command Launcher and Port Mapper are internal components of LoadRunner; VUGen, Controller Scenario and Results Analyzer are external components.
Time Parameters:
a)Think Time: the time to create a request in the client process.
b)Elapsed Time: the time taken to complete request transmission, processing in the server and response transmission; also called Turn Around or Swing Time.
c)Response Time: the time to get the first response from the server process, i.e. the time for the server to start an operation.

Response time = request time + ACK time

d)Hits per sec: the number of requests received by the server in one second.
e)Throughput: the speed at which the server process responds to requests in one second (in kbps).
The time parameters Hits per second and Throughput are used in web load and stress testing.
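The relation between hits per second, average response size and throughput can be seen with simple arithmetic; the numbers below are made up for illustration only.

```python
# Made-up numbers relating the web time parameters above.
hits_per_second = 200        # requests received by the server in 1 s
avg_response_kb = 0.25       # KB returned per response, on average
throughput_kbps = hits_per_second * avg_response_kb   # KB served per second
```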
Software performance depends on software coding and computer configuration.

I)Client/Server application load and stress testing


a)Architecture of Client/Server software:

b)Load and stress testing environment: we use the following to establish the test environment
*customer expected computer configuration.
*AUT or Build.
*LoadRunner.
*Database server.
c)Load and stress test cases:

d)Navigation to create load and stress testing: let us assume our computer is the customer expected configured computer.
*Install our application build and LoadRunner software
*Install corresponding database server
*select Start menu
*Programs
*open corresponding database server(like Oracle,Sql,MySql,Quardbase etc)
*now select Start menu
*Programs
*Mercury LoadRunner
*LoadRunner
*choose Create/Edit script option in Welcome screen
*specify build type as Client/Server
*select database server type(by default ODBC)
*click OK and specify path of our application build
*select recording to action as(Vuser_init,Action(one action only),Vuser_end)
*click OK and record our build front end operations per one user as init,action and end
*click stop recording
*save that script per one user(VSL)
*Tools menu
*Create Controller Scenario
*specify number of users to define customer expected load and click OK
*click Start Scenario
NOTE: If the applied load passes on the build, the testing team analyzes the performance results. If the applied load is not accepted by the build (memory leakage: space is not sufficient), the testing team reports the defect to the development team.
The final result is for the whole process (init, action and end).
Transaction point: to get performance results for a specific operation, test engineers mark the required operation as a transaction.
Navigation:
*select position on the top of the required operation in Action
*Insert menu
*Start transaction and specify a name and click OK
*now select position at the end of operation
*Insert menu
*End transaction and specify same name and click OK
EX: lr_start_transaction("transaction name");
open cursor
select/insert/delete/update/commit/rollback
close cursor
lr_end_transaction("transaction name",LR_AUTO);
The above script is the Operation/Action per one user; LR_AUTO lets LoadRunner set the transaction status automatically.
Rendezvous point: it is an interrupt point in load test execution. This point stops the current Vuser's execution until the remaining Vusers have also executed up to that point; this is called Load Balancing or Load Controlling.
lr_rendezvous("name");
lr_start_transaction("transaction name");
open cursor
select/insert/delete/update/commit/rollback
close cursor
lr_end_transaction("transaction name",LR_AUTO);
It is placed above the transaction point so that any Vuser executing fast is slowed down until the other Vusers reach that point.
Analyze Results
If the applied customer expected load passes, the test engineers concentrate on analyzing the results; otherwise they concentrate on defect reporting and tracking.
*Results menu
*Analyze results
*identify average response time in seconds.
Results Submission: after completion of the required load test scenarios, the test engineers submit the performance results to the project management in the format below.

Scenario (select/insert/delete/update/commit/rollback)   Load (no. of Vusers)   Average response time (seconds)
Select                                                   5                      0.02
Select                                                   10                     0.05
Select                                                   20                     0.07
---                                                      ---                    ---
--- --- ---
NOTE: In load and stress testing, the application build's accepted peak load must be greater than or equal to the customer expected load.
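The acceptance rule in the NOTE above can be written as a one-line check; the Vuser counts below are illustrative.

```python
# Load/stress acceptance: the build's accepted peak load must be at
# least the customer expected load.
def load_result_acceptable(peak_load_vusers, expected_load_vusers):
    return peak_load_vusers >= expected_load_vusers

# A build that peaks at 120 Vusers against an expected 100 passes;
# one that peaks at 80 does not.
verdicts = [load_result_acceptable(120, 100), load_result_acceptable(80, 100)]
```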
Bench Mark Testing
After receiving performance results from the testing team, the PM conducts Bench Mark Testing. In this test, the PM confirms whether this performance is acceptable or not.
In Bench Mark Testing, the PM depends on the performance results of the previous version of the software, competitive products in the market, the interests of customer site people or the interests of product managers. If the reported performance of the build is acceptable, the PM concentrates on the release of the software. If it is not acceptable, the programmers concentrate on changes in the coding structure without disturbing functionality, or suggest that the customer site people improve the configuration of the working environment.
Increase load
LoadRunner allows us to increase the load up to the peak load. In general, test engineers increase the load to apply stress on the build.
Navigation:
*Tools menu
*Create Controller Scenario
*Vusers button
*Add Vusers button
*specify Quantity to Add to increase load and click OK
*click Close
*click Start Scenario for load test execution
*follow above navigation until peak load
Mixed operations Load testing
LoadRunner 8.0 allows us to perform load testing with various operations in our application build.
EX: 5 users for a Select operation and 2 users for an Update operation; the total load is 7 Vusers.
Navigation:
*create Vuser scripts separately for all required operations
*maintain Rendezvous point name as same in all Vuser scripts
*Tools menu
*Create Controller Scenario option
*specify load for current Vuser script and click OK
*select Add group button
*specify other Vuser script with quantity
*click Add group again to add multiple Vuser scripts with different loads
*click Start Scenario
*analyze results manually when load is passed
*send reports if failed
NOTE: for mixed operations load testing, test engineers keep the rendezvous point name the same in all Vuser scripts. At the rendezvous point, the maximum waiting time of a Vuser for the other Vusers is 30 seconds.
Since the rendezvous point is placed above the transaction point, the waiting time of a Vuser is not included in the final performance time.
II)Web load and stress testing
a)Web site architecture

b)Load and Stress test environment


*customer expected configured computer
*web site under testing(Build)
*LoadRunner
*browser(Internet explorer/Netscape navigator)
*web server
*database server
c)Web load and Stress testcases
A tester has met the entry criteria when he has received a stable build from the developers, has the test environment and is able to prepare test cases.
*URL open
*text/image link
*form submission (login, resume submission etc)
*data submission (when the corresponding time elapses, the browser automatically sends some information to the web server, like hidden data processing)
URL open: it emulates opening the home page of the web site under load.
NOTE: In web load and stress testing, test engineers select the E-Business->Web(HTML/HTTP) option as the category of the project or build in the Vuser Generator.
NOTE: The web server (IIS) must already be running; now open our build database.
NOTE: LoadRunner 8.0 allows load operation creation in 2 ways: Recording or Inserting.
*select a position in script Action part
*Insert menu
*New Step
*select Step type(like URL,image link,text link,submit form,submit data)
*fill arguments/parameters and click OK
Text link: it emulates opening the next page of the web site via a text link under load.
Image link: it emulates opening the next page of the web site via an image link under load.
Form Submission: it emulates submitting form data to the web server under load.

Data Submission: it emulates submitting formless or contentless data to the web server under load, like auto login, auto logout, auto transaction etc.
Performance Results Submission
After completing the required scenarios of load and stress testing, the testing team submits the performance
results to project management.

Scenario      Hits per second   Throughput   Average response time
URL open      100               25 kbps      1 sec
URL open      150               42 kbps      1 sec
URL open      200               50 kbps      2 sec
URL open      250               50 kbps      3 sec
...           ...               ...          ...
(scenarios include URL open, image link, text link, form and data submission)
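A small sketch of how such raw results might be tabulated and scanned for the load level at which response time starts to degrade; the numbers are the sample values from the table above.

```python
# Sample performance results: (scenario, hits per second, throughput kbps, avg response sec)
results = [
    ("URL open", 100, 25, 1),
    ("URL open", 150, 42, 1),
    ("URL open", 200, 50, 2),
    ("URL open", 250, 50, 3),
]

def degradation_point(rows):
    """Return the first load level whose response time exceeds the baseline (first row)."""
    baseline = rows[0][3]
    for scenario, hits, _throughput, response in rows:
        if response > baseline:
            return hits
    return None

print(degradation_point(results))  # 200: response time first rises above 1 sec at 200 hits/sec
```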
Bench Mark testing
After receiving the performance results from the testing team, the project management conducts benchmark
testing. In this test, the PM compares the current performance values with the previous version's performance,
competing websites' performance, or World Wide Web Consortium (W3C) standards. By these standards, a quality
website takes 3 seconds for link operations and 12 seconds for transactional operations.
NOTE: If the estimated performance is not acceptable, the project management sends that website build back to the
development team to change the coding structure without disturbing functionality. Sometimes the project
management suggests that the customer site improve the configuration of the environment.
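The benchmark comparison described above can be sketched as a simple check against those W3C-style thresholds (3 s for links, 12 s for transactions); the function name and structure are illustrative, not part of LoadRunner.

```python
# Thresholds quoted in the notes: link operations within 3 s, transactions within 12 s.
THRESHOLDS = {"link": 3.0, "transaction": 12.0}

def meets_benchmark(operation_type, measured_seconds):
    """Return True if the measured time meets the benchmark for its operation type."""
    return measured_seconds <= THRESHOLDS[operation_type]

print(meets_benchmark("link", 2.5))          # True
print(meets_benchmark("transaction", 14.0))  # False: build goes back to development
```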
Run Time Settings
To conduct load and stress testing efficiently, test engineers use the Run Time settings in the Virtual User
Generator.

a)Pacing: runs the Vuser script iteratively with a fixed or random delay between iterations.
b)Log: LoadRunner maintains two types of performance logs, the Standard log and the Extended log.
c)Think Time: the time a client takes to create a request. LoadRunner allows think time to be added to the
performance time if required. Normally we exclude it, since it is the user's request time, not the build's
performance time.
EX: cookies of the server running at the client.
d)Additional Attributes: LoadRunner 8.0 allows parameter passing in Vuser scripts.
e)Miscellaneous: LoadRunner maintains a silent mode if required. It allows processing in multithreading
or multiprocessing mode. It can define each action as a transaction if required.
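Pacing with a random delay, as in (a) above, can be illustrated with a small scheduling sketch: each iteration starts a random interval after the previous one. This is purely illustrative, not LoadRunner's implementation.

```python
import random

def pacing_schedule(iterations, min_delay, max_delay, seed=42):
    """Return the start time of each iteration, separated by a random delay."""
    rng = random.Random(seed)  # seeded so the schedule is reproducible
    start, starts = 0.0, []
    for _ in range(iterations):
        starts.append(round(start, 2))
        start += rng.uniform(min_delay, max_delay)
    return starts

schedule = pacing_schedule(4, min_delay=1.0, max_delay=3.0)
print(schedule)  # first iteration starts at 0.0; later starts depend on the random delays
```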

Test Director8.0

Configuration Repository
It is a storage area on our organization's server. It consists of all documents and s/w programs related to a
project or product, kept for future reference.

Soft base: s/w build + development-related documents (maintained manually or with VSS - Visual SourceSafe).
Test base: test-related documents (manually using MS Word/Excel, or with TestDirector/Quality Center).
Defect Repository: defect reports and resolution reports (manually using MS Outlook, or with
TestDirector/Quality Center/Bugzilla).
Note: for people co-ordination in a team, the MS Outlook mailing system or our company website is used.
Purpose of TestDirector
*developed by Mercury Interactive
*a test management tool used to create the test base and report defects
*network-based s/w, accessed like a web site
*consists of 2 parts: Site Administrator and TestDirector (a sub-part in TestDirector
with the same name)
TestDirector Architecture
SA: Site or Project Administrator
TD: TestDirector
IIS: Internet Information Server
By default the database is MS Access.
For co-ordination between the test lead and test engineers: the MS Outlook mailing system.
I)Site Administrator
This part is accessible only to test-lead category people, for database creation and database access.
a)Create Domain
*Start
*Programs
*TestDirector8.0
*Site Administrator
*login as the test lead
*select the Create Domain option
*enter the domain name and click OK
*choose the Create Project option
*enter the project name and click Next
*click CREATE after confirming the given details and click OK
NOTE: For one project database, the site administrator creates 44 empty database tables. Test engineers fill
these tables with all the testing deliverables.
b)Open Database
Start->Programs->TestDirector8.0->Site Administrator
->login as the test lead
->select the project name to estimate test status
->select the required database table to estimate team effort and individual effort (by using a SELECT statement)
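The kind of SELECT the test lead might run can be sketched against a hypothetical effort table. TestDirector's real 44-table schema is not documented in these notes, so the table and column names below are invented for illustration, and sqlite3 stands in for the MS Access database.

```python
import sqlite3

# In-memory stand-in for the project database; table and column names are hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_effort (engineer TEXT, hours REAL)")
con.executemany("INSERT INTO test_effort VALUES (?, ?)",
                [("asha", 12.0), ("ravi", 8.5), ("asha", 4.0)])

# Team effort: total hours across the whole team.
(team_total,) = con.execute("SELECT SUM(hours) FROM test_effort").fetchone()

# Individual effort: hours grouped per engineer.
individual = dict(con.execute(
    "SELECT engineer, SUM(hours) FROM test_effort GROUP BY engineer"))

print(team_total)   # 24.5
print(individual)   # {'asha': 16.0, 'ravi': 8.5}
```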
II)TestDirector
This part is accessible to the valid test engineers in the corresponding project's testing team. Here the valid test
engineers store their documents in the database for future reference (because the s/w is built version by
version).
Start->Programs->TestDirector8.0->TestDirector
->select the project name along with the domain
->login as test engineer
Once logged in, the test engineer sees 4 menus:
*Requirements
*Test Plan(means Test Design)
*Test Lab(means Test Execution)
*Defects(Test Reporting)
Requirements
In this part, the test engineer specifies the responsible areas or modules of the corresponding
project.
Requirement menu->New Requirement->enter the Name of the requirement and click OK
->follow this navigation to create all responsible requirements
Test Plan
In this part, test engineers prepare test scenarios and test cases for the responsible requirements.
a)Create Subject
Test Plan menu->Planning->New Folder
->enter a folder name matching the corresponding testing, such as Usability, Functional or Non-Functional
->click OK and follow the same navigation to create all reasonable tests as subjects
b)Create Test Scenarios
Test Plan->select the corresponding test as subject
->Planning->New Test->select the test type (Manual or Automation)
->specify the test case title or scenario and click OK
->follow the above navigation to create all reasonable test scenarios for all the tests to be applied on the
selected requirements
Case Study
a)Requirements
Req1->Insert Order
Req2->Open Order
Req3->Update Order
Req4->Delete Order
b)Test scenarios
1)Usability testing
*verify spelling(M)
*verify initcap(M)
*verify alignment(M)
*verify line spacing between controls and objects(M)
*verify look and feel with color font style and size(M)
*verify help documents(M)
2)Functional testing
*verify insertion with valid values(M/A)
*verify insertion without filling all fields(M/A)
*verify open order with valid orderno(M/A)
*verify open order with invalid orderno(M/A)
*verify deletion(M/A)
*verify updation with valid change(M/A)
*verify updation with invalid change(M/A)
*verify updation without a change(M/A)
3)Load testing
*verify insertion under customer expected load(A)
*verify open order under customer expected load(A)
*verify deletion under customer expected load(A)
*verify updation under customer expected load(A)
4)Stress testing
*verify insertion under various load levels(A)
*verify open order under various load levels(A)
*verify updation under various load levels(A)
*verify deletion under various load levels(A)
5)Compatibility testing
*verify insertion on customer expected platforms(A)
*verify open order on customer expected platforms(A)
*verify updation on customer expected platforms(A)
*verify deletion on customer expected platforms(A)
NOTE: TestDirector8.0 does not support launching QTP. To launch QTP, testing teams plan to use
TestDirector8.2 or Quality Center.
c)Create test cases
After completing test scenario selection, test engineers extend each scenario with Details, Design
Steps (procedures), Test Script and Attachments, with requirements coverage if required for traceability.
d)Details
In this part, the test engineer maintains the required details for the corresponding test case, such as priority, test
suite, test setup, test effort, test environment (required h/w and s/w), test duration (e.g. 20 sec manually) etc., in
the description part of the Details option.
e)Design Steps
->Test Plan
->select the corresponding test case in the available list
->click Design Steps
->click the Step icon
->enter the step description with its expected result
->follow the same navigation to create multiple steps up to the end state and click OK
NOTE: As above, the test engineer prepares details and design steps for all test case titles, and then the team
receives the Initial Build from the developers. To estimate the stability of that initial build, they conduct Sanity
testing or Smoke testing.
f)Test Script
In this option, test engineers automate the corresponding manual test cases using the available testing tools.
Test Plan->select the corresponding test case->Test Script
->click Launch
->record our manual operations with the required checkpoints
->save that script (click Save and it is saved automatically in TestDirector)
->close that tool and refresh TestDirector
g)Attachments
An optional part used to attach required files or screenshots w.r.t testing.
h)Requirements Coverage
The test engineer selects the name of the requirement for the corresponding test case, to maintain traceability.
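Requirements coverage is essentially a traceability mapping from requirements to test cases. A minimal sketch, with requirement and test case names taken from the case study above:

```python
# Map each requirement to the test cases that cover it (names from the case study).
coverage = {
    "Insert Order": ["verify insertion with valid values",
                     "verify insertion without filling all fields"],
    "Open Order":   ["verify open order with valid orderno",
                     "verify open order with invalid orderno"],
    "Delete Order": [],  # not yet covered by any test case
}

def uncovered(matrix):
    """Return the requirements with no covering test case (a traceability gap)."""
    return [req for req, tests in matrix.items() if not tests]

print(uncovered(coverage))  # ['Delete Order']
```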
Test Lab
After completing the test case documentation and reasonable automation, test engineers concentrate on
test execution.
a)Test Batch creation
Test Lab->Test Set menu->New Folder
->give that folder a name such as Usability/Functional/Non-Functional
->click OK and select that folder name
->Test Set->New Test Set->enter the batch name and click OK
->select the required test cases as members of that batch
->follow the same navigation to create multiple batches
b)Run Manual test
->select a manual test in the batch
->click the Run icon
->put the application build in its base state and click Execute Steps
->execute every step, comparing the expected result against the build's actual result
->click the End of Run icon
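The manual run above is, step by step, a comparison of expected against actual. A small sketch of that bookkeeping; the step contents are invented for illustration:

```python
# Each step pairs a description with its expected result and the observed actual result.
steps = [
    ("click Insert with valid values", "record saved", "record saved"),
    ("click Open with invalid orderno", "error message", "blank screen"),
]

def run_steps(step_list):
    """Mark each step Passed or Failed by comparing expected and actual results."""
    return [(desc, "Passed" if expected == actual else "Failed")
            for desc, expected, actual in step_list]

print(run_steps(steps))
# A Failed step leads to a defect report in the Defects part of TestDirector.
```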
Defects
If a test fails, the test engineers use the Defects part of TestDirector.
Defects->click the Add Defect icon
->fill in the fields of the defect report (as in the notes)
->click Submit
->now click the Mail Defect icon
->enter the "TO" email ids and send
