
Software Testing Guide

I) Introduction
a) Software, Types of Software
b) Software Project, Application and Product
c) Software Business Process
II) SDLC (Software Development Life Cycle)
a) Requirements Gathering
b) Analysis & Planning
c) Software Design
d) Coding/Implementation
e) Testing
f) Release & Maintenance
III) SDLC Models
a) Sequential Models
1) Waterfall Model
2) ‘V’ Model
b) Incremental / Iterative Models
1) Prototype Model
2) Spiral Model
3) Agile Model
IV) Test Levels
a) Unit Testing/Component Testing/Module Testing
b) Integration Testing
c) System Testing
d) Acceptance Testing
V) Software Environment
a) I-Tier or Standalone Applications
b) II-Tier or Client/Server Applications
c) III-Tier or Web Applications
d) N-Tier or Distributed Applications
VI) Test Types
a) Functional Testing
b) Non-Functional Testing
c) Structural Testing
d) Re & Regression Testing
VII) Test Design Techniques
a) Black box Techniques
1) Equivalence Partitioning/ Equivalence Classes (EP/EC)
2) Boundary Value Analysis (BVA)
3) Decision Table Testing
4) State Transition Testing
5) Use Case Testing
b) White box Techniques
1) Statement Testing
2) Decision Testing
3) Condition/Multi Condition Testing
4) Mutation Testing
VIII) Testing Process (STLC)
a) Test Strategy
b) Test Planning
c) Configuration Management
d) Risk Analysis
e) Test Design (Test Scenarios, Test Cases and Test Data)
f) Test Execution
g) Defect Reporting & Tracking
h) Status Reporting
i) Test Closure
IX) Informal Testing
a) Exploratory Testing
b) Error Guessing
X) Quality Standards
a) ISO Standards
b) IEEE Standards
c) CMM/CMMI Process Guidelines

XI) Software Business Domains


a) BFSI
b) ERP
c) Healthcare
d) Telecom
e) Ecommerce
f) Others

Software Quality:
Software achieves quality only when it meets customer requirements / customer satisfaction / customer expectations. Meeting customer requirements refers to producing the proper output; customer expectations refer to extra characteristics: a good interface, speed, privacy, security, ease of operation & good functionality.

Non-technical reasons: Cost of product & Time to market

Software Quality Assurance:

SQA is the set of concepts a company follows to develop software. An SQA team is responsible for monitoring & measuring the strength of the development processes.

Software Project:

A set of problems assigned by the client, which is solved by software people through the software engineering process, is called a software project. In short: the problem, the people & the process make the project.

Software Development Life Cycle / Life Cycle Development:

Stages involved in software project development

1) Information gathering: customer requirements
2) Analysis: customer requirements v/s solutions
3) Design: dividing the project into modules & coupling them
4) Coding: physical construction of the project
5) Testing
6) Maintenance

Information gathering stage:

In this stage, the Business Analyst studies the requirements of the client/customer and prepares the Business Requirement Specification (BRS) document.

Analysis:

In this stage, the Sr. Analyst prepares the Software Requirement Specification (S/w RS) document with respect to the corresponding BRS document. This document consists of two sub-documents: the System Requirement Specification (SRS) & the Functional Requirement Specification (FRS). The SRS contains details about the software & hardware requirements. The FRS contains details about the functionality to be used in the project.
Designing:

In the designing phase, designers create two kinds of documents: the High Level Design (HLD) document & Low Level Design (LLD) documents. The HLD consists of the main modules of the project from root to leaf and points to multiple LLDs. Each LLD consists of the sub-modules of a main module along with Data Flow Diagrams, ER diagrams, etc. These are prepared by technical support people or designers, called internal designers.

• A Black box tester should have knowledge of the customer requirements
• Black box testing tests against the BRS & S/w RS
• Testing external interfaces is Black box testing
• Testing internal interfaces is White box testing
• White box testing is done w.r.t. the design documents

Testing:

[V-Model diagram: development activities (Information gathering, Assessment of development plan, Design & Coding, Install build, Maintenance) are mapped to testing activities (Prepare test plan, Requirement phase testing, Design phase testing, Program phase testing, Functional & System testing, User acceptance testing, Test documentation, Port testing, Test software changes, Test efficiency).]

* Test plan is developed based on Development plan

Formula for test efficiency: DRE = A / (A + B)

DRE = Defect Removal Efficiency
A -> bugs found at the testing side
B -> bugs found at the client side

DRE = 0.8 – 0.9: good
0.7 – 0.8: requires improvement
< 0.7: poor
Refinement Form of V-Model

Per the refined form of the V-Model, small & medium scale organizations maintain a separate testing team only for the Functional & System testing stage.

1) Reviews during Analysis

In general, the software development process starts with Information Gathering & Analysis. In this stage, Business Analyst category people prepare the BRS & S/w RS documents; after completing the document preparation, they conduct reviews on the documents for completeness & correctness. This review focuses on the factors below:

1) Are they complete?
2) Do they meet the right requirements of the client/customer?
3) Are they achievable w.r.t. technology?
4) Are they reasonable w.r.t. time & cost?
5) Are they testable?

2) Reviews during Design

After completion of the Analysis phase & its reviews, the project-level designers start the logical design of the application in terms of external & internal design (HLDs & LLDs). In this stage, they conduct reviews for completeness & correctness of the design documents. This review focuses on the factors below:
1) Are they understandable?
2) Do they meet the right requirements of the client/customer?
3) Are they complete?
4) Are they followable?
5) Do they handle errors?

3) During Unit Testing

After completion of the design & its reviews, software programmers start coding, turning the logical design into the physical construction of the software. During this coding stage, programmers conduct Unit Testing through a set of White box testing techniques. Unit Testing is also known as Module / Component / Program / Micro testing.

White box Testing:

There are three possible White box testing techniques

1) Execution Testing

Basis path coverage – execution of all possible blocks in a program
Loops coverage – termination of loop statements
Programming technique coverage – fewer memory cycles & CPU cycles
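To make basis path & loop coverage concrete, here is a minimal Python sketch (the function and its tests are hypothetical illustrations, not from the original notes):

def classify_and_sum(values):
    """Return ('empty', 0) for no input, otherwise ('ok', sum of values)."""
    if not values:  # branch 1: empty input
        return ("empty", 0)
    total = 0
    for v in values:  # loop body: exercise one and many iterations
        total += v
    return ("ok", total)

# Basis path coverage: one test per independent path through the code.
assert classify_and_sum([]) == ("empty", 0)      # empty branch
assert classify_and_sum([5]) == ("ok", 5)        # loop executes once
# Loop coverage: confirm the loop terminates for longer inputs.
assert classify_and_sum([1, 2, 3]) == ("ok", 6)  # loop executes many times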

2) Operation Testing – Running the application on customer-expected platforms

3) Mutation Testing

Mutation means a change in the program. White box testers make small changes in the program to estimate the test coverage of that program; mutation testing can decide whether the test coverage is adequate or not.
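A minimal Python illustration of the idea (the code is hypothetical): deliberately change an operator in the program (the "mutant") and check whether the existing tests catch the change. A surviving mutant signals weak coverage.

# Original unit under test: a 50% discount.
def half_price(price):
    return price * 0.5

# Mutant: the operator is deliberately changed from * to /.
def half_price_mutant(price):
    return price / 0.5

# A suite that only tests price = 0 passes for BOTH versions,
# so the mutant survives: the coverage is too weak.
assert half_price(0) == half_price_mutant(0) == 0
# Adding a non-zero case kills the mutant: coverage is adequate.
assert half_price(100) == 50.0
assert half_price_mutant(100) == 200.0  # the mutant is detected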

4) Integration Testing

After completion of development & testing of the dependent modules, programmers combine them to form a system. In this integration, they conduct Integration testing on the combined modules w.r.t. the HLD.

There are three approaches to conduct Integration testing

a) Top-Down Approach

In this approach, testing is conducted on the main module without conducting testing on some of its sub-modules. A Stub is a temporary program used in place of an under-construction sub-module; it is known as the called program.
b) Bottom-Up Approach

In this approach, testing is conducted on the sub-modules without conducting testing on the main module. A Driver is a temporary program used in place of the main module; it is known as the calling program.
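A minimal Python sketch of both ideas (module names are hypothetical): during top-down integration a stub stands in for an unfinished sub-module, and during bottom-up integration a driver exercises a sub-module in place of the main module.

# Top-down: the main module is real, the sub-module is a stub.
def tax_stub(amount):
    # Temporary "called program": returns a canned value until the
    # real tax sub-module is ready.
    return 10.0

def billing_main(amount, tax_fn):
    """Main module under test: computes a total using a tax sub-module."""
    return amount + tax_fn(amount)

assert billing_main(100.0, tax_stub) == 110.0  # main module tested via the stub

# Bottom-up: the sub-module is real, a driver replaces the main module.
def real_tax(amount):
    return amount * 0.1

def tax_driver():
    # Temporary "calling program": exercises the sub-module directly.
    assert real_tax(100.0) == 10.0

tax_driver()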
c) Sandwich or Hybrid Approach

In this approach, testing is conducted using both the Top-Down & Bottom-Up approaches.

* Build: The final integrated set of all modules, in executable (*.exe) form, is called a build.

5) Functional & System testing (* imp)

After completion of the final integration of modules as a system, Testing Engineers plan to conduct Functional & System testing through Black box testing techniques. These techniques are classified into four categories.

1) Usability Testing
2) Functional Testing
3) Performance Testing
4) Security Testing

Of the above, 1 & 2 are core level and 3 & 4 are advanced level.

 During Usability testing, the Testing team validates the user-friendliness of the screens.
 During Functional testing, TT validates the correctness of customer requirements.
 During Performance testing, TT estimates the speed of processing.
 During Security testing, the Testing team validates the privacy of user operations.

1) Usability Testing

In general, TT starts test execution with Usability testing. During this test, the Testing team validates the user-friendliness of the screens of the build. During Usability testing, TT applies two types of sub-tests:

a) User Interface Test

 Ease of use (understandable screens)
 Look & feel (attractiveness & pleasantness)
 Speed of interface (fewer events to complete a task, easy & short navigation)

b) Manual Support Test

 Context sensitiveness of user manuals
 Manual support tests are conducted at the end of all other testing & before release

2) Functional Testing

The major part of Black box testing is Functional testing. During this test, the Testing team concentrates on "meeting customer requirements". Functional testing is classified into the sub-tests below.

a) Functional / Requirement Testing

During this test, Test Engineers validate the correctness of every functionality in terms of the coverages below.

 Behavioral coverage (changes in object properties)
 Input domain coverage (size & type of every input & output object)
 Error handling coverage (preventing negative navigation)
 Calculations coverage (correctness of outputs)
 Back-end coverage (impact of front-end operations on back-end table contents)
 Service levels coverage (order of functionalities)

b) Input Domain Testing

It is a part of Functionality testing; Test Engineers maintain special structures to define the size & type of every input object.

c) Recovery Testing

It is also known as Reliability testing. During this test, the Testing team validates whether the application changes from an abnormal state back to the normal state or not.

d) Compatibility Testing

It is also known as Portability testing. During this test, the Testing team validates whether the application build runs on customer-expected platforms or not. During this test, Test Engineers mostly find backward compatibility defects.

 Forward compatibility -> the application is ready to run, but the Operating System does not support it.
 Backward compatibility -> the Operating System supports it, but the application has internal coding problems that prevent it from running on that Operating System.

e) Configuration Testing
It is also known as Hardware compatibility testing. During this test, the Testing team validates whether the application build supports hardware devices of different technologies or not.

f) Inter-Systems Testing

During this test, the Testing team validates whether the application build co-exists with other existing software or not, and also tests whether any deadlock situation occurs.

g) Installation Testing

During this test, the Testing team validates whether the application build, along with its supporting software, installs onto customer-site-like configured systems. During this test, the Testing team observes the factors below.

 Setup program execution to start installation
 Easy interface during installation
 Amount of disk space occupied after installation

h) Parallel / Comparative Testing

During this test, the Testing team compares the application build with competitive products in the market.

i) Sanitation / Garbage Testing

During this test, the Testing team tries to find extra features in the application build w.r.t. the customer requirements.

* Defects
During testing, the Testing team reports defects to the developers in terms of the categories below:

1. Mismatches between expected & actual values
2. Missing functionality
3. Extra functionality w.r.t. customer requirements

* Manual v/s Automation

When a tester conducts a test on the application build without using any testing tool, it is called Manual testing; if a testing tool is used, it is called Automation testing. In the common testing process, Test Engineers apply test automation w.r.t. test impact & criticality: Impact -> test repetition; Criticality -> complexity of applying the test manually. Due to these two reasons, testing people use test automation.

j) Re-testing
Re-testing is the re-execution of a test with multiple test data to validate a function. E.g., to validate multiplication, Test Engineers use different combinations of inputs in terms of min, max, -ve, +ve, zero, int, float, etc.
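A minimal Python sketch of that multiplication example (the function under test and the data values are hypothetical):

import math

def multiply(a, b):
    # Hypothetical unit under test.
    return a * b

# Re-testing: the same check re-executed with many data combinations
# (zero, negative, positive, float, large values).
cases = [
    (0, 5, 0),
    (-3, 4, -12),
    (2, 3, 6),
    (2.5, 4, 10.0),
    (10**9, 2, 2 * 10**9),
]
for a, b, expected in cases:
    assert math.isclose(multiply(a, b), expected)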

k) Regression Testing

Regression testing is the re-execution of tests on a modified build to ensure the bug-fixing work & to check for the occurrence of any side effects. Test Engineers conduct this test using automation.

l) Error, Defect & Bug

A mistake in the code is an error. Due to errors in coding, Test Engineers get mismatches in the application build, called defects. If a defect is accepted by the developers to be solved, it is a bug.

Testing Documents
The figure above shows the various levels of documents prepared during project testing. The Test Policy is documented by Quality Control. The Test Strategy & Test Methodology are documented by the Quality Analyst or Project Manager. The Test Plan, Test Cases, Test Procedures, Test Scripts & Defect Reports are documented by Quality Assurance Engineers or Test Engineers.

Test Policy & Test Strategy are Company Level Documents. Test Methodology, Test
Plan, Test Cases, Test Procedure, Test Script, Defect Report & Final Test Summary Report
are Project Level Documents.

1) TEST POLICY:

This document is developed by Quality Control people (Management). In this document, Quality Control defines the "Testing Objective".

Test Policy Document

Address of the Company

Test Definition: Verification & Validation

Testing Process: Proper planning before starting testing

Testing Standards: One defect per 250 lines of code or per 10 FP (Function Points)

Testing Measurements: QAM, TTM, PCM

Signature: C.E.O.

QAM: Quality Assurance Measurements – how much quality is expected

TTM: Testing Team Measurements – how much testing is over & how much is yet to complete

PCM: Process Capability Measurements – capability carried from old projects to upcoming projects

2) TEST STRATEGY:

This is a company-level document developed by Quality Analyst or Project Manager category people; it defines the "Testing Approach".

Components:

a) Scope & Objective: Definition & purpose of testing in the organization
b) Business Issues: Budget control for testing
c) Test Approach: Mapping between development stages & testing issues
d) Test Deliverables: Required testing documents to be prepared
e) Roles & Responsibilities: Names of the jobs in the testing team & their responsibilities
f) Communication & Status Reporting: Required negotiation between the testing team & the development team during test execution
g) Automation & Testing Tools: Purpose of automation & the possibility of going for test automation
h) Testing Measurements & Metrics: QAM, TTM, PCM
i) Risks & Mitigations: Possible problems that may arise in testing & solutions to overcome them
j) Change & Configuration Management: How to handle change requests during testing
k) Training Plan: Required training sessions for the testing team before starting the testing process

Testing Issues:

1. Authorization: Whether a user is valid or not to connect to the application
2. Access Control: Whether a valid user has permission to use a specific service
3. Audit Trail: Maintains metadata about user operations in our application
4. Continuity of Processing: Inter-process communication
5. Correctness: Meets customer requirements in terms of functionality
6. Coupling: Co-existence with other existing software to share resources
7. Ease of Use: User-friendliness of the screens
8. Ease of Operation: Installation, un-installation, dumping, uploading, downloading, etc.
9. File Integrity: Creation of backups
10. Reliability: Recovery from abnormal states
11. Performance: Speed of processing
12. Portability: Runs on different platforms
13. Service Levels: Order of functionalities
14. Maintainability: Whether our application build is long-term serviceable for the customer
15. Methodology: Whether our testers follow standards or not during testing

3) TEST METHODOLOGY:

It is a project-level document. The methodology provides the required testing approach to be followed for the current project. At this level, the Quality Analyst selects the possible approaches for the corresponding project's testing.

PET Process:

Process involves Experts, Tools & Techniques. It is a refined form of the V-Model. It defines the mapping between development & testing stages. Following this model, organizations maintain a separate team for the Functional & System testing stage; the remaining stages of testing are done by the development people. This model was developed in HCL & recognized by the QA Forum of India.

[Testing process diagram: from Information Gathering (Business Requirement Specification) through Unit & Integration testing to Test Closure.]
4) TEST PLANNING:

After finalizing the possible tests for the current project, Test Lead category people concentrate on test plan document preparation to define the work allocation in terms of What, Who, When & How to test. To prepare the test plan document, the test plan author follows the approach below:

1] Team Formation:

In general, the test planning process starts with testing team formation. To define a testing team, the test plan author depends on the factors below:

1. Availability of testers
2. Test duration
3. Availability of test environment resources

2] Identify Tactical Risk:

After testing team formation, the plan author analyzes the possible risks & their mitigations (e.g., ad hoc testing).

# Risk 1: Lack of knowledge of Test Engineers on that domain
# Soln 1: Extra training to Test Engineers

# Risk 2: Lack of resources

# Risk 3: Lack of budget {less time}
# Soln 3: Increase the team size

# Risk 4: Lack of test data
# Soln 4: Conduct tests on a past-experience basis, i.e., ad hoc testing, or contact the client for data

# Risk 5: Lack of developer process rigor
# Soln 5: Report to the Test Lead for further communication between the testing & development PMs

# Risk 6: Delay of modified build delivery
# Soln 6: Extra hours of work are needed

# Risk 7: Lack of communication between Test Engineer -> Test team and Test team -> Development team

3] PREPARE TEST PLAN:

After completion of testing team formation & risk analysis, the test plan author concentrates on the Test Plan document in IEEE format.

01) Test Plan ID: Unique number or name, e.g., STP-ATM
02) Introduction: Description of the project
03) Test Items: Modules / Functions / Services / Features / etc.
04) Features to be tested: Modules for which test design is required (preparing test cases for newly added modules)
05) Features not to be tested: Which features are not to be tested and why (test cases already exist for the old modules, so no new test cases are needed for them)

Above (3), (4) & (5) decide which modules are to be tested -> What to test?

06) Approach: List of selected testing techniques to be applied to the above-specified modules, with reference to the TRM (Test Responsibility Matrix)
07) Feature pass or fail criteria: Description of when a feature is pass or fail (environment is good; decided after testing)
08) Suspension criteria: Possible abnormal situations raised during the testing of the above features (environment is not good; decided during testing)
09) Test Environment: Required software & hardware to test the above features
10) Test Deliverables: Required testing documents to be prepared (the types of documents prepared by the testers during testing)
11) Testing Tasks: Necessary tasks to do before starting every feature's testing

Above (6) to (11) specify -> How to test?

12) Staff & Training: Names of the selected Test Engineers & the training they require
13) Responsibilities: Work allocation to every member of the team (dependable modules are given to a single Test Engineer)
14) Schedule: Dates & times of testing the modules

Above (12) to (14) specify -> Who & When to test?

15) Risks & Mitigations: Possible testing-level risks & solutions to overcome them
16) Approvals: Signatures of the test plan author & Project Manager / Quality Analyst

4] Review Test Plan:

After completing the test plan document preparation, the test plan author conducts a review for completeness & correctness. In this review, the plan author follows the coverage analysis below:

 BRS-based coverage (What to test? review)
 Risk-based coverage (Who & When to test? review)
 TRM-based coverage (How to test? review)

5) TEST DESIGNING:

After completion of test planning & the required training for the testing team, the corresponding testing team members start preparing the list of test cases for their responsible modules. There are three types of test case design methods to cover core-level testing (Usability & Functionality testing).

a) Business logic based test case design (S/w RS)
b) Input domain based test case design (E-R diagrams / data models)
c) User interface based test case design (MS-Windows rules)

a) Business Logic based Test Case design (S/w RS)

In general, Test Engineers prepare a set of test cases based on the Use Cases in the S/w RS. Every Use Case describes a functionality in terms of inputs, process & outputs; based on these Use Cases, Test Engineers prepare test cases to validate the functionality. Every test case defines a test condition to be applied.

To prepare test cases, Test Engineers study the Use Cases using the approach below:

Steps:

1) Collect the Use Cases of our responsible module
2) Select a Use Case & its dependencies from the list
2.1) Identify the entry condition (base state)
2.2) Identify the input required (test data)
2.3) Identify the exit condition (end state)
2.4) Identify the output & outcome (expected)
2.5) Identify the normal flow (navigation)
2.6) Identify the alternative flows & exceptions
3) Write test cases based on the above information
4) Review the test cases for completeness & correctness
5) Go to step (2) until all Use Cases are covered

Use Case I:

A login process takes a user id & password to validate users. During these validations, the login process allows a user id that is alphanumeric, 4 to 16 characters long, & a password of lowercase alphabets, 4 to 8 characters long.
Case study:

Test Case 1) Successful entry of user id


BVA (Size)

min -> 4 chars => pass
min-1 -> 3 chars => fail
min+1 -> 5 chars => pass
max-1 -> 15 chars => pass
max+1 -> 17 chars => fail
max -> 16 chars => pass

ECP

Valid: a-z, A-Z, 0-9
Invalid: special chars, blank
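A minimal Python sketch of deriving those boundary values in code (the helper and the validation rule are hypothetical illustrations of Use Case I above):

def boundary_values(min_len, max_len):
    """The six classic BVA sizes: min, min-1, min+1, max-1, max+1, max."""
    return [min_len, min_len - 1, min_len + 1, max_len - 1, max_len + 1, max_len]

def is_valid_user_id(user_id):
    # Rule from Use Case I: alphanumeric, 4 to 16 characters long.
    return user_id.isalnum() and 4 <= len(user_id) <= 16

for n in boundary_values(4, 16):
    candidate = "a" * n
    expected = 4 <= n <= 16  # pass only inside the boundaries
    assert is_valid_user_id(candidate) == expected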

Test Case Format:

During test design, Test Engineers write the list of test cases in IEEE format.

01) Test Case ID: Unique number or name
02) Test Case Name: Name of the test condition to be tested
03) Feature to be tested: Module / Function / Feature
04) Test Suite ID: Batch ID of which this case is a member
05) Priority: Importance of the test case {low, medium, high}

P0 -> Basic functionality
P1 -> General functionality (input domain, error handling, compatibility, etc.)
P2 -> Cosmetic testing (UIT)

06) Test Environment: Required software & hardware for execution
07) Test Effort (person/hr): Time to execute this test case, e.g., 20 minutes
08) Test Duration: Date & time
09) Test Setup: Required testing tasks to do before starting case execution (pre-requisites)
10) Test Procedure: Step-by-step procedure to execute the test case

Test Procedure Format:

1) Step No
2) Action
3) Input required
4) Expected       -> filled during Test Design
5) Actual
6) Result
7) Comments       -> filled during Test Execution

11) Test case pass or fail criteria: When this case is considered pass or fail

Note: Test Engineers follow the list of test cases along with the step-by-step procedures only.

Example 1:

Prepare the test procedure for the test case below: "Successful file save operation in Notepad".

Step No | Action | Input Required | Expected
1 | Open Notepad | – | Empty editor
2 | Fill with text | Text | Save icon enabled
3 | Click the Save icon, or click the File menu & select the Save option | – | Save dialog box appears with a default file name
4 | Enter a file name & click Save | Unique file name | Focus returns to Notepad & the file name appears in Notepad's title bar

Note: For more examples refer to notes

b) Input Domain based Test Case design (E-R diagrams / Data Models)

In general, Test Engineers prepare the maximum number of test cases based on the Use Cases or functional requirements in the S/w RS. These functional specifications provide functional descriptions with inputs, outputs & process, but they are not responsible for providing information about the size & type of input objects. To collect this type of information, Test Engineers study the data model (E-R diagram) of their responsible modules. During the data model study, a Test Engineer follows the approach below:

Steps:

1) Collect the data model of the responsible modules
2) Study every input attribute in terms of size, type & constraints
3) Identify the critical attributes, which participate in manipulations & retrievals
4) Identify the non-critical attributes, which are input/output only

Example (bank account form): A/C No, A/C Name, Balance -> critical; Address -> non-critical

5) Prepare BVA & ECP for every input object

DATA MATRIX

Input Attribute | ECP: Valid | ECP: Invalid | BVA (Size/Range): Minimum | BVA (Size/Range): Maximum
xxxx | xxxx | xxxx | xxxx | xxxx
xxxx | xxxx | xxxx | xxxx | xxxx

Note: In general, Test Engineers prepare step-by-step procedure-based test cases for functionality testing; they prepare valid/invalid table-based test cases for input domain testing of objects {Data Matrix}.
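A minimal Python sketch of driving checks from such a data matrix (the attribute rule, exactly 6 digits for an account number, is a hypothetical illustration):

# Each row: (attribute, valid samples, invalid samples, min size, max size)
data_matrix = [
    ("account_no", ["123456"], ["12a456", ""], 6, 6),
]

def is_valid_account_no(value):
    return value.isdigit() and len(value) == 6

for attr, valid, invalid, min_size, max_size in data_matrix:
    for v in valid:
        assert is_valid_account_no(v), f"{attr}: {v!r} should pass"
    for v in invalid:
        assert not is_valid_account_no(v), f"{attr}: {v!r} should fail"
    # BVA on size: one below the minimum and one above the maximum must fail.
    assert not is_valid_account_no("1" * (min_size - 1))
    assert not is_valid_account_no("1" * (max_size + 1))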

Note: For examples, refer to the notes.

c) User Interface based test case design (MS-Windows rules)

To conduct Usability testing, Test Engineers prepare a list of test cases based on our organization's user interface standards or conventions, global user interface rules & the interests of customer-site people.

Example: Test Cases

1) Spelling check
2) Graphics check (screen-level alignment, font style, color, size & Microsoft's six rules)
3) Meaning of error messages
4) Accuracy of data displayed
5) Accuracy of data in the database as a result of user inputs; if the developer restricts the data at the database level by rounding / truncating, then the developer must also restrict the data in the front-end as well
6) Accuracy of data in the database as a result of external factors, e.g., file attachments
7) Meaningful help messages (manual support testing)

Review Test Cases:

After completing the preparation of all possible test cases for the responsible modules, the testing team concentrates on reviewing the test cases for completeness & correctness. In this review, the testing team applies coverage analysis.
Test Case Review

1) BR based coverage
2) Use Cases based coverage
3) Data model based coverage
4) User Interface based coverage
5) TRM based coverage

At the end of this review, the Test Lead prepares the Requirement Traceability Matrix or Requirement Validation Matrix (RTM / RVM).

Business Requirement | Source (Use Cases, Data model) | Test Cases
xxxx | xxxx | xxxx, xxxx, xxxx
xxxx | xxxx | xxxx, xxxx
xxxx | xxxx | xxxx, xxxx, xxxx

The RTM / RVM defines the mapping between the customer requirements & the test cases prepared to validate those requirements.

6) TEST EXECUTION:

After completion of test case selection & review, the testing team concentrates on the build release from the development side & test execution on that build.

a) Test Execution Levels or Phases: see the figure below for a clear understanding of these levels.

b) Test Execution Levels v/s Test Cases:

Level – 0 -> P0 test cases
Level – 1 -> All P0, P1 & P2 test cases as batches
Level – 2 -> Selected P0, P1 & P2 w.r.t. the modification
Level – 3 -> Selected P0, P1 & P2 w.r.t. critical areas in the master build

A) Test Execution Levels or Phases:

[Diagram: Level – 1 Comprehensive testing -> Defect Report -> Defect Fixing -> Level – 2 Regression testing -> Level – 3 Final Regression.]
c) Build Version Control:

In general, Test Engineers receive the build from development in the model below:

[Diagram: development server -> FTP (File Transfer Protocol) -> testing environment -> testers.]

During test execution, Test Engineers receive modified builds from the developers. To distinguish old & new builds, the development team maintains a unique version for each build, understandable to the tester / testing team. For this version controlling, developers use version control tools (e.g., Visual SourceSafe).

d) Level – 0 (Sanity / Test Acceptance / Build verification test):

After receiving the initial build, Test Engineers concentrate on the basic functionality of that build to estimate its stability for complete testing. In this sanity testing, Test Engineers try to execute all P0 test cases to cover the basic functionality. If functionality is not working, or functionality is missing, the testing team rejects that build. If the testers decide the build is stable, they concentrate on executing all test cases to detect defects.

During this sanity testing, Test Engineers observe the factors below on that build:

1) Understandable
2) Operable
3) Observable
4) Consistent
5) Controllable
6) Simple
7) Maintainable
8) Automatable

From these 8 testability issues, the sanity test is also known as Testability testing / Octangle testing.

e) Test Automation:
If test automation is possible, the testing team concentrates on test script creation using the corresponding testing tools. Every test script consists of navigation statements along with the required checkpoints. Automation starts from a stable build; all P0 & carefully selected P1 test cases are the candidates for automation.

f) Level – 1 (Comprehensive testing)

After completing sanity testing & the possible test automation, the testing team concentrates on forming test batches of dependent test cases. Test batches are also known as test suites / test sets. During test batch execution, Test Engineers prepare a test log document; this document consists of three types of entries:

1) Passed (Expected = Actual)
2) Failed (any one Expected != Actual; any one Expected varies from the Actual)
3) Blocked (the corresponding parent functionality failed)

g) Level – 2 (Regression testing)


During comprehensive test execution, Test Engineers report mismatches as defects to the developers. After receiving a modified build from the developers, Test Engineers concentrate on regression testing to ensure the bug-fixing work & the occurrence of any side effects.

Case I:
If the development team resolved a bug whose severity is high, Test Engineers re-execute all P0, all P1 & carefully selected P2 test cases on the modified build.

Case II:
If the bug severity is medium, then all P0, carefully selected P1 & some P2 test cases.

Case III:
If the bug severity is low, then some P0, P1 & P2 test cases.

Case IV:
If the development team released a modified build due to sudden changes in the project requirements, then Test Engineers re-execute all P0, all P1 & carefully selected P2 test cases w.r.t. that requirement modification.

h) Level – 3 (Final Regression / Pre-Acceptance testing)

7) TEST REPORTING:
During comprehensive testing, Test Engineers report mismatches as defects to the developers through the IEEE format.

1) Defect ID: Unique number or name
2) Description: Summary of the defect
3) Feature: Module / Function / Service in which the Test Engineer found the defect
4) Test Case Name: The corresponding failed test condition
5) Reproducible (Yes / No): Yes -> the defect appears every time during test execution;
No -> the defect appears rarely
6) If Yes, attach the test procedure
7) If No, attach a snapshot & strong reasons
8) Status: New / Reopen
9) Severity: Seriousness of the defect w.r.t. functionality (high / medium / low)
10) Priority: Importance of the defect w.r.t. the customer (high / medium / low)
11) Reported by: Name of the Test Engineer
12) Reported on: Date of submission
13) Assigned to: Name of the responsible person in the development team -> PM
14) Build Version ID: The build in which the Test Engineer found the defect
15) Suggested fix (optional): The tester tries to offer a suggestion to solve this defect

16) Fixed by: PM or Team Lead
17) Resolved by: Developer name
18) Resolved on: Date of solving by the developers
19) Resolution type: see the resolution types below
20) Approved by: Signature of the Project Manager (PM)

Defect Age: The time gap between “reported on” & “resolved on”
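A minimal Python sketch of computing defect age from those two fields (the dates are hypothetical):

from datetime import date

reported_on = date(2005, 12, 30)  # hypothetical "Reported on"
resolved_on = date(2006, 1, 2)    # hypothetical "Resolved on"
defect_age = (resolved_on - reported_on).days
print(f"Defect age: {defect_age} days")  # -> 3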

Defect submission process:

Defect Status Cycle: New -> Open / Rejected / Deferred -> Closed -> Reopened

Deferred => accepted, but not taken up for resolution in this version

Defect Resolution:

After receiving the defect report from the testers, the developers review the defect & send a resolution type to the tester as a reply.

01) Duplicate: rejected, because the defect is the same as a previously reported defect
02) Enhancement: rejected, because the defect relates to a future requirement of the customer
03) H/w limitation: rejected, because the defect arises from a limitation of the hardware devices
04) S/w limitation: rejected, because the defect arises from a limitation of the software technology
05) Not applicable: rejected, because the defect has no proper meaning
06) Functions as designed: rejected, because the coding is correct w.r.t. the design documents
07) Need more information: neither accepted nor rejected; the developers require extra information to understand the defect
08) Not reproducible: neither accepted nor rejected; the developers require the correct procedure to reproduce the defect
09) No plan to fix it: neither accepted nor rejected; the developers want extra time to fix it
10) Fixed: the developers accepted to resolve it
11) Fixed indirectly: accepted, but not taken up for resolution in this version (deferred)
12) User misunderstanding: needs extra negotiation between the testing & development teams

Types of defects:

01) User Interface bugs (low severity):


1) Spelling mistakes (high priority)
2) Improper alignment (low priority)

02) Boundary related bugs (medium severity)


1) Doesn’t allow a valid type (high priority)
2) Allows an invalid type also (low priority)

03) Error handling bugs (medium severity)


1) Doesn’t provide an error message window (high priority)
2) Improper meaning of the error message (low priority)

04) Calculations bugs (high severity)


1) Final output is wrong (low priority)
2) Dependent results are wrong (high priority)

05) Race condition bugs (high severity)


1) Dead lock (high priority)
2) Improper order of services (low priority)

06) Load conditions bugs (high severity)


1) Doesn’t allow multiple users to access / operate (high priority)
2) Doesn’t support the customer-expected load (low priority)

07) Hardware bugs (high severity)


1) Doesn’t handle the device (high priority)
2) Wrong output from the device (low priority)

08) ID control bugs (medium severity)


1) Logo missing, wrong logo, version number mistake, copyright window missing, developer names missing, tester names missing

09) Version control bugs (medium severity)


1) Differences between two consecutive build versions

10) Source bugs (medium severity)


1) Mistake in help documents – Manual support

8) TEST CLOSURE:

After completing all possible test cycle executions, the Test Lead conducts a review to estimate the completeness & correctness of the testing. In this review, the Test Lead examines the factors below with the Test Engineers.

1) Coverage Analysis

a) BR based coverage
b) Use Cases based coverage
c) Data model based coverage
d) User Interface based coverage
e) TRM based coverage

2) Bug density

a) Module A has 20% of the bugs found
b) Module B has 20% of the bugs found
c) Module C has 40% of the bugs found
d) Module D has 20% of the bugs found

3) Analysis of deferred bugs

Whether the deferred bugs are really deferrable or not.

At the end of this review, the testing team concentrates on the high-bug-density modules, or on all modules if time is available.

9) User Acceptance Testing (UAT)

Organization management concentrates on UAT to collect feedback. There are two approaches to conduct this testing:

1. Alpha (α) test
2. Beta (β) test

10) Sign Off

After completion of User Acceptance Testing & the resulting modifications, the Test Lead concentrates on creating the final test summary report. It is a part of the Software Release Note (S/w RN). This final test summary report consists of the documents below.

1) Test Strategy / Methodology (TRM)


2) System Test Plan
3) Requirement Traceability Matrix (RTM)
4) Automated Test Scripts
5) Bugs Summary Reports

The bug summary report is laid out horizontally with the columns below:

• Bug Description
• Feature
• Found By
• Status (closed / deferred)
• Comments

CASE STUDY ON A PROJECT TESTING PROCESS

The "X" is a client/server product. It runs on a single computer (local host) with Windows 2000 as the OS. This product provides a facility to search for matched records in an existing database w.r.t. the given search keys. The product maintains a default administrator to create new users, and every valid user can search data in the database.
Activity flow diagram:

[Diagram: Admin or user logs in; invalid users are rejected; valid users enter search keys to search the existing DB.]

FUNCTIONAL POINTS: -

 Login takes a user id and password
 The user id and password allow lowercase alphabets, 4 to 8 characters long
 New user ids are created by the administrator only
 The new user creation window allows a unique user id and password, with Create and Cancel buttons
 "Search records" is opened for valid users only
 The "Search records" window maintains the search keys below (collected from the design document):
Customer id: 6-digit number
First name: one character, upper or lower case
Last name: one to eight characters, lower case
Date of birth: dd\mm\yy, numeric
Age: "from" and "to", numeric

 The "Search records" window allows the combinations of fields below to search records:

→ Last name
→ Last name with first name
→ Last name with first name and d-o-b
→ Last name with first name and age
→ Customer id only

 The "Search records" window consists of "start search" and "stop search" buttons
 Allows the last name as full or partial
 Allows the customer id as full or partial with * as a wild card (e.g., 286*)
 Refreshes the search records window using Alt+Ctrl
 Displays the matched records in a pop-up window with an OK button after the search completes
 Returns a message like "too many matches to display" when the matched records exceed 1000

Test methodology (by PM):

Testing Factors | System Testing stage
Authorization | √
Access Control | √
Audit Trail | ×
Continuity of Processing | √
Correctness | √
Coupling | ×
Data Integrity | √
Ease of Use | √
Ease of Operation | √
Reliability | √
Portability | ×
Performance | ×
Service Levels | √
Maintainability | √
Methodology | √

Note: The Test Responsibility Matrix is 11 x 1 (11 selected factors by one testing stage).

 The "X" product maintains metadata; it does not share resources with other applications
 This application is going to run on Windows 2000 only
 It is a standalone application (one user can operate at a time), so there is no need to test load and stress

System test plan (by Test Lead):

1. Test plan ID: STP_X

2. Introduction: "X" is a product that provides a facility to connect to an existing database and search for matching records to retrieve.

3. Test items: → User creation
→ Login by user
→ Search records

4. Features to be tested: → User creation
→ Login by user
→ Search records
5. Features not to be tested:-
6. Approach: To apply the 11 factors selected by the PM from the TRM, the Black box testing techniques below are suitable, as decided by the Test Lead:
→ User interface testing
→ Manual support testing
→ Functionality testing
→ Input domain testing
→ Recovery testing
→ Sanitation testing
→ Installation testing
→ Security testing
→ Compliance testing

7. Entry criteria:
→ Are the necessary documents available?
→ Is the "X" product ready for release from the developers?
→ Is the supporting database available to search the required records?
→ Is the test environment ready?

8. Suspension criteria:
→ A database disconnection may require suspension of testing
→ Suspension of testing is mandatory when the record-search process runs indefinitely
→ If the admin fails to create a new user, a decision can be made to continue testing the "search records" module using the admin or other existing valid users

9. Exit criteria:
→ Ensure that the "X" product provides the required services
→ Ensure that all test documents have been completed and are up to date
→ All high-severity bugs are resolved

10. Test deliverables (prepared by the Test Engineer):

Test cases
Test procedures
Automated test scripts
Test log
Defect report

11. Test environment:

→ Client PC or local host
→ OS: Windows 2000
→ DB server: Oracle / SQL Server / MS-Access
→ Connectivity: DSN / thin drivers

12. Testing tasks:
→ Availability of the admin with password
→ The database consists of records
→ Valid users are able to connect to the database to search records
13. Staff and training needs:
Test Engineer / QA Engineer: Kurugonda Srinaiah

14. Responsibilities:

Document / Report | Responsibility | Effort
Test cases with procedures | K. Srinaiah | 12 hours
Defect report | K. Srinaiah | every day during testing
Test completion reports | Test Lead with K. Srinaiah | at the end of testing
15. Schedule:

Task | Effort | Start date | End date
Test design | 12 hours | 28-12-2005 | 29-12-2005
Implement test and review | 4 hours | 29-12-2005 | 29-12-2005
Execute test | 24 hours | 30-12-2005 | 01-01-2006
Evaluate test | 8 hours | 02-01-2006 | 02-01-2006

16. Risks and mitigations:

1. Lack of documentation:
→ Contact the business analyst
→ Ad-hoc testing
→ Contact customer-site people if possible

2. Lack of time:
→ Overtime

3. Delays in delivery:
→ Overtime

4. Lack of development process rigor:
→ Contact the test lead to motivate the developers
→ Overtime to complete the tasks on time

17. Approvals:

Signature of PM and signature of Test Lead.

Test case 1:

Test case 2: Successful entry of user id

BVA (size)

min = 4 chars → pass
max = 8 chars → pass
min-1 = 3 chars → fail
min+1 = 5 chars → pass
max-1 = 7 chars → pass
max+1 = 9 chars → fail
Test case 3: Successful entry of password

BVA (size)
min = 4 chars → pass
max = 8 chars → pass
min-1 = 3 chars → fail
min+1 = 5 chars → pass
max-1 = 7 chars → pass
max+1 = 9 chars → fail

Test case 4: Successful login operation

User id | Password | Criteria
admin | valid | pass
admin | invalid | fail
other valid | valid | pass
other valid | invalid | fail
other invalid | invalid | fail
blank | any value | fail
valid | blank | fail
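A minimal Python sketch of executing that decision table (the credential store and names are hypothetical):

# Hypothetical credential store for illustration.
USERS = {"admin": "adminpw", "srini": "userpw"}

def login(user_id, password):
    if not user_id or not password:  # blank fields always fail
        return False
    return USERS.get(user_id) == password

# Rows of the decision table: (user id, password, expected result)
table = [
    ("admin", "adminpw", True),    # admin, valid
    ("admin", "wrong", False),     # admin, invalid
    ("srini", "userpw", True),     # other valid, valid
    ("srini", "wrong", False),     # other valid, invalid
    ("nobody", "wrong", False),    # other invalid, invalid
    ("", "adminpw", False),        # blank user id
    ("admin", "", False),          # blank password
]
for user_id, password, expected in table:
    assert login(user_id, password) == expected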

Test case 5: Successful selection of the "new user creation" option

Test case 6: Unsuccessful selection of this option because the current user is not the admin

Test case 7: Successful entry of user id in the create user window
Note: (same as Test case 2)

Test case 8: Successful entry of password in the create user window
Note: (same as Test case 3)

Test case 9: Successful creation of a new user by the admin

Test case 10: Unsuccessful new user creation because the given user id is not unique

Test case 11: Successful closing of the new user creation window using Cancel (after entering the user id and password)

Test case 12: Successful selection of the "search records" option

Test case 13: Successful entry of customer id

BVA (size): min = max = 6 digits

ECP (type):
Valid: 0-9; * allowed only as the trailing character of a partial id
Invalid: a-z, A-Z, special characters except *, blank space, * at the start or in the middle

e.g., 123* √, *23 ×, 1*23 ×
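A minimal validation sketch for that rule (assuming, per the functional points above, digits only with an optional trailing * wildcard; the regex is a hypothetical illustration):

import re

# Full id: exactly 6 digits. Partial id: 1-5 digits followed by a trailing *.
CUSTOMER_ID = re.compile(r"^(\d{6}|\d{1,5}\*)$")

assert CUSTOMER_ID.match("286123")   # full id
assert CUSTOMER_ID.match("286*")     # partial with trailing wildcard
assert not CUSTOMER_ID.match("*23")  # wildcard at the start
assert not CUSTOMER_ID.match("1*23") # wildcard in the middle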

Test case 14: Successful entry of first name

BVA (size): min = max = 1 char → pass; 0 chars → fail

ECP (type):
Valid: a-z, A-Z
Invalid: 0-9, special characters

Test case 15: Successful entry of last name

BVA (size): min = 1 and max = 8 chars → pass

ECP (type):
Valid: a-z
Invalid: A-Z, 0-9, special characters

Test case 16: Successful entry of DOB

BVA (range):
Day: min → 01, max → 31
Month: min → 01, max → 12
Year: min → 00, max → 99

ECP (type):
Valid: 0-9
Invalid: a-z, A-Z, special characters

Test case 17: Successful entry of age "from"

BVA (range): min → 01, max → 22

ECP (type):
Valid: 0-9
Invalid: a-z, A-Z, special characters

Test case 18: Successful entry of age "to"

Note: same as above

Test case 19: Successful display of matched records with full last name and all other fields blank

Test case 20: Successful display of matched records with full last name and first name

Test case 21: Successful display of matched records with full last name, first name and DOB

Test case 22: Successful display of matched records with full last name, first name and age

Test case 23: Unsuccessful search operation due to an invalid combination of filled fields

Test case 24: Unsuccessful search operation due to invalid DOB

Test case 25: Unsuccessful search operation due to age "from" greater than age "to"

Test case 26: Successful display of records with search key as partial last name and other fields blank

Test case 27: Successful display of records with search keys as partial last name and first name

Test case 28: Successful display of records with search keys as partial last name, first name and DOB

Test case 29: Successful display of records with search keys as partial last name, first name and age

Test case 30: Successful display of records with search key as customer id

Test case 31: Successful display of records with search key as partial customer id with * as wild card

Test case 32: Unsuccessful display of records due to no matching records in the database w.r.t. the given search keys

Test case 33: Unsuccessful display of records with a "too many matches" message when the number of matched records is greater than 1000

Test case 34: Successful refresh of the search window with Alt+Ctrl

Test case 35: Successful termination of the search operation when the "stop search" button is clicked

Test case 36: Successful closing of the records window by clicking OK after searching
Test case 37: Spelling check on every screen

Test case 38: Graphics check on every screen (alignment, font, size, labels, graphics, colour, etc.)

Test case 39: Accuracy of the data displayed (e.g., DOB as dd\mm\yy)

Test case 40: Meaningful help messages and error messages

(Test cases 37 to 40 cover Usability testing.)
