
Strategy:

A strategy is a plan of action designed to achieve a particular goal.


Verification: Static testing, code review. Are we building the product right?
Validation: Dynamic testing. Are we building the right product?
Types of Testing: Black & White Box
White Box (Structural): Unit & Integration testing
Black Box (Behavioural, functional): Functional & Non-Functional
Functional (Localisation, GUI, globalisation, regression)
Non-Functional (Security, benchmarking, performance (stress, load, volume))
Regression testing: Test the modified functionality, test the bug fixes, test the basic functionality
Non-Functional Testing: Stress (execute the app with minimum memory), Load (scalability)
Load: At what point the application fails.
Volume Testing: Transaction processing systems, logs, data files
Benchmarking: activities that compare an organization's product with a similar market-leading one.
Performance Parameters: latency, throughput, utilization, response time, etc.
Test strategy:
a. Test Objective
   When to deliver
   How many phases of testing
b. Testing scope
   Functionality to be tested
   Functionality NOT to be tested
c. Testing Dependencies
   S/W
   H/W
d. Test Approach
   How many people
   What are the various phases
   Testing phases conducted offshore/onsite
   Manual/automation
e. Test environment
f. Test automation requirement
g. Test metrics plan
   Defect density
   Residual defect density
   Code coverage
   Test case productivity
h. Risk management
Defect metrics:
Defects uncovered in testing
Severity/priority/age
Effectiveness of test case execution:
Defects uncovered after the software is placed into operational status.
Data concerning s/w defects:
Total no. of defects detected in each activity.
Total no. of defects corrected in each activity.
Avg duration b/w defect detection and correction
Avg effort to correct a defect
Total no. of defects remaining.
Defect trend analysis: Defect status (New, Open, Closed)
Test coverage ratio (Functional): No. of functions covered / Total functions × 100
TC efficiency: cumulative defects found by new test cases / cumulative new test cases executed
TC passing rate: No. of test cases passed / No. of test cases executed
Defect severity distribution: No. of defects in the categories of fatal, major, minor
Test schedule deviation ratio = (actual end date - planned end date) / planned duration × 100
Test effort deviation ratio = (actual effort - planned effort) / planned effort × 100
Release coverage: No. of test cases executed / Total no. of test cases planned for the release
Test effectiveness: No. of defects found by test team / (No. of defects found by test team + No. of defects found in the product after release)
Defect turnaround time: defect closed day - defect open day
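A minimal Python sketch of how a few of these ratios can be computed; the counts are illustrative placeholders, not real project data.

    def percent(part, whole):
        # Guard against division by zero.
        return part / whole * 100 if whole else 0.0

    executed, passed, planned = 180, 153, 200
    team_defects, post_release_defects = 45, 5

    tc_passing_rate    = percent(passed, executed)    # passed / executed
    release_coverage   = percent(executed, planned)   # executed / planned for release
    test_effectiveness = percent(team_defects, team_defects + post_release_defects)

    print(tc_passing_rate, release_coverage, test_effectiveness)  # 85.0 90.0 90.0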
RTM: Relationship between requirements and test cases (one-to-one, one-to-many, many-to-many, one-to-none (functionality may not be implemented))
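A small sketch of an RTM as a plain mapping from requirement IDs to test case IDs (IDs invented for illustration); empty rows surface the one-to-none case.

    rtm = {
        "REQ-001": ["TC-01"],            # one-to-one
        "REQ-002": ["TC-02", "TC-03"],   # one-to-many
        "REQ-003": [],                   # one-to-none: functionality may not be implemented
    }

    uncovered = [req for req, tcs in rtm.items() if not tcs]
    print("Requirements with no test coverage:", uncovered)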
Estimation approach:
Classify requirements/functions/use cases/screens based on complexity
Use the multiplicity factor and convert to test cases
Compute test effort
Assume a reusability factor and compute test effort
Complete resource loading with the appropriate no. of test leads/managers
Onsite-offshore resource mix
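A hedged sketch of this flow in Python; the complexity buckets, multiplicity factors, hours per test case, and reusability factor are all assumed sample values.

    # 1. Classify requirements by complexity (sample counts).
    counts = {"simple": 20, "medium": 10, "complex": 5}

    # 2. Multiplicity factor: test cases derived per requirement (assumed values).
    multiplicity = {"simple": 2, "medium": 5, "complex": 10}
    test_cases = sum(counts[c] * multiplicity[c] for c in counts)

    # 3. Test effort, assuming 0.5 person-hours per test case.
    effort_hours = test_cases * 0.5

    # 4. Reusability factor: assume 20% of the effort is saved by reuse.
    effort_hours *= 1 - 0.20

    print(test_cases, "test cases;", effort_hours, "person-hours")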
COTS:
Short for commercial off-the-shelf: an adjective that describes software or hardware products that are ready-made and available for sale to the general public. For example, Microsoft Office is a COTS product that is a packaged software solution for businesses. COTS products are designed to be implemented easily into existing systems without the need for customization.
Bottom-up approach: In this approach testing is conducted from sub module to main module. If the main module is not developed, a temporary program called a DRIVER is used to simulate the main module. Lower-level components are tested properly. A test driver is a program that directs the execution of another program.
Top-down approach: In this approach testing is conducted from main module to sub module. If the sub module is not developed, a temporary program called a STUB is used to simulate the sub module.
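A minimal Python illustration of both ideas; the module and function names are invented for the example.

    # Bottom-up: the sub module exists; a DRIVER simulates the missing main module.
    def calculate_tax(amount):            # real, already-built sub module
        return amount * 0.18

    def driver():                         # temporary program that exercises the sub module
        assert round(calculate_tax(100), 2) == 18.0
        print("sub module OK")

    # Top-down: the main module exists; a STUB simulates the missing sub module.
    def tax_stub(amount):                 # temporary canned response
        return 18.0

    def generate_invoice(amount, tax_fn=tax_stub):   # real main module under test
        return amount + tax_fn(amount)

    driver()
    print("invoice total:", generate_invoice(100))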
SMOKE TESTING:
Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without getting into too much depth.
A smoke test is scripted, either using a written set of tests or an automated test.
A smoke test is designed to touch every part of the application in a cursory way. It's shallow and wide.
Smoke testing is conducted to ensure that the most crucial functions of a program are working, without bothering with finer details (such as build verification).
Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.
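Since a smoke test is scripted, a minimal automated sketch might look like this; the functions are stand-ins for the real application under test, with names assumed.

    # Stand-ins for the real application under test (names assumed).
    def start_app():          return True
    def login(user, pwd):     return user == "admin"
    def open_report():        return "report"

    def smoke_suite():
        # Shallow and wide: touch each major area once, no deep checks.
        checks = {
            "startup": start_app(),
            "login":   login("admin", "secret"),
            "reports": open_report() is not None,
        }
        failed = [name for name, ok in checks.items() if not ok]
        if failed:
            print("SMOKE FAIL:", failed)
        else:
            print("SMOKE PASS")

    smoke_suite()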
SANITY TESTING:
A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.
A sanity test is usually unscripted.
A sanity test is used to determine whether a small section of the application is still working after a minor change.
Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.
Sanity testing is to verify whether requirements are met or not, checking all features breadth-first.
TC Creation productivity: test cases written
Unit of measure: No. of test cases / hr/week/day
TC Review Error Rate = No. of errors / No. of test cases
Defect Rejection Ratio = No. of defects rejected / Total no. of defects
TC Execution productivity: No. of test cases executed / hr
Defect Density:
Defect density is the number of confirmed defects detected in software/component during a defined period of development/operation, divided by the size of the software/component.
A / (A + B)
A = Bugs detected by the test team
B = Defects detected by the end user or client
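A small sketch of both calculations; A, B, and the size are sample numbers, not real data.

    def defect_ratio(a, b):
        # A / (A + B): share of all defects caught by the test team.
        return a / (a + b) if (a + b) else 0.0

    def defect_density(defects, kloc):
        # Confirmed defects divided by software size (per thousand lines of code).
        return defects / kloc if kloc else 0.0

    print(defect_ratio(a=45, b=5))              # 0.9
    print(defect_density(defects=50, kloc=20))  # 2.5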

Error: is an undesirable deviation from requirements (a mistake in coding is called an error)
Bug: is an error found BEFORE the application goes into production (difference between actual result and expected result)
Defect: is an error found AFTER the application goes into production
Residual defects: defects which are found after the testing phase.
Contingency plan is a backup plan (e.g., resource shortage)
Mitigation plan is to avoid identified risks
Risk: Schedule slippage
Cause for the risk: Lack of resources, resources become unavailable
Mitigation plan: Recruitment
Contingency plan: backup of resources
Risk identification: cause & effects
Difference between client-server & web testing:
Client s/w has to be installed on all the client machines.
In case of web testing, s/w has to be installed on the server.
Reasons for schedule slippage:
There are multiple reasons for schedule slippage, right from improper planning and lack of resources to unplanned requirements and rework.
What are the various measures for defect prevention:
Review, in case of pre-production
RCA: post-production
LAM: Look Ahead Meeting
#efect "re%ention:
%dentify critical riss
=stimate the e"pected %mpact
2inimi-e the e"pected impact)=liminate the riss#
#ifference )&' schedule 4 effort:
Schedule : calendar days
=ffort : person hrs,man hrs
Reasons for "page cannot be displayed":
Network unplugged
Proxy issues
Server stopped
What are all the documents you refer to for writing a test plan:
Requirement specification document
SRS (system architecture)
BRD (business rules/flow)
SOW (in scope, out of scope, assumptions, responsibilities, schedule, etc.)
Severity & priority (http://www.sqatester.com/bugsfixes/defectparametrs.htm)
Blocker: This bug prevents developers from testing or developing the software.
Critical: The software crashes, hangs, or causes you to lose data.
Major: A major feature is broken.
Normal: It's a bug that should be fixed.
Minor: Minor loss of function, and there's an easy workaround.
Trivial: A cosmetic problem, such as a misspelled word or misaligned text.
Severity levels can be defined as follows:
S1 - Urgent/Showstopper. Like a system crash or an error message forcing the window to close. The tester's ability to operate the system is either totally (system down) or almost totally affected. A major area of the user's system is affected by the incident and it is significant to business processes.
S2 - Medium/Workaround. Exists, for example, when a problem occurs in required specs but the tester can go on with testing. The incident affects an area of functionality but there is a workaround which negates the impact to the business process. This is a problem that:
a) Affects a more isolated piece of functionality.
b) Occurs only at certain boundary conditions.
c) Has a workaround (where "don't do that" might be an acceptable answer to the user).
d) Occurs only at one or two customers, or is intermittent.
S3 - Low. This is for minor problems, such as failures at extreme boundary conditions that are unlikely to occur in normal use, or minor errors in layout/formatting. Problems do not impact use of the product in any substantive way. These are incidents that are cosmetic in nature and of no or very low impact to business processes.
Defect life cycle
Test scripting technique
Browser-related problems
Reports captured
Classify all the severity levels
Who will decide the severity & priority
Testing approach
Defect columns
Performance testing
Test plan
Pilot Project (prototype-designed projects)
Software testing life cycle:
Test Planning (preparation of high-level test plan),
Test Analysis (what kind of testing, any automation?),
Test Design (test plan and test cases),
Construction and Verification,
Testing Cycles,
Final Testing and Implementation, and
Post Implementation
"hase 0cti%ities ,utco!e
?lanning 5reate high level test plan
Test plan, 4efined
Specification
Analysis

5reate detailed test plan,
&unctional <alidation 2atri",
test cases
4evised Test ?lan, &unctional
validation matri", test cases
Design

test cases are revisedB select
which test cases to automate
revised test cases, test data
sets, sets, ris assessment
sheet
5onstruction

scripting of test cases to
automate,
test procedures,Scripts,
Drivers, test results,
Bugreports.
Testing cycles complete testing cycles Test results, Bug 4eports
&inal testing
e"ecute remaining stress and
performance tests, complete
documentation
Test results and different
metrics on test efforts
?ost implementation =valuate testing processes
?lan for improvement of testing
process
Agile Model:
Agile methods break tasks into small increments with minimal planning, and do not directly involve long-term planning. Iterations are short time frames ("timeboxes") that typically last from one to four weeks. Each iteration involves a team working through a full software development cycle including planning, requirements analysis, design, coding, unit testing, and acceptance testing when a working product is demonstrated to stakeholders. This helps minimize overall risk, and lets the project adapt to changes quickly. Stakeholders produce documentation as required. An iteration may not add enough functionality to warrant a market release, but the goal is to have an available release (with minimal bugs) at the end of each iteration.
Multiple iterations may be required to release a product or new features.
Team composition in an agile project is usually cross-functional and self-organizing without consideration for any existing corporate hierarchy or the corporate roles of team members. Team members normally take responsibility for tasks that deliver the functionality an iteration requires. They decide individually how to meet an iteration's requirements.
Links:
http://qtp.blogspot.com/.../software-testing-interview-questions.html
About Me:
I am Gopinath, with around 8 years of total experience, of which around 4 years are in software testing. I have been with Wipro for the last 4 years. I hold a master's degree in Information Technology. I have worked on various domains related to security, storage, embedded &
web technologies.

In addition to my QA skills, I have hands-on experience in white-box testing of Java and C++ applications and have also been involved in test automation; I automated the scripts using QTP.

My earlier assignment was the Intuit account, where I was involved in testing of the IPP SBL web server framework. The framework provides easy deployment of new Java applications without taking care of server setup and configuration. With regard to my QA skills, I have been involved in end-to-end system testing, functional testing, and regression testing. As a QA lead, I was leading a team and was responsible for all the testing deliverables, which include system test plan preparation, TC creation, test stub/script creation, defect metrics, team management, and client interaction.

"he challenges faced in the pro.ect were
' Integration testing is a big challenge for us, as our fra!ewor# interacts
with few external co!ponents.
We !itigated this challenge by exploring the technologies involved in
these external co!ponents and validated the understandings by
developing sa!ple prototypes built using these technologies.

' 2ntire testing depends on test stub and we !itigated this challenge by
preparing the test stubs on ti!e and tested the stubs thoroughly before
the actual testing starts.

Prior to this Intuit account, I worked with the Symantec account and played a vital role during the releases of NIS 2006, 2007, 2008.

Prior to joining Wipro, I was working in the embedded domain handling projects for various clients like HP and Emcon Emsys.
As a part of this project, I will be involved in the transition of the Platts and LNG applications.

Platts:
The Platts application gives information about energy and metals: the commodity prices. It provides details about commodities, like the availability of a particular item across the globe.

LNG:
The applications LNG Tracker and LNG TraderNet are part of LNG. LNG TraderNet supplies customers with up-to-date news and pricing information, and also allows the user to calculate netback pricing at selected regasification terminals. LNG Tracker is used to create/view/modify projects. LNG Tracker also allows the user to publish a project to market. The projects that are created using LNG Tracker can be viewed in LNG TraderNet.

Miscellaneous
1. If the requirement doc is very limited or not clear, how will you prepare test cases or use cases?
[Gopi]
We will try to brainstorm by having discussions with the onsite and offshore teams about the requirements. We will start preparing test cases after having complete clarity on the requirements.
Discussion with the Business Analyst.
2. If the time frame for testing the application is very limited compared to the target date, will you agree to test? If yes, what measures will you take?
[Gopi]
Yes. We will not be performing a complete and thorough functional and regression test due to the lack of time allowed for preparation. A sanity test will be performed. We will try to first prioritize the test cases; maybe the basic and positive flows will be tested.

3. How will you verify that all requirements are tested?

[Gopi]
By RTM. We will try to map all the test cases against requirements and ensure whether all requirements have been covered.

4. How will you test backend data and methods for validation?
[Gopi]
It depends upon the level of access to the backend systems. If we don't have access, we will ensure whether data is retrieved properly from the backend.

5. If there is no proper bug tracking tool, how will you maintain the bug history, and what columns should be available?

[Gopi]
We will try to use an open-source bug tracking tool, else we will maintain it in an Excel sheet.

QA Test Case template (columns):
ID | Test Objective | Step ID | Steps | Expected Result
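As an illustration, the same template as a small data structure; field names follow the columns above, and the sample values are made up.

    from dataclasses import dataclass

    @dataclass
    class TestCaseStep:
        tc_id: str       # QA Test Case ID
        objective: str   # Test Objective
        step_id: int     # Step ID
        steps: str       # Steps
        expected: str    # Expected Result

    row = TestCaseStep("TC-01", "Verify login", 1,
                       "Enter valid credentials and submit",
                       "User lands on the home page")
    print(row)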