TESTER'S GUIDE
TABLE OF CONTENTS
Policy
Terms to understand
Life Cycle of Testing Process
Levels of Testing
Types of Testing
Testing Techniques
Web Testing Specifics
Testing - When is a program correct?
Test Plan
Test cases
Testing Coverage
What if there isn't enough time for thorough testing?
Defect reporting
Types of Automated Tools
POLICY
Terms to understand
Testing is the process of exercising a system with the intent of finding errors: it deliberately tries to make things go wrong, to determine whether things happen when they shouldn't or things don't happen when they should. It is oriented to 'detection'.
Life Cycle of Testing Process
Levels of Testing
Unit Testing
The most 'micro' scale of testing; to test particular functions or code modules. Typically
done by the programmer and not by testers, as it requires detailed knowledge of the
internal program design and code. Not always easily done unless the application has a
well-designed architecture with tight code; may require developing test driver modules or
test harnesses.
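As a sketch of unit testing, the fragment below tests a single hypothetical function in isolation; the function and its behaviour are invented for the example, not taken from any real product:

```python
import unittest

# Hypothetical unit under test: a small, self-contained function.
def parse_price(text):
    """Convert a string like '$1,234.50' to a float."""
    cleaned = text.strip().lstrip("$").replace(",", "")
    return float(cleaned)

class ParsePriceTest(unittest.TestCase):
    def test_plain_number(self):
        self.assertEqual(parse_price("42"), 42.0)

    def test_currency_symbol_and_commas(self):
        self.assertEqual(parse_price("$1,234.50"), 1234.50)

    def test_invalid_input_raises(self):
        with self.assertRaises(ValueError):
            parse_price("not a price")
```

Tests like these are typically run with `python -m unittest` as part of the programmer's build, before the code ever reaches the test group.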
Integration testing
Testing of combined parts of an application to determine if they function together
correctly. The 'parts' can be code modules, individual applications, client and server
applications on a network, etc. This type of testing is especially relevant to client/server
and distributed systems.
Integration can be top-down or bottom-up:
• Top-down testing starts with main and successively replaces stubs with the real
modules.
• Bottom-up testing builds larger module assemblies from primitive modules.
• Sandwich testing is mainly top-down, with bottom-up integration and testing applied to certain widely used components.
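Top-down integration with a stub can be sketched as follows; the module names and the canned 10% tax rate are purely illustrative:

```python
# Top-down integration sketch: the high-level module is exercised first,
# with its dependency replaced by a stub that returns canned values.

def tax_service_stub(amount):
    """Stub standing in for the real (unfinished) tax module."""
    return round(amount * 0.10, 2)   # canned 10% rate

def compute_invoice_total(amount, tax_service):
    """High-level module under test; the dependency is injected."""
    return round(amount + tax_service(amount), 2)

# Integration step: exercise the top-level module against the stub.
assert compute_invoice_total(100.0, tax_service_stub) == 110.0
# Later the stub is replaced by the real module and the same test re-runs.
```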
Acceptance testing
Final testing based on specifications of the end-user or customer, or based on use by end-
users/customers over some limited period of time.
Types of Testing
Compatibility testing
Testing how well software performs in a particular hardware/software/operating
system/network/etc. environment.
Exploratory testing
Often taken to mean a creative, informal software test that is not based on formal test
plans or test cases; testers may be learning the software as they test it.
Ad-hoc testing
Similar to exploratory testing, but often taken to mean that the testers have significant
understanding of the software before testing it.
Comparison testing
Comparing software weaknesses and strengths to competing products.
Load testing
Testing an application under heavy loads, such as testing of a web site under a range of
loads to determine at what point the system's response time degrades or fails.
System testing
Black-box type testing that is based on overall requirements specifications; covers all
combined parts of a system.
Functional testing
Black-box type testing geared to functional requirements of an application; this type of
testing should be done by testers. This doesn't mean that the programmers shouldn't
check that their code works before releasing it (which of course applies to any stage of
testing.)
Volume testing
Volume testing involves testing a software or Web application using corner cases of "task
size" or input data size. The exact volume tests performed depend on the application's
functionality, its input and output mechanisms and the technologies used to build the
application. Sample volume testing considerations include, but are not limited to:
• If the application reads text files as inputs, try feeding it both an empty text file and a huge (hundreds of megabytes) text file.
• If the application stores data in a database, exercise the application's functions when the database is empty and when the database contains an extreme amount of data.
• If the application is designed to handle 100 concurrent requests, send 100 requests simultaneously and then send the 101st request.
• If a Web application has a form with dozens of text fields that allow a user to enter text strings of unlimited length, try populating all of the fields with a large amount of text and submit the form.
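A minimal sketch of the empty-file and large-file cases above, assuming a hypothetical line-counting function; a real volume test would use far larger files:

```python
import os
import tempfile

# Hypothetical unit under test: counts lines in a text file.
def count_lines(path):
    with open(path, "r", encoding="utf-8") as f:
        return sum(1 for _ in f)

# Corner case 1: an empty input file.
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    empty_path = f.name
assert count_lines(empty_path) == 0

# Corner case 2: a large input file (scaled down here; a real volume
# test would feed in hundreds of megabytes).
with tempfile.NamedTemporaryFile("w", delete=False, suffix=".txt") as f:
    big_path = f.name
    f.write("x\n" * 100_000)
assert count_lines(big_path) == 100_000

os.remove(empty_path)
os.remove(big_path)
```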
Stress testing
Term often used interchangeably with 'load' and 'performance' testing. Also used to
describe such tests as system functional testing while under unusually heavy loads, heavy
repetition of certain actions or inputs, input of large numerical values, large complex
queries to a database system, etc.
Sociability Testing
This means testing an application in its normal environment, alongside other standard applications, to make sure they all get along together; that is, they don't corrupt each other's files, don't crash, don't exhaust system resources, don't lock up the system, and can share the printer peacefully.
Usability testing
Testing for 'user-friendliness'. Clearly this is subjective, and will depend on the targeted
end-user or customer. User interviews, surveys, video recording of user sessions, and
other techniques can be used. Programmers and testers are usually not appropriate as
usability testers.
Recovery testing
Testing how well a system recovers from crashes, hardware failures, or other catastrophic
problems.
Security testing
Testing how well the system protects against unauthorized internal or external access,
willful damage, etc; may require sophisticated testing techniques.
Performance Testing
Term often used interchangeably with 'stress' and 'load' testing. Ideally 'performance'
testing (and any other 'type' of testing) is defined in requirements documentation or QA
or Test Plans.
End-to-end testing
Similar to system testing; the 'macro' end of the test scale; involves testing of a complete
application environment in a situation that mimics real-world use, such as interacting
with a database, using network communications, or interacting with other hardware,
applications, or systems if appropriate.
Regression testing
Re-testing after fixes or modifications of the software or its environment. It can be
difficult to determine how much re-testing is needed, especially near the end of the
development cycle. Automated testing tools can be especially useful for this type of
testing.
Parallel testing
With parallel testing, users can choose to run batch tests or asynchronous tests depending on the needs of their test systems. Testing multiple units in parallel increases test throughput and lowers the overall cost of testing.
Install/uninstall testing
Testing of full, partial, or upgrade install/uninstall processes.
Mutation testing
A method for determining if a set of test data or test cases is useful, by deliberately
introducing various code changes ('bugs') and retesting with the original test data/cases to
determine if the 'bugs' are detected. Proper implementation requires large computational
resources.
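A toy illustration of the idea: a deliberately mutated operator should be 'killed' (detected) by the existing test data. The function and test data here are invented for the example:

```python
# Toy mutation-testing sketch: inject a 'bug' (a mutated operator) and
# check that the existing test data detects it.

def max_of(a, b):            # original implementation
    return a if a > b else b

def max_of_mutant(a, b):     # mutant: '>' deliberately changed to '<'
    return a if a < b else b

# The existing test data/cases, applied unchanged to both versions.
test_cases = [((3, 5), 5), ((5, 3), 5), ((2, 2), 2)]

def suite_kills(impl):
    """Return True if at least one test case fails against impl."""
    return any(impl(*args) != expected for args, expected in test_cases)

assert not suite_kills(max_of)      # the original passes the suite
assert suite_kills(max_of_mutant)   # the mutant is detected ('killed')
```

If a mutant survives the suite, the test data is too weak and new cases are needed; doing this systematically for many mutants is what makes the technique computationally expensive.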
Alpha testing
Testing of an application when development is nearing completion; minor design changes
may still be made as a result of such testing. Typically done by end-users or others, not
by programmers or testers.
Beta testing
Testing when development and testing are essentially completed and final bugs and
problems need to be found before final release. Typically done by end-users or others, not
by programmers or testers.
Testing Techniques
Equivalence partitioning
This method divides the input domain into classes of data from which test cases can be derived:
1. A good test case reduces by more than one the number of other test cases which must be developed.
2. A good test case covers a large set of other possible cases.
3. Identify classes of valid inputs.
4. Identify classes of invalid inputs.
Boundary testing
This method selects test cases that exercise boundary values. It complements equivalence partitioning, since it selects test cases at the edges of a class. Rather than focusing solely on input conditions, boundary value analysis (BVA) also derives test cases from the output domain. BVA guidelines include:
1. For input ranges bounded by a and b, test cases should include values a and b
and just above and just below a and b respectively.
2. If an input condition specifies a number of values, test cases should be
developed to exercise the minimum and maximum numbers and values just
above and below these limits.
3. Apply guidelines 1 and 2 to the output.
4. If internal data structures have prescribed boundaries, a test case should be
designed to exercise the data structure at its boundary.
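Guideline 1 can be sketched as follows, assuming a hypothetical function that accepts quantities from 1 to 100 inclusive:

```python
# Hypothetical unit under test: accepts quantities from 1 to 100 inclusive.
def quantity_is_valid(q):
    return 1 <= q <= 100

# Guideline 1: for a range bounded by a=1 and b=100, test a and b
# themselves plus the values just below and just above each bound.
cases = {
    0: False,    # just below a
    1: True,     # a itself
    2: True,     # just above a
    99: True,    # just below b
    100: True,   # b itself
    101: False,  # just above b
}
for value, expected in cases.items():
    assert quantity_is_valid(value) == expected, value
```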
Path Testing
A path-coverage test allows us to exercise every transition between program statements (and so every statement and branch as well).
Possible criteria:
Condition testing
A condition test can use a combination of Comparison operators and Logical operators.
The Comparison operators compare the values of variables and this comparison produces
a boolean result. The Logical operators combine booleans to produce a single boolean
result that is the result of the condition test.
e.g. (a == b): the result is true if the value of a is the same as the value of b.
• Myers: take each branch out of a condition at least once.
• White and Cohen: for each relational operator e1 < e2, test all combinations of e1, e2 orderings. For a Boolean condition, test all possible inputs (!).
• Branch and relational operator testing: enumerate categories of operator values.
   o B1 || B2: test {B1=t, B2=t}, {t,f}, {f,t}
   o B1 || (e2 = e3): test {t,=}, {f,=}, {t,<}, {t,>}
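The B1 || B2 operand combinations above can be exercised directly; the condition under test here is a made-up example:

```python
# Hypothetical condition under test: B1 || B2.
def access_allowed(is_admin, is_owner):
    return is_admin or is_owner

# Exercise the operand combinations {t,t}, {t,f}, {f,t} listed above,
# plus {f,f} so the false branch of the condition is also taken.
for b1, b2, expected in [
    (True, True, True),     # {t,t}
    (True, False, True),    # {t,f}
    (False, True, True),    # {f,t}
    (False, False, False),  # {f,f}
]:
    assert access_allowed(b1, b2) == expected
```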
Loop Testing
• 4 basic types:
o Display a trace message
o Display parameter value(s)
o Return a value from a table
o Return table value selected by parameter
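The four behaviours listed above correspond to the classic forms of a test stub; a sketch, with all names and table values illustrative:

```python
# Canned table held by the stub (illustrative values).
LOOKUP = {1: "low", 2: "medium", 3: "high"}

def priority_stub(level=2):
    print("priority_stub called")        # 1. display a trace message
    print(f"parameter: level={level}")   # 2. display parameter value(s)
    return LOOKUP[level]                 # 3. return a value from a table,
                                         # 4. selected by the parameter

assert priority_stub(3) == "high"
```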
Web Testing Specifics
Link Testing
• You must ensure that all the hyperlinks are valid. This applies to both internal and external links.
• Internal links should be relative, to minimize the overhead and faults when the web site is moved to the production environment.
• External links should reference absolute URLs.
• External links can change without control, so automate regression testing of them.
• Remember that external non-home-page links are more likely to break.
• Be careful with links in "What's New" sections; they are likely to become obsolete.
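A minimal link-extraction sketch using Python's standard library, illustrating the internal-relative versus external-absolute distinction; the URLs are placeholders, and a real suite would also fetch each URL and report failures:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class LinkCollector(HTMLParser):
    """Collects the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify_links(html, base_url):
    """Split a page's links into internal (relative) and external (absolute)."""
    parser = LinkCollector()
    parser.feed(html)
    internal, external = [], []
    for href in parser.links:
        if urlparse(href).netloc:              # absolute URL -> external
            external.append(href)
        else:                                  # relative URL -> internal
            internal.append(urljoin(base_url, href))
    return internal, external

page = '<a href="/about.html">About</a> <a href="http://example.com/">Ext</a>'
internal, external = classify_links(page, "http://www.mysite.test/")
assert internal == ["http://www.mysite.test/about.html"]
assert external == ["http://example.com/"]
```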
• Check that content can be accessed by means of the search engine and the site map.
• Check the accuracy of search engine results.
• Check that the "404 Not Found" error is handled by means of a user-friendly page.
Compatibility Testing
Check the site's behaviour across the industry-standard browsers. The main issues involve how differently the browsers handle tables, images, caching and scripting languages.
In cross-browser testing, check for:
• Behaviour of buttons
• Support of JavaScript
• Support of tables
• Acrobat, Real and Flash behaviour
• ActiveX control support
• Java compatibility
• Text size
Usability Testing
Aspects to be tested with care:
• Coherence of look and feel
• Navigational aids
• User interactions
• Printing
with respect to:
• Normal behaviour
• Destructive behaviour
• Inexperienced users
Usability Tips
Portability Testing
• Check that links to URLs outside the web site are in canonical form (e.g. http://www.srasys.co.in).
• Check that links to URLs inside the web site are in relative form (e.g. …./aaouser/images/images.gif).
Cookies Testing
A "Cookie" is a small piece of information sent by the web server to store on a web
browser. So it can later be read back from the browser. This is useful for having the
browser remember some specific information.
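A small sketch of automated cookie checking, using Python's standard http.cookies module to verify that a Set-Cookie header round-trips through parsing; the cookie name and values are invented:

```python
from http.cookies import SimpleCookie

# A Set-Cookie header value as a server might emit it (illustrative).
header = "session_id=abc123; Path=/; Max-Age=3600"

cookie = SimpleCookie()
cookie.load(header)

# Verify that the value and attributes survive the round trip.
assert cookie["session_id"].value == "abc123"
assert cookie["session_id"]["path"] == "/"
assert cookie["session_id"]["max-age"] == "3600"
```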
Testing - When is a program correct?
There are levels of correctness. We must determine the appropriate level of correctness for each system, because it costs more and more to reach higher levels:
1. No syntactic errors
2. Compiles with no error messages
3. Runs with no error messages
4. There exists data which gives correct output
5. Gives correct output for required input
6. Correct for typical test data
7. Correct for difficult test data
8. Proven correct using mathematical logic
9. Obeys specifications for all valid data
10. Obeys specifications for likely erroneous input
11. Obeys specifications for all possible input
Test Plan
A software project test plan is a document that describes the objectives, scope, approach,
and focus of a software testing effort. The process of preparing a test plan is a useful way
to think through the efforts needed to validate the acceptability of a software product. The
completed document will help people outside the test group understand the 'why' and
'how' of product validation. It should be thorough enough to be useful but not so thorough
that no one outside the test group will read it. The following are some of the items that
might be included in a test plan, depending on the particular project:
Test cases
Testing Coverage
1. Line coverage. Test every line of code (Or Statement coverage: test every
statement).
2. Branch coverage. Test every line, and every branch on multi-branch lines.
3. N-length sub-path coverage. Test every sub-path through the program of length
N. For example, in a 10,000 line program, test every possible 10-line sequence of
execution.
4. Path coverage. Test every path through the program, from entry to exit. The
number of paths is impossibly large to test.
5. Multicondition or predicate coverage. Force every logical operand to take every
possible value. Two different conditions within the same test may result in the
same branch, and so branch coverage would only require the testing of one of
them.
6. Trigger every assertion check in the program. Use impossible data if necessary.
7. Loop coverage. "Detect bugs that exhibit themselves only when a loop is
executed more than once."
8. Every module, object, component, tool, subsystem, etc. This seems obvious until
you realize that many programs rely on off-the-shelf components. The
programming staff doesn't have the source code to these components, so
measuring line coverage is impossible. At a minimum (which is what is measured
here), you need a list of all these components and test cases that exercise each one
at least once.
9. Fuzzy decision coverage. If the program makes heuristically-based or similarity-
based decisions, and uses comparison rules or data sets that evolve over time,
check every rule several times over the course of training.
10. Relational coverage. "Checks whether the subsystem has been exercised in a way
that tends to detect off-by-one errors" such as errors caused by using < instead of
<=. This coverage includes:
o Every boundary on every input variable.
o Every boundary on every output variable.
o Every boundary on every variable used in intermediate calculations.
12. Data coverage. At least one test case for each data item / variable / field in the
program.
13. Constraints among variables: Let X and Y be two variables in the program. X and
Y constrain each other if the value of one restricts the values the other can take.
For example, if X is a transaction date and Y is the transaction's confirmation date,
Y can't occur before X.
14. Each appearance of a variable. Suppose that you can enter a value for X on three
different data entry screens, the value of X is displayed on another two screens,
and it is printed in five reports. Change X at each data entry screen and check the
effect everywhere else X appears.
15. Every type of data sent to every object. A key characteristic of object-oriented
programming is that each object can handle any type of data (integer, real, string,
etc.) that you pass to it. So, pass every conceivable type of data to every object.
16. Handling of every potential data conflict. For example, in an appointment
calendaring program, what happens if the user tries to schedule two appointments
at the same date and time?
17. Handling of every error state. Put the program into the error state, check for
effects on the stack, available memory, handling of keyboard input. Failure to
handle user errors well is an important problem, partially because about 90% of
industrial accidents are blamed on human error or risk-taking. Under the legal
doctrine of foreseeable misuse, the manufacturer is liable in negligence if it fails
to protect the customer from the consequences of a reasonably foreseeable misuse
of the product.
18. Every complexity / maintainability metric against every module, object,
subsystem, etc. There are many such measures. Jones lists 20 of them. People
sometimes ask whether any of these statistics are grounded in a theory of
measurement or have practical value. But it is clear that, in practice, some
organizations find them an effective tool for highlighting code that needs further
investigation and might need redesign.
19. Conformity of every module, subsystem, etc. against every corporate coding
standard. Several companies believe that it is useful to measure characteristics of
the code, such as total lines per module, ratio of lines of comments to lines of
code, frequency of occurrence of certain types of statements, etc. A module that
doesn't fall within the "normal" range might be summarily rejected (bad idea) or
re-examined to see if there's a better way to design this part of the program.
20. Table-driven code. The table is a list of addresses or pointers or names of
modules. In a traditional CASE statement, the program branches to one of several
places depending on the value of an expression. In the table-driven equivalent, the
program would branch to the place specified in, say, location 23 of the table. The
table is probably in a separate data file that can vary from day to day or from
installation to installation. By modifying the table, you can radically change the
control flow of the program without recompiling or relinking the code. Some
programs drive a great deal of their control flow this way, using several tables.
Coverage measures? Some examples:
o check that every expression selects the correct table element
o check that the program correctly jumps or calls through every table
element
o check that every address or pointer that is available to be loaded into these
tables is valid (no jumps to impossible places in memory, or to a routine
whose starting address has changed)
o check the validity of every table that is loaded at any customer site.
22. Every interrupt. An interrupt is a special signal that causes the computer to stop
the program in progress and branch to an interrupt handling routine. Later, the
program restarts from where it was interrupted. Interrupts might be triggered by
hardware events (I/O or signals from the clock that a specified interval has
elapsed) or software (such as error traps). Generate every type of interrupt in
every way possible to trigger that interrupt.
23. Every interrupt at every task, module, object, or even every line. The interrupt
handling routine might change state variables, load data, use or shut down a
peripheral device, or affect memory in ways that could be visible to the rest of the
program. The interrupt can happen at any time: between any two lines, or when
any module is being executed. The program may fail if the interrupt is handled at
a specific time. (Example: what if the program branches to handle an interrupt
while it's in the middle of writing to the disk drive?)
The number of test cases here is huge, but that doesn't mean you don't have to
think about this type of testing. This is path testing through the eyes of the
processor (which asks, "What instruction do I execute next?" and doesn't care
whether the instruction comes from the mainline code or from an interrupt
handler) rather than path testing through the eyes of the reader of the mainline
code. Especially in programs that have global state variables, interrupts at
unexpected times can lead to very odd results.
25. Every anticipated or potential race. Imagine two events, A and B. Both will occur,
but the program is designed under the assumption that A will always precede B.
This sets up a race between A and B: if B ever precedes A, the program will
probably fail. To achieve race coverage, you must identify every potential race
condition and then find ways, using random data or systematic test case selection,
to attempt to drive B to precede A in each case.
Races can be subtle. Suppose that you can enter a value for a data item on two
different data entry screens. User 1 begins to edit a record, through the first
screen. In the process, the program locks the record in Table 1. User 2 opens the
second screen, which calls up a record in a different table, Table 2. The program
is written to automatically update the corresponding record in Table 1 when
User 2 finishes data entry. Now, suppose that User 2 finishes before User 1. Table
2 has been updated, but the attempt to synchronize Table 1 and Table 2 fails.
What happens at the time of failure, or later if the corresponding records in Table
1 and 2 stay out of synch?
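Driving B to precede A can be done with explicit synchronisation, as in this sketch; events A and B are abstract placeholders for the real operations:

```python
import threading

# Sketch: force event B to precede event A (the ordering the program
# does NOT expect) so the race condition is exercised deterministically.
log = []
b_done = threading.Event()

def event_a():
    b_done.wait()          # hold A back until B has completed
    log.append("A")

def event_b():
    log.append("B")
    b_done.set()

ta = threading.Thread(target=event_a)
tb = threading.Thread(target=event_b)
ta.start(); tb.start()
ta.join(); tb.join()

assert log == ["B", "A"]   # the race was driven to the B-first ordering
```

In a real system the same effect is achieved with test hooks, delays, or systematic schedule control rather than an Event, but the principle is identical: make the unexpected ordering happen on demand.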
27. Every time-slice setting. In some systems, you can control the grain of switching
between tasks or processes. The size of the time quantum that you choose can
make race bugs, time-outs, interrupt-related problems, and other time-related
problems more or less likely. Of course, coverage is a difficult problem here
because you aren't just varying time-slice settings through every possible value.
You also have to decide which tests to run under each setting. Given a planned set
of test cases per setting, the coverage measure looks at the number of settings
you've covered.
28. Varied levels of background activity. In a multiprocessing system, tie up the
processor with competing, irrelevant background tasks. Look for effects on races
and interrupt handling. As with time-slice testing, your coverage analysis must specify:
o categories of levels of background activity (figure out something that
makes sense) and
o all timing-sensitive testing opportunities (races, interrupts, etc.).
30. Each processor type and speed. Which processor chips do you test under? What
tests do you run under each processor? You are looking for:
o speed effects, like the ones you look for with background activity testing,
and
o consequences of processors' different memory management rules, and
o floating point operations, and
o any processor-version-dependent problems that you can learn about.
32. Every opportunity for file / record / field locking.
33. Every dependency on the locked (or unlocked) state of a file, record or field.
34. Every opportunity for contention for devices or resources.
35. Performance of every module / task / object. Test the performance of a module
then retest it during the next cycle of testing. If the performance has changed
significantly, you are either looking at the effect of a performance-significant
redesign or at a symptom of a new bug.
36. Free memory / available resources / available stack space at every line or on
entry into and exit out of every module or object.
37. Execute every line (branch, etc.) under the debug version of the operating
system. This shows illegal or problematic calls to the operating system.
38. Vary the location of every file. What happens if you install or move one of the
program's component, control, initialization or data files to a different directory or
drive or to another computer on the network?
39. Check the release disks for the presence of every file. It's amazing how often a
file vanishes. If you ship the product on different media, check for all files on all
media.
40. Every embedded string in the program. Use a utility to locate embedded strings.
Then find a way to make the program display each string.
67. Every color shade displayed or printed to every color output device (video card /
monitor / printer / etc.) and associated drivers at every resolution level. And
check the conversion to grey scale or black and white.
68. Every color shade readable or scannable from each type of color input device at
every resolution level.
69. Every possible feature interaction between video card type and resolution,
pointing device type and resolution, printer type and resolution, and memory
level. This may seem excessively complex, but I've seen crash bugs that occur
only under the pairing of specific printer and video drivers at a high resolution
setting. Other crashes required pairing of a specific mouse and printer driver,
pairing of mouse and video driver, and a combination of mouse driver plus video
driver plus ballistic setting.
70. Every type of CD-ROM drive, connected to every type of port (serial / parallel /
SCSI) and associated drivers.
71. Every type of writable disk drive / port / associated driver. Don't forget the fun
you can have with removable drives or disks.
72. Compatibility with every type of disk compression software. Check error
handling for every type of disk error, such as full disk.
73. Every voltage level from analog input devices.
74. Every voltage level to analog output devices.
75. Every type of modem and associated drivers.
76. Every FAX command (send and receive operations) for every type of FAX card
under every protocol and driver.
77. Every type of connection of the computer to the telephone line (direct, via PBX,
etc.; digital vs. analog connection and signaling); test every phone control
command under every telephone control driver.
78. Tolerance of every type of telephone line noise and regional variation
(including variations that are out of spec) in telephone signaling (intensity,
frequency, timing, other characteristics of ring / busy / etc. tones).
79. Every variation in telephone dialing plans.
80. Every possible keyboard combination. Sometimes you'll find trap doors that the
programmer used as hotkeys to call up debugging tools; these hotkeys may crash
a debuggerless program. Other times, you'll discover an Easter Egg (an
undocumented, probably unauthorized, and possibly embarrassing feature). The
broader coverage measure is every possible keyboard combination at every
error message and every data entry point. You'll often find different bugs when
checking different keys in response to different error messages.
81. Recovery from every potential type of equipment failure. Full coverage includes
each type of equipment, each driver, and each error state. For example, test the
program's ability to recover from full disk errors on writable disks. Include
floppies, hard drives, cartridge drives, optical drives, etc. Include the various
connections to the drive, such as IDE, SCSI, MFM, parallel port, and serial
connections, because these will probably involve different drivers.
82. Function equivalence. For each mathematical function, check the output against
a known good implementation of the function in a different program. Complete
coverage involves equivalence testing of all testable functions across all possible
input values.
83. Zero handling. For each mathematical function, test when every input value,
intermediate variable, or output variable is zero or near-zero. Look for severe
rounding errors or divide-by-zero errors.
84. Accuracy of every graph, across the full range of graphable values. Include
values that force shifts in the scale.
85. Accuracy of every report. Look at the correctness of every value, the formatting
of every page, and the correctness of the selection of records used in each report.
86. Accuracy of every message.
87. Accuracy of every screen.
88. Accuracy of every word and illustration in the manual.
89. Accuracy of every fact or statement in every data file provided with the product.
90. Accuracy of every word and illustration in the on-line help.
91. Every jump, search term, or other means of navigation through the on-line
help.
92. Check for every type of virus / worm that could ship with the program.
93. Every possible kind of security violation of the program, or of the system while
using the program.
94. Check for copyright permissions for every statement, picture, sound clip, or
other creation provided with the program.
95. Verification of the program against every program requirement and published
specification.
96. Verification of the program against user scenarios. Use the program to do real
tasks that are challenging and well-specified. For example, create key reports,
pictures, page layouts, or other documents to match ones that have been
featured by competitive programs as interesting output or applications.
97. Verification against every regulation (IRS, SEC, FDA, etc.) that applies to the
data or procedures of the program.
Usability tests of:
99. Every feature / function of the program.
100. Every part of the manual.
101. Every error message.
102. Every on-line help topic.
103. Every graph or report provided by the program.
Localizability / localization tests:
105. Every string. Check program's ability to display and use this string if it is
modified by changing the length, using high or low ASCII characters, different
capitalization rules, etc.
106. Compatibility with text handling algorithms under other languages
(sorting, spell checking, hyphenating, etc.)
107. Every date, number and measure in the program.
108. Hardware and drivers, operating system versions, and memory-resident
programs that are popular in other countries.
109. Every input format, import format, output format, or export format that
would be commonly used in programs that are popular in other countries.
110. Cross-cultural appraisal of the meaning and propriety of every string
and graphic shipped with the program.
Defect reporting
The bug needs to be communicated and assigned to developers that can fix it. After the
problem is resolved, fixes should be re-tested, and determinations made regarding
requirements for regression testing to check that fixes didn't create problems elsewhere.
The following are items to consider in the tracking process:
• Complete information such that developers can understand the bug, get an idea of its severity, and reproduce it if necessary.
• Bug identifier (number, ID, etc.)
Enter a special character other than braces, '-' or '+'; the application should display an error message.