Some tests are more straightforward than others. For example, say you need to verify that all the
links in your web site work. There are several different approaches to checking this:
• you can read your HTML code to see that all the link code is correct
• you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would
imply that your links are correct
• you can use your browser (or even multiple browsers) to check every link manually
• you can use a link-checking program to check every link automatically
• you can use a site maintenance program that will display graphically the relationships
between pages on your site, including both good and bad links
• you could use all of these approaches to test for any possible failures or inconsistencies in the
tests themselves
Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide
which one or more of these tests best suits your site structure, your test resources, and your need for
granularity of results. You run the test, and you get your results showing any broken links.
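The automated approaches above can be sketched in a few lines of code. The following Python sketch (all names hypothetical) extracts every link from a page with the standard library's HTML parser and flags links that point at pages the site does not have; the `known_pages` set stands in for a real per-link HTTP check, so the example needs no network access:

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> element on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

def find_broken_links(page_html, known_pages):
    """Return links that point at pages the site does not have.

    `known_pages` is a stand-in for a real check (e.g. one HTTP HEAD
    request per link), so this sketch stays self-contained.
    """
    parser = LinkExtractor()
    parser.feed(page_html)
    return [link for link in parser.links if link not in known_pages]

# Hypothetical page and site map:
page = '<p><a href="/home.html">Home</a> <a href="/missing.html">Oops</a></p>'
site = {"/home.html", "/about.html"}
print(find_broken_links(page, site))  # ['/missing.html']
```

Note that, exactly as the next paragraph warns, this finds *broken* links, not *incorrect* ones: a link that exists but points at the wrong page passes this check.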
Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but
points at the incorrect page, your link test won't catch the problem. My general point here is that you
must understand what you are testing. A testcase is a series of explicit actions and examinations that
identifies the "what".
A testcase for checking links might specify that each link is tested for functionality, appropriateness,
usability, style, consistency, etc. For example, a testcase for checking links on a typical page of a site
might include these steps:
Link Test: for each link on the page, verify that
As you can see, this is a detailed testing of many aspects of the link, with the result that on
completion of the test, you can say definitively what you know works. However, this is a simple
example: testcases can run to hundreds of instructions, depending on the types of functionality being
tested and the need for iterations of steps.
Test Case ID: A unique number given to the test case so that it can be identified.
Test Description: A description of what the test case is going to test.
Revision History: Each test case has to have its revision history in order to know when, and by
whom, it was created or modified.
Test Setup: Anything you need to set up outside of your application, for example printers, the network,
and so on.
Expected Results: A description of what you expect the function to do.
Actual Results: Pass/Failed. If it passed, record what actually happened when you ran the test. If it
failed, put in a description of what you observed.
Sample Testcase
Here is a simple test case for applying bold formatting to text.
Purpose:
The purpose of the Test case, usually to verify a specific requirement.
Owner:
The persons or department responsible for keeping the Test cases accurate.
Expected Result:
Describe the expected results and outputs from this Test Case. It is also desirable to include some
method of recording whether or not the expected results actually occurred, i.e. whether the test case,
or even individual steps of the test case, passed.
Test Data:
Any required data input for the Test Case.
Test Tools:
Any specific or unusual tools or utilities required for the execution of this Test Case.
Dependencies:
If correct execution of this Test Case depends on its being preceded by any other Test Cases, that fact
should be mentioned here. Similarly, any dependency on factors outside the immediate test
environment should also be mentioned.
Initialization:
If the system software or hardware has to be initialized in a particular manner in order for this Test
case to succeed, such initialization should be mentioned here.
Description:
Describe what will take place during the Test Case. The description should take the form of a narrative
description of the Test Case, along with a Test Procedure, which in turn can be specified by test case
steps, tables of values or configurations, further narrative, or whatever is most appropriate to the type
of testing taking place.
Test Case 2
Test ID | Description | Expected Results | Actual Results
Test Case 3
Project Name | Project ID
Version | Date
Test Purpose
Pre-Test Conditions
Test Case 4
Test Case Description: Identify the items or features to be tested by this test case.
Pre- and post-conditions: A description of changes (if any) to the standard environment. Any
modification should be done automatically.
Case | Component | Author | Date | Version
Test Case Description
Pre and Post Conditions
Input / Output Specification
Test Procedure
Expected Results
Failure Recovery
Comments
Date: MM-DD-YY
Test Procedure
Identify any special constraints on the test case. Focus on key elements such as special setup.
Expected Results
Fill this row with a description of the expected test results.
Failure Recovery
Explanations regarding which actions should be performed in case of test failure.
Comments
Suggestions, description of possible improvements, etc.
Test Case 5
Test Case ID | Test Case Name | Test Case Description | Test Steps (Step / Expected / Actual) | Test Case Status (P/F) | Test Priority | Test Severity | Defect
The first step to making a testcase is finding a bug in the first place. There are four ways of doing
this:
1. Letting someone else do it for you: Most of the time, the testcases you write will be for bugs that
other people have filed. In those cases, you will typically have a Web page which renders incorrectly,
either a demo page or an actual Web site. However, it is also possible that the bug report will have no
problem page listed, just a problem description.
2. Alternatively, you can find a bug yourself while browsing the Web. In such cases, you will have a
Web site that renders incorrectly.
3. You could also find the bug because one of the existing testcases fails. In this case, you have a
Web page that renders incorrectly.
4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without
knowing if the feature is broken or not, with the intention of finding bugs in the implementation of
that feature. In this case you do not have a Web page, just an idea of what a problem could be.
If you have a Web page showing a problem, move to the next step. Otherwise, you will have to create
an initial testcase yourself. This is covered on the section on "Creating testcases from scratch" later.
Once you have put as many of the external dependencies into the test file as you can, start cutting
the file down.
Go to the middle of the file. Delete everything from the middle of the file to the end. (Don't pay
attention to whether the file is still valid or not.) Check that the error still occurs. If it doesn't, put that
part back, and remove the top half instead, or a smaller part.
Continue in this vein until you have removed almost all the file and are left with 20 or fewer lines of
markup, or at least, the smallest amount that you need to reproduce the problem.
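The halving procedure above is essentially a delta-debugging loop, and it can be automated whenever you have a scriptable way to check that the bug still reproduces. A minimal sketch, assuming a caller-supplied `still_fails` predicate (hypothetical; for a rendering bug you would reload each candidate file in the browser by hand instead):

```python
def reduce_lines(lines, still_fails):
    """Shrink a failing test file by repeatedly deleting chunks of it.

    `still_fails` is a caller-supplied predicate (hypothetical) that
    reports whether the bug still reproduces with the candidate lines.
    """
    chunk = len(lines) // 2
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]
            if still_fails(candidate):
                lines = candidate   # the deleted chunk was irrelevant
            else:
                i += chunk          # that chunk is needed; keep it, move on
        chunk //= 2                 # try smaller deletions
    return lines
```

For example, if the bug reproduces whenever the line `"BUG"` is present, `reduce_lines(["a", "b", "BUG", "c"], lambda ls: "BUG" in ls)` returns `["BUG"]`.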
Now, start being intelligent. Look at the file. Remove bits that clearly will have no effect on the bug.
For example if the bug is that the text "investments are good" is red but should be green, replace the
text with just "test" and check it is still the wrong colour.
Remove any scripts. If the scripts are needed, try doing what the scripts do then removing them -- for
example, replace this:
<script>document.write('<p>test<\/p>')</script>
...with:
<p>test</p>
...and check that the bug still occurs.
Merge any <style> blocks together.
Change presentational markup to CSS. For example, change this:
<font color="red">
...to:
span { color: red; } /* in the stylesheet */
<span> <!-- in the markup -->
Do the same with style="" attributes (remove the attributes, and put the rules in a <style> block instead).
Remove any classes, and use element names instead. For example:
.a { color: red; }
.b { color: green; }
<div class="a"><p class="b">This should be green.</p></div>
...becomes:
div { color: red; }
p { color: green; }
<div><p>This should be green.</p></div>
Do the same with IDs. Make sure there is a strict mode DOCTYPE:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
Remove any <meta> elements. Remove any "lang" attributes or anything that isn't needed to show
the bug.
If you have images, replace them with very simple images, e.g.:
http://hixie.ch/resources/images/sample
If there is script that is required, remove as many functions as possible, merge functions together,
put them inline instead of in functions.
The final step is to make sure that the test can be used quickly. It must be possible to look at a test
and determine if it has passed or failed within about 2 seconds.
There are many tricks to do this, which are covered in other documents such as the CSS2.1 Test Case
Authoring Guidelines:
http://www.w3.org/Style/CSS/Test/guidelines.html
Make sure your test looks like it has failed even if no script runs or anything. Make sure the test
doesn't look blank if it fails.
Normal Test Cases: These are inputs that would be considered "normal" or "average" for your
program. For example, if your program computes square roots, you could try several positive
numbers, both less than and greater than 1, including some perfect squares such as 16 and some
numbers without rational square roots.
Boundary Test Cases: These are inputs that are legal, but on or near the boundary between legal
and illegal values. For example, in a square root program, you should try 0 as a boundary case.
Exception Test Cases: These are inputs that are illegal. Your program may give an error message
or it might crash. In a square root program, negative numbers would be exception test cases.
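The three categories translate directly into executable checks. A minimal sketch for the square-root example (the `safe_sqrt` wrapper is hypothetical; it stands in for the program under test):

```python
import math

def safe_sqrt(x):
    """Toy program under test: square root with explicit error handling."""
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

# Normal test cases: average inputs above and below 1,
# including a perfect square and a number without a rational root.
assert safe_sqrt(16) == 4.0
assert abs(safe_sqrt(2) - 1.4142135623730951) < 1e-12
assert abs(safe_sqrt(0.25) - 0.5) < 1e-12

# Boundary test case: 0 sits exactly on the legal/illegal border.
assert safe_sqrt(0) == 0.0

# Exception test case: illegal input must be rejected, not silently accepted.
try:
    safe_sqrt(-1)
    raised = False
except ValueError:
    raised = True
assert raised
```

Each assertion here is one test case; together they cover all three categories for this one function.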
You must hand in outputs (saved in file form) of your test runs. In addition to handing in your actual
test runs, give us a quick explanation of how you picked them. For example, if you write a program to
compute square roots, you might say "my test input included zero, small and large positive numbers,
perfect squares and numbers without a rational square root, and a negative number to demonstrate
error handling". You may give this explanation in the separate README file, or include it alongside the
test cases.
You will be marked for how well the test cases you pick demonstrate that your program works
correctly. If your program doesn't work correctly in all cases, please be honest about it. It is perfectly
valid to have test cases which illustrate the circumstances in which your program does not yet work.
If your program doesn't run at all, you can hand in a set of test cases with an explanation of how you
picked them and what the correct output would be. Both of these will get you full marks for testing. If
you pick test cases to hide the faults in your program, you will lose marks.
The objective of the "White Box Test Case Design" (WBTD) is to detect errors by means of execution-
oriented test cases.
Operational Sequence
White Box Testing is a test strategy which investigates the internal structure of the object to be
assessed in order to specify execution-oriented test cases on the basis of the program logic. The
specifications still have to be taken into consideration, however. A test case design addresses a
particular portion of the assessed object; the considered aspect may be a path, a statement, a branch,
or a condition. The test cases are selected in such a manner that the coverage of the correspondingly
addressed portion of the assessed object is increased.
• Path coverage
• Statement coverage
• Branch coverage
• Condition coverage
• Branch/condition coverage
• Coverage of all multiple conditions
1. Path Coverage
Operational Sequence
By taking into consideration the specification, the paths to be executed and the
corresponding test cases will be defined.
2. Statement Coverage
Operational Sequence
By taking into consideration the specification, statements are identified and the
corresponding test cases are defined. Depending on the required coverage
degree, either all or only a certain number of statements are to be used for the
test case definition.
3. Branch Coverage
Operational Sequence
By taking into consideration the specification, a sufficiently large number of test
cases must be designed by means of an analysis so that both the THEN and the
ELSE branch are executed at least once for each decision, i.e. the exit for the
fulfilled condition and the exit for the unfulfilled condition must both be utilized,
and each entry must be addressed at least once. For multiple decisions there is
the additional requirement to test each possible exit at least once and to address
each entry at least once.
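As a concrete illustration, branch coverage of a single IF/ELSE decision needs one test case per exit. A minimal Python sketch (the `classify` function is a hypothetical object under test):

```python
def classify(age):
    """Hypothetical object under test: a single IF/ELSE decision."""
    if age < 18:
        return "minor"
    else:
        return "adult"

# Branch coverage demands that both exits of the decision are executed
# at least once, so two test cases suffice for this function:
assert classify(10) == "minor"   # THEN branch (condition fulfilled)
assert classify(30) == "adult"   # ELSE branch (condition unfulfilled)
```

Note that statement coverage would already be satisfied by these two cases here, but for an IF without an ELSE, statement coverage could be reached with one case while branch coverage would still require two.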
4. Condition Coverage
Operational Sequence
By taking into consideration the specification, conditions are identified and the
corresponding test cases are defined. The test cases are defined on the basis of
a path sequence analysis.
5. Branch/Condition Coverage
Operational Sequence
By taking into consideration the specification, branches and conditions are
identified and the corresponding test cases are defined.
6. Coverage of All Multiple Conditions
Operational Sequence
By taking into consideration the specification, condition combinations for
decisions are identified and the corresponding test cases are defined. When
defining test cases it must be observed that all entries are addressed at least
once.
Black Box Test Case Design
The purpose of the Black Box Test Case Design (BBTD) is to discover circumstances under which the
assessed object will not react and behave according to the requirements or respectively the
specifications.
Operational Sequence
The test cases in a black box test case design are derived from the requirements or, respectively, the
specifications. The object to be assessed is considered as a black box, i.e. the assessor is not
interested in the internal structure and the behavior of the object to be assessed.
The following black box test case designs can be differentiated:
• generation of equivalence classes
• marginal value analysis
• intuitive test case definition
• function coverage
The definition of test cases via equivalence classes is realized by means of the following steps:
1. Analysis of the input data requirements, the output data requirements, and the conditions
according to the specifications
2. Definition of the equivalence classes by setting up the ranges for input and output data
3. Definition of the test cases by means of selecting values for each class
When defining equivalence classes, two groups of equivalence classes have to be
differentiated:
• valid equivalence classes
• invalid equivalence classes
For valid equivalence classes, valid input data are selected; in the case of invalid equivalence
classes, erroneous input data are selected. Even if the specification is available, the definition of
equivalence classes is predominantly a heuristic process.
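A minimal sketch of the three steps for a hypothetical field that accepts integers from 1 to 100: the range analysis yields one valid class (1..100) and two invalid classes (below 1 and above 100), and one representative per class becomes a test case:

```python
# Hypothetical requirement: an input field accepts integers from 1 to 100.
# Equivalence classes from the range analysis:
#   valid class:    1..100
#   invalid class:  values below 1
#   invalid class:  values above 100

def accepts(value):
    """Object under test: does the field accept the value?"""
    return 1 <= value <= 100

# One representative per class is enough for equivalence-class testing:
assert accepts(50) is True     # representative of the valid class
assert accepts(0) is False     # representative of the invalid class below
assert accepts(101) is False   # representative of the invalid class above
```

Any other representative of the same class (e.g. 37 instead of 50) would, by the equivalence assumption, exercise the same behavior.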
5. Marginal Value Analysis
Objective and Purpose
It is the objective of the marginal value analysis to define test cases that can be used to
discover errors connected with the handling of range margins.
Operational Sequence
The principle of the marginal value analysis is to consider the range margins in connection
with the definition of test cases. This analysis is based on the equivalence classes defined by
means of the generation of equivalence classes. Contrary to the generation of equivalence
classes, not just any representative of the class is selected as a test case, but specifically the
representatives at the class margins. Therefore, the marginal value analysis represents an
addition to the test case design according to the generation of equivalence classes.
6. Intuitive Test Case Definition
Objective and Purpose
It is the objective of the intuitive test case definition to improve systematically detected test
cases qualitatively, and also to detect supplementary test cases.
Operational Sequence
The basis for this methodical approach is the intuitive ability and experience of human beings
in selecting test cases according to expected errors. A regulated procedure does not exist. Apart
from the analysis of the requirements and the systematically defined test cases (if available), it
is most practical to generate a list of possible errors and error-prone situations. In this
connection it is possible to make use of experience with repeatedly occurring standard
errors. Based on these identified errors and critical situations, the additional test cases will
then be defined.
7. Function Coverage
Objective and Purpose
It is the purpose of function coverage to identify test cases that can be used to prove that
the corresponding function is available and can be executed as well. In this connection the
test cases concentrate on the normal behavior and the exceptional behavior of the object to
be assessed.
Operational Sequence
Based on the defined requirements, the functions to be tested must be identified. Then the
test cases for the identified functions can be defined.
Recommendation
With the help of a test case matrix it is possible to check if functions are covered by several
test cases. In order to improve the efficiency of the tests, redundant test cases ought to be
deleted.
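Such a test case matrix can be kept as simple data and queried for coverage and redundancy. A minimal sketch (test case and function names hypothetical); here a test case counts as redundant when the functions it covers are a strict subset of another test case's:

```python
# Hypothetical test-case matrix: which test case exercises which function.
matrix = {
    "tc_01": {"login", "logout"},
    "tc_02": {"login"},    # strict subset of tc_01's coverage -> redundant
    "tc_03": {"search"},
}

# Functions reached by at least one test case:
covered = set().union(*matrix.values())

# Test cases whose coverage is strictly contained in another's:
redundant = [
    tc for tc, funcs in matrix.items()
    if any(funcs < other_funcs
           for other, other_funcs in matrix.items() if other != tc)
]

print(sorted(covered))   # ['login', 'logout', 'search']
print(redundant)         # ['tc_02']
```

Comparing `covered` against the full list of required functions also reveals any function no test case reaches at all.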
For example, here is how to calculate boundary values for a Company name field.
For boundary values you have to check the minimum length and the maximum length, plus and minus one.
For the Company name field, minimum values = 3, 4, 5
Maximum values = 14, 15, 16
Valid values = 4, 5, 14, 15
Invalid values = 3, 16, because these values are outside the range given in the software requirement
specification.
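These boundary values translate directly into checks. A minimal sketch, assuming the specification's rule is simply a 4-to-15-character length limit (the `company_name_ok` helper is hypothetical):

```python
def company_name_ok(name):
    """Hypothetical rule from the specification: 4 to 15 characters."""
    return 4 <= len(name) <= 15

# Boundary value analysis around both margins (minimum = 4, maximum = 15):
assert company_name_ok("a" * 3)  is False   # just below the minimum
assert company_name_ok("a" * 4)  is True    # the minimum itself
assert company_name_ok("a" * 5)  is True    # just above the minimum
assert company_name_ok("a" * 14) is True    # just below the maximum
assert company_name_ok("a" * 15) is True    # the maximum itself
assert company_name_ok("a" * 16) is False   # just above the maximum
```

These six lengths correspond exactly to the test inputs cd_006 through cd_011 in the table below them.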
Test Case ID | Test Case Description | Test Input | Steps to execute test case | Expected Result | Actual Result | Status
cd_001 | Check Company name for mandatory | Company name: (blank) | Enter all other data in the Company details form. Click OK button. | An alert message "Enter Company name" | |
cd_002 | Check Company name for valid i/p | Company name: sdfgh | Enter all other data in the Company details form. Click OK button. | An alert message "Company details information is stored" | |
cd_003 | Check Company name for special characters | Company name: *&^%$ | Enter all other data in the Company details form. Click OK button. | An alert message "Company name should be in characters" | |
cd_005 | Check for Company name in numerics | Company name: 434232 | Enter all other data in the Company details form. Click OK button. | An alert message "Company name should be in characters" | |
cd_006 | Check for Company name in invalid values (3) | Company name: aaa | Enter all other data in the Company details form. Click OK button. | An alert message "Company name should be at least 4 and at most 15 characters" | |
cd_007 | Check Company name for invalid values (16) | Company name: asdfghjklmnbvcxz | Enter all other data in the Company details form. Click OK button. | An alert message "Company name should be at least 4 and at most 15 characters" | |
cd_008 | Check Company name for valid values (4) | Company name: asdf | Enter all other data in the Company details form. Click OK button. | An alert message "Company details information is stored" | |
cd_009 | Check for Company name valid values (5) | Company name: asdfm | Enter all other data in the Company details form. Click OK button. | An alert message "Company details information is stored" | |
cd_010 | Check Company name for valid values (14) | Company name: asdfmvcxbnmjhg | Enter all other data in the login page. Click OK button. | An alert message "Company details information is stored" | |
cd_011 | Check for Company name valid values (15) | Company name: asdfmvcxbnmjhge | Enter all other data in the Company details form. Click OK button. | An alert message "Company details information is stored" | |
You have to check every test input given in the test cases, and then check whether all the test
cases have been executed or not.
How to execute?
For example, to check that the company name is mandatory ("mandatory" meaning compulsory),
give no input to the Company name field, enter the other data, and then click the OK button.
The alert message "Enter Company name" must be displayed. This is your expected result; the test
passes if this is what happens when you execute the test case against the project.