
http://www.testingtemplates.com/

Basic Testcase Concepts


A testcase is simply a test with formal steps and instructions; testcases are valuable because they
are repeatable, reproducible under the same environments, and easy to improve upon with feedback.
A testcase is the difference between saying that something seems to be working okay and proving
that a set of specific tasks are known to be working correctly.

Some tests are more straightforward than others. For example, say you need to verify that all the
links in your web site work. There are several different approaches to checking this:

• you can read your HTML code to see that all the link code is correct
• you can run an HTML DTD validator to see that all of your HTML syntax is correct, which would
imply that your links are correct
• you can use your browser (or even multiple browsers) to check every link manually
• you can use a link-checking program to check every link automatically
• you can use a site maintenance program that will display graphically the relationships
between pages on your site, including links good and bad
• you could use all of these approaches to test for any possible failures or inconsistencies in the
tests themselves
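The fourth approach, a link-checking program, can be sketched in a few lines of Python using only the standard library. This is a minimal illustration, not a production crawler: `broken_links` and `page_url` are invented names, and a real checker would also handle politeness delays, authentication, and redirects.

```python
# Minimal sketch of an automated link check: parse a page's HTML,
# collect the href targets, and try to fetch each one.
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen
from urllib.error import URLError, HTTPError

class LinkCollector(HTMLParser):
    """Collects the href value of every <a> tag fed to it."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def broken_links(page_url, html):
    """Return the list of link targets that fail to load."""
    collector = LinkCollector()
    collector.feed(html)
    broken = []
    for href in collector.links:
        target = urljoin(page_url, href)  # resolve relative links
        try:
            urlopen(target, timeout=10)
        except (URLError, HTTPError, ValueError):
            broken.append(target)
    return broken
```

Note that, exactly as the next paragraph points out, this only reports *broken* links, not links that resolve but point at the wrong page.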

Verifying that your site's links are not broken is relatively unambiguous. You simply need to decide
which one or more of these tests best suits your site structure, your test resources, and your need for
granularity of results. You run the test, and you get your results showing any broken links.

Notice that you now have a list of broken links, not of incorrect links. If a link is valid syntactically, but
points at the incorrect page, your link test won't catch the problem. My general point here is that you
must understand what you are testing. A testcase is a series of explicit actions and examinations that
identifies the "what".

A testcase for checking links might specify that each link is tested for functionality, appropriateness,
usability, style, consistency, etc. For example, a testcase for checking links on a typical page of a site
might include these steps:
Link Test: for each link on the page, verify that

• the link works (i.e., it is not broken)
• the link points at the correct page
• the link text effectively and unambiguously describes the target page
• the link follows the approved style guide for this web site (for example, closing punctuation is or is
not included in the link text, as per the style guide specification)
• every instance of a link to the same target page is coded the same way

As you can see, this is a detailed testing of many aspects of the link, with the result that on
completion of the test, you can say definitively what you know works. However, this is a simple
example: testcases can run to hundreds of instructions, depending on the types of functionality being
tested and the need for iterations of steps.

Defining Test and Testcase Parameters


A testcase should set up any special environment requirements the test may have, such as clearing
the browser cache, enabling JavaScript support, or turning on the warnings for the dropping of
cookies.
In addition to specific configuration instructions, testcases should also record browser types and
versions, operating system, machine platforms, connection speeds -- in short, the testcase should
record any parameter that would affect the reproducibility of the results or could aid in
troubleshooting any defects found by testing. Or to state this a little differently, specify what
platforms this testcase should be run against, record what platforms it is run against, and in the case
of defects, report the exact environment in which the defect was found. The required fields of
a test case are as follows:

Test Case ID: A unique number that identifies the test case.

Test Description: A description of what the test case is going to test.

Revision History: Each test case should carry its revision history so that you know when, and by
whom, it was created or modified.

Function to be Tested: The name of the function under test.

Environment: The environment in which the test is run.

Test Setup: Anything you need to set up outside of your application, for example printers, the
network, and so on.

Test Execution: A detailed description of every step of the execution.

Expected Results: A description of what you expect the function to do.

Actual Results: Pass / failed. If passed, record what actually happened when you ran the test; if
failed, record a description of what you observed.

Sample Testcase
Here is a simple test case for applying bold formatting to a text.

• Test case ID: B 001


• Test Description: verify B - bold formatting to the text
• Revision History:
3/23/00 1.0 - Valerie - Created
• Function to be tested: B - bold formatting to the text
• Environment: Win 98
• Test setup: N/A
• Test Execution:
1. Open program
2. Open new document
3. Type any text
4. Select the text to make bold.
5. Click Bold
• Expected Result: Applies bold formatting to the text
• Actual Result: pass
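The sample above can also be captured as a structured record, which makes test cases easy to store, review, and report on. This is an illustrative sketch only: the `TestCase` class and its field names are invented here, mirroring the required fields listed earlier.

```python
from dataclasses import dataclass

@dataclass
class TestCase:
    """Minimal record mirroring the required test case fields above."""
    case_id: str
    description: str
    function: str
    environment: str
    steps: list
    expected: str
    actual: str = ""     # filled in when the test is executed
    setup: str = "N/A"

# The bold-formatting sample testcase, expressed as data:
bold_case = TestCase(
    case_id="B 001",
    description="verify B - bold formatting to the text",
    function="B - bold formatting to the text",
    environment="Win 98",
    steps=["Open program", "Open new document", "Type any text",
           "Select the text to make bold", "Click Bold"],
    expected="Applies bold formatting to the text",
)
```

Keeping test cases as data rather than prose makes the revision history and pass/fail reporting straightforward to automate.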
Test Case 1
Test Case ID : Test Case Title
The test case ID may be any convenient identifier, as decided upon by the tester. Identifiers should
follow a consistent pattern within test cases, and a similar consistency should apply across Test
Modules written for the same project.

Test Case ID | Purpose | Owner | Expected Results | Test Data | Test Tools | Dependencies |
Initialization | Description

Purpose:
The purpose of the Test case, usually to verify a specific requirement.

Owner:
The persons or department responsible for keeping the Test cases accurate.

Expected Result :
Describe the expected results and outputs from this Test Case. It is also desirable to include some
method of recording whether or not the expected results actually occurred (i.e.) if the test case, or
even individual steps of the test case, passed.

Test Data:
Any required data input for the Test Case.

Test Tools:
Any specific or unusual tools or utilities required for the execution of this Test Case.

Dependencies :
If correct execution of this Test Case depends on its being preceded by any other Test Cases, that fact
should be mentioned here. Similarly, any dependency on factors outside the immediate test
environment should also be mentioned.

Initialization :
If the system software or hardware has to be initialized in a particular manner in order for this Test
case to succeed, such initialization should be mentioned here.

Description:
Describe what will take place during the Test Case. The description should take the form of a narrative
description of the Test Case, along with a test procedure, which in turn can be specified by test case
steps, tables of values or configurations, further narrative, or whatever is most appropriate to the type
of testing taking place.
Test Case 2
Test ID | Description | Expected Results | Actual Results

Test Case 3
Project Name Project ID
Version Date
Test Purpose
Pre – Test Conditions

Step | Test Description | Test Data | Test Actions | Expected Result | Actual Result

Test Case 4
Test Case Description : Identify the Items or features to be tested by this test case.

Pre and post conditions: Description of changes (if any) to the standard environment. Any such
modifications should be applied automatically.
Case Component Author Date Version
Test Case Description
Pre and Post Conditions
Input / Output Specification
Test Procedure
Expected Results
Failure Recovery
Comments

Test Case 4 - Description

Case : Test Case Name

Component : Component Name


Author : Developer Name

Date : MM – DD – YY

Version : Version Number

Input / Output Specifications:


Identify all inputs/outputs required to execute the test case. Be sure to identify all required
inputs/outputs, not just data elements and values:

• Data (Values , ranges, sets )


• Conditions (States: initial, intermediate, final)
• Files (database, control files)

Test Procedure
Identify any special constraints on the test case. Focus on key elements such as special setup.

Expected Results
Fill this row with a description of the expected test results.

Failure Recovery
Explanations regarding which actions should be performed in case of test failure.

Comments
Suggestions, description of possible improvements, etc.

Test Case 5
Test Case ID | Test Case Name | Test Case Description | Test Steps (Step, Expected, Actual,
Status (P/F)) | Test Case Status | Test Priority | Defect Severity

Test Case ID Test Case Title


Purpose
Pre Requisite
Test Data
Steps
Expected Result
Actual Result
Status

Writing Test Cases for Web Browsers


This is a guide to making test cases for Web browsers, for example making test cases to show HTML,
CSS, SVG, DOM, or JS bugs. There are always exceptions to all the rules when making test cases. The
most important thing is to show the bug without distractions. This isn't something that can be done
just by following some steps; you have to be intelligent about it.

Minimising existing testcases

STEP ONE: FINDING A BUG

The first step to making a testcase is finding a bug in the first place. There are four ways of doing
this:
1. Letting someone else do it for you: Most of the time, the testcases you write will be for bugs that
other people have filed. In those cases, you will typically have a Web page which renders incorrectly,
either a demo page or an actual Web site. However, it is also possible that the bug report will have no
problem page listed, just a problem description.
2. Alternatively, you can find a bug yourself while browsing the Web. In such cases, you will have a
Web site that renders incorrectly.
3. You could also find the bug because one of the existing testcases fails. In this case, you have a
Web page that renders incorrectly.
4. Finally, the bug may be hypothetical: you might be writing a test suite for a feature without
knowing if the feature is broken or not, with the intention of finding bugs in the implementation of
that feature. In this case you do not have a Web page, just an idea of what a problem could be.

If you have a Web page showing a problem, move to the next step. Otherwise, you will have to create
an initial testcase yourself. This is covered in the section on "Creating testcases from scratch" later.

STEP TWO: REMOVING DEPENDENCIES

You have a page that renders incorrectly.


Make a copy of this page and all the files it uses, and update the links so they all point to the copies
you made of the files. Make sure that it still renders incorrectly in the same way -- if it doesn't, find
out why not. Make your copy of the original files as close as possible to the original environment, as
close as needed to reproduce the bug. For example, instead of loading the files locally, put the files
on a remote server and try it from there. Make sure the MIME types are the same if they need to be,
etc.
Once you have your page and its dependencies all set up and still showing the same problem, embed
the dependencies one by one.
For example, change markup like this:
<link rel="stylesheet" href="foo.css">
...to this:
<style type="text/css">
/* contents of foo.css */
</style>
Each time you do this, check that you haven't broken any relative URIs and that the page still shows
the problem. If the page stops showing the problem, you either made a mistake when embedding the
external files, or you found a bug specifically related to the way that particular file was linked. Move
on to the next file.

STEP THREE: MAKING THE TEST FILE SMALLER

Once you have put as many of the external dependencies into the test file as you can, start cutting
the file down.
Go to the middle of the file. Delete everything from the middle of the file to the end. (Don't pay
attention to whether the file is still valid or not.) Check that the error still occurs. If it doesn't, put that
part back, and remove the top half instead, or a smaller part.
Continue in this vein until you have removed almost all the file and are left with 20 or fewer lines of
markup, or at least, the smallest amount that you need to reproduce the problem.
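The halving procedure above is mechanical enough to sketch in code. This is an illustrative Python sketch only: `shows_bug` is a hypothetical predicate you would supply (for example, one that loads the candidate markup in a browser harness and checks the rendering).

```python
def minimise(lines, shows_bug):
    """Return a smaller list of lines that still triggers the bug.

    Repeatedly delete a chunk of lines; keep the deletion whenever
    shows_bug() still reports the bug, otherwise put the chunk back
    and move on. Start with large chunks (half the file) and refine.
    """
    chunk = len(lines) // 2
    while chunk >= 1:
        i = 0
        while i < len(lines):
            candidate = lines[:i] + lines[i + chunk:]  # drop one chunk
            if candidate and shows_bug(candidate):
                lines = candidate    # deletion kept: bug still present
            else:
                i += chunk           # chunk was needed; try the next one
        chunk //= 2                  # refine to smaller deletions
    return lines
```

Automated minimisation like this gets you most of the way, but as the text says, the last step of removing bits "that clearly will have no effect on the bug" still requires human judgement.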
Now, start being intelligent. Look at the file. Remove bits that clearly will have no effect on the bug.
For example if the bug is that the text "investments are good" is red but should be green, replace the
text with just "test" and check it is still the wrong colour.
Remove any scripts. If the scripts are needed, try doing what the scripts do then removing them -- for
example, replace this:
<script>document.write('<p>test<\/p>')</script>
...with:
<p>test</p>
...and check that the bug still occurs.
Merge any <style> blocks together.
Change presentational markup for CSS. For example, change this:
<font color="red">
...to:
span { color: red; } /* in the stylesheet */
<span> <!-- in the markup -->
Do the same with style="" attributes (remove the attributes and put the rules in a <style> block
instead).
Remove any classes, and use element names instead. For example:
.a { color: red; }
.b { color: green; }
<div class="a"><p class="b">This should be green.</p></div>
...becomes:
div { color: red; }
p { color: green; }
<div><p>This should be green.</p></div>
Do the same with IDs. Make sure there is a strict mode DOCTYPE:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN">
Remove any <meta> elements. Remove any "lang" attributes or anything that isn't needed to show
the bug.
If you have images, replace them with very simple images, e.g.:
http://hixie.ch/resources/images/sample
If there is script that is required, remove as many functions as possible, merge functions together,
put them inline instead of in functions.

STEP FOUR: GIVE THE TEST AN OBVIOUS PASS CONDITION

The final step is to make sure that the test can be used quickly. It must be possible to look at a test
and determine if it has passed or failed within about 2 seconds.
There are many tricks to do this, which are covered in other documents such as the CSS2.1 Test Case
Authoring Guidelines:
http://www.w3.org/Style/CSS/Test/guidelines.html
Make sure your test looks like it has failed if nothing happens at all, for example if no script runs.
Make sure the test doesn't look blank when it fails.

Creating testcases from scratch


STEP ONE: FIND SOMETHING TO TEST

Read the relevant specification.


Read it again.
Read it again, making sure you read every last bit of it, cover to cover.
Read it one more time, this time checking all the cross-references.
Read the specification in random order, making sure you understand every last bit of it.
Now, find a bit you think is likely to be implemented wrongly.
Work out a way in which a page could be created so that if the browser gets it right, the page
will look like the test has passed, and if the browser gets it wrong, the page will look like it
failed.
Write that page.
Now jump to step four above.

Test Cases & Explanation


We will not supply you with test input for most of your assignments. Part of your job will be to select
input cases to show that your program works correctly. You should select input from the following
categories:

Normal Test Cases: These are inputs that would be considered "normal" or "average" for your
program. For example, if your program computes square roots, you could try several positive
numbers, both less than and greater than 1, including some perfect squares such as 16 and some
numbers without rational square roots.

Boundary Test Cases: These are inputs that are legal, but on or near the boundary between legal
and illegal values. For example, in a square root program, you should try 0 as a boundary case.

Exception Test Cases: These are inputs that are illegal. Your program may give an error message
or it might crash. In a square root program, negative numbers would be exception test cases.
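The three categories can be made concrete with a small Python sketch. The function under test here, `checked_sqrt`, is a hypothetical stand-in for the assignment program; the point is the selection of inputs, one set per category.

```python
import math

def checked_sqrt(x):
    """Hypothetical program under test: square root with input checking."""
    if x < 0:
        raise ValueError("negative input")
    return math.sqrt(x)

# Normal test cases: typical positive inputs, below and above 1,
# a perfect square and a number with an irrational square root.
assert checked_sqrt(16) == 4.0
assert abs(checked_sqrt(2) - 1.41421356) < 1e-6
assert abs(checked_sqrt(0.25) - 0.5) < 1e-9

# Boundary test case: 0 sits on the edge between legal and illegal input.
assert checked_sqrt(0) == 0.0

# Exception test case: illegal input should fail with a clear error,
# not crash or return garbage.
try:
    checked_sqrt(-1)
except ValueError:
    pass  # expected behaviour
```

Each assertion doubles as the "quick explanation" of why that input was picked, which is exactly what the marker is asked to look for.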

You must hand in outputs (saved in file form) of your test runs. In addition to handing in your actual
test runs, give us a quick explanation of how you picked them. For example, if you write a program to
compute square roots, you might say "my test input included zero, small and large positive numbers,
perfect squares and numbers without a rational square root, and a negative number to demonstrate
error handling". You may give this explanation in the separate README file, or include it alongside
the test cases.

You will be marked for how well the test cases you pick demonstrate that your program works
correctly. If your program doesn't work correctly in all cases, please be honest about it. It is perfectly
valid to have test cases which illustrate the circumstances in which your program does not yet work.
If your program doesn't run at all, you can hand in a set of test cases with an explanation of how you
picked them and what the correct output would be. Both of these will get you full marks for testing. If
you pick test cases to hide the faults in your program, you will lose marks.

White Box Test Case Design

Objective and Purpose

The objective of the "White Box Test Case Design" (WBTD) is to detect errors by means of execution-
oriented test cases.
Operational Sequence
White Box Testing is a test strategy which investigates the internal structure of the object to be
assessed in order to specify execution-oriented test cases on the basis of the program logic. The
specifications still have to be taken into consideration, though. In a white box test case design, the
focus is on the portion of the assessed object which is addressed by the test cases. The considered
aspect may be a path, a statement, a branch, or a condition. The test cases are selected in such a
manner that the correspondingly addressed portion of the assessed object is maximized.

The following White Box Test Case methods exist:

• Path coverage
• Statement coverage
• Branch coverage
• Condition coverage
• Branch/condition coverage
• Coverage of all multiple conditions

1. Path Coverage

Objective and Purpose


It is the objective of the path coverage to identify test cases executing a
required minimum number of paths in the object to be assessed. As a rule,
the execution of all paths cannot be realized.

Operational Sequence
By taking into consideration the specification, the paths to be executed and the
corresponding test cases will be defined.

2. Statement Coverage

Objective and Purpose


It is the objective of the statement coverage to identify test cases executing a
required minimum number of statements in the object to be assessed.

Operational Sequence
By taking into consideration the specification, statements are identified and the
corresponding test cases are defined. Depending on the required coverage
degree, either all or only a certain number of statements are to be used for the
test case definition.
3. Branch Coverage

Objective and Purpose


It is the objective of the branch coverage to identify test cases executing a
required minimum number of branches, each at least once, in the object to be
assessed.

Operational Sequence
By taking into consideration the specification, a sufficiently large number of test
cases must be designed by means of an analysis so that both the THEN and the
ELSE branch are executed at least once for each decision; i.e. the exit for the
fulfilled condition and the exit for the unfulfilled condition must both be utilized,
and each entry must be addressed at least once. For multiple decisions there is
the additional requirement to test each possible exit at least once and to address
each entry at least once.
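The THEN/ELSE requirement can be illustrated with a toy Python sketch; the function and the input values are invented purely for illustration.

```python
def classify(age):
    """Toy function under test containing a single decision."""
    if age >= 18:
        return "adult"   # THEN branch
    else:
        return "minor"   # ELSE branch

# Branch coverage requires at least one test case per exit of the decision:
assert classify(30) == "adult"   # exercises the THEN branch
assert classify(5) == "minor"    # exercises the ELSE branch
```

A single test case (say `classify(30)`) would leave the ELSE branch unexecuted, which is exactly the gap branch coverage is designed to expose.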

4. Condition Coverage

Objective and Purpose


The objective of the condition coverage is to identify test cases executing a
required minimum number of conditions in the object to be assessed.

Operational Sequence
By taking into consideration the specification, conditions are identified and the
corresponding test cases are defined. The test cases are defined on the basis of
a path sequence analysis.

5. Branch/Condition Coverage

Objective and Purpose


The objective of the branch/condition coverage is to identify test cases
executing a required minimum number of branches and conditions in the object
to be assessed.

Operational Sequence
By taking into consideration the specification, branches and conditions are
identified and the corresponding test cases are defined.

6. Coverage of all Multiple Conditions

Objective and Purpose


The objective of the coverage of all multiple conditions is to identify test cases
executing a required minimum number of all possible condition combinations
for a decision in the object to be assessed.

Operational Sequence
By taking into consideration the specification, condition combinations for
decisions are identified and the corresponding test cases are defined. When
defining test cases it must be observed that all entries are addressed at least
once.
Black Box Test Case Design

Objective and Purpose

The purpose of the Black Box Test Case Design (BBTD) is to discover circumstances under which the
assessed object will not react and behave according to the requirements or respectively the
specifications.

Operational Sequence
The test cases in a black box test case design are derived from the requirements or, respectively, the
specifications. The object to be assessed is considered as a black box, i.e. the assessor is not
interested in the internal structure and the behavior of the object to be assessed.

It can be differentiated between the following black box test case designs:

• generation of equivalence classes
• marginal value analysis
• intuitive test case definition
• function coverage

1. Generation of Equivalence Classes


Objective and Purpose
It is the objective of the generation of equivalence classes to achieve an optimal probability of
detecting errors with a minimum number of test cases.
Operational Sequence
The principle of the generation of equivalence classes is to group all input data of a program
into a finite number of equivalence classes, so it can be assumed that with any representative
of a class it is possible to detect the same errors as with any other representative of this class.

The definition of test cases via equivalence classes is realized by means of the following steps:

1. Analysis of the input data requirements, the output data requirements, and the conditions
according to the specifications
2. Definition of the equivalence classes by setting up the ranges for input and output data
3. Definition of the test cases by means of selecting values for each class

When defining equivalence classes, two groups of equivalence classes have to be
differentiated:

• valid equivalence classes
• invalid equivalence classes

For valid equivalence classes, valid input data are selected; in the case of invalid equivalence
classes, erroneous input data are selected. If the specification is available, the definition of
equivalence classes is predominantly a heuristic process.
2. Marginal Value Analysis
Objective and Purpose
It is the objective of the marginal value analysis to define test cases that can be used to
discover errors connected with the handling of range margins.
Operational Sequence
The principle of the marginal value analysis is to consider the range margins in connection
with the definition of test cases. This analysis is based on the equivalence classes defined by
means of the generation of equivalence classes. In contrast to the generation of equivalence
classes, not just any representative of a class is selected as a test case, but specifically the
representatives at the class margins. Therefore, the marginal value analysis represents an
addition to the test case design according to the generation of equivalence classes.
3. Intuitive Test Case Definition
Objective and Purpose
It is the objective of the intuitive test case definition to improve systematically detected test
cases qualitatively, and also to detect supplementary test cases.
Operational Sequence
The basis for this methodical approach is the intuitive ability and experience of human beings
in selecting test cases according to expected errors. A regulated procedure does not exist. Apart
from the analysis of the requirements and of the systematically defined test cases (if available),
it is most practical to generate a list of possible errors and error-prone situations. In this
connection it is possible to make use of experience with repeatedly occurring standard
errors. Based on these identified errors and critical situations the additional test cases will
then be defined.
4. Function Coverage
Objective and Purpose
It is the purpose of the function coverage to identify test cases that can be used to prove that
the corresponding function is available and can be executed as well. In this connection the
test case concentrates on the normal behavior and the exceptional behavior of the object to
be assessed.
Operational Sequence
Based on the defined requirements, the functions to be tested must be identified. Then the
test cases for the identified functions can be defined.
Recommendation
With the help of a test case matrix it is possible to check if functions are covered by several
test cases. In order to improve the efficiency of the tests, redundant test cases ought to be
deleted.

How to Write Test Cases


To write test cases one should be clear on the specifications required for a particular case. Once the
case is decided, check the requirements and then write the test cases. Before writing test cases you
must first perform a Boundary Value Analysis. Let us write a test case for a Consignee Details form.
(Consignee Details: the consignee is the customer who purchases our product. Here he wants to give
information about himself, for example name, address, and so on.)
Here is the screen shot of the form

Software Requirement Specification


According to the software requirement specification (SRS), one should write test cases up to the
expected results.
Here is the screen shot of the SRS.

Boundary Value Analysis:
It concentrates on the range between the minimum and maximum values. It does not concentrate on
the centre values.

For example, here is how to calculate the boundary values for the Company name field.

The minimum length is 4 and the maximum length is 15.

For the boundary values you have to check the minimum length and the maximum length, plus and
minus one:
for the Company name field, minimum values = 3, 4, 5
maximum values = 14, 15, 16
According to the Software Requirement Specification


The boundary values given above are

Valid values = 4, 5, 14, 15
Invalid values = 3, 16, because these values are outside the range given in the software requirement
specification.
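The boundary-value calculation above follows a simple rule: for each margin, test the margin itself and the values one step either side. A small Python sketch of that rule (the function name is invented for illustration):

```python
def boundary_values(minimum, maximum):
    """Return the values to test around the two range margins:
    each margin itself plus one step below and above it."""
    return [minimum - 1, minimum, minimum + 1,
            maximum - 1, maximum, maximum + 1]

# For the Company name field (length 4..15 per the SRS):
print(boundary_values(4, 15))   # [3, 4, 5, 14, 15, 16]
```

Of these, 4, 5, 14, and 15 are the valid values and 3 and 16 are the invalid ones, matching the analysis above.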
Steps to execute (the same for every test case below): enter all other data in the Company details
form, then click the OK button.

Test Case ID | Test Case Description | Test Input | Expected Result
cd_001 | Check Company name for mandatory | Company name: (blank) | Alert message "Enter Company name"
cd_002 | Check Company name for valid i/p | Company name: sdfgh | Alert message "Company details information is stored"
cd_003 | Check Company name for special characters | Company name: *&^%$ | Alert message "Company name should be in characters"
cd_004 | Check Company name for alphanumerics | Company name: sdsw232 | Alert message "Company name should be in characters"
cd_005 | Check Company name for numerics | Company name: 434232 | Alert message "Company name should be in characters"
cd_006 | Check Company name for invalid values (3) | Company name: aaa | Alert message "Company name should be atleast 4 and atmost 15 characters"
cd_007 | Check Company name for invalid values (16) | Company name: asdfghjklmnbvcxz | Alert message "Company name should be atleast 4 and atmost 15 characters"
cd_008 | Check Company name for valid values (4) | Company name: asdf | Alert message "Company details information is stored"
cd_009 | Check Company name for valid values (5) | Company name: asdfm | Alert message "Company details information is stored"
cd_010 | Check Company name for valid values (14) | Company name: asdfmvcxbnmjhg | Alert message "Company details information is stored"
cd_011 | Check Company name for valid values (15) | Company name: asdfmvcxbnmjhge | Alert message "Company details information is stored"

(The Actual Result and Status columns of the template are filled in when the test cases are executed.)
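The validation behaviour implied by the expected results can be sketched as a small Python routine. `check_company_name` is an invented name for illustration; the real implementation lives inside the application under test, and the messages are copied from the expected results above.

```python
import re

def check_company_name(name):
    """Hypothetical validation the cd_001..cd_011 test cases exercise."""
    if not name:
        return "Enter Company name"
    if not re.fullmatch(r"[A-Za-z]+", name):
        # rejects special characters, alphanumerics, and pure numbers
        return "Company name should be in characters"
    if not 4 <= len(name) <= 15:
        return "Company name should be atleast 4 and atmost 15 characters"
    return "Company details information is stored"

# Spot checks against the table:
assert check_company_name("") == "Enter Company name"                         # cd_001
assert check_company_name("*&^%$") == "Company name should be in characters"  # cd_003
assert check_company_name("aaa").endswith("atmost 15 characters")             # cd_006
assert check_company_name("asdf") == "Company details information is stored"  # cd_008
```

Note how the test cases in the table map one-to-one onto the branches of this routine: one mandatory check, three character-type checks, and the boundary-value checks on length.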

• You have to write test cases for the boundary values as well. For a single input field you
have 11 test cases, including the boundary values.
• You have to write test cases up to the expected result; as soon as you have the software
requirement specification, you can start writing the test cases.
• After the creation of the test cases is completed, the build arrives in the testing field.
• Build: a complete build of the project.
• After that you have to execute the test cases.
EXECUTION OF TEST CASES

You have to check all the possible test inputs given in the test cases and then check whether all the
test cases are executed or not.

How to execute?

For example, to check that the Company name is mandatory:
do not give any input to the Company name field, enter the remaining data, and then click the OK
button.
The alert message "Enter Company name" must be displayed. This is your expected result. The test
case passes if this is what happens while you are executing the test case against the project.

Mandatory -> compulsory
