
Syntel CQA Forum: Test Techniques and Levels
Doc No 21
1 Test Techniques

This section describes techniques and strategies for testing. Techniques cover different ways testing can be accomplished. These are combined with each other and with the levels of testing described in the next section to create test plans that meet the needs of each unique project.

1.1 Preparation

1.1.1 Formal Testing

Testing performed with a plan, a documented set of test cases, and other artifacts that outline the methodology and test objectives. Test documentation can be developed from requirements, design, equivalence partitioning, domain coverage, error guessing, etc. The level of formality and thoroughness of test cases will depend upon the needs of the project. Some projects can have rather informal 'formal test cases', while others will require a highly refined test process. Some projects will require light testing of nominal paths while others will need rigorous testing of exceptional cases.

1.1.2 Informal Testing

Ad hoc testing performed without a documented set of objectives or plans. Informal testing relies on the intuition and skills of the individual performing the testing. Experienced engineers can be productive in this mode by mentally performing test cases for the scenarios being exercised.

1.2 Execution

1.2.1 Manual Testing

Manual testing involves direct human interaction to exercise software functionality and note behavior and deviations from expected behavior.

1.2.2 Automated Testing

Testing that relies on a tool, built-in test harness, test framework, or other automatic mechanism to exercise software functionality, record output, and possibly detect deviations. The test cases performed by automated testing are usually defined as software code or script that drives the automatic execution.

1.3 Approach

1.3.1 Structural Testing

Structural testing depends upon knowledge of the internal structure of the software. Structural testing is also referred to as white-box testing.

Data-flow Coverage

Data-flow coverage tests paths from the definition of a variable to its use.

Control-flow Coverage

Statement Coverage

Statement coverage requires that every statement in the code under test has been executed.

Branch Coverage

Branch coverage requires that every point of entry and exit in the program has been executed at least once, and that every decision in the program has taken all possible outcomes at least once.

Condition Coverage

Condition coverage is branch coverage with the additional requirement that every condition in a decision in the program has taken all possible outcomes at least once.

Multiple Condition Coverage

Multiple condition coverage requires that all possible combinations of the possible outcomes of each condition have been tested.

Modified Condition Coverage

Modified condition coverage requires that each condition has been tested independently.

1.3.2 Functional Testing

Functional testing compares the behavior of the test item to its specification without knowledge of the item's internal structure. Functional testing is also referred to as black-box testing.

Requirements Coverage

Requirements coverage requires at least one test case for each specified requirement. A traceability matrix can be used to ensure that requirements coverage has been satisfied.

Input Domain Coverage

Input domain coverage executes a function with a sufficient set of input values from the function's input domain (the set of values it can take as input parameters). The notion of a sufficient set is not completely definable, and complete coverage of the input domain is typically impossible. Therefore the input domain is broken into subsets, or equivalence classes, such that all values within a subset are likely to reveal the same defects. Any one value within an equivalence class can be used to represent the whole equivalence class.
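As a sketch of how equivalence classes translate into test cases, consider a hypothetical `shipping_fee` function; the function, its ranges, and its fees are illustrative assumptions, not taken from any real specification:

```python
# Equivalence-partitioning sketch for a hypothetical shipping_fee(weight_kg):
# free under 1 kg, flat fee up to 20 kg, rejected above 20 kg (assumed spec).

def shipping_fee(weight_kg: float) -> float:
    """Illustrative function under test."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    if weight_kg < 1:
        return 0.0          # class 1: small parcels ship free
    if weight_kg <= 20:
        return 4.99         # class 2: standard parcels pay a flat fee
    raise ValueError("too heavy")  # class 3: rejected

# One representative value per equivalence class stands in for the class.
representatives = {0.5: 0.0, 10.0: 4.99}
for weight, expected in representatives.items():
    assert shipping_fee(weight) == expected

# The extreme values of a class are also covered (boundary values).
assert shipping_fee(1.0) == 4.99    # lower boundary of the flat-fee class
assert shipping_fee(20.0) == 4.99   # upper boundary of the flat-fee class
```

Any other value from the same class (e.g. 15.0 instead of 10.0) would be expected to reveal the same defects.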
In addition to a generic representative, each extreme value within an equivalence class (e.g. the minimum and maximum in a numeric range) should be covered by a test case. Testing the extreme values of the equivalence classes is referred to as boundary value testing.

Output Domain Coverage

Output domain coverage executes a function in such a way that a sufficient set of output values from the function's output domain (the set of values it can return) is produced. Equivalence classes and boundary values (see Input Domain Coverage) are used to provide coverage of the output domain. A set of test cases that "reach" the boundary values and a typical value for each equivalence class is considered to have achieved output domain coverage.

2 Test Levels

This section captures the different types, or levels, of testing. Testing levels capture a combination of what, why, how, and when something is tested. Testing levels are combined with the techniques of the previous section to create test plans that meet the unique needs of each project.

Although many testing levels tend to be combined with certain techniques, there are no hard and fast rules. Some types of testing imply certain lifecycle stages, software deliverables, or other project context (e.g., system testing). Other types of testing are general enough to be done almost any time on any part of the system (e.g., bench testing). Some require a particular methodology (e.g., single step testing). Where appropriate, common uses of a particular testing type are described.

The project's test plan will normally define the types of testing that will be used on the project, when they will be used, and the strategies they will be used with. Test cases are then created for each testing type.

2.1 Unit Testing

A unit is an abstract term for the smallest thing that can be conveniently tested. This will vary based on the nature of a project and its technology but usually focuses at the subroutine level. Unit testing is the testing of these units. Unit testing is often automated and may require creation of a harness, stubs, or drivers.

2.2 Component Testing

A component is an aggregate of one or more units (units may be considered the lowest-level atomic test components on any project). Component testing expands unit testing to include called components and data types. Component testing is often automated and may require creation of a harness, stubs, or drivers.

2.3 Single Step Testing

Single step testing is performed by stepping through new or modified statements of code with a debugger. Single step testing is normally manual and informal.

2.4 Bench Testing

Bench testing is functional testing of a component after the system (or a vertical slice of the system that includes the new or modified component) has been built in a local environment. Bench testing is often manual and informal.

2.5 Developer Integration Testing

Developer integration testing is functional testing of a component after the component has been released and the system has been deployed in a standard testing environment. Special attention is given to the flow of data between the new component and the rest of the system.

2.6 Smoke Testing

Smoke testing determines whether the system is sufficiently stable and functional to warrant the cost of further, more rigorous testing. Smoke testing may also communicate the general disposition of the current code base to the project team. Specific standards for the scope or format of smoke test cases and for their success criteria may vary widely among projects.

2.7 Feature Testing

Feature testing is functional testing directed at a specific feature of the system. The feature is tested for correctness and proper integration into the system. Feature testing occurs after all components of a feature have been completed and released by development.

2.8 Integration Testing

Integration testing focuses on verifying the functionality and stability of the overall system when it is integrated with external systems, subsystems, third party components, or other external interfaces.

2.9 System Testing

System testing occurs when all necessary components have been released internally and the system has been deployed onto a standard environment. System testing is concerned with the behavior of the whole system. When appropriate, system testing encompasses all external software, hardware, operating environments, etc. that will make up the final system.
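Several of the levels above (unit and component testing in particular) mention harnesses, stubs, and drivers. A minimal sketch using Python's standard `unittest` framework, with purely hypothetical names, shows a test driver exercising a unit while a stub stands in for a collaborating component:

```python
import unittest
from unittest import mock

# Hypothetical unit under test: formats a greeting using a lookup
# callable that, in production, would be a separate component.
def greet(user_id: int, lookup) -> str:
    name = lookup(user_id)
    return f"Hello, {name}!"

class GreetUnitTest(unittest.TestCase):
    """Test driver: the mock stub replaces the real lookup component."""

    def test_greet_uses_lookup_result(self):
        stub_lookup = mock.Mock(return_value="Ada")
        self.assertEqual(greet(7, stub_lookup), "Hello, Ada!")
        # The stub also records how the unit called its collaborator.
        stub_lookup.assert_called_once_with(7)

if __name__ == "__main__":
    unittest.main()
```

Component testing would then widen the scope by letting `greet` call the real lookup component instead of the stub.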
2.10 Release Testing

Release tests ensure that interim builds can be successfully deployed by the customer. This includes product deployment (web download, CD install, etc.), installation, and a pass through the primary functionality. This test is done immediately before releasing to the customer.

2.11 Beta Testing

Beta testing consists of deploying the system to many external users who have agreed to provide feedback about the system. Beta testing may also provide the opportunity to explore release and deployment issues (e.g. mass replication, system versioning).

2.12 Acceptance Testing

Acceptance testing compares the system to a predefined set of acceptance criteria. If the acceptance criteria are satisfied by the system, the customer will accept delivery of the system.

2.13 Regression Testing

Regression testing exercises functionality that has stabilized (i.e. has not failed for a predetermined number of testing cycles). Once high confidence has been established for certain parts of the system, it is generally wasted effort to continue rigorous, detailed testing of those parts. However, it is possible that continued evolution of the system will have negative effects on previously stable and reliable parts of the system. Regression testing offers a low-cost method of detecting such side effects. Regression testing is often automated and focused on critical functionality.

2.14 Performance Testing

Performance testing measures the efficiency, with respect to time and hardware resources, of the test item under typical usage. This assumes that a set of non-functional requirements regarding performance exists in the item's specification.

2.15 Stress Testing

Stress testing evaluates the performance of the test item during extreme usage patterns. Typical examples of "extreme usage patterns" are large data sets, complex calculations, extended operation, limited system resources, etc.

2.16 Configuration Testing

Configuration testing evaluates the performance of the test item under a range of system configurations. Relevant configuration issues depend upon the particular product and may include peripherals, network patterns, operating systems, hardware devices and drivers, and user settings.
