
Validation and Verification

Validation: •Establishing the fitness of a software product for its use. •“Are we building
the right product?” •Requires interaction with customers.

Verification: •Establishing the correspondence between the software and its
specification. •“Are we building the product right?” •Requires interaction with software.

Static V&V: •Software inspections •Static analysis of source code -Control/data flow
analysis

Dynamic V&V: •Defect-testing -Looks for errors in functionality •Load testing -Looks
for errors in scalability, performance, and reliability

Failure: •Occurs when a program misbehaves.

Fault: •A problem that exists in source code.

Error: •A human action that results in software containing a fault. An error leads to the
inclusion of a fault, which may or may not lead to a failure. Testing cannot prove that a
program never fails; it can only show the presence of faults.

Software Testing: •The process of executing a program and comparing the actual
behavior with the expected behavior. •Deviations between actual and expected behavior
should indicate defects in the program whose removal improves the quality of the
software.

Defect classification:
•The process of identifying the type of deviation between actual and expected (design
error, specification error, testing error, etc.)

Debugging:
•The process of tracking down the source of the defect and removing it.

Exhaustive testing: •Execute program on all possible inputs and compare actual to
expected behavior. •Could “prove” program correctness. •Not practical for any non-trivial
program.

Practical testing: •Select a tiny % of all possible tests. •Goal: executing the tiny % of
tests will uncover a large % of defects present! •A “testing method” is essentially a way
to decide which tiny % to pick.

White box testing (also “Clear” or “Glass”): •Assume knowledge of code •Coverage-based
testing •Automated tools available to assess coverage

Black box testing: •Don’t assume knowledge of code •Specification-based testing
•Automated tools available (requires formal specification)

Other methods exist: •Mutation testing, etc.

Statement coverage
For a test case to uncover a defect, the statement containing the defect must be executed.
Therefore, a set of test cases that guarantees all statements are executed might
uncover a large number of the defects present. Whether or not the defects are actually
uncovered depends upon the program state at the time each statement is executed.
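This caveat can be made concrete with a small sketch (the function and values are hypothetical): a single test can execute every statement, including the faulty one, without ever exposing the fault.

```python
def scale_ratio(x, y):
    # Faulty statement: no guard against y == 0.
    return (x * 100) // y

# This one test achieves 100% statement coverage,
# yet the fault stays hidden:
assert scale_ratio(1, 2) == 50

# Only a particular program state at execution time reveals it:
try:
    scale_ratio(1, 0)
    exposed = False
except ZeroDivisionError:
    exposed = True
assert exposed
```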

Control flow coverage
Control flow coverage adds conditions to statement coverage to raise the odds of
discovering defects.

Branch coverage: •Every conditional is evaluated as both true and false during testing.

Loop coverage: •Every loop must be executed both 0 times and more than once.

Path coverage: •All permutations of paths through the program are taken
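A minimal sketch of how branch and path coverage differ, using a hypothetical function with two independent conditionals (so there are four paths but only two branch outcomes per conditional):

```python
def classify(a, b):
    # Two independent conditionals -> four paths (TT, TF, FT, FF).
    result = 0
    if a > 0:        # conditional 1
        result += 1
    if b > 0:        # conditional 2
        result += 2
    return result

# Branch coverage: two tests suffice, since each conditional
# is evaluated as both true and false:
assert classify(1, 1) == 3     # both true
assert classify(-1, -1) == 0   # both false

# Path coverage additionally demands the mixed paths:
assert classify(1, -1) == 1    # true / false
assert classify(-1, 1) == 2    # false / true
```

With n independent conditionals, branch coverage needs as few as 2 tests while path coverage needs 2^n, which is why full path coverage is rarely practical.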

Specification based testing

Specification: •A mapping between the possible “inputs” to the program and the
corresponding expected “outputs”

Specification-based testing: •Design a set of test cases to see if inputs actually map to
outputs. •Does not require access to source code

Differences with White Box (coverage) testing:
•Can catch errors of omission.
•Effectiveness depends upon a high-quality specification
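As an illustration (the clamp specification and its bounds are hypothetical): spec-based tests are derived from the input-to-output mapping rather than the code, so they demand a case the programmer never implemented. Coverage of the existing code alone would never ask for that case — this is how errors of omission get caught.

```python
# Hypothetical spec: clamp(x) maps x < 0 -> 0, x > 100 -> 100,
# and any other x -> x.
def clamp(x):
    if x < 0:
        return 0
    return x  # error of omission: the upper bound was never coded

# One test per mapping in the spec; no knowledge of the code needed:
results = [clamp(-5) == 0, clamp(50) == 50, clamp(150) == 100]
print(results)  # [True, True, False] -- the third check exposes the omission
```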

Equivalence classes

Goal: Divide the possible inputs into categories such that testing one point in each
category is equivalent to testing all points in the category. Provide one test case for each
category. Equivalence class definition is usually an iterative process and goes on throughout
development. Use heuristics to get you started designing your test cases.
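A sketch of the idea, assuming a hypothetical age validator that accepts 0 through 120: three equivalence classes yield three representative tests, plus boundary points as a starting heuristic.

```python
def valid_age(age):
    # Hypothetical validator: ages 0..120 inclusive are accepted.
    return 0 <= age <= 120

# Three equivalence classes, one representative point each:
assert not valid_age(-3)    # class: below the valid range
assert valid_age(35)        # class: within the valid range
assert not valid_age(200)   # class: above the valid range

# Boundary heuristic: also test the edges of each class.
assert valid_age(0) and valid_age(120)
assert not valid_age(-1) and not valid_age(121)
```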

Additions for web app testing

Test case design
Every page is retrieved at least once •Prevent 404 errors.
Every link is followed at least once. •Prevent 404 errors.
All form input fields are tested with: •Normal values •Erroneous values
•Maximum/minimum values. Always check the response for appropriateness.

Yamaura, IEEE Software, Nov. 1998: •Proper test case density is one test case per
10-15 LOC. •Test case type percentages: -Basic and normal tests < 60% -Boundary and
limit tests > 10% -Error tests > 15% •Run a 48-hour continuous operation test
(reiterating basic functions) to find memory leaks, deadlocks, time-outs, etc. •The
tendency is to write too many tests for well-understood functions and too few for poorly
understood ones. Use density to uncover this.
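The form-field advice above can be sketched with a hypothetical quantity-field handler (the function, its range “1”..“99”, and the None-on-error convention are all assumptions for illustration): each test supplies a normal, erroneous, or min/max value and checks the response for appropriateness.

```python
def parse_quantity(field):
    # Hypothetical form-field handler: accepts "1".."99" as text,
    # returning an int, or None for any erroneous input.
    try:
        q = int(field)
    except ValueError:
        return None
    return q if 1 <= q <= 99 else None

# Normal, erroneous, and maximum/minimum values,
# always checking the response:
assert parse_quantity("10") == 10      # normal
assert parse_quantity("abc") is None   # erroneous
assert parse_quantity("1") == 1        # minimum
assert parse_quantity("99") == 99      # maximum
assert parse_quantity("0") is None     # just below minimum
assert parse_quantity("100") is None   # just above maximum
```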
