

CQA Doc No 20
GLOSSARY OF TESTING TERMINOLOGY

PRACTICE OBJECTIVE

This glossary of testing terminology has two objectives: first, to define the test terms that will be used throughout this manual; and second, to provide a basis for establishing a glossary of testing terminology for your organization. Information services (I/S) organizations that use a common testing vocabulary are better able to accelerate the maturity of their testing process.

QAI believes that the testing terminology as defined in this glossary is the most commonly held definition for these terms. Therefore, QAI recommends that this glossary be adopted as a set of core testing term definitions. The glossary can then be supplemented with testing terms that are specific to your organization. These additional terms might include:

 Names of acquired testing tools
 Testing tools developed in your organization
 Names of testing libraries
 Names of testing reports
 Terms which cause specific action to occur

GLOSSARY OF TERMS

Acceptance Testing: Formal testing conducted to determine whether or not a system satisfies its acceptance criteria; it enables an end user to determine whether or not to accept the system.

Affinity Diagram: A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Alpha Testing: Testing of a software product or system conducted at the developer's site by the end user.

Audit: An inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

Automated Testing: That part of software testing that is assisted with software tool(s) and does not require operator input, analysis, or evaluation.

Beta Testing: Testing conducted at one or more end user sites by the end user of a delivered software product or system.

Black-box Testing: Functional testing based on requirements with no knowledge of the internal program structure or data. Also known as closed-box testing.

Bottom-up Testing: An integration testing technique that tests the low-level components first, using test drivers for those components that have not yet been developed to call the low-level components under test.

Boundary Value Analysis: A test data selection technique in which values are chosen to lie along data extremes. Boundary values include maximum, minimum, just inside/outside boundaries, typical values, and error values.

Brainstorming: A group process for generating creative and diverse ideas.

Branch Coverage Testing: A test method satisfying coverage criteria that requires each decision point at each possible branch to be executed at least once.

Bug: A design flaw that will result in symptoms exhibited by some object (the object under test or some other object) when an object is subjected to an appropriate test.

Cause-and-Effect (Fishbone) Diagram: A tool used to identify possible causes of a problem by representing the relationship between some effect and its possible cause.

Cause-effect Graphing: A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases. It has a beneficial side effect in pointing out incompleteness and ambiguities in specifications.

Checksheet: A form used to record data as it is gathered.

Clear-box Testing: Another term for white-box testing. Structural testing is sometimes referred to as clear-box testing, since "white boxes" are considered opaque and do not really permit visibility into the code. This is also known as glass-box or open-box testing.

Client: The end user that pays for the product received, and receives the benefit from the use of the product.

Control Chart: A statistical method for distinguishing between common and special cause variation exhibited by processes.

Customer (end user): The individual or organization, internal or external to the producing organization, that receives the product.

Cyclomatic Complexity: A measure of the number of linearly independent paths through a program module.
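Boundary value analysis, defined above, can be made concrete with a short sketch. The function below derives boundary test data for a hypothetical integer input field accepting values in a range; the function name and the example range 1..100 are illustrative, not part of this glossary:

```python
def boundary_values(lo, hi):
    """Derive boundary-value test data for an integer input range [lo, hi]:
    the extremes, values just inside/outside the boundaries, a typical
    value, and an error value far outside the range."""
    return {
        "minimum": lo,
        "maximum": hi,
        "just_inside": [lo + 1, hi - 1],
        "just_outside": [lo - 1, hi + 1],
        "typical": (lo + hi) // 2,
        "error": hi * 10,
    }

# For a field accepting 1..100:
cases = boundary_values(1, 100)
```

Note that the "just outside" values (0 and 101 here) are deliberately invalid inputs: they test the program's rejection behavior, not its normal processing.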
Data Flow Analysis: Consists of the graphical analysis of collections of (sequential) data definitions and reference patterns to determine constraints that can be placed on data values at various points of executing the source program.

Debugging: The act of attempting to determine the cause of the symptoms of malfunctions detected by testing or by frenzied user complaints.

Defect: NOTE: Operationally, it is useful to work with two definitions of a defect: 1) From the producer's viewpoint: a product requirement that has not been met, or a product attribute possessed by a product or a function performed by a product that is not in the statement of requirements that define the product. 2) From the end user's viewpoint: anything that causes end user dissatisfaction, whether in the statement of requirements or not.

Defect Analysis: Using defects as data for continuous quality improvement. Defect analysis generally seeks to classify defects into categories and identify possible causes in order to direct process improvement efforts.

Defect Density: Ratio of the number of defects to program length (a relative number).

Desk Checking: A form of manual static analysis usually performed by the originator. Source code, documentation, etc., is visually checked against requirements and standards.

Dynamic Analysis: The process of evaluating a program based on execution of that program. Dynamic analysis approaches rely on executing a piece of software with selected test data.

Dynamic Testing: Verification or validation performed by executing the system's code.

Error: 1) A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition; and 2) a mental mistake made by a programmer that may result in a program fault.

Error-based Testing: Testing where information about programming style, error-prone language constructs, and other programming knowledge is applied to select test data capable of detecting faults, either a specified class of faults or all possible faults.

Evaluation: The process of examining a system or system component to determine the extent to which specified properties are present.

Execution: The process of a computer carrying out an instruction or instructions of a computer program.

Exhaustive Testing: Executing the program with all possible combinations of values for program variables.

Failure: The inability of a system or system component to perform a required function within specified limits. A failure may be produced when a fault is encountered.

Failure-directed Testing: Testing based on the knowledge of the types of errors made in the past that are likely for the system under test.

Fault: A manifestation of an error in software. A fault, if encountered, may cause a failure.

Fault-based Testing: Testing that employs a test data selection strategy designed to generate test data capable of demonstrating the absence of a set of prespecified faults, typically, frequently occurring faults.

Fault Tree Analysis: A form of safety analysis that assesses hardware safety to provide failure statistics and sensitivity analyses that indicate the possible effect of critical failures.

Flowchart: A diagram showing the sequential steps of a process or of a workflow around a product or service.

Formal Review: A technical review conducted with the end user, including the types of reviews called for in the standards.

Functional Testing: Application of test data derived from the specified functional requirements without regard to the final program structure. Also known as black-box testing.

Function Points: A consistent measure of software size based on user requirements. Data components include inputs, outputs, etc. Environment characteristics include data communications, performance, reusability, operational ease, etc. Weight scale: 0 = not present; 1 = minor influence; 5 = strong influence.

Heuristics Testing: Another term for failure-directed testing.

Histogram: A graphical description of individual measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation.
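Defect density, defined above, is a simple ratio of defects to program length. The sketch below expresses it per thousand lines of code (KLOC), which is one common normalization; the convention and function name are assumptions for illustration, not part of the definition:

```python
def defect_density(defect_count, lines_of_code):
    """Defect density: ratio of defects found to program length,
    expressed here per thousand lines of code (KLOC)."""
    if lines_of_code <= 0:
        raise ValueError("program length must be positive")
    return defect_count / (lines_of_code / 1000.0)

# 12 defects found in a 4,000-line module -> 3.0 defects per KLOC
density = defect_density(12, 4000)
```

Because the measure is relative, it allows modules of different sizes to be compared when directing defect analysis effort.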
Hybrid Testing: A combination of top-down and bottom-up testing of prioritized or available components.

Incremental Analysis: Incremental analysis occurs when (partial) analysis may be performed on an incomplete product to allow early feedback on the development of that product.

Infeasible Path: A sequence of program statements that can never be executed.

Inputs: Products, services, or information needed from suppliers to make a process work.

Inspection: 1) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. 2) A quality improvement process for written material that consists of two dominant components: product (document) improvement and process improvement (document production and inspection).

Instrument: To install or insert devices or instructions into hardware or software to monitor the operation of a system or component.

Integration: The process of combining software components or hardware components, or both, into an overall system.

Integration Testing: An orderly progression of testing in which software components or hardware components, or both, are combined and tested until the entire system has been integrated.

Interface: A shared boundary. An interface might be a hardware component to link two devices, or it might be a portion of storage or registers accessed by two or more computer programs.

Interface Analysis: Checks the interfaces between program elements for consistency and adherence to predefined rules or axioms.

Intrusive Testing: Testing that collects timing and processing information during program execution that may change the behavior of the software from its behavior in a real environment. Usually involves additional code embedded in the software being tested, or additional processes running concurrently with the software being tested on the same platform.

IV&V: Independent verification and validation is the verification and validation of a software product by an organization that is both technically and managerially separate from the organization responsible for developing the product.

Life Cycle: The period that starts when a software product is conceived and ends when the product is no longer available for use. The software life cycle typically includes a requirements phase, design phase, implementation (code) phase, test phase, installation and checkout phase, operation and maintenance phase, and a retirement phase.

Manual Testing: That part of software testing that requires operator input, analysis, or evaluation.

Mean: A value derived by adding several quantities and dividing the sum by the number of these quantities.

Measure: To ascertain or appraise by comparing to a standard; to apply a metric.

Measurement: 1) The act or process of measuring. 2) A figure, extent, or amount obtained by measuring.

Metric: A measure of the extent or degree to which a product possesses and exhibits a certain quality, property, or attribute.

Mutation Testing: A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants of the program.

Nonintrusive Testing: Testing that is transparent to the software under test; i.e., testing that does not change the timing or processing characteristics of the software under test from its behavior in a real environment. Usually involves additional hardware that collects timing or processing information and processes that information on another platform.

Operational Requirements: Qualitative and quantitative parameters that specify the desired operational capabilities of a system and serve as a basis for determining the operational effectiveness and suitability of a system prior to deployment.

Operational Testing: Testing performed by the end user on software in its normal operating environment.

Outputs: Products, services, or information supplied to meet end user needs.

Path Analysis: Program analysis performed to identify all possible paths through a program, to detect incomplete paths, or to discover portions of the program that are not on any path.

Path Coverage Testing: A test method satisfying coverage criteria that each logical path through the program is tested. Paths
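Mutation testing, defined above, can be sketched in a few lines: a slight variant (mutant) of a program is created, and a test set is judged thorough only if some test distinguishes the mutant from the original. The predicate and mutant below are invented examples, not taken from this glossary:

```python
def in_range(x):
    # Original predicate: accepts 0..10 inclusive.
    return 0 <= x <= 10

def in_range_mutant(x):
    # Slight variant: "<=" mutated to "<", silently excluding the boundary 10.
    return 0 <= x < 10

def kills_mutant(test_set):
    """A test set 'kills' the mutant if some input makes the mutant's
    output differ from the original program's output."""
    return any(in_range(x) != in_range_mutant(x) for x in test_set)

weak_tests = [0, 5, 11]            # mutant survives: outputs agree on all three
strong_tests = weak_tests + [10]   # the boundary input kills the mutant
```

Note how the surviving mutant points back at boundary value analysis: only the boundary input 10 discriminates the variant, so a test set without it is measurably less thorough.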
through the program often are grouped into a finite set of classes; one path from each class is tested.

Peer Reviews: A methodical examination of software work products by the producer's peers to identify defects and areas where changes are needed.

Policy: Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

Problem: Any deviation from defined standards. Same as defect.

Procedure: The step-by-step method followed to ensure that standards are met.

Process: The work effort that produces a product. This includes efforts of people and equipment guided by policies, standards, and procedures.

Process Improvement: To change a process to make the process produce a given product faster, more economically, or of higher quality. Such changes may require the product to be changed. The defect rate must be maintained or reduced.

Product: The output of a process; the work product. There are three useful classes of products: manufactured products (standard and custom), administrative/information products (invoices, letters, etc.), and service products (physical, intellectual, physiological, and psychological). Products are defined by a statement of requirements; they are produced by one or more people working in a process.

Product Improvement: To change the statement of requirements that defines a product to make the product more satisfying and attractive to the end user (more competitive). Such changes may add to or delete from the list of attributes and/or the list of functions defining a product. Such changes frequently require the process to be changed. NOTE: This process could result in a totally new product.

Productivity: The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process to the value of the input resources required (using fair market values for both input and output).

Proof Checker: A program that checks formal proofs of program properties for logical correctness.

Prototyping: Evaluating requirements or designs at the conceptualization phase, the requirements analysis phase, or the design phase by quickly building scaled-down components of the intended system to obtain rapid feedback of analysis and design decisions.

Qualification Testing: Formal testing, usually conducted by the developer for the end user, to demonstrate that the software meets its specified requirements.

Quality: A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. NOTE: Operationally, the word quality refers to products.

Quality Assurance (QA): The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved in order to produce products that meet specifications and are fit for use.

Quality Control (QC): The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

Quality Improvement: To change a production process so that the rate at which defective products (defects) are produced is reduced. Some process changes may require the product to be changed.

Random Testing: An essentially black-box testing approach in which a program is tested by randomly choosing a subset of all possible input values. The distribution may be arbitrary or may attempt to accurately reflect the distribution of inputs in the application environment.

Regression Testing: Selective retesting to detect faults introduced during modification of a system or system component, to verify that modifications have not caused unintended adverse effects, or to verify that a modified system or system component still meets its specified requirements.

Reliability: The probability of failure-free operation for a specified period.

Requirement: A formal statement of: 1) an attribute to be possessed by the product or a function to be performed by the product; 2) the performance standard for the attribute or function; or 3) the measuring process to be used in verifying that the standard has been met.

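Random testing, defined above, can be sketched as choosing a random subset of the input domain and checking each actual result against properties the output must satisfy. The unit under test, the input distribution, and the checked properties below are all illustrative assumptions:

```python
import random

def square(x):
    # Hypothetical unit under test.
    return x * x

def random_test(trials=100, seed=42):
    """Random testing: draw inputs at random from an arbitrary
    distribution and check each actual output against two expected
    properties: a square is never negative, and square(-x) == square(x).
    The seed is fixed so the randomly chosen subset is repeatable."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        x = rng.randint(-1000, 1000)   # arbitrary input distribution
        actual = square(x)
        if actual < 0 or square(-x) != actual:
            failures.append(x)
    return failures

failures = random_test()   # empty list means no property violations found
```

A uniform distribution is used here for simplicity; as the definition notes, a distribution reflecting actual operational inputs is often preferable.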
Review: A way to use the diversity and power of a group of people to point out needed improvements in a product or confirm those parts of a product in which improvement is either not desired or not needed. A review is a general work product evaluation technique that includes desk checking, walkthroughs, technical reviews, peer reviews, formal reviews, and inspections.

Run Chart: A graph of data points in chronological order used to illustrate trends or cycles of the characteristic being measured, for the purpose of suggesting an assignable cause rather than random variation.

Scatter Plot (correlation diagram): A graph designed to show whether there is a relationship between two changing factors.

Semantics: 1) The relationship of characters or a group of characters to their meanings, independent of the manner of their interpretation and use. 2) The relationships between symbols and their meanings.

Software Characteristic: An inherent, possibly accidental, trait, quality, or property of software (for example, functionality, performance, attributes, design constraints, number of states, lines of branches).

Software Feature: A software characteristic specified or implied by requirements documentation (for example, functionality, performance, attributes, or design constraints).

Software Tool: A computer program used to help develop, test, analyze, or maintain another computer program or its documentation; e.g., automated design tools, compilers, test tools, and maintenance tools.

Standards: The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Standardize: Procedures are implemented to ensure that the output of a process is maintained at a desired level.

Statement Coverage Testing: A test method satisfying coverage criteria that requires each statement be executed at least once.

Statement of Requirements: The exhaustive list of requirements that define a product. NOTE: The statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirements determination process.

Static Testing: Verification performed without executing the system's code. Also called static analysis.

Statistical Process Control: The use of statistical techniques and tools to measure an ongoing process for change or stability.

Structural Coverage: This requires that each pair of module invocations be executed at least once.

Structural Testing: A testing method where the test data is derived solely from the program structure.

Stub: A software component that usually minimally simulates the actions of called components that have not yet been integrated during top-down testing.

Supplier: An individual or organization that supplies inputs needed to generate a product, service, or information to an end user.

Syntax: 1) The relationship among characters or groups of characters, independent of their meanings or the manner of their interpretation and use; 2) the structure of expressions in a language; and 3) the rules governing the structure of the language.

System: A collection of people, machines, and methods organized to accomplish a set of specified functions.

System Simulation: Another name for prototyping.

System Testing: The process of testing an integrated hardware and software system to verify that the system meets its specified requirements.

Technical Review: A review that refers to the content of the technical material being reviewed.

Test Bed: 1) An environment that contains the integral hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test of a logically or physically separate component. 2) A suite of test programs used in conducting the test of a component or system.

Test Case: The definition of test case differs from company to company, engineer to engineer, and even project to project. A test case usually includes an identified set of information about observable states, conditions, events, and data, including inputs and expected outputs.

Test Development: The development of anything required to conduct testing. This may include test requirements (objectives),

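A stub, as defined above, minimally simulates a called component that has not yet been integrated, so that a higher-level component can be tested top-down. The report module and database layer below are hypothetical names invented for illustration:

```python
class DatabaseStub:
    """Stub: minimally simulates a called component (a database layer)
    that has not yet been integrated during top-down testing."""
    def fetch_open_defects(self):
        # Canned response returned in place of a real query.
        return [{"id": 1, "severity": "high"}]

def summarize_defects(db):
    # High-level component under test. In production, 'db' would be the
    # real database layer; the stub stands in for it here.
    defects = db.fetch_open_defects()
    return f"{len(defects)} open defect(s)"

summary = summarize_defects(DatabaseStub())
```

This is the mirror image of the test driver used in bottom-up testing: a driver calls components below it, while a stub answers calls from components above it.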
strategies, processes, plans, software, procedures, cases, documentation, etc.

Test Executive: Another term for test harness.

Test Harness: A software tool that enables the testing of software components. It links test capabilities to perform specific tests: accepting program inputs, simulating missing components, comparing actual outputs with expected outputs to determine correctness, and reporting discrepancies.

Test Objective: An identified set of software features to be measured under specified conditions by comparing actual behavior with the required behavior described in the software documentation.

Test Plan: A formal or informal plan to be followed to assure the controlled testing of the product under test.

Test Procedure: The formal or informal procedure that will be followed to execute a test. This is usually a written document that allows others to execute the test with a minimum of training.

Testing: Any activity aimed at evaluating an attribute or capability of a program or system to determine that it meets its required results. The process of exercising or evaluating a system or system component by manual or automated means to verify that it satisfies specified requirements or to identify differences between expected and actual results.

Top-down Testing: An integration testing technique that tests the high-level components first, using stubs for lower-level called components that have not yet been integrated and that simulate the required actions of those components.

Unit Testing: The testing done to show whether a unit (the smallest piece of software that can be independently compiled or assembled, loaded, and tested) satisfies its functional specification, or whether its implemented structure matches the intended design.

User: The end user that actually uses the product received.

Validation: The process of evaluating software to determine compliance with specified requirements.

Verification: The process of evaluating the products of a given software development activity to determine correctness and consistency with respect to the products and standards provided as input to that activity.

Walkthrough: Usually, a step-by-step simulation of the execution of a procedure, as when walking through code, line by line, with an imagined set of inputs. The term has been extended to the review of material that is not procedural, such as data descriptions, reference manuals, specifications, etc.

White-box Testing: Testing approaches that examine the program structure and derive test data from the program logic.
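Several of the terms above (test case, test harness, expected versus actual outputs) fit together in a minimal sketch. The unit under test and the test-case format below are invented for illustration, under the assumption that a case is simply an input paired with its expected output:

```python
def unit_under_test(x):
    # Hypothetical unit: absolute value.
    return x if x >= 0 else -x

def run_harness(cases):
    """Minimal test harness: accepts program inputs, compares actual
    outputs with expected outputs to determine correctness, and
    reports discrepancies as (input, expected, actual) triples."""
    discrepancies = []
    for inp, expected in cases:
        actual = unit_under_test(inp)
        if actual != expected:
            discrepancies.append((inp, expected, actual))
    return discrepancies

# Test cases: identified inputs with expected outputs.
cases = [(3, 3), (-4, 4), (0, 0)]
report = run_harness(cases)   # an empty list means no discrepancies
```

A fuller harness would also simulate missing components (stubs) and drive component setup, but the compare-and-report loop shown here is its core.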