Chapter 1: Introduction to Testing as an Engineering Activity

Test specialist: A test specialist who is trained as an engineer should have knowledge of test-related principles, processes, measurements, standards, plans, tools, and methods, and should learn how to apply them to the testing tasks to be performed.

1.1 The Role of Process in Software Quality

Process, in the software engineering domain, is the set of methods, practices, standards, documents, activities, policies, and procedures that software engineers use to develop and maintain a software system and its associated artifacts, such as project and test plans, design documents, code, and manuals.

1.2 Testing as a Process

Testing is related to two other processes called verification and validation, as shown in Figure 1.3. Validation is the process of evaluating a software system or component during, or at the end of, the development cycle in order to determine whether it satisfies specified requirements [11]. Validation is usually associated with traditional execution-based testing, that is, exercising the code with test cases. Verification is the process of evaluating a software system or component to determine whether the products of a given development phase satisfy the conditions imposed at the start of that phase [11]. Testing itself has been defined in several ways. Two definitions are shown below.
1. Testing is generally described as a group of procedures carried out to evaluate some aspect of a piece of software.
2. Testing can be described as a process used for revealing defects in software, and for establishing that the software has attained a specified degree of quality with respect to selected attributes.

Also note that testing and debugging, or fault localization, are two very different activities. The debugging process begins after testing has been carried out and the tester has noted that the software is not behaving as specified.

Debugging, or fault localization is the process of (1) locating the fault or defect, (2) repairing the code, and (3) retesting the code.

1.3 An Overview of the Testing Maturity Model

The TMM is based on the Capability Maturity Model for software (CMM), a process improvement model that has received widespread support from the software industry in the United States [8]. Other process improvement models have a continuous type of architecture, for example, the SPICE model; in this type of architecture there is no fixed set of levels or stages to proceed through. The CMM, in contrast, has five levels or stages that describe an evolutionary pattern of software process maturity and serve as a guide for improvement. Each level has a set of Key Process Areas (KPAs) that an organization needs to focus on to achieve maturity at that level. There are also key practices associated with each level that provide support for implementing improvements at that level. The CMM also has an assessment procedure that allows an organization to evaluate the current state of its software process and identify process strengths and weaknesses.
1.3.1 TMM Levels

There are five levels in the TMM that prescribe a maturity hierarchy and an evolutionary path to test process improvement. Each level, with the exception of level 1, has a structure that consists of the following:

- A set of maturity goals. The TMM levels and associated maturity goals are shown in Figure 1.5.
- Supporting maturity subgoals.
- Activities, tasks, and responsibilities (ATRs).

Level 1 - Initial (no maturity goals): A documented set of specifications for software behavior often does not exist. Tests are developed in an ad hoc way after coding is completed. There is a lack of resources, tools, and properly trained staff.

Level 2 - Phase Definition (Goal 1: Develop testing and debugging goals; Goal 2: Initiate a test planning process; Goal 3: Institutionalize basic testing techniques and methods).

Level 3 - Integration (Goal 1: Establish a software test organization; Goal 2: Establish a technical training program; Goal 3: Integrate testing into the software life cycle; Goal 4: Control and monitor testing): Unlike level 2, planning for testing at TMM level 3 begins at the requirements phase and continues throughout the life cycle, supported by a version of the V-model.

Level 4 - Management and Measurement (Goal 1: Establish an organization-wide review program; Goal 2: Establish a test measurement program; Goal 3: Software quality evaluation): An extension of the V-model, as shown in Figure 1.6, can be used to support the implementation of this goal [6,7]. Software products are tested for quality attributes such as reliability, usability, and maintainability. Test cases from all projects are collected and recorded in a test case database for the purpose of test case reuse and regression testing. Defects are logged and given a severity level.

Level 5 - Optimization/Defect Prevention/Quality Control (Goal 1: Defect prevention; Goal 2: Quality control; Goal 3: Test process optimization): At level 5, mechanisms are in place so that testing can be fine-tuned and continuously improved. Defect prevention and quality control are practiced. Statistical sampling, measurements of confidence levels, trustworthiness, and reliability drive the testing process. Automated tools fully support the running and rerunning of test cases. Tools also provide support for test case design, maintenance of test-related items, and defect collection and analysis.

Chapter 2: Testing Fundamentals


Errors
An error is a mistake, misconception, or misunderstanding on the part of a software developer.

Faults (Defects)

A fault (defect) is introduced into the software as the result of an error. It is an anomaly in the software that may cause it to behave incorrectly, that is, not according to its specification.

Failures

A failure is the inability of a software system or component to perform its required functions within specified performance requirements [2].
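
As a minimal sketch of how these three concepts connect (the code and names here are illustrative, not from the text): a developer's error introduces a fault into the code, and the fault shows up as a failure only when the code is executed. In Python:

    # Illustrative only. The developer's *error*: believing range(1, n) includes n.
    def sum_up_to(n):
        """Intended to return 1 + 2 + ... + n."""
        total = 0
        for i in range(1, n):  # the *fault* (defect): should be range(1, n + 1)
            total += i
        return total

    # The *failure* is observed only when the faulty code is executed:
    result = sum_up_to(3)
    assert result == 6, f"failure: expected 6, got {result}"  # raises AssertionError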

Test Cases

A test case in a practical sense is a test-related item which contains the following information:

1. A set of test inputs. These are data items received from an external source by the code under test. The external source can be hardware, software, or human.
2. Execution conditions. These are conditions required for running the test, for example, a certain state of a database, or a configuration of a hardware device.
3. Expected outputs. These are the specified results to be produced by the code under test.
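
As a minimal sketch (the field names and values are illustrative, not from the text), a test case can be represented as a simple record holding these three pieces of information:

    from dataclasses import dataclass

    @dataclass
    class TestCase:
        """Illustrative record for the three parts of a test case."""
        test_inputs: dict            # data items received from an external source
        execution_conditions: dict   # e.g., a required database state or device configuration
        expected_outputs: dict       # specified results the code under test should produce

    # Hypothetical test case for a banking component's withdraw operation:
    tc1 = TestCase(
        test_inputs={"account_id": 42, "amount": 100},
        execution_conditions={"database": "account 42 exists with balance 500"},
        expected_outputs={"new_balance": 400},
    )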

Test

A test is a group of related test cases, or a group of related test cases and test procedures (steps needed to carry out a test, as described in Chapter 7).
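
Continuing the sketch above (reusing the illustrative TestCase record and tc1), a test is then simply a group of related test cases:

    # A test: a group of related test cases for the same withdraw operation.
    withdraw_test = [
        tc1,
        TestCase(
            test_inputs={"account_id": 42, "amount": 600},
            execution_conditions={"database": "account 42 exists with balance 500"},
            expected_outputs={"error": "insufficient funds"},
        ),
    ]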

Test Oracle

A test oracle is a document or piece of software that allows testers to determine whether a test has been passed or failed.
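
When the oracle is a piece of software, it can be as simple as a comparison against the test case's expected outputs. A minimal sketch (the function names are illustrative):

    def oracle(actual_output, expected_outputs):
        """Illustrative software oracle: decide pass/fail by comparing the
        actual output of the code under test with the expected outputs."""
        return "pass" if actual_output == expected_outputs else "fail"

    # Hypothetical usage with tc1 from the test case sketch above:
    # actual = run_code_under_test(tc1.test_inputs)   # hypothetical test runner
    # print(oracle(actual, tc1.expected_outputs))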

Test Bed

A test bed is an environment that contains all the hardware and software needed to test a software component or a software system.

Software Quality

Two concise definitions for quality are found in the IEEE Standard Glossary of Software Engineering Terminology [2]:

1. Quality relates to the degree to which a system, system component, or process meets specified requirements.
2. Quality relates to the degree to which a system, system component, or process meets customer or user needs or expectations.

To evaluate quality, we identify quality attributes: measurable characteristics of the software that reflect its quality.

A metric is a quantitative measure of the degree to which a system, system component, or process possesses a given attribute [2].

Quality metrics are a special kind of metric.

A quality metric is a quantitative measurement of the degree to which an item possesses a given quality attribute [2].
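
As an illustration (defect density is a commonly used metric, though not one drawn from this text), a quality metric for the correctness attribute can be computed directly:

    def defect_density(defects_found, kloc):
        """Illustrative quality metric: defects per thousand lines of code (KLOC)."""
        return defects_found / kloc

    # Hypothetical measurement: 30 defects found in a 12,000-line component.
    print(defect_density(30, 12.0))  # -> 2.5 defects per KLOC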

Some examples of quality attributes with brief explanations are the following:

- correctness: the degree to which the system performs its intended function
- reliability: the degree to which the software is expected to perform its required functions under stated conditions for a stated period of time
- usability: relates to the degree of effort needed to learn, operate, prepare input for, and interpret output of the software
- integrity: relates to the system's ability to withstand both intentional and accidental attacks
- portability: relates to the ability of the software to be transferred from one environment to another
- maintainability: the effort needed to make changes in the software
- interoperability: the effort needed to link or couple one system to another
- testability: can be expressed in two ways: (1) the amount of effort needed to test the software to ensure it performs according to specified requirements (relates to the number of test cases needed), and (2) the ability of the software to reveal defects under testing conditions (some software is designed in such a way that defects are well hidden during ordinary testing conditions)
