Generic characteristics of software testing strategies:
• To perform effective testing, you should conduct effective technical reviews . By doing this, many errors will
be eliminated before testing commences.
• Testing begins at the component level and works “outward” toward the integration of the entire computer-
based system.
• Different testing techniques are appropriate for different software engineering approaches and at different
points in time.
• Testing is conducted by the developer of the software and (for large projects) an independent test group.
• Testing and debugging are different activities, but debugging must be accommodated in any testing
strategy.
• Verification refers to the set of tasks that ensure that software correctly implements a specific function.
• Validation refers to a different set of tasks that ensure that the software that has been built is traceable to
customer requirements.
• Boehm states this another way:
• Verification: “Are we building the product right?”
• Validation: “Are we building the right product?”
• The software developer is always responsible for testing the individual units (components) of the program, ensuring that each performs its intended function.
• In many cases, the developer also conducts integration testing—a testing step that leads to the construction
(and test) of the complete software architecture.
• The role of an independent test group (ITG) is to remove the inherent problems associated with letting the builder test the thing that has been built.
• Independent testing removes the conflict of interest that may otherwise be present. After all, ITG personnel
are paid to find errors.
• The developer and the ITG work closely throughout a software project to ensure that thorough tests will be
conducted. While testing is conducted, the developer must be available to correct errors that are uncovered.
• The ITG is part of the software development project team in the sense that it becomes involved during
analysis and design and stays involved (planning and specifying test procedures) throughout a large project.
• However, in many cases the ITG reports to the software quality assurance organization.
• Initially system engineering defines the role of software and leads to software requirements analysis, where
the information domain, function, behaviour, performance, constraints, and validation criteria for software
are established.
• Moving inward along the spiral, you come to design and finally to coding.
• To develop computer software, you spiral inward (counterclockwise) along streamlines that decrease the
level of abstraction on each turn.
• Unit testing begins at the vortex of the spiral and concentrates on each unit (e.g., component, class and
object) of the software as implemented in source code.
• integration testing, where the focus is on design and the construction of the software architecture.
• validation testing, where requirements established as part of requirements modeling are validated against
the software that has been constructed.
• system testing, where the software and other system elements are tested as a whole.
• To test computer software, you spiral out in a clockwise direction along streamlines that broaden the scope
of testing with each turn.
• Considering the process from a procedural point of view, testing within the context of software engineering is actually a series of four steps that are implemented sequentially. The steps are shown in Figure 17.2. Initially, tests focus on each component individually.
• Unit testing makes heavy use of testing techniques that exercise specific paths in a component’s control
structure to ensure complete coverage and maximum error detection.
• Next, components must be assembled or integrated to form the complete software package. Integration
testing addresses the issues associated with the dual problems of verification and program construction.
• After the software has been integrated (constructed), a set of high-order tests is conducted. Validation
criteria (established during requirements analysis) must be evaluated.
• Validation testing provides final assurance that software meets all informational, functional, behavioral, and
performance requirements.
• The last high-order testing step falls outside the boundary of software engineering and into the broader
context of computer system engineering.
• Software, once validated, must be combined with other system elements (e.g., hardware, people,
databases). System function/performance is achieved.
• Unit testing focuses verification effort on the smallest unit of software design—the software component or
module.
• The unit test focuses on the internal processing logic and data structures within the boundaries of a
component.
• This type of testing can be conducted in parallel for multiple components.
• The module interface is tested to ensure that information properly flows into and out of the program.
• Local data structures are examined to ensure that data stored temporarily maintains its integrity during all
steps in an execution.
• All independent paths are exercised to ensure that all statements in a module have been executed at least once.
• Boundary conditions are tested to ensure that the module operates properly at boundaries established to
limit or restrict processing.
• Finally, all error-handling paths are tested.
• A good design anticipates error conditions and establishes error-handling paths to reroute or cleanly terminate processing when an error does occur. This approach is sometimes called antibugging.
• Unit-test procedure : The design of unit tests can occur before coding begins or after source code has been
generated.
• Each test case should be coupled with a set of expected results.
• Because a component is not a stand-alone program, driver and/or stub software must often be developed
for each unit test.
• The unit test environment is illustrated in Figure 17.4.
• In most applications a driver is nothing more than a “main program” that accepts test case data, passes such
data to the component (to be tested), and prints relevant results.
• Stubs serve to replace modules that are subordinate to (invoked by) the component to be tested.
• A stub or “dummy subprogram” uses the subordinate module’s interface, may do minimal data
manipulation, prints verification of entry, and returns control to the module undergoing testing.
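The driver/stub arrangement described above can be sketched in Python; the component, driver, and stub names here are hypothetical, not drawn from any particular example:

```python
# Minimal unit-test environment sketch (all names are hypothetical).
# The component under test normally invokes a real subordinate module;
# here a stub stands in for it.

def format_alert(zone):
    """Stub: prints verification of entry, does minimal data
    manipulation, and returns control to the caller."""
    print(f"stub format_alert entered for zone {zone}")
    return f"ALERT:{zone}"

def process_sensor(zone, reading, threshold=50):
    """Component under test: raises an alert when the reading
    exceeds the threshold; invokes the (stubbed) subordinate."""
    if reading > threshold:
        return format_alert(zone)
    return "OK"

def driver():
    """Driver: a 'main program' that accepts test-case data, passes it
    to the component, and prints relevant results."""
    for zone, reading, expected in [(1, 80, "ALERT:1"), (2, 10, "OK")]:
        result = process_sensor(zone, reading)
        print(zone, reading, result, result == expected)
        assert result == expected

driver()
```

Each test case in the driver is coupled with its expected result, as the procedure above recommends.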
• Integration testing is a systematic technique for constructing the software architecture while at the same
time conducting tests to uncover errors associated with interfacing.
• One option is a “big bang” approach: all components are combined in advance and the entire program is tested as a whole. Chaos usually results. Instead, the program should be constructed and tested in small increments, where errors are easier to isolate and correct.
• Top-down integration: Top-down integration testing is an incremental approach to construction of the software architecture. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program). Modules subordinate (and ultimately subordinate) to the main control module are incorporated into the structure in either a depth-first or breadth-first manner.
• Referring to Figure 17.5, depth-first integration integrates all components on a major control path of the
program structure.
• For example, selecting the left-hand path, components M1, M2, and M5 would be integrated first. Next, M8 or M6 would be integrated.
• Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. From the figure, components M2, M3, and M4 would be integrated first. The next control level, M5, M6, and so on, follows.
• The integration process is performed in a series of five steps:
• 1. The main control module is used as a test driver and stubs are substituted for all components directly
subordinate to the main control module.
• 2. Depending on the integration approach selected (i.e., depth first or breadth first), subordinate stubs are replaced one at a time with actual components.
• 3. Tests are conducted as each component is integrated.
• 4. On completion of each set of tests, another stub is replaced with the real component.
• 5. Regression testing may be conducted to ensure that new errors have not been introduced.
• Bottom-up integration: Bottom-up integration testing begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, the functionality provided by components subordinate to a given level is always available.
• A bottom-up integration strategy may be implemented with the following steps:
• 1. Low-level components are combined into clusters (sometimes called builds) that perform a specific software subfunction.
• 2. A driver (a control program for testing) is written to coordinate test case input and output.
• 3. The cluster is tested.
• 4. Drivers are removed and clusters are combined moving upward in the program structure.
• Integration follows the pattern illustrated in Figure 17.6. Components are combined to form clusters 1, 2,
and 3. Each of the clusters is tested using a driver (shown as a dashed block).
• Components in clusters 1 and 2 are subordinate to Ma. Drivers D1 and D2 are removed and the clusters are
interfaced directly to Ma.
• Similarly, driver D3 for cluster 3 is removed prior to integration with module Mb. Both Ma and Mb will
ultimately be integrated with component Mc, and so forth.
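A minimal sketch of the bottom-up steps, with hypothetical atomic modules, a cluster, and a driver that is later removed when the cluster is interfaced to the next level up:

```python
# Hypothetical bottom-up sketch: atomic modules form a cluster, a
# driver exercises the cluster, then the driver is removed and the
# cluster is attached to the next control level (Ma).

def parse(raw):           # atomic module
    return raw.strip()

def check(val):           # atomic module
    return val.isdigit()

def cluster(raw):         # build: parse + check combined
    return check(parse(raw))

def driver_d1():          # driver: control program for testing the cluster
    assert cluster(" 42 ") is True
    assert cluster("abc") is False

driver_d1()               # step 3: the cluster is tested in isolation

def ma(raw):              # step 4: driver removed, cluster interfaced to Ma
    return "valid" if cluster(raw) else "invalid"

assert ma(" 7 ") == "valid"
```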
• Regression testing : Each time a new module is added as part of integration testing, the software changes.
New data flow paths are established, new I/O may occur, and new control logic is invoked.
• Regression testing is the reexecution of some subset of tests that have already been conducted.
• Regression testing helps to ensure that changes do not introduce unintended behaviour or additional errors.
• Regression testing may be conducted manually, by reexecuting a subset of all test cases or using automated
capture/playback tools.
• Capture/playback tools enable the software engineer to capture test cases and results for subsequent
playback and comparison.
• The regression test suite contains three different classes of test cases:
• A representative sample of tests that will exercise all software functions.
• Additional tests that focus on software functions that are likely to be affected by the change.
• Tests that focus on the software components that have been changed.
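One way to sketch the three classes of regression test cases is as a tagged suite from which a subset is selected after a change; the test names and component names below are invented for illustration:

```python
# Hypothetical regression suite tagged with the three classes of cases:
# representative tests, tests for likely-affected functions, and tests
# for the changed components themselves.

suite = [
    {"name": "t1", "kind": "representative", "touches": {"auth", "report"}},
    {"name": "t2", "kind": "affected",       "touches": {"report"}},
    {"name": "t3", "kind": "changed",        "touches": {"billing"}},
]

def select_regression(changed):
    """Rerun all representative tests plus any test whose touched
    components intersect the set of changed components."""
    return [t["name"] for t in suite
            if t["kind"] == "representative" or t["touches"] & changed]

print(select_regression({"report"}))   # ['t1', 't2']
```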
• Smoke testing : Smoke testing is an integration testing approach that is commonly used when product
software is developed.
• It is designed as a pacing mechanism for time-critical projects, allowing the software team to assess the
project on a frequent basis.
• The smoke-testing approach encompasses the following activities:
• 1. Software components that have been translated into code are integrated into a build. A build includes all data files, libraries, reusable modules, and engineered components that are required to implement one or more product functions.
• 2. A series of tests is designed to expose errors that will keep the build from properly performing its function.
• 3. The build is integrated with other builds, and the entire product (in its current form) is smoke tested daily.
The integration approach may be top down or bottom up.
Strategic options
Critical modules should be tested as early as possible. In addition, regression tests should focus on critical-module function.
• An overall plan for integration of the software and a description of specific tests is documented in a Test Specification.
• This test plan and test procedure becomes part of the software configuration.
• Testing is divided into phases and builds that address specific functional and behavioural characteristics of
the software.
• Example: SafeHome test phases
• User interaction
• Sensor processing
• Communications functions
• Alarm processing
• The following criteria and corresponding tests are applied for all test phases:
• Interface integrity: Internal and external interfaces are tested as each module (or cluster) is incorporated into the structure.
• Functional validity: Tests designed to uncover functional errors are conducted.
• Information content: Tests designed to uncover errors associated with local or global data structures are
conducted.
• Performance: Tests designed to verify performance bounds established during software design are
conducted.
• Validation testing begins at the culmination (final stage) of integration testing, when individual components
have been exercised, the software is completely assembled as a package, and interfacing errors have been
uncovered and corrected.
• The requirements specification contains a Validation Criteria section that forms the basis for a validation-testing approach:
• 1. Validation-testing criteria
• 2. Configuration review
• 3. Alpha and beta testing
• 1. Validation-Testing Criteria
• Software validation is achieved through a series of tests that demonstrate conformity with requirements.
• A test plan outlines the classes of tests to be conducted, and a test procedure defines specific test cases that
are designed to ensure that all functional requirements are satisfied, all behavioral characteristics are
achieved, all content is accurate and properly presented, all performance requirements are attained,
documentation is correct, and usability and other requirements are met (e.g., transportability, compatibility,
error recovery, maintainability).
• After each validation test case has been conducted, one of two possible conditions exists:
(1) The function or performance characteristic conforms to specification and is accepted
(2) a deviation from specification is uncovered and a deficiency list (a list of what is missing or incorrect) is created.
2. Configuration Review
• The intent of the review is to ensure that all elements of the software configuration have been properly developed.
• The configuration review is sometimes called an audit.
3. Alpha and Beta Testing
• The alpha test is conducted at the developer’s site by a representative group of end users.
• The software is used in a natural setting with the developer “looking over the shoulder” of the users and
recording errors and usage problems.
• Alpha tests are conducted in a controlled environment.
• The beta test is conducted at one or more end-user sites. Unlike alpha testing, the developer generally is not
present.
• Therefore, the beta test is a “live” application of the software in an environment that cannot be controlled
by the developer.
• The customer records all problems that are encountered during beta testing and reports these to the
developer at regular intervals.
• A variation on beta testing, called customer acceptance testing, is sometimes performed when custom
software is delivered to a customer under contract.
System Testing
• software is incorporated with other system elements (e.g., hardware, people, information), and a series of
system integration and validation tests are conducted.
• A classic system-testing problem is “finger pointing.” This occurs when an error is uncovered, and the
developers of different system elements blame each other for the problem.
• Rather than pointing fingers, the team should anticipate potential interfacing problems and:
(1) design error-handling paths that test all information coming from other elements of the system,
(2) conduct a series of tests that simulate bad data or other potential errors at the software interface,
(3) record the results of tests to use as “evidence” if finger pointing does occur, and
(4) participate in planning and design of system tests to ensure that software is adequately tested.
• Recovery Testing
• Security Testing
• Stress Testing
• Performance Testing
• Deployment Testing
Recovery Testing : Many computer-based systems must recover from faults and resume processing with
little or no downtime.
• A system must be fault tolerant; that is, processing faults must not cause overall system function to cease.
• In other cases, a system failure must be corrected within a specified period of time or severe economic damage will occur.
• Recovery testing is a system test that forces the software to fail in a variety of ways and verifies that recovery is properly performed.
• Security testing: Security testing attempts to verify that protection mechanisms built into a system will, in fact, protect it from improper penetration.
• Example: attempted break-ins by hackers.
• Stress Testing :Stress testing executes a system in a manner that demands resources in abnormal quantity,
frequency, or volume.
• For example,
• (1) special tests may be designed that generate ten interrupts per second.
• (2) input data rates may be increased by an order of magnitude to determine how input functions will
respond
• (3) test cases that require maximum memory or other resources are executed
• (4) test cases that may cause thrashing in a virtual operating system are designed
• Deployment Testing
• Software must execute on a variety of platforms and under more than one operating system environment. Deployment testing, sometimes called configuration testing, exercises the software in each environment in which it is to operate.
• As an example, the SafeHome WebApp must be tested using all popular Web browsers on various operating systems (e.g., Linux, Mac OS, Windows).
• White-Box Testing
• White-box testing, sometimes called glass-box testing, is a test-case design philosophy that uses the control structure described as part of component-level design to derive test cases.
• Using white-box testing methods, you can derive test cases that
(1) guarantee that all independent paths within a module have been exercised at least once
(2) exercise all logical decisions on their true and false sides
(3) Execute all loops at their boundaries and within their operational bounds
(4) Exercise internal data structures to ensure their validity.
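A small illustration of these goals in Python; the function and its inputs are hypothetical, chosen so that each decision is exercised on both sides and the loop runs at its boundaries:

```python
# White-box test cases for a small function: exercise both sides of the
# decision and run the loop zero, one, and many times.

def max_of(values, floor=0):
    """Return the largest value, but never below `floor`."""
    best = floor
    for v in values:          # loop: test with 0, 1, and many elements
        if v > best:          # decision: exercise true and false sides
            best = v
    return best

assert max_of([]) == 0            # loop executed zero times
assert max_of([5]) == 5           # exactly one iteration, decision true
assert max_of([3, 1]) == 3        # decision true, then false
assert max_of([-2, -1]) == 0      # decision false on every iteration
```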
• The symbolic representation of a graph is shown in Figure 18.8a. Nodes are represented as circles connected
by links that take a number of different forms.
• A directed link (represented by an arrow) indicates that a relationship moves in only one direction. A
bidirectional link, also called a symmetric link, implies that the relationship applies in both directions. Parallel
links are used when a number of different relationships are established between graph nodes
• As a simple example, consider a portion of a graph for a word-processing application (Figure 18.8b) where:
• Object #1 = newFile (menu selection)
• Object #2 = documentWindow
• Object #3 = documentText
• Equivalence partitioning
• It divides the input domain of a program into classes of data from which test cases can be derived.
• An ideal test case single-handedly uncovers a complete class of errors; otherwise, many test cases must be executed before the general error is observed.
• Typically, an input condition is either a specific numeric value, a range of values, a set of related values, or a
Boolean condition. Equivalence classes may be defined according to the following guidelines:
1. If an input condition specifies a range, one valid and two invalid equivalence classes are defined.
Ex : range 10-20
<10 :not valid
>20 :not valid
10-20 :valid
2. If an input condition requires a specific value, one valid and two invalid equivalence classes are defined.
Ex: value=18
<18 :not valid
>18 :not valid
=18: valid
3. If an input condition specifies a member of a set, one valid and one invalid equivalence class are defined.
Ex: value = 4, set = {1, 2, 3, 4}
A value in the set: valid
A value not in the set: not valid
4. If an input condition is Boolean, one valid and one invalid class are defined. By applying the guidelines for the
derivation of equivalence classes, test cases for each input domain data item can be developed and executed.
True :valid
False : not valid
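Guideline 1 (a range) can be illustrated with one representative test case per equivalence class; the check itself is a hypothetical input validator for the range 10–20:

```python
# Equivalence-partitioning sketch for the range guideline (range 10-20):
# one valid class and two invalid classes, one representative case each.

def accept(value, lo=10, hi=20):
    """Hypothetical input check for the inclusive range 10-20."""
    return lo <= value <= hi

# One test case per equivalence class represents the whole class.
assert accept(5) is False     # invalid class: value < 10
assert accept(15) is True     # valid class:   10 <= value <= 20
assert accept(25) is False    # invalid class: value > 20
```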
• In orthogonal array testing, the input domain can be viewed geometrically and data can be traversed in all three dimensions.
• THE ART OF DEBUGGING
• Software testing is a process that can be systematically planned and specified. Test-case design can be conducted, a strategy can be defined, and results can be evaluated against prescribed expectations.
• Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error.
• The Debugging Process
• The debugging process begins with the execution of a test case.
• Results are assessed and a lack of correspondence between expected and actual performance is
encountered.
• The debugging process attempts to match symptom with cause, thereby leading to error correction.
• The debugging process will usually have one of two outcomes:
(1) the cause will be found and corrected or
(2) the cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion.
• Debugging Strategies
• The objective is to find and correct the cause of a software error or defect.
• Three debugging strategies are commonly used: brute force, backtracking, and cause elimination.
• Each of these strategies can be conducted manually, but modern debugging tools can make the process much more effective.
• In cause elimination, a list of all possible causes is developed and tests are conducted to eliminate each.
• Automated debugging.
• Each of these debugging approaches can be supplemented with debugging tools that can provide you with
semiautomated support as debugging strategies are attempted.
• Integrated development environments (IDEs) provide a way to capture some of the language-specific predetermined errors (e.g., missing end-of-statement characters, undefined variables, and so on) without requiring compilation.
• Correcting the Error
• Once a bug has been found, it must be corrected.
• But, as we have already noted, the correction of a bug can introduce other errors.
• Van Vleck suggests three simple questions that you should ask before making the “correction” that removes
the cause of a bug:
• Is the cause of the bug reproduced in another part of the program?
• What "next bug" might be introduced by the fix I'm about to make?
• What could we have done to prevent this bug in the first place?
• Product metrics
• A key element of any engineering process is measurement.
• You can use measures to better understand the attributes of the models, e.g., measurements of functionality and performance.
• What Is Software Quality?
• Conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics (e.g., ease of use and reliable performance) that are expected of all professionally developed software.
The 11 factors are grouped into three categories – product operation, product revision, and product transition
factors.
A) Product Operation Software Quality Factors
• These factors deal with the requirements that directly affect the daily operation of the software.
1) Correctness: The extent to which a program satisfies its specification and fulfills the customer’s mission objectives.
2) Reliability: The extent to which a program can be expected to perform its intended function with required
precision.
• It deals with service failure of entire system or one or more of its functions.
• They determine the maximum allowed failure rate of the software system
3) Efficiency: The amount of computing resources (hardware and software) and code required by a program to perform its function.
• It includes processing capabilities (given in MHz), its storage capacity (given in MB or GB) and the data
communication capability (given in MBPS or GBPS).
• It also deals with the time between recharging of the system’s portable units
4) Integrity: The extent to which access to S/W or data by unauthorized persons can be controlled.
• It deals with system security and granting access permissions.
5) Usability: The effort required to learn, operate, prepare input for, and interpret output of a program and train the
new employees to use the software.
B) Product Revision Quality Factors
• These factors deal with the software’s ability to undergo change.
1) Maintainability: The effort required to locate and fix errors in a program and verify the success of the corrections.
2) Flexibility: The effort required to modify an operational program.
• This includes adapting the current software to additional circumstances and customers without changing the software.
• It also supports perfective maintenance activities, such as changes and additions to the software in order to improve its service and to adapt it to changes in the firm’s technical or commercial environment.
3) Testability: The effort required to test a program to ensure that it performs its intended function.
• It includes predefined intermediate results, log files, and also the automatic diagnostics performed by the
software system prior to starting the system
• to find out whether all components of the system are in working order and to obtain a report about the
detected faults.
C) Product Transition Software Quality Factors
• These factors deal with the adaptation of software to other environments and its interaction with other software systems.
1) Portability: The effort required to transfer the program from one hardware and/or software system
environment to another.
• It should be possible to continue using the same basic software in diverse situations.
2) Reusability: The extent to which a program can be reused in other applications; related to the packaging and scope of the functions that the program performs.
• It also covers the use of software modules originally designed for one project in a new software project currently being developed.
• The reuse of software is expected to save development resources, shorten the development period, and
provide higher quality modules.
3) Interoperability: The effort required to couple one system to another.
• It focuses on creating interfaces with other software systems or with other equipment firmware.
• For example, the firmware of the production machinery and testing equipment interfaces with the
production control software.
• ISO 9126
• ISO 9126 is an international standard for the evaluation of software quality.
• Functionality - A set of attributes that bear on the existence of a set of functions and their specified
properties. The functions are those that satisfy stated or implied needs.
– Suitability
– Accuracy
– Interoperability
– Compliance
– Security
• Reliability - A set of attributes that bear on the capability of software to maintain its level of performance
under stated conditions for a stated period of time.
– Maturity
– Fault Tolerance
– Recoverability
• Usability - A set of attributes that bear on the effort needed for use, and on the individual assessment of such
use, by a stated or implied set of users.
– Learnability
– Understandability
– Operability
• Efficiency - A set of attributes that bear on the relationship between the level of performance of the software
and the amount of resources used, under stated conditions.
– Time Behavior
– Resource Behavior
• Maintainability - A set of attributes that bear on the effort needed to make specified modifications.
– Stability
– Analyzability
– Changeability
– Testability
• Portability - A set of attributes that bear on the ability of software to be transferred from one environment to
another.
– Installability
– Replaceability
– Adaptability
• 23.1 A Framework for Product Metrics
• 23.1.1 Measures, Metrics, and Indicators
• 23.1.2 The Challenge of Product Metrics
• 23.1.3 Measurement Principles
• 23.1.4 Goal-Oriented Software Measurement
• 23.1.5 The Attributes of Effective Software Metrics
• Measures, Metrics and Indicators
• A measure provides a quantitative indication of the extent, amount, dimension, capacity, or size of some
attribute of a product or process
• The IEEE glossary defines a metric as “a quantitative measure of the degree to which a system, component, or
process possesses a given attribute.”
• An indicator is a metric or combination of metrics that provides insight into the software process, a software project, or the product itself.
• An indicator provides insight that enables the project manager or software engineers to adjust the process,
the project, or the product to make things better.
• The Challenge of Product Metrics
• A single metric cannot provide a comprehensive measure of software complexity.
• Many researchers have proposed product metrics for software, but each metric addresses different attributes of a system.
• Measurement Principles
• A goal-oriented approach to product metrics:
(1) establish an explicit measurement goal that is specific to the process activity or product characteristic that is to be assessed,
(2) define a set of questions that must be answered in order to achieve the goal, and
(3) identify well-formulated metrics that help to answer these questions.
• Once collected, function-point and related analysis metrics can be used to:
(1) estimate the cost or effort required to design, code, and test the software;
(2) predict the number of errors that will be encountered during testing;
(3) forecast the number of components and/or the number of projected source lines in the implemented system.
• Number of internal logical files (ILFs): Each internal logical file is a logical grouping of data that resides within the application’s boundary and is maintained via external inputs (e.g., a database).
• Number of external interface files (EIFs): Each external interface file is a logical grouping of data that resides external to the application but provides information that may be of use to the application.
• To compute function points (FP), the following relationship is used:
• FP = count total × [0.65 + 0.01 × Σ(Fi)] (23.1)
• where count total is the sum of all FP entries obtained from Figure 23.1.
• The Fi (i = 1 to 14) are value adjustment factors (VAF) based on responses to questions about the software.
• Examples:
• Are the inputs, outputs, files, or inquiries complex?
• Is the internal processing complex?
• Is the code designed to be reusable?
• Example: the SafeHome software
• The data flow diagram is evaluated to determine a set of key information domain measures required for
computation of the function point metric.
• Three external inputs (password, panic button, and activate/deactivate) are shown in the figure, along with two external inquiries (zone inquiry and sensor inquiry).
• One ILF (the system configuration file) is shown.
• Two external outputs (messages and sensor status) and four EIFs (test sensor, zone setting, activate/deactivate, and alarm alert) are also present.
• These data, along with the appropriate complexity, are shown in Figure 23.3.
• The count total shown in Figure 23.3 must be adjusted using Equation (23.1).
• For the purposes of this example, we assume that Σ(Fi) = 46 (a moderately complex product). Therefore,
• FP = 50 × [0.65 + (0.01 × 46)] = 55.5 ≈ 56
• Based on the projected FP value derived from the requirements model, the project team can estimate the overall implemented size of the function.
• 23.2.2 Metrics for Specification Quality
• A list of characteristics can be used to assess the quality of the requirements model and the corresponding requirements.
• Specificity (lack of ambiguity): Q1 = nui / nr, where nui is the number of requirements for which all reviewers had identical interpretations and nr is the total number of requirements. The closer the value of Q1 to 1, the lower is the ambiguity of the specification.
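The specificity metric can be sketched as follows, assuming each requirement is represented by the set of distinct interpretations its reviewers reported:

```python
# Specificity (ambiguity) metric sketch: Q1 = n_ui / n_r, where n_ui
# counts requirements that every reviewer interpreted identically and
# n_r is the total number of requirements.

def specificity(interpretations_per_requirement):
    """Each entry is the set of distinct interpretations reviewers gave
    for one requirement; a singleton set means all reviewers agreed."""
    n_r = len(interpretations_per_requirement)
    n_ui = sum(1 for interps in interpretations_per_requirement
               if len(interps) == 1)
    return n_ui / n_r

# 3 of 4 requirements were read identically by every reviewer:
q1 = specificity([{"a"}, {"a"}, {"a", "b"}, {"c"}])
print(q1)   # 0.75
```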
• Metrics for the design model: These metrics address various aspects of design quality, measuring the complexity and goodness of a design:
• Architectural design metrics
• Metrics for object-oriented design
• Class-oriented metrics—the CK metrics suite
• Class-oriented metrics—the MOOD metrics suite
• OO metrics proposed by Lorenz and Kidd
• Component-level design metrics
• Operation-oriented metrics
• User interface design metrics
• Architectural design metrics
• These focus on characteristics of the program architecture structure and the effectiveness of modules or components within the architecture.
• Card and Glass define three software design complexity measures:
– Structural complexity = g(fan-out)
– Data complexity = f(input & output variables, fan-out)
– System complexity = h(structural & data complexity)
• fan-out(i) is the number of modules immediately subordinate to module i.
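Assuming the usual functional forms of these measures, S(i) = fan-out(i)², D(i) = v(i)/(fan-out(i) + 1), and C(i) = S(i) + D(i), where v(i) is the number of input and output variables of module i, a sketch:

```python
# Card and Glass design complexity measures, under the assumed forms:
# structural S(i) = fan_out(i)**2, data D(i) = v(i)/(fan_out(i) + 1),
# system C(i) = S(i) + D(i).

def structural(fan_out):
    """Structural complexity of a module with the given fan-out."""
    return fan_out ** 2

def data(num_io_vars, fan_out):
    """Data complexity from input/output variables and fan-out."""
    return num_io_vars / (fan_out + 1)

def system(num_io_vars, fan_out):
    """System complexity: structural plus data complexity."""
    return structural(fan_out) + data(num_io_vars, fan_out)

# A hypothetical module with fan-out 3 and 8 input/output variables:
print(structural(3))    # 9
print(data(8, 3))       # 2.0
print(system(8, 3))     # 11.0
```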
• Fenton suggests a number of simple morphology (shape) metrics that enable different program architectures to be compared using a set of straightforward dimensions.
• Referring to the call-and-return architecture in Figure 23.4, the following metrics can be defined:
• Size = n + a, where n is the number of nodes and a is the number of arcs.
• For the architecture shown in Figure 23.4, Size = 17 + 18 = 35.
• Depth = the longest path from the root (top) node to a leaf node. For the architecture shown in Figure 23.4, depth = 4.
• Width = the maximum number of nodes at any one level of the architecture. For the architecture shown in Figure 23.4, width = 6.
• The arc-to-node ratio, r= a/n, measures the connectivity density of the architecture and may provide a
simple indication of the coupling of the architecture. For the architecture shown in Figure 23.4, r = 18/17 =
1.06.
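These four morphology metrics can be computed from a call tree given as an adjacency list; the small tree below is an invented example, not the architecture of Figure 23.4:

```python
# Morphology metrics for a call-and-return architecture given as an adjacency
# list: size = n + a, depth = longest root-to-leaf path, width = widest level,
# and arc-to-node ratio r = a / n.

def morphology(tree: dict, root: str) -> dict:
    nodes, arcs, levels = set(), 0, {}
    stack = [(root, 0)]          # depth-first walk from the root
    while stack:
        node, depth = stack.pop()
        nodes.add(node)
        levels[depth] = levels.get(depth, 0) + 1
        for child in tree.get(node, []):
            arcs += 1
            stack.append((child, depth + 1))
    return {"size": len(nodes) + arcs,
            "depth": max(levels),
            "width": max(levels.values()),
            "r": round(arcs / len(nodes), 2)}

calls = {"a": ["b", "c"], "b": ["d", "e"], "c": ["f"]}
print(morphology(calls, "a"))  # size 11, depth 2, width 3, r 0.83
```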
• Metrics for OO Design-I
• Whitmire [Whi97] describes nine distinct and measurable characteristics of an OO design:
– Size
• Size is defined in terms of four views: population, volume, length, and functionality
• Population: a static count of OO entities (classes and operations).
• Volume: a dynamic count of entities, collected at a given instant of time.
• Length: a measure of a chain of interconnected design elements.
• Functionality: an indirect indication of the value delivered to the customer.
– Complexity
• How the classes of an OO design are interrelated to one another
– Coupling
• The physical connections between elements of the OO design
– Sufficiency
• “the degree to which an abstraction possesses the features required of it, or the degree to
which a design component possesses features in its abstraction, from the point of view of
the current application.”
– Completeness
• An indirect implication about the degree to which the abstraction or design component can
be reused
– Cohesion
• The degree to which all operations work together to achieve a single, well-defined purpose
– Primitiveness
• Applied to both operations and classes, the degree to which an operation is atomic (i.e., the operation cannot be constructed out of a sequence of other operations contained within the class)
– Similarity
• The degree to which two or more classes are similar in terms of their structure, function,
behavior, or purpose
– Volatility
• Measures the likelihood that a change will occur
• Class-Oriented Metrics—The CK Metrics Suite
• Proposed by Chidamber and Kemerer, this suite provides measures and metrics for an individual class, the class hierarchy, and class collaborations.
• It defines six class-based design metrics for OO systems:
• WMC (Weighted Methods per Class)
• DIT (Depth of Inheritance Tree)
• NOC (Number of Children)
• CBO (Coupling Between Objects)
• RFC (Response for Class)
• LCOM (Lack of Cohesion of Methods)
• WMC (Weighted Methods per Class)
• Assume that n methods of complexity c1, c2, . . ., cn are defined for a class C. The specific complexity metric
that is chosen should be normalized so that nominal complexity for a method takes on a value of 1.0.
• WMC = ∑ ci, for i = 1 to n.
• The number of methods and their complexity are reasonable indicators of the amount of effort required to implement and test a class.
• In addition, the larger the number of methods, the more complex the inheritance tree, since all subclasses inherit the methods of their parents.
• Finally, as the number of methods grows for a given class, it is likely to become more and more application-specific, thereby limiting potential reuse.
• For all of these reasons, WMC should be kept as low as is reasonable.
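With each method's complexity normalized so that a nominal method scores 1.0, WMC is just the sum; a minimal sketch with invented complexity values:

```python
# WMC = sum of complexities c_i over a class's n methods, with complexity
# normalized so that a nominal method scores 1.0.  The values are invented.

def wmc(method_complexities: list[float]) -> float:
    return sum(method_complexities)

# Five methods: three nominal, two somewhat above nominal complexity.
print(wmc([1.0, 1.0, 1.0, 1.5, 2.0]))  # 6.5
```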
• Depth of the inheritance tree (DIT).
• This metric is “the maximum length from the node to the root of the tree.” Referring to the figure, the value of DIT for the class hierarchy shown is 4. As DIT grows, it is likely that lower-level classes will inherit many methods, which leads to potential difficulties when attempting to predict the behavior of a class. A deep class hierarchy (large DIT) also leads to greater design complexity. On the positive side, large DIT values imply that many methods may be reused.
• Number of children (NOC).
• The subclasses that are immediately subordinate to a class in the class hierarchy are termed its children.
• Referring to figure, class C2 has three children—subclasses C21, C22, and C23.
• As NOC grows, reuse increases, but the abstraction represented by the parent class can be diluted; that is, some of the children may not really be appropriate members of the parent class. As NOC increases, the amount of testing required will also increase.
• Coupling between object classes (CBO): the number of collaborations listed for a class on its CRC index card. CBO should be kept low; as CBO increases, the reusability of the class decreases, and testing and modification become more difficult.
• Response for a class (RFC): the set of methods that can potentially be executed in response to a message received by an object of that class. RFC is the number of methods in the response set and should be kept low; as RFC increases, the effort required for testing and the design complexity of the class also increase.
• Lack of cohesion in methods (LCOM): the number of methods that access one or more of the same attributes (common attributes). LCOM should be kept low.
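Two of the CK metrics, DIT and NOC, can be read directly off a class hierarchy. A minimal sketch using Python's own class machinery; the classes are an invented stand-in for the C2 / C21 / C22 / C23 hierarchy referred to in the figure:

```python
# DIT and NOC read off a class hierarchy.  The classes below are invented.

class C2: pass
class C21(C2): pass
class C22(C2): pass
class C23(C2): pass
class C211(C21): pass

def dit(cls) -> int:
    """Depth of inheritance tree: longest path from cls up to the root (object)."""
    return max((dit(base) for base in cls.__bases__), default=-1) + 1

def noc(cls) -> int:
    """Number of children: count of immediate subclasses."""
    return len(cls.__subclasses__())

print(dit(C211), noc(C2))  # 3 3
```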
• Metrics for Source Code
• Halstead defines a set of primitive measures that can be derived once the code is generated (or once the design is complete): n1, the number of distinct operators; n2, the number of distinct operands; N1, the total number of operator occurrences; and N2, the total number of operand occurrences.
• These primitives can be used to determine the program length, volume, level, development effort, development time, and the projected number of faults.
• Halstead shows that the length N can be estimated as
• N = n1 log2 n1 + n2 log2 n2
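The length estimator translates directly into code; the operator and operand counts below are invented for illustration:

```python
import math

# Halstead's estimated program length: N = n1*log2(n1) + n2*log2(n2),
# where n1 = distinct operators and n2 = distinct operands (counts invented).

def estimated_length(n1: int, n2: int) -> float:
    return n1 * math.log2(n1) + n2 * math.log2(n2)

# Hypothetical module with 10 distinct operators and 20 distinct operands.
print(round(estimated_length(10, 20), 1))  # 119.7
```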
• Architectural design metrics indicate how easy or difficult integration testing will be.
• The cyclomatic complexity of components or modules indicates which modules should receive the most thorough unit testing and which are likely to be error-prone.
Using Halstead's measures, the percentage of overall testing effort to be allocated to a module k can be estimated as
Percentage of testing effort (k) = e(k) / ∑ e(i)
where the Halstead effort e(k) for module k is computed as
e = V / PL
with PL = 1 / [(n1 / 2) × (N2 / n2)]
and ∑ e(i) is the sum of Halstead effort across all modules of the system.
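Putting the formulas together, a hedged sketch of the allocation; all per-module counts below are invented, not taken from a real system:

```python
# Allocating testing effort with Halstead's effort measure:
#   PL = 1 / ((n1 / 2) * (N2 / n2)),  e = V / PL,
#   share(k) = e(k) / sum of e(i) over all modules.

def halstead_effort(volume: float, n1: int, big_n2: int, n2: int) -> float:
    pl = 1 / ((n1 / 2) * (big_n2 / n2))  # program level PL
    return volume / pl

modules = {
    "parser":  halstead_effort(volume=1200, n1=14, big_n2=180, n2=40),
    "planner": halstead_effort(volume=800, n1=10, big_n2=120, n2=30),
}
total = sum(modules.values())
for name, effort in modules.items():
    print(f"{name}: {100 * effort / total:.1f}% of testing effort")
```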
• Metrics for Maintenance
• The stability of a software product (based on changes that occur with each release of the product) can be assessed using the following values:
MT = number of modules in the current release
Fc = number of modules in the current release that have been changed
Fa = number of modules in the current release that have been added
Fd = number of modules from the preceding release that were deleted in the current release
The software maturity index (SMI) is then computed in the following manner:
SMI = [MT – (Fa + Fc + Fd)] / MT
As SMI approaches 1.0, the product begins to stabilize.
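A minimal sketch of the SMI computation; the release figures below are invented for illustration:

```python
# Software maturity index: SMI = (MT - (Fa + Fc + Fd)) / MT.

def smi(mt: int, fa: int, fc: int, fd: int) -> float:
    return (mt - (fa + fc + fd)) / mt

# Current release: 120 modules, of which 6 were added, 9 changed, and 3
# deleted relative to the preceding release.
print(smi(mt=120, fa=6, fc=9, fd=3))  # 0.85 -- approaching 1.0, so stabilizing
```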