If a piece of software is modified for any reason, it must be tested to ensure that it works
as specified and that the change has not negatively impacted any functionality it offered previously.
This is known as Regression Testing.
Regression testing means rerunning test cases from existing test suites to build confidence that
software changes have no unintended side-effects. The “ideal” process would be to create an
extensive test suite and run it after each and every change.
Every time a change is made, one or more of the following may happen:
- More Functionality may be added to the system
- More complexity may be added to the system
- New bugs may be introduced
- New vulnerabilities may be introduced in the system
- System may become more fragile with each change
After the change the new functionality may have to be tested along with all the original
functionality.
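The regression idea above can be sketched with a small test suite. The discount() function and its tiers are hypothetical; the point is that the existing test cases are rerun together with the new one after every change:

```python
import unittest

def discount(quantity):
    """Return the discount rate for an order of `quantity` items."""
    if quantity >= 100:   # "bulk" tier added in the latest change
        return 0.20
    if quantity >= 10:
        return 0.10
    return 0.0

class RegressionSuite(unittest.TestCase):
    # Existing test cases: rerun after every change to build confidence
    # that the change has no unintended side-effects.
    def test_no_discount_for_small_orders(self):
        self.assertEqual(discount(1), 0.0)

    def test_standard_discount(self):
        self.assertEqual(discount(10), 0.10)

    # New test case covering the functionality added by the change.
    def test_bulk_discount(self):
        self.assertEqual(discount(100), 0.20)

suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

If the change to discount() had accidentally broken the standard tier, rerunning the whole suite (not just the new test) is what would catch it.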
integration testing
A type of testing in which software and/or hardware components are
combined and tested to confirm that they interact according to their
requirements. Integration testing can continue progressively until the
entire system has been integrated.
Integration Testing
The fundamentals of Integration Testing: Definition, Analogy, Ws,
Approaches, Tips
Integration Testing is a level of the software testing process where individual units
are combined and tested as a group.
Test drivers and test stubs are used to assist in Integration Testing.
Note: The definition of a unit can vary from person to person; what counts as a unit
depends on the context and the development approach.
During the process of manufacturing a ballpoint pen, the cap, the body, the tail and
clip, the ink cartridge and the ballpoint are produced separately and unit tested
separately. When two or more units are ready, they are assembled and Integration
Testing is performed. For example, whether the cap fits into the body as required or
not.
Integration Testing is performed after Unit Testing and before System Testing.
Any of the Black Box Testing, White Box Testing, and Gray Box Testing methods can
be used. Normally the method depends on your definition of ‘unit’.
1. Big Bang is an approach to Integration Testing where all or most of the units
are combined together and tested at one go. This approach is taken when the
testing team receives the entire software in a bundle. So what is the
difference between Big Bang Integration Testing and System Testing? Well,
the former tests only the interactions between the units while the latter tests
the entire system.
2. Top Down is an approach to Integration Testing where top level units are
tested first and lower level units step by step after that. This approach is
taken when top down development approach is followed. Test Stubs are
needed to simulate lower level units which may not be available during the
initial phases.
3. Bottom Up is an approach to Integration Testing where bottom level units
are tested first and upper level units step by step after that. This approach is
taken when bottom up development approach is followed. Test Drivers are
needed to simulate higher level units which may not be available during the
initial phases.
4. Sandwich/Hybrid is an approach to Integration Testing which is a
combination of Top Down and Bottom Up approaches.
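As a sketch of how a Test Stub supports the Top Down approach, assume a hypothetical top-level format_report() unit whose lower-level fetch_total() dependency is not built yet (a Test Driver for the Bottom Up approach would be the mirror image, calling a finished lower-level unit):

```python
def fetch_total_stub(order_id):
    # Test Stub: simulates the lower-level unit, which is not available
    # yet, by returning canned data.
    return 42.0

def format_report(order_id, fetch_total):
    # Top-level unit under test; its lower-level dependency is injected,
    # so the stub can stand in for the real unit during integration.
    return "Order %d: total %.2f" % (order_id, fetch_total(order_id))

# Exercise the top-level unit with the stub in place of the real unit.
print(format_report(7, fetch_total_stub))
```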
• Ensure that you have a proper Detail Design document where interactions
between each unit are clearly defined. In fact, you will not be able to perform
Integration Testing without this information.
• Ensure that you have a robust Software Configuration Management system in
place. Or else, you will have a tough time tracking the right version of each
unit, especially if the number of units to be integrated is huge.
• Make sure that each unit is first unit tested before you start Integration
Testing.
• As far as possible, automate your tests, especially when you use the Top
Down or Bottom Up approach, since regression testing is important each time
you integrate a unit, and manual regression testing can be inefficient.
Unit Testing is a level of the software testing process where individual units/components of a
software/system are tested. The purpose is to validate that the software performs as designed.
A unit is the smallest testable part of software. It usually has one or a few inputs and usually a
single output. In procedural programming a unit may be an individual program, function,
procedure, etc. In object-oriented programming, the smallest unit is a method, which may belong
to a base/super class, abstract class or derived/child class. (Some treat a module of an application
as a unit. This is to be discouraged as there will probably be many individual units within that
module.)
Unit testing frameworks, drivers, stubs and mock or fake objects are used to assist in unit testing.
Unit Testing is normally performed by software developers themselves or their peers. In rare
cases it may also be performed by independent software testers.
Unit Testing is primarily performed by using the White Box Testing method.
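A minimal sketch of a unit test that uses a mock object, as mentioned above. The greet() unit and its name-lookup dependency are hypothetical:

```python
import unittest
from unittest import mock

def greet(user_id, lookup_name):
    # Hypothetical unit under test: depends on an external lookup service,
    # passed in so a mock can replace it during testing.
    return "Hello, " + lookup_name(user_id) + "!"

class GreetTest(unittest.TestCase):
    def test_greet_uses_looked_up_name(self):
        # The mock isolates the unit from the real service; white-box
        # knowledge tells us greet() should call it exactly once.
        fake_lookup = mock.Mock(return_value="Ada")
        self.assertEqual(greet(7, fake_lookup), "Hello, Ada!")
        fake_lookup.assert_called_once_with(7)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(GreetTest))
```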
• Unit testing increases confidence in changing/maintaining code. If good unit tests are
written and if they are run every time any code is changed, the likelihood of any defects
due to the change being promptly caught is very high. If unit testing is not in place, the
most one can do is hope for the best and wait till the test results at higher levels of testing
are out. Also, if code is already made less interdependent to make unit testing possible,
the unintended impact of changes to any code is less.
• Code is more reusable. In order to make unit testing possible, code needs to be
modular. This means that code is easier to reuse.
• Development is faster. How? If you do not have unit testing in place, you write your code
and perform that fuzzy ‘developer test’: you set some breakpoints, fire up the GUI,
provide a few inputs that hopefully hit your code, and hope that you are all set. If
you have unit testing in place, you write the test and the code, then run the tests. Writing tests
takes time, but that time is compensated for by how quickly the tests run: the test runs
take very little time, since you need not fire up the GUI and provide all those inputs. And of
course, unit tests are more reliable than ‘developer tests’. Development is faster in the
long run too. How? The effort required to find and fix defects found during unit testing is
peanuts in comparison to that for defects found during system testing or acceptance testing.
• The cost of fixing a defect detected during unit testing is lesser in comparison to that of
defects detected at higher levels. Compare the cost (time, effort, destruction, humiliation)
of a defect detected during acceptance testing or say when the software is live.
• Debugging is easy. When a test fails, only the latest changes need to be debugged. With
testing at higher levels, changes made over the span of several days/weeks/months need
to be debugged.
• Code is more reliable. Why? I think there is no need to explain this to a sane person.
DEFINITION
Gray Box Testing is a software testing method which is a combination of Black Box
Testing method and White Box Testing method. In Black Box Testing, the internal
structure of the item being tested is unknown to the tester and in White Box Testing
the internal structure is known. In Gray Box Testing, the internal structure is
partially known. This involves having access to internal data structures and
algorithms for purposes of designing the test cases, but testing at the user, or
black-box level.
Gray Box Testing is named so because the software program, in the eyes of the
tester is like a gray/semi-transparent box; inside which one can partially see.
EXAMPLE
An example of Gray Box Testing would be when the code for two units/modules is
studied (White Box Testing method) for designing test cases, and the actual tests are
conducted using the exposed interfaces (Black Box Testing method).
LEVELS APPLICABLE TO
Though Gray Box Testing method may be used in other levels of testing, it is
primarily useful in Integration Testing.
Note that Gray is also spelt as Grey. Hence Grey Box Testing and Gray Box Testing
mean the same.
DEFINITION
White Box Testing (also known as Clear Box Testing, Open Box Testing, Glass Box
Testing, Transparent Box Testing or Structural Testing) is a software testing method
in which the internal structure/design/implementation of the item being tested is
known to the tester. The tester chooses inputs to exercise paths through the code
and determines the appropriate outputs. Programming know-how and implementation
knowledge are essential. White box testing is testing beyond the user
interface and into the nitty-gritty of a system.
White Box Testing method is named so because the software program, in the eyes
of the tester, is like a white/transparent box; inside which one clearly sees.
White Box Testing is contrasted with Black Box Testing. View Differences between
Black Box Testing and White Box Testing.
EXAMPLE
White Box Testing is like the work of a mechanic who examines the engine to see
why the car is not moving.
LEVELS APPLICABLE TO
White Box Testing method is applicable to the following levels of the software
testing process:
• Unit Testing
• Integration Testing
• System Testing
ADVANTAGES OF WHITE BOX TESTING
• Testing can be commenced at an earlier stage. One need not wait for the GUI
to be available.
• Testing is more thorough, with the possibility of covering most paths.
DISADVANTAGES OF WHITE BOX TESTING
• Since tests can be very complex, highly skilled resources are required, with
thorough knowledge of programming and implementation.
• Test script maintenance can be a burden if the implementation changes too
frequently.
• Since this method of testing is closely tied to the application being tested,
tools to cater to every kind of implementation/platform may not be readily
available.
DEFINITION
Black Box Testing, also known as Behavioral Testing, is a software testing method in
which the internal structure/design/implementation of the item being tested is not
known to the tester. These tests can be functional or non-functional, though usually
functional.
Black Box Testing method is named so because the software program, in the eyes
of the tester, is like a black box; inside which one cannot see.
Black Box Testing is contrasted with White Box Testing. View Differences between
Black Box Testing and White Box Testing.
EXAMPLE
A tester, without knowledge of the internal structures of a website, tests the web
pages by using a browser; providing inputs (clicks, keystrokes) and verifying the
outputs against the expected outcome.
LEVELS APPLICABLE TO
Black Box Testing method is applicable to all levels of the software testing process:
• Unit Testing
• Integration Testing
• System Testing
• Acceptance Testing
The higher the level, and hence the bigger and more complex the box, the more the
Black Box Testing method comes into use.
Equivalence Partitioning is a software test design technique that involves dividing
input values into valid and invalid partitions and selecting representative values
from each partition as test data.
Cause Effect Graphing is a software test design technique that involves identifying
the causes (input conditions) and effects (output conditions), producing a Cause-
Effect Graph, and generating test cases accordingly.
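The Equivalence Partitioning technique mentioned above can be sketched as follows; the age field and its valid range of 18 to 60 are hypothetical:

```python
def is_valid_age(age):
    # Hypothetical unit under test: accepts ages 18..60 inclusive.
    return 18 <= age <= 60

# One representative value is drawn from each partition instead of
# testing every possible input.
partitions = [
    ("below valid range", 17, False),
    ("valid range",       35, True),
    ("above valid range", 61, False),
]

results = {name: is_valid_age(value) == expected
           for name, value, expected in partitions}
print(results)
```

Three test values stand in for the whole input space, on the assumption that every value within a partition is treated the same way by the code.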
ADVANTAGES OF BLACK BOX TESTING
• Tests are done from a user's point of view and will help in exposing
discrepancies in the specifications
• Tester need not know programming languages or how the software has been
implemented
• Tests can be conducted by a body independent from the developers, allowing
for an objective perspective and the avoidance of developer-bias
• Test cases can be designed as soon as the specifications are complete
DISADVANTAGES OF BLACK BOX TESTING
• Only a small number of possible inputs can be tested and many program
paths will be left untested
• Without clear specifications, which is the situation in many projects, test
cases will be difficult to design
• Tests can be redundant if the software designer/ developer has already run a
test case.
Ever wondered why a soothsayer closes the eyes when foretelling events? So is
almost the case in Black Box Testing.
                             Black Box Testing    White Box Testing
Programming Knowledge        Not Required         Required
Implementation Knowledge     Not Required         Required
For a combination of the two testing methods, see Gray Box Testing.
system testing
(Or "application testing") A type of testing to confirm that all code
modules work as specified, and that the system as a whole performs
adequately on the platform on which it will be deployed.
How does System Testing fit into the Software Development Life Cycle?
In a typical Enterprise, ‘unit testing’ is done by the programmers. This ensures that the individual
components are working OK. The ‘Integration testing’ focuses on successful integration of all
the individual pieces of software (components or units of code).
Once the components are integrated, the system as a whole needs to be rigorously tested to
ensure that it meets the Quality Standards.
Thus the System testing builds on the previous levels of testing namely unit testing and
Integration Testing.
- In the Software Development Life Cycle, System Testing is the first level where the System is tested as a whole
- The System is tested to verify if it meets the functional and technical requirements
- The application/System is tested in an environment that closely resembles the production environment where the application will be finally deployed
- System Testing enables us to test, verify and validate both the Business requirements as well as the Application Architecture
When necessary, several iterations of System Testing are done in multiple environments.
As you may have read in the other articles in the testing series, the Test Plan typically describes
the following:
- The Testing Goals
- The key areas to be focused on while testing
- The Testing Deliverables
- How the tests will be carried out
- The list of things to be Tested
- Roles and Responsibilities
- Prerequisites to begin Testing
- Test Environment
- Assumptions
- What to do after a test is successfully carried out
- What to do if a test fails
- Glossary
A Test Case describes exactly how the test should be carried out.
The System test cases help us verify and validate the system.
The System Test Cases are written such that:
- They cover all the use cases and scenarios
- The Test cases validate the technical Requirements and Specifications
- The Test cases verify if the application/System meets the Business & Functional Requirements specified
- The Test cases may also verify if the System meets the performance standards
Since a dedicated test team may execute the test cases, it is necessary that the System Test
Cases be detailed and unambiguous. Detailed Test cases help the test executors do the testing as
specified without any ambiguity.
The format of the System Test Cases may be like all other Test cases as illustrated below:
• Test Case ID
• Test Case Description:
o What to Test?
o How to Test?
• Input Data
• Expected Result
• Actual Result
Test Case ID | What To Test? | How to Test? | Input Data | Expected Result | Actual Result | Pass/Fail
------------ | ------------- | ------------ | ---------- | --------------- | ------------- | ---------
.            | .             | .            | .          | .               | .             | .
Additionally the following information may also be captured:
a) Test Suite Name
b) Tested By
c) Date
d) Test Iteration (The Test Cases may be executed one or more times)
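The test case format above can be sketched as a data structure; the field names mirror the columns of the illustrated table, and the values are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class SystemTestCase:
    # Core fields from the illustrated table.
    test_case_id: str
    what_to_test: str
    how_to_test: str
    input_data: str
    expected_result: str
    actual_result: str = ""
    passed: bool = False
    # Additional information that may also be captured.
    test_suite_name: str = ""
    tested_by: str = ""
    iteration: int = 1

tc = SystemTestCase(
    test_case_id="TC-001",
    what_to_test="Login page",
    how_to_test="Submit valid credentials",
    input_data="user=alice, password=secret",
    expected_result="Home page is displayed",
)

# After execution, the actual result is recorded and compared.
tc.actual_result = "Home page is displayed"
tc.passed = tc.actual_result == tc.expected_result
print(tc.test_case_id, "Pass" if tc.passed else "Fail")
```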
1) Test Coverage: System Testing will be effective only to the extent of the coverage of Test
Cases. What is Test coverage? Adequate Test coverage implies the scenarios covered by the test
cases are sufficient. The Test cases should “cover” all scenarios, use cases, Business
Requirements, Technical Requirements, and Performance Requirements. The test cases should
enable us to verify and validate that the system/application meets the project goals and
specifications.
2) Defect Tracking: The defects found during the process of testing should be tracked.
Subsequent iterations of test cases verify if the defects have been fixed.
3) Test Execution: The Test cases should be executed in the manner specified. Failure to do so
results in improper Test Results.
4) Build Process Automation: A lot of errors occur due to an improper build. A ‘Build’ is the
compilation of the various components that make up the application, deployed in the appropriate
environment. The Test results will not be accurate if the application is not ‘built’ correctly or if
the environment is not set up as specified. Automating this process may help reduce manual
errors.
5) Test Automation: Automating the Test process could help us in many ways:
• Some scenarios can be simulated if the tests are automated, for instance
simulating a large number of users or simulating increasingly large amounts
of input/output data
6) Documentation: Proper Documentation helps keep track of Tests executed. It also helps
create a knowledge base for current and future projects. Appropriate metrics/Statistics can be
captured to validate or verify the efficiency of the technical design /architecture.
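Point 5 above (Test Automation) can be sketched as follows; the handle_request() unit and the user count are hypothetical, but they show the kind of volume no manual tester could generate:

```python
def handle_request(user_id):
    # Hypothetical unit under test: handles one user's request.
    return {"user": user_id, "status": "ok"}

# An automated loop simulates a large number of users in seconds.
responses = [handle_request(u) for u in range(10_000)]
failures = [r for r in responses if r["status"] != "ok"]
print(len(responses), len(failures))
```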
Summary:
In this article we studied the necessity of ‘System Testing’ and how it is done.
User Acceptance Testing is often the final step before rolling out the application.
Usually the end users who will be using the applications test the application before ‘accepting’
the application.
This type of testing gives the end users the confidence that the application being delivered to
them meets their requirements.
This testing also helps nail bugs related to usability of the application.
Before User Acceptance Testing can be done, the application must be fully developed.
Various levels of testing (Unit, Integration and System) are already completed before User
Acceptance Testing is done. As various levels of testing have been completed most of the
technical bugs have already been fixed before UAT.
During this type of testing the specific focus is the exact real world usage of the application. The
Testing is done in an environment that simulates the production environment.
The Test cases are written using real-world scenarios for the application.
The user acceptance testing is usually a black box type of testing. In other words, the focus is on
the functionality and the usability of the application rather than the technical aspects. It is
generally assumed that the application would have already undergone Unit, Integration and
System Level Testing.
However, it is useful if the User acceptance Testing is carried out in an environment that closely
resembles the real world or production environment.
The steps taken for User Acceptance Testing typically involve one or more of the following:
1) User Acceptance Test (UAT) Planning
2) Designing UA Test Cases
3) Selecting a Team that would execute the (UAT) Test Cases
4) Executing Test Cases
5) Documenting the Defects found during UAT
6) Resolving the issues/Bug Fixing
7) Sign Off
Each User Acceptance Test Case describes in a simple language the precise steps to be taken to
test something. The Business Analysts and the Project Team review the User Acceptance Test
Cases.
Sign Off:
Upon successful completion of the User Acceptance Testing and resolution of the issues the team
generally indicates the acceptance of the application. This step is important in commercial
software sales. Once the Users “Accept” the Software delivered, they indicate that the software
meets their requirements.
The users are now confident of the software solution delivered, and the vendor can be paid for
the same.
1. Customer satisfaction index
This index is surveyed before product delivery and after product delivery
(and on-going on a periodic basis, using standard questionnaires). The following are analyzed:
2. Delivered defect quantities
They are normalized per function point (or per LOC) at product delivery (first 3 months or first
year of operation) or ongoing (per year of operation), by level of severity, by category or cause,
e.g.: requirements defect, design defect, code defect, documentation/on-line help defect, defect
introduced by fixes, etc.
4. Product volatility
• Ratio of maintenance fixes (to repair the system & bring it into compliance with
specifications), vs. enhancement requests (requests by users to enhance or change
functionality)
5. Defect ratios
8. Test coverage
9. Cost of defects
11. Re-work
12. Reliability
• Availability (percentage of time a system is available, versus the time the system is
needed to be available)
• Mean time between failure (MTBF).
• Mean time to repair (MTTR)
• Reliability ratio (MTBF / MTTR)
• Number of product recalls or fix releases
• Number of production re-runs as a ratio of production runs
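The reliability measures above can be computed from hypothetical figures; availability is computed here as MTBF / (MTBF + MTTR), the usual uptime fraction:

```python
# Hypothetical figures: 500 hours between failures, 2 hours to repair.
mtbf = 500.0   # mean time between failures, in hours
mttr = 2.0     # mean time to repair, in hours

# Fraction of time the system is available versus the time it is needed.
availability = mtbf / (mtbf + mttr)

# The reliability ratio as defined in the list above.
reliability_ratio = mtbf / mttr

print(round(availability, 4), reliability_ratio)
```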
In this tutorial you will learn about metrics used in testing, The Product Quality Measures - 1.
Customer satisfaction index, 2. Delivered defect quantities, 3. Responsiveness (turnaround time)
to users, 4. Product volatility, 5. Defect ratios, 6. Defect removal efficiency, 7. Complexity of
delivered product, 8. Test coverage, 9. Cost of defects, 10. Costs of quality activities, 11. Re-
work, 12. Reliability and Metrics for Evaluating Application System Testing.
9. Cost of defects
• Business losses per defect that occurs during operation
• Business interruption costs; costs of work-arounds
• Lost sales and lost goodwill
• Litigation costs resulting from defects
• Annual maintenance cost (per function point)
• Annual operating cost (per function point)
• Measurable damage to your boss's career
Metric = Formula
• Test Coverage = Number of units (KLOC/FP) tested / total size of the system (KLOC:
thousand Lines of Code; FP: Function Points)
• Number of tests per unit size = Number of test cases per KLOC/FP
• Quality of Testing = No. of defects found during Testing / (No. of defects found during
testing + No. of acceptance defects found after delivery) * 100
• Effectiveness of testing to business = Loss due to problems / total resources processed by
the system
• Source Code Analysis = Number of source code statements changed / total number of tests
• Effort Productivity:
   - Test Planning Productivity = No. of Test cases designed / Actual Effort for Design and
     Documentation
   - Test Execution Productivity = No. of Test cycles executed / Actual Effort for testing
Project Planning is an aspect of Project Management that focuses a lot on Project Integration.
The project plan reflects the current status of all project activities and is used to monitor and
control the project.
The Project Planning tasks ensure that various elements of the Project are coordinated and
therefore guide the project execution.
Why is it important?
Project Planning spans the various aspects of the Project. Generally, Project Planning is
considered to be a process of estimating, scheduling and assigning the project's resources in order
to deliver an end product of suitable quality. However, it is much more than that: it can assume a
very strategic role, which can determine the very success of the project. Creating a Project Plan
is one of the crucial steps in Project Planning.
Typically Project Planning can include the following types of project Planning:
1) Project Scope Definition and Scope Planning
2) Project Activity Definition and Activity Sequencing
3) Time, Effort and Resource Estimation
4) Risk Factors Identification
5) Cost Estimation and Budgeting
6) Organizational and Resource Planning
7) Schedule Development
8) Quality Planning
9) Risk Management Planning
10) Project Plan Development and Execution
11) Performance Reporting
12) Planning Change Management
13) Project Rollout Planning
2) Quality Planning:
The relevant quality standards are determined for the project. This is an important aspect of
Project Planning. Based on the inputs captured in the previous steps such as the Project Scope,
Requirements, deliverables, etc. various factors influencing the quality of the final product are
determined. The processes required to deliver the Product as promised and as per the standards
are defined.
6) Schedule Development:
The time schedule for the project can be arrived at based on the activities, interdependence and
effort required for each of them. The schedule may influence the cost estimates, the cost benefit
analysis and so on.
Project Scheduling is one of the most important tasks of Project Planning and also one of the
most difficult. In very large projects it is possible that several teams work on developing the
project. They may work on it in parallel. However their work may be interdependent.
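The interdependence described above can be sketched by walking a small dependency graph; the activities and effort estimates (in days) are hypothetical:

```python
# Each activity maps to (effort in days, list of prerequisite activities).
# Two coding teams work in parallel, but integration depends on both.
activities = {
    "design":    (3, []),
    "code_a":    (5, ["design"]),
    "code_b":    (4, ["design"]),
    "integrate": (2, ["code_a", "code_b"]),
}

def earliest_finish(name, acts, memo=None):
    # Earliest finish = latest finish among prerequisites + own effort.
    if memo is None:
        memo = {}
    if name not in memo:
        effort, deps = acts[name]
        start = max((earliest_finish(d, acts, memo) for d in deps), default=0)
        memo[name] = start + effort
    return memo[name]

print(earliest_finish("integrate", activities))
```

Here integration cannot start before day 8 (when the slower team finishes), so the schedule is driven by the longest dependency chain, not the sum of all efforts.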
Each of the Project tasks and activities are periodically monitored. The team and the stakeholders
are informed of the progress. This serves as an excellent communication mechanism. Any delays
are analyzed and the project plan may be adjusted accordingly.
In this article we explored the various aspects of Project Planning and Scheduling.
In this tutorial you will learn about Effective Software Testing: how we measure the
‘Effectiveness’ of Software Testing, the steps to Effective Software Testing, Coverage, and Test
Planning and Process.
A 1994 study in the US revealed that only about “9% of software projects were successful”.
A large number of projects upon completion do not have all the promised features or they do not
meet all the requirements that were defined when the project was kicked off.
Whether you are part of a team that is building a bookkeeping application or software that runs
a power plant, you cannot afford to have less-than-reliable software.
Unreliable software can severely hurt businesses and endanger lives, depending on the criticality
of the application. Even the simplest application, if poorly written, can degrade the performance
of your environment, such as the servers and the network, thereby causing an unwanted mess.
To ensure software application reliability and project success Software Testing plays a very
crucial role.
Everything can and should be tested.
The effectiveness of testing can be measured by the degree of success in achieving the above
goals.
Several factors influence the effectiveness of Software Testing Effort, which ultimately
determine the success of the Project.
A) Coverage:
• All the scenarios that can occur when using the software application
• Each business requirement that was defined for the project
• Specific levels of testing should cover every line of code written for the
application
There are various levels of testing which focus on different aspects of the software application.
The often-quoted V model best explains this:
The various levels of testing illustrated above are:
• Unit Testing
• Integration Testing
• System Testing
• User Acceptance Testing
The goal of each testing level is slightly different thereby ensuring the overall project reliability.
The system testing is done in an environment that is similar to the production environment i.e.
the environment where the product will be finally deployed.
There are various types of System Testing possible which test the various aspects of the software
application.
Having followed the above steps for the various levels of testing, the product is rolled out.
It is not uncommon to see various “bugs”/Defects even after the product is released to
production. An effective Testing Strategy and Process helps to minimize or eliminate these
defects. The extent to which it eliminates these post-production defects (Design Defects/Coding
Defects/etc) is a good measure of the effectiveness of the Testing Strategy and Process.
As the saying goes - 'the proof of the pudding is in the eating'
Summary:
The success of the project and the reliability of the software application depend a lot on the
effectiveness of the testing effort. This article discusses “What is effective Software
Testing?”
This article gives an overview of Software Quality Management and various processes that are a
part of Software Quality Management. Software Quality is a highly overused term and it may
mean different things to different people. You will learn What is Software Quality
Management?, What does it take to Manage Software Quality?, Quality Planning, Quality
Assurance, Quality Control, Importance of Documentation and What is Defect Tracking?
“Totality of characteristics of an entity that bears on its ability to satisfy stated and implied
needs.”
This means that the Software product delivered should be as per the requirements defined. We
now examine a few more terms used in association with Software Quality.
Quality Planning:
In the Planning Process we determine the standards that are relevant for the Software Product,
the Organization and the means to achieve them.
Quality Assurance:
Once the standards are defined and we start building the product, it is very important to have
processes that evaluate the project performance and aim to assure that the Quality standards are
being followed and that the final product will be in compliance.
Quality Control:
Once the software components are built the results are monitored to determine if they comply
with the standards. The data collected helps in measuring performance trends and, as needed,
helps in identifying defective pieces of code.
Simply stated, Software Quality Management comprises the processes that ensure that the
Software Project reaches its goals; in other words, that the Software Project meets the
client's expectations.
The key processes of Software Quality Management fall into the following three categories:
1) Quality Planning
2) Quality Assurance
3) Quality Control
Software Quality Management comprises the Quality Planning, Quality Assurance and
Quality Control Processes. We shall now take a closer look at each of them.
1) Quality Planning
Quality Planning is the most important step in Software Quality Management. Proper planning
ensures that the remaining Quality processes make sense and achieve the desired results. The
starting point for the Planning process is the standards followed by the Organization. This is
expressed in the Quality Policy and Documentation defining the Organization-wide standards.
Sometimes additional industry standards relevant to the Software Project may be referred to as
needed. Using these as inputs, the Standards for the specific project are decided, and the Scope
of the effort is also clearly defined. The inputs for Planning are thus the Organization's Quality
Policy, the Organization-wide standards, any relevant industry standards, and the Project Scope.
Using these inputs, the Quality Planning process creates a plan to ensure that the standards
agreed upon are met. The primary output of the Quality Planning process is the Quality
Management Plan.
To create this output, various tools and techniques are used. These tools and techniques are huge
topics, and Quality Experts dedicate years of research to them. We briefly introduce them in this
article.
a. Benchmarking: The proposed product standards can be decided using the existing
performance benchmarks of similar products that already exist in the market.
b. Design of Experiments: Using statistics, we determine what factors influence the Quality or
features of the end product.
c. Cost of Quality: This includes all the costs needed to achieve the required Quality levels. It
includes prevention costs, appraisal costs and failure costs.
d. Other tools: There are various other tools used in the Planning process such as Cause and
Effect Diagrams, System Flow Charts, Cost Benefit Analysis, etc.
All these help us to create a Quality Management Plan for the project.
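The Cost of Quality breakdown above (prevention, appraisal and failure costs) can be sketched with hypothetical figures:

```python
# Hypothetical annual figures for one project.
cost_of_quality = {
    "prevention": 5_000,   # training, standards, process definition
    "appraisal": 3_000,    # reviews, testing, audits
    "failure": 12_000,     # rework, defect fixes, business losses
}

total = sum(cost_of_quality.values())
failure_share = cost_of_quality["failure"] / total
print(total, round(failure_share, 2))
```

In this sketch failure costs dominate the total, which is the usual argument for spending more on prevention and appraisal up front.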
2) Quality Assurance
The Input to the Quality Assurance Processes is the Quality Plan created during Planning.
Quality Audits and various other techniques are used to evaluate the performance of the project.
This helps us to ensure that the Project is following the Quality Management Plan.
The tools and techniques used in the Planning Process such as Design of Experiments, Cause and
Effect Diagrams may also be used here, as required.
3) Quality Control
If the work done meets the standards defined, it is accepted and released to the
clients.
Importance of Documentation:
In all the Quality Management Processes special emphasis is put on documentation. Many
software shops fail to document the project at various levels. Consider a scenario where the
Requirements of the Software Project are not sufficiently documented. In this case it is quite
possible that the client has a set of expectations that the tester does not know about. Hence
the testing team would not be able to test the software for these expectations or
requirements. This may lead to poor “Software Quality”, as the product does not meet the
expectations.
Similarly consider a scenario where the development team does not document the installation
instructions. If a different person or a team is responsible for future installations they may end up
making mistakes during installation, thereby failing to deliver as promised.
Once again, consider a scenario where a tester fails to document the test results after executing
the test cases. This may lead to confusion later: if an error surfaced, we would not be sure at
what stage it was introduced, whether at the component level, when integrating the component
with another, or due to the environment on a particular server. Hence documentation is the key
for future analysis and for all Quality Management efforts.
Steps:
In a typical Software Development Life Cycle the following steps are necessary for Quality
Management:
This is very important to ensure the Quality of the end Product. As test cases are executed at
various levels, defects, if any, are found in the Software being tested. The Defects are logged and
data is collected. The development team fixes these defects and documents how they were
fixed. The testing team then verifies whether each defect was really fixed and closes the defects.
This information is very useful: proper tracking ensures that all Defects were fixed, and the
information also helps us in future projects.
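The defect-tracking flow above (log, fix and document, verify, close) can be sketched as follows; the record fields and the defect itself are hypothetical:

```python
defects = []

def log_defect(defect_id, description):
    # Defects found during testing are logged and data is collected.
    defects.append({"id": defect_id, "description": description,
                    "status": "open", "fix_note": ""})

def fix_defect(defect_id, fix_note):
    for d in defects:
        if d["id"] == defect_id:
            d["status"] = "fixed"
            d["fix_note"] = fix_note   # document how it was fixed

def verify_and_close(defect_id):
    for d in defects:
        if d["id"] == defect_id and d["status"] == "fixed":
            d["status"] = "closed"     # testing team verifies the fix

log_defect("D-1", "Total not updated on checkout page")
fix_defect("D-1", "Recalculate total after discount is applied")
verify_and_close("D-1")
print([d["status"] for d in defects])
```

Proper tracking means every logged defect eventually reaches "closed", and the fix notes remain as a knowledge base for future projects.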
The Capability Maturity Model defines various levels of Organization based on the processes
that they follow.
Level 0
The following is true for “Level 0” Organizations -
There are no Processes, no tracking mechanisms and no plans. It is left to the developer or any
person responsible for Quality to ensure that the product meets expectations.
However the process is not standardized throughout the Organization. All the teams within the
organization do not follow the same standard.
Level 3 – Well-Defined
In “Level 3” Organizations the processes are well defined and followed throughout the
organization.
Level 4 – Quantitatively Controlled
In “Level 4” Organizations -
- The processes are well defined and followed throughout the organization
- The Goals are defined and the actual output is measured
- Metrics are collected and future performance can be predicted
Summary: