
Unit 1

Principles of Testing
There are seven principles of testing. They are as follows:
1) Testing shows presence of defects: Testing can show that defects are
present, but cannot prove that there are no defects. Even after testing the
application or product thoroughly we cannot say that the product is 100%
defect free. Testing always reduces the number of undiscovered defects
remaining in the software, but even if no defects are found, it is not a proof
of correctness.
2) Exhaustive testing is impossible: Testing everything, including all
combinations of inputs and preconditions, is not possible. So, instead of doing
exhaustive testing, we can use risks and priorities to focus testing efforts.
For example: if one screen of an application has 15 input fields, each
having 5 possible values, then to test all the valid combinations you would
need 30,517,578,125 (5^15) tests. It is very unlikely that the project
timescales would allow for this number of tests. So, assessing and managing
risk is one of the most important activities and a key reason for testing in any
project.
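To see why exhaustive testing is impractical, here is a minimal Python sketch of the arithmetic above; the one-second-per-test execution time is purely an assumed figure for illustration.

fields = 15           # input fields on the screen
values_per_field = 5  # possible values per field

combinations = values_per_field ** fields
print(combinations)   # 30517578125 tests

# Assuming (hypothetically) one second per test, running non-stop:
years = combinations / (60 * 60 * 24 * 365)
print(f"about {years:,.0f} years of testing")  # about 968 years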
3) Early testing: In the software development life cycle testing activities
should start as early as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules usually contains most of the
defects discovered during pre-release testing, or shows the most operational
failures.

5) Pesticide paradox: If the same kinds of tests are repeated again and
again, eventually the same set of test cases will no longer be able to find any
new bugs. To overcome this pesticide paradox, it is important to review the
test cases regularly; new and different tests need to be written to exercise
different parts of the software or system to potentially find more defects.
6) Testing is context dependent: Testing is basically context dependent.
Different kinds of software are tested differently. For example, safety-critical
software is tested differently from an e-commerce site.
7) Absence of errors fallacy: If the system built is unusable and does not
fulfil the users' needs and expectations, then finding and fixing defects does
not help.
Just because testing didn't find any defects in the software, it doesn't mean that the
software is ready to be shipped. Were the executed tests really designed to catch
the most defects, or were they designed to see if the software matched the users'
requirements? There are many other factors to be considered before making a
decision to ship the software.
Other principles to note are:
o Testing must be done by an independent party.
Testing should not be performed by the person or team that developed the software
since they tend to defend the correctness of the program.
o Assign best personnel to the task.
Because testing requires high creativity and responsibility, only the best personnel
must be assigned to design, implement, and analyze test cases, test data and test
results.

o Test for invalid and unexpected input conditions as well as valid conditions.
The program should generate correct messages when an invalid test is encountered
and should generate correct results when the test is valid.
o Keep software static during test.
The program must not be modified during the implementation of the set of designed
test cases.
o Provide expected test results if possible.
A necessary part of test documentation is the specification of expected results, even
if providing such results is impractical.

Software Development Life Cycle

SDLC provides a series of steps to be followed to design and develop a
software product efficiently. The SDLC framework includes the following steps:

Communication
This is the first step, where the user initiates the request for a desired
software product, contacts the service provider, and tries to negotiate
the terms. The user then submits the request to the service-providing
organization in writing.

Requirement Gathering
From this step onwards the software development team works to carry on the
project. The team holds discussions with various stakeholders from the problem
domain and tries to bring out as much information as possible on their
requirements. The requirements are contemplated and segregated into user
requirements, system requirements and functional requirements. The
requirements are collected using a number of practices, such as:

studying the existing or obsolete system and software,
conducting interviews of users and developers,
referring to the database, or
collecting answers from questionnaires.

Feasibility Study
After requirement gathering, the team comes up with a rough plan of the
software process. At this step the team analyzes whether a software product can
be made to fulfill all requirements of the user, and whether there is any risk of
the software becoming no longer useful. It is found out whether the project is
financially, practically and technologically feasible for the organization to take up.
There are many algorithms available which help the developers conclude the
feasibility of a software project.

System Analysis
At this step the developers decide on a roadmap of their plan and try to bring
up the best software model suitable for the project. System analysis
includes understanding the software product's limitations, learning system-related
problems or changes to be made in existing systems beforehand, and
identifying and addressing the impact of the project on the organization and
personnel. The project team analyzes the scope of the project and plans
the schedule and resources accordingly.

Software Design
The next step is to bring together the whole knowledge of requirements and
analysis and design the software product. The inputs from users and the
information gathered in the requirement gathering phase are the inputs of this
step. The output of this step comes in the form of two designs: logical
design and physical design. Engineers produce meta-data and data
dictionaries, logical diagrams, data-flow diagrams and, in some cases, pseudo
code.

Coding
This step is also known as the programming phase. The implementation of the
software design starts in terms of writing program code in a suitable
programming language and developing error-free executable programs
efficiently.

Testing
An estimate says that 50% of the whole software development effort should
go into testing. Errors may ruin the software, ranging from critical failures
to the product's removal. Software testing is done while coding by the developers,
and thorough testing is conducted by testing experts at various levels of code,
such as module testing, program testing, product testing, in-house testing
and testing the product at the user's end. Early discovery of errors and their
remedy is the key to reliable software.

Integration
Software may need to be integrated with libraries, databases and other
programs. This stage of SDLC involves the integration of the software
with outer-world entities.

Implementation
This means installing the software on user machines. At times, the software
needs post-installation configuration at the user end. The software is tested for
portability and adaptability, and integration-related issues are solved during
implementation.

Operation and Maintenance

This phase confirms the software operation in terms of more efficiency and
fewer errors. If required, the users are trained on, or aided with, the
documentation on how to operate the software and how to keep it
operational. The software is maintained in a timely manner by updating the
code according to the changes taking place in the user-end environment or
technology. This phase may face challenges from hidden bugs and real-world
unidentified problems.

Disposition
As time elapses, the software may decline on the performance front. It may
go completely obsolete or may need intense upgrading. Hence a pressing
need to eliminate a major portion of the system arises. This phase includes
archiving data and required software components, closing down the system,
planning disposition activity and terminating the system at the appropriate
end-of-system time.

Software Development Project: Phases Overview
Posted on May 4, 2004 by Basil Tesler

Most materials discussing the phases of a software development project are
intended for the developers' community. I decided to take a different look at the issue
and help those novices who are going to outsource a software development project
to an outsource service provider (OSP).
Software development isn't all about the code. In fact, coding is only part of the overall
project lifecycle. The project phases that I'm going to review in this article are a
slightly modified version of the classical sequential model that is appropriate for a lot
of projects. However, you shouldn't think that this model is universal throughout the
industry and that it can't be modified; on the contrary, almost every business applying
this model adapts it to the specific needs of real situations.
The typical software project includes the following phases:
1. Requirements Analysis and Definition. System Overview
2. Estimation
3. Functional Specification and UI Prototype
4. Software Architecture and Test Plan
5. Implementation (Coding) and Testing
6. Release. Delivery and Installation
7. Operation and Maintenance

Below you will find a brief description of these phases. Later on, we're going to
publish a separate article on each phase.
Requirements Analysis and Definition. System Overview
This phase begins with analyzing what exactly you want to have done. The system
overview helps you see the big picture of the project and understand which steps
need to be carried out. You should determine and document the vision for the target
product or system; the user profile(s); the hardware and software environment; the
most important components, functions, or features the software must have; the
security requirements, etc. To aid in the needs analysis, it is sometimes necessary to
have prototypes created, or to have them created by professionals, for that matter.
All this can and often should be done in cooperation with your vendor.
The product of this stage is the general system requirements (and sometimes a draft
user manual). This document will be modified as the project is undertaken.
Estimation
This is a phase that is usually obscure to customers. Vendors tend to supply you
with the estimate itself, and that's it. Personally, I believe that customers may and
should take a more active part in the estimation process. For example, you have to be
able to select from different options when discussing the platforms, technologies, and
tools that will be used for the target system. Also, make sure your vendor researches
the existing libraries and tools that can be used in the project. Remember that an
estimate should explicitly list what is included in the price, as well as why and how
much any additional features will cost. Never let the vendor baffle you with technical
jargon and complex details. Finally, if you are in doubt about the provided estimate,
consult an expert; if the vendor appears to be trying to take advantage of you, don't
bargain with such a company; just say thank you and look for another OSP.
Outsourcing is risky by nature, so you can't afford to take chances with a vendor like
that.
The estimate isn't the only document that results from this phase. The project
contract (or project bid) and a rough project schedule usually come into existence at
this point, too.

Functional Specification and UI Prototype

A functional specification determines what exactly the target system must do and the
premises for its implementation. All requirements should be thoroughly defined and
documented. The general system requirements and other documents created in the
first phase serve as input here. Depending on the nature of the system, creating a UI
prototype in this phase may be crucially important for the success of the project.
If your company has appropriate experience, you can have the functional
specification and UI prototype created in-house. However, I recommend ordering the
development of the specification and UI prototype from your OSP. This will help you
check the vendor's expertise; at the same time, the vendor will have an opportunity
to get a better idea of the project and get prepared for its implementation.
Besides the functional specification and UI prototype, this phase may also result in
creating an exact project plan that contains the project schedule, milestones, and
human resources.
Software Architecture and Test Plan
In this phase, it is necessary to determine the system components covering your
requirements and the way these components will work together. The software
architecture design may be logically divided into two parts: general design and
detailed design. The general design consists of the structural design, development
strategy, and system design documentation. Working out the general design,
developers break the target system into high-level components and describe them in
the context of the whole system. When it comes to the detailed design, specification
and documentation on each component are developed. The general system
requirements, functional specification, and UI prototype serve as input for this phase.
Completing this phase, your vendor should produce the description of the software
architecture, the algorithmic structure of the system components and their
specifications, the documentation of all design decisions, and a thorough test plan.
Implementation (Coding) and Testing
The goal of this phase is building the target system based on the specifications
developed in the previous phases. Transferring the specification algorithms into a
programming language, your vendor creates and integrates the system components.
Performing code reviews and test cases worked out by the vendor's QA/QC division,
as well as unit, integration, and system tests, are other key activities of this phase.
Comprehensive testing and correcting any errors identified ensure that the components
function together properly, and that the project implementation meets the system
specification.
When outsourcing a software development project, I advise you to have the project
delivered and paid for in parts. This is one of the best ways to minimize the risk for you
and your vendor. If you aren't satisfied with the way the project is being implemented,
you can take the specification and the previously delivered code to another vendor.
Release. Delivery and Installation
In the release phase, your vendor must transfer the target product or system to you.
The key activities usually include installation and configuration in the operational
environment, acceptance testing, and training of the users if necessary.
A crucial point here is formal acceptance testing, which comprises a series of
end-to-end tests. It is performed to confirm that the product or system fulfills the
acceptance requirements determined by the functional specification.
After this phase is complete, the product or system is considered formally delivered
and accepted. If iterative development is used, the next iteration should be
commenced.
Operation and Maintenance
The Operation and Maintenance phase begins once you have formally accepted the
product or system delivered by the vendor. The task of this phase is the proper
functioning of the software. To improve a product or system, it should be
continuously maintained. Software maintenance involves detecting and correcting
errors, as well as extending and improving the software itself.

Software Testing - QA, QC & Testing


Most people get confused when it comes to pinning down the differences among
Quality Assurance, Quality Control, and Testing. Although they are
interrelated and, to some extent, can be considered the same activities,
there exist distinguishing points that set them apart. The following table
lists the points that differentiate QA, QC, and Testing.

Quality Assurance vs. Quality Control vs. Testing:

1. Scope
   - QA: includes activities that ensure the implementation of processes, procedures and standards in the context of verification of developed software and intended requirements.
   - QC: includes activities that ensure the verification of a developed software with respect to documented (or not, in some cases) requirements.
   - Testing: includes activities that ensure the identification of bugs/errors/defects in a software.

2. Focus
   - QA: focuses on processes and procedures rather than conducting actual testing on the system.
   - QC: focuses on actual testing by executing the software with an aim to identify bugs/defects through implementation of procedures and processes.
   - Testing: focuses on actual testing.

3. Orientation
   - QA: process-oriented activities.
   - QC: product-oriented activities.
   - Testing: product-oriented activities.

4. Nature
   - QA: preventive activities.
   - QC: it is a corrective process.
   - Testing: it is a preventive process.

5. Relationship
   - QA: it is a subset of the Software Test Life Cycle (STLC).
   - QC: QC can be considered a subset of Quality Assurance.
   - Testing: Testing is a subset of Quality Control.

Audit and Inspection


Audit: It is a systematic process to determine how the actual testing
process is conducted within an organization or a team. Generally, it is an
independent examination of the processes involved during the testing of a
software. As per IEEE, it is a review of documented processes that
organizations implement and follow. Types of audit include Legal
Compliance Audit, Internal Audit, and System Audit.

Inspection: It is a formal technique that involves formal or informal
technical reviews of any artifact by identifying any error or gap. As per
IEEE94, inspection is a formal evaluation technique in which software
requirements, designs, or codes are examined in detail by a person or a
group other than the author to detect faults, violations of development
standards, and other problems.
Formal inspection meetings may include the following processes: Planning,
Overview Preparation, Inspection Meeting, Rework, and Follow-up.

Testing and Debugging


Testing: It involves identifying bugs/errors/defects in a software without
correcting them. Normally professionals with a quality assurance background
are involved in bug identification. Testing is performed in the testing
phase.

Debugging: It involves identifying, isolating, and fixing the
problems/bugs. Developers who code the software conduct debugging upon
encountering an error in the code. Debugging is a part of White Box Testing
or Unit Testing. Debugging can be performed in the development phase
while conducting Unit Testing, or in other phases while fixing the reported bugs.

Verification & Validation


These two terms are very confusing for most people, who use them interchangeably. The
following points highlight the differences between verification and validation.

1. Verification addresses the concern: "Are you building it right?"
   Validation addresses the concern: "Are you building the right thing?"

2. Verification ensures that the software system meets all the specified functionality.
   Validation ensures that the functionalities meet the intended behavior.

3. Verification takes place first and includes checking for documentation, code, etc.
   Validation occurs after verification and mainly involves checking of the overall product.

4. Verification is done by developers.
   Validation is done by testers.

5. Verification has static activities, as it includes collecting reviews, walkthroughs, and
   inspections to verify a software.
   Validation has dynamic activities, as it includes executing the software against the
   requirements.

6. Verification is an objective process, and no subjective decision should be needed to
   verify a software.
   Validation is a subjective process and involves subjective decisions on how well a
   software works.

Unit 2

What is Performance Testing?


Performance testing is a non-functional testing technique performed to
determine the system parameters in terms of responsiveness and stability
under various workloads. Performance testing measures the quality
attributes of the system, such as scalability, reliability and resource usage.

Performance Testing Techniques:


Load testing - It is the simplest form of testing, conducted to
understand the behaviour of the system under a specific load. Load
testing results in measuring important business-critical transactions,
and the load on the database, application server, etc. is also monitored.

Stress testing - It is performed to find the upper limit capacity of
the system and also to determine how the system performs if the
current load goes well above the expected maximum.

Soak testing - Soak testing, also known as endurance testing, is
performed to determine the system parameters under continuous
expected load. During soak tests, parameters such as memory
utilization are monitored to detect memory leaks or other
performance issues. The main aim is to discover the system's
performance under sustained use.

Spike testing - Spike testing is performed by increasing the
number of users suddenly by a very large amount and measuring
the performance of the system. The main aim is to determine
whether the system will be able to sustain the workload.

Performance Testing Process: (process diagram omitted; a generic step-by-step
process is given later in this unit)

Attributes of Performance Testing:

Speed
Scalability
Stability
Reliability

Performance Testing Tools

JMeter - http://jmeter.apache.org/
OpenSTA - http://opensta.org/
LoadRunner - http://www.hp.com/
WebLOAD

7 Types of Web Performance Tests


1. Performance Test: A performance test is any test that measures
stability, performance, scalability and/or throughput of your web
application(s).
2. Capacity Test: A capacity test is a test to determine how many users
your application can handle before either performance or stability becomes
unacceptable. By knowing the number of users your application can
handle successfully, you will have better visibility into events that might
push your site beyond its limitations. This is a way to avoid potential
problems in the future.
3. Load Test: A load test consists of applying load to an application and
measuring the results. The load may or may not be at the high end of
application capacity. These tests can help determine normal performance
metrics. By using iterative testing, you can determine whether new code
has helped or hurt performance.
4. Stress Test: A stress test is a test that pushes an application beyond
normal load conditions. When you push your application to the extreme,
you will see which components fail first. Making these components more
robust, or efficient, will help determine new thresholds.
5. Soak Test: A soak test is a long-running test that is used to determine
application performance and/or stability over time. An application may work

well for an hour or two, and then start to experience issues. These tests are
especially useful when trying to track down memory leaks or corruption.
6. Component Test: Testing a discrete component of your application
requires a component test. Examples might include a search function, a
file upload, a chat feature, an email function, or a 3rd-party component like
a shopping cart.
7. Smoke Test: A smoke test is a test run under very low load that merely
shows that the application works as expected. The term originated in the
electronics industry and refers to the application of power to an electronic
component: if smoke is generated, the test fails and no further testing is
necessary until the simplest test passes successfully. For example, there
may be correlation issues with your scenario or script; if you can run a
single-user test successfully, the scenario is sound. It is a best practice to
initiate one of these verification runs before running larger tests to ensure
that the test is valid (see the sketch after this list).
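As an illustration of such a verification run, below is a minimal single-user smoke-test sketch in Python using the third-party requests library; the URL and the 2-second threshold are hypothetical values chosen for the example, not taken from the source.

import requests

BASE_URL = "http://example.com"  # hypothetical application under test
MAX_SECONDS = 2.0                # assumed acceptable response time

def smoke_test():
    # Single-user check: the application responds correctly and quickly.
    resp = requests.get(BASE_URL, timeout=10)
    assert resp.status_code == 200, f"unexpected status {resp.status_code}"
    assert resp.elapsed.total_seconds() < MAX_SECONDS, "response too slow"
    print("smoke test passed; the scenario is sound, safe to scale up")

if __name__ == "__main__":
    smoke_test()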
What is performance testing?

Software performance testing is a means of quality assurance (QA). It
involves testing software applications to ensure they will perform
well under their expected workload.
The features and functionality supported by a software system are not the
only concern. A software application's performance, like its response
time, also matters. The goal of performance testing is not to find bugs
but to eliminate performance bottlenecks.

The focus of performance testing is checking a software program's:

Speed - Determines whether the application responds quickly.
Scalability - Determines the maximum user load the software application can handle.
Stability - Determines whether the application is stable under varying loads.

Why do performance testing?

Performance testing is done to provide stakeholders with
information about their application regarding speed, stability and
scalability. More importantly, performance testing uncovers what
needs to be improved before the product goes to market. Without
performance testing, software is likely to suffer from issues such as
running slowly while several users use it simultaneously,
inconsistencies across different operating systems, and poor
usability. Performance testing will determine whether or not the
software meets speed, scalability and stability requirements under
expected workloads. Applications sent to market with poor
performance metrics due to non-existent or poor performance
testing are likely to gain a bad reputation and fail to meet expected
sales goals. Also, mission-critical applications like space launch
programs or life-saving medical equipment should be performance
tested to ensure that they run for a long period of time without
deviations.

Types of performance testing

Load testing - checks the application's ability to perform under
anticipated user loads. The objective is to identify performance
bottlenecks before the software application goes live.
Stress testing - involves testing an application under extreme
workloads to see how it handles high traffic or data processing.
The objective is to identify the breaking point of an application.
Endurance testing - is done to make sure the software can
handle the expected load over a long period of time.
Spike testing - tests the software's reaction to sudden large
spikes in the load generated by users.
Volume testing - under volume testing, a large amount of data is
populated in the database and the overall software system's behavior
is monitored. The objective is to check the software application's
performance under varying database volumes.
Scalability testing - the objective of scalability testing is to
determine the software application's effectiveness in "scaling up"
to support an increase in user load. It helps plan capacity addition
to your software system.

Common Performance Problems

Most performance problems revolve around speed, response time,
load time and poor scalability. Speed is often one of the most
important attributes of an application. A slow-running application will
lose potential users. Performance testing is done to make sure an
app runs fast enough to keep a user's attention and interest. Take a
look at the following list of common performance problems and
notice how speed is a common factor in many of them:

Long load time - Load time is normally the initial time it takes an
application to start. This should generally be kept to a minimum.
While some applications are impossible to make load in under a
minute, load time should be kept under a few seconds if possible.

Poor response time - Response time is the time it takes from
when a user inputs data into the application until the application
outputs a response to that input. Generally this should be very
quick. Again, if a user has to wait too long, they lose interest.

Poor scalability - A software product suffers from poor
scalability when it cannot handle the expected number of users or
when it does not accommodate a wide enough range of users.
Load testing should be done to be certain the application can
handle the anticipated number of users.

Bottlenecking - Bottlenecks are obstructions in a system which
degrade overall system performance. Bottlenecking is when either
coding errors or hardware issues cause a decrease of throughput
under certain loads. Bottlenecking is often caused by one faulty
section of code. The key to fixing a bottlenecking issue is to find
the section of code that is causing the slowdown and try to fix it
there. Bottlenecking is generally fixed by either fixing poorly
running processes or adding additional hardware. Some common
performance bottlenecks are:

- CPU utilization
- Memory utilization
- Network utilization
- Operating system limitations
- Disk usage

Performance Testing Process

The methodology adopted for performance testing can vary widely,
but the objective of performance tests remains the same. It can help
demonstrate that your software system meets certain pre-defined
performance criteria. Or it can help compare the performance of two
software systems. It can also help identify parts of your software
system which degrade its performance.
Below is a generic performance testing process:

1. Identify your testing environment - Know your physical test
environment, production environment and what testing tools are
available. Understand details of the hardware, software and
network configurations used during testing before you begin the
testing process. It will help testers create more efficient tests. It
will also help identify possible challenges that testers may
encounter during the performance testing procedures.

2. Identify the performance acceptance criteria - This includes
goals and constraints for throughput, response times and
resource allocation. It is also necessary to identify project success
criteria outside of these goals and constraints. Testers should be
empowered to set performance criteria and goals, because often
the project specifications will not include a wide enough variety of
performance benchmarks. Sometimes there may be none at all.
When possible, finding a similar application to compare to is a
good way to set performance goals.

3. Plan & design performance tests - Determine how usage is
likely to vary amongst end users and identify key scenarios to test
for all possible use cases. It is necessary to simulate a variety of
end users, plan performance test data and outline what metrics
will be gathered.

4. Configure the test environment - Prepare the testing
environment before execution. Also, arrange tools and other
resources.

5. Implement the test design - Create the performance tests
according to your test design.

6. Run the tests - Execute and monitor the tests.

7. Analyze, tune and retest - Consolidate, analyze and share test
results. Then fine-tune and test again to see if there is an
improvement or decrease in performance. Since improvements
generally grow smaller with each retest, stop when bottlenecking
is caused by the CPU. Then you may have to consider the option of
increasing CPU power. (A minimal sketch illustrating steps 5 to 7
follows this list.)
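The following is a minimal sketch, in Python, of what implementing and running a load test (steps 5 to 7 above) might look like. The target URL, user count and request count are hypothetical, and real tools such as JMeter or LoadRunner do far more than this.

import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

BASE_URL = "http://example.com"  # hypothetical application under test
CONCURRENT_USERS = 10            # assumed load profile
REQUESTS_PER_USER = 20

def user_session(_):
    # Simulate one user issuing a series of requests, recording latencies.
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        requests.get(BASE_URL, timeout=10)
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
        sessions = list(pool.map(user_session, range(CONCURRENT_USERS)))
    all_latencies = sorted(t for s in sessions for t in s)
    # Step 7: consolidate and analyze the results.
    print(f"requests: {len(all_latencies)}")
    print(f"mean response time: {statistics.mean(all_latencies):.3f} s")
    print(f"95th percentile:    {all_latencies[int(0.95 * len(all_latencies))]:.3f} s")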

Performance Parameters Monitored

The basic parameters monitored during performance testing include:

Processor usage - amount of time the processor spends executing non-idle threads.
Memory use - amount of physical memory available to processes on a computer.
Disk time - amount of time the disk is busy executing a read or write request.
Bandwidth - shows the bits per second used by a network interface.
Private bytes - number of bytes a process has allocated that can't be shared
amongst other processes. These are used to measure memory leaks and usage.
Committed memory - amount of virtual memory used.
Memory pages/second - number of pages written to or read from the disk in
order to resolve hard page faults. Hard page faults occur when code not from
the current working set is called up from elsewhere and retrieved from disk.
Page faults/second - the overall rate at which page faults are processed by
the processor. This again occurs when a process requires code from outside
its working set.
CPU interrupts per second - the average number of hardware interrupts a
processor is receiving and processing each second.
Disk queue length - the average number of read and write requests queued
for the selected disk during a sample interval.
Network output queue length - length of the output packet queue, in packets.
Anything more than two means a delay, and the bottlenecking needs to be stopped.
Network bytes total per second - the rate at which bytes are sent and received
on the interface, including framing characters.
Response time - time from when a user enters a request until the first
character of the response is received.
Throughput - the rate at which a computer or network receives requests per second.
Amount of connection pooling - the number of user requests that are met by
pooled connections. The more requests met by connections in the pool, the
better the performance will be.
Maximum active sessions - the maximum number of sessions that can be
active at once.
Hit ratios - this has to do with the number of SQL statements that are handled
by cached data instead of expensive I/O operations. This is a good place to
start for solving bottlenecking issues.
Hits per second - the number of hits on a web server during each second of a
load test.
Rollback segment - the amount of data that can roll back at any point in time.
Database locks - locking of tables and databases needs to be monitored and
carefully tuned.
Top waits - monitored to determine what wait times can be cut down when
dealing with how fast data is retrieved from memory.
Thread counts - an application's health can be measured by the number of
threads that are running and currently active.
Garbage collection - has to do with returning unused memory back to the
system. Garbage collection needs to be monitored for efficiency.
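A few of these parameters can be sampled programmatically. Here is a minimal sketch using the third-party Python psutil library (chosen here as an assumption; it is not a tool named in the source) to watch processor usage, memory use and disk activity during a test run.

import psutil  # third-party: pip install psutil

def sample_system(samples=5):
    # Print a few of the basic parameters discussed above.
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=1.0)  # processor usage (%)
        mem = psutil.virtual_memory()           # memory use
        disk = psutil.disk_io_counters()        # cumulative disk reads/writes
        print(f"cpu={cpu:5.1f}%  mem={mem.percent:5.1f}%  "
              f"reads={disk.read_count}  writes={disk.write_count}")

if __name__ == "__main__":
    sample_system()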
Performance Test Tools

There is a wide variety of performance testing tools available in the
market. The tool you choose for testing will depend on many factors
such as the types of protocol supported, license cost, hardware
requirements, platform support, etc. Below is a list of popularly used
testing tools.

HP LoadRunner - the most popular performance testing tool
on the market today. This tool is capable of simulating hundreds
of thousands of users, putting applications under real-life loads to
determine their behavior under expected
loads. LoadRunner features a virtual user generator which
simulates the actions of live human users.

HTTP Load - a throughput testing tool aimed at testing web
servers by running several http or https fetches simultaneously to
determine how a server handles the workload.

Proxy Sniffer - one of the leading tools used for load testing of
web and application servers. It is a cloud-based tool that's capable
of simulating thousands of users.

Summary

Performance testing is necessary before marketing any software
product. It ensures customer satisfaction and protects an investor's
investment against product failure. The costs of performance testing
are usually more than made up for by improved customer satisfaction,
loyalty and retention.

What is White Box Testing?

White box testing is a testing technique that examines the program
structure and derives test data from the program logic/code. Other
names for white box testing are glass box testing, clear box testing,
open box testing, logic-driven testing, path-driven testing and
structural testing.

White Box Testing Techniques:


Statement Coverage - This technique is aimed at exercising all
programming statements with minimal tests.
Branch Coverage - This technique is running a series of tests to
ensure that all branches are tested at least once.
Path Coverage - This technique corresponds to testing all possible
paths which means that each statement and branch is covered.

Calculating Structural Testing Effectiveness:


Statement Testing = (Number of Statements Exercised / Total Number of Statements) x 100%

Branch Testing = (Number of Decision Outcomes Tested / Total Number of Decision Outcomes) x 100%

Path Coverage = (Number of Paths Exercised / Total Number of Paths in the Program) x 100%
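As a worked illustration of these formulas, here is a small Python sketch; the counts are made-up example numbers, not measurements from the source.

def coverage_percent(exercised, total):
    # Generic coverage metric: items exercised over total items, as a percentage.
    return 100.0 * exercised / total

# Hypothetical measurements from one test run:
print(coverage_percent(45, 50))  # statement coverage: 90.0 %
print(coverage_percent(14, 20))  # branch (decision outcome) coverage: 70.0 %
print(coverage_percent(6, 12))   # path coverage: 50.0 %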

Advantages of White Box Testing:


Forces test developer to reason carefully about implementation.
Reveals errors in "hidden" code.
Spots the Dead Code or other issues with respect to best
programming practices.

Disadvantages of White Box Testing:


Expensive, as one has to spend both time and money to perform
white box testing.
There is every possibility that a few lines of code are missed accidentally.
In-depth knowledge of the programming language is necessary
to perform white box testing.

How to Perform White Box Testing Explained with a Simple Example

Understanding White Box Testing with a Simple Example

In my career so far, I have seen testers to be the most
enthusiastic community in the software industry.
The reason is that testers always have something in
their scope to learn. Be it domain, process or technology, a
tester can have a holistic development if they wish to.
But, as they say, there is always a dark side. Testers also
avoid one type of testing which they feel is very
complicated and a developer's piece of cake. Yes, you have
guessed it right: it's WHITE BOX TESTING.

White box and black box testing:


If we go by the definition, white box testing (also known as
clear box, glass box or structural testing) is a testing technique
which evaluates the code and internal structure of the
program.
It's the counterpart of black box testing.
In simple words: in black box testing, we test the
software from a user's point of view, but in white box testing, we
see and test the actual code. In black box testing, we do testing
without seeing the internal system code, but in white box testing
we do see and test the internal code.

The white box testing technique is used by both developers as
well as testers. It helps them understand which lines of code
are actually executed and which are not. This may indicate
that there is either missing logic or a typo, which
eventually can lead to some negative consequences.
Steps to perform white box testing:
Step #1 - Understand the functionality of the application
through its source code. Having said that, it simply means
that the tester must be well versed with the programming
language and the other tools and techniques used to develop
the software.
Step #2 - Create the tests and execute them.
When we discuss testing, coverage is the most
important factor. Here I will explain how to have maximum
coverage in the context of white box testing.

Types of white box testing:

There are different types, and different methods for each
white box testing type.

Today, we are going to focus mainly on the execution
testing types of the unit testing white box technique.
The three main white box testing techniques are:
1. Statement Coverage
2. Branch Coverage
3. Path Coverage
Let's understand these techniques one by one with a simple
example.
#1 Statement coverage
In a programming language, a statement is nothing but the
line of code or instruction for the computer to understand
and act accordingly. A statement becomes an executable
statement when it gets compiled and converted into
object code, and it performs the action when the program is
in running mode.
Hence Statement Coverage, as the name suggests, is the
method of validating that each and every line of code is
executed at least once.

#2 Branch Coverage
A branch in a programming language is like an IF
statement. An IF statement has two branches: true and
false.
So in branch coverage (also called decision coverage), we
validate that each branch is executed at least once.
In the case of an IF statement, there will be two test
conditions:
One to validate the true branch, and
the other to validate the false branch.
Hence, in theory, branch coverage is a testing method
which, when executed, ensures that each branch from each
decision point is executed.
#3 Path Coverage
Path coverage tests all the paths of the program. It is a
comprehensive technique which ensures that all the paths
of the program are traversed at least once. Path coverage
is even more powerful than branch coverage. This
technique is useful for testing complex programs.
Let's take a simple example to understand all these white
box testing techniques.

White box testing example


Consider the below simple pseudo code:

INPUT A & B
C = A + B
IF C > 100
PRINT "IT'S DONE"

For Statement Coverage, we would need only one test
case to check all the lines of code.
That means:
If I consider TestCase_01 to be (A=40 and B=70), then all
the lines of code will be executed.
Now the question arises:
Is that sufficient?
What if I consider my test case as A=33 and B=45?
Because statement coverage will only cover the true side,
for the pseudo code above, one test case would NOT be
sufficient to test it. As a tester, we have to consider the
negative cases as well.
Hence, for maximum coverage, we need to
consider Branch Coverage, which will evaluate the
FALSE conditions.
In the real world, you may add appropriate statements when
the condition fails.
So now the pseudo code becomes:

INPUT A & B
C = A + B
IF C > 100
PRINT "IT'S DONE"
ELSE
PRINT "IT'S PENDING"

Since statement coverage is not sufficient to test the entire
pseudo code, we would require branch coverage to ensure
maximum coverage.
So for branch coverage, we would require two test cases to
complete the testing of this pseudo code.
TestCase_01: A=40, B=70 (C=110, so the TRUE branch is executed)
TestCase_02: A=33, B=45 (C=78, so the FALSE branch is executed)
With this, we can see that each and every line of code is
executed at least once.
Here are the conclusions so far:
- Branch coverage ensures more coverage than statement coverage.
- Branch coverage is more powerful than statement coverage.
- 100% branch coverage itself means 100% statement coverage.
- 100% statement coverage does not guarantee 100% branch coverage.
Now let's move on to Path Coverage:
As said earlier, path coverage is used to test complex
code snippets, which basically involve loop statements or a
combination of loops and decision statements.
Consider this pseudo code:

INPUT A & B
C = A + B
IF C > 100
PRINT "IT'S DONE"
END IF
IF A > 50
PRINT "IT'S PENDING"
END IF

Now, to ensure maximum coverage, we would require 4 test
cases.
How?
Simply, there are 2 decision statements, each with a true
and a false branch to test. So the two decisions can combine
in 2 x 2 = 4 ways, which makes a total of 4 possible paths
and hence 4 test cases.
To simplify this, consider the flowchart of the pseudo code
we have (the original article illustrates the four paths with
colored lines on a flowchart):

So, in order to have full path coverage, we would need the
following test cases, each driving the two decisions down a
different combination of true/false:

TestCase_01: A=55, B=60 (C=115 > 100 true, A > 50 true)
TestCase_02: A=40, B=65 (C=105 > 100 true, A > 50 false)
TestCase_03: A=55, B=40 (C=95 > 100 false, A > 50 true)
TestCase_04: A=30, B=30 (C=60 > 100 false, A > 50 false)
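To make the example concrete, here is a sketch of the same pseudo code in Python, with the four path-coverage test cases run against it:

def process(a, b):
    # Python version of the pseudo code: two independent IF decisions.
    messages = []
    c = a + b
    if c > 100:
        messages.append("IT'S DONE")
    if a > 50:
        messages.append("IT'S PENDING")
    return messages

# The four test cases cover all 2 x 2 = 4 paths through the two decisions.
for a, b in [(55, 60), (40, 65), (55, 40), (30, 30)]:
    print(f"A={a}, B={b} -> {process(a, b)}")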


Conclusion
Note that statement, branch or path coverage does not
identify any bug or defect that needs to be fixed. It only
identifies those lines of code which are either never
executed or remain untouched. Based on this, further
testing can be focused.
Relying only on black box testing is not sufficient for
maximum test coverage. We need a combination of
both black box and white box testing techniques to cover
maximum defects.
If done properly, white box testing will certainly contribute
to the software quality. It's also good for testers to
participate in this testing, as it can provide the most
unbiased opinion about the code.

What is Static Testing?

Static testing is a software testing technique in which the software is tested
without executing the code. It has two parts, as listed below:

Review - typically used to find and eliminate errors or ambiguities
in documents such as requirements, design, test cases, etc.
Static analysis - the code written by developers is analysed
(usually by tools) for structural defects that may lead to failures.

Types of Reviews:
Reviews are typically classified, in increasing order of formality, as informal
reviews, walkthroughs, technical reviews, and inspections.

Static Analysis - By Tools:


The following are the types of defects found by the tools during static analysis:

A variable with an undefined value
Inconsistent interfaces between modules and components
Variables that are declared but never used
Unreachable code (or dead code)
Programming standards violations
Security vulnerabilities
Syntax violations
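To illustrate, here is a short Python fragment containing two of the defect types listed above; a static analysis tool such as pylint or pyflakes (named here as examples, not tools from the source) would flag both without ever executing the code.

def compute_total(prices):
    discount = 0.1   # variable declared but never used: flagged by static analysis
    total = sum(prices)
    return total
    total *= 2       # unreachable (dead) code after the return: also flagged

print(compute_total([3, 4, 5]))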

What is Acceptance Testing?


Acceptance testing is a testing technique performed to determine whether or
not the software system has met the requirement specifications. The main
purpose of this test is to evaluate the system's compliance with the
business requirements and verify whether it has met the required criteria for
delivery to end users.
There are various forms of acceptance testing:

User Acceptance Testing
Business Acceptance Testing
Alpha Testing
Beta Testing

Acceptance Testing - In SDLC


In the software development life cycle, acceptance testing is the final level of
testing, performed after system testing and before the software is delivered
to end users.

The acceptance test cases are executed against the test data or using an
acceptance test script and then the results are compared with the expected
ones.

Acceptance Criteria
Acceptance criteria are defined on the basis of the following attributes:

Functional Correctness and Completeness
Data Integrity
Data Conversion
Usability
Performance
Timeliness
Confidentiality and Availability
Installability and Upgradability
Scalability
Documentation

Acceptance Test Plan - Attributes


The acceptance test activities are carried out in phases. Firstly, the basic
tests are executed, and if the test results are satisfactory then the
execution of more complex scenarios is carried out.
The acceptance test plan has the following attributes:

Introduction
Acceptance Test Category
Operation Environment
Test Case ID
Test Title
Test Objective
Test Procedure
Test Schedule
Resources

The acceptance test activities are designed to reach one of the following
conclusions:

1. Accept the system as delivered
2. Accept the system after the requested modifications have been made
3. Do not accept the system

Acceptance Test Report - Attributes


The acceptance test report has the following attributes:

Report Identifier
Summary of Results
Variations
Recommendations
Summary of To-Do List
Approval Decision

What is Structural Testing? Explain any Two Techniques used in it
BY DINESH THAKUR

Structural testing, on the other hand, is concerned with testing the implementation of the
program. The intent of structural testing is not to exercise all the different input or
output conditions but to exercise the different programming structures and data
structures used in the program.

To test the structure of a program, structural testing aims to achieve test cases that will
force the desired coverage of different structures. Various criteria have been proposed
for this. Unlike the criteria for functional testing, which are frequently imprecise, the
criteria for structural testing are generally quite precise, as they are based on program
structure, which is formal and precise.
Control Flow Based Criteria
The most common structure-based criteria are based on the control flow of the program.
In these criteria the control flow graph of a program is considered, and coverage of
various aspects of the graph is specified as criteria. Hence, before we consider the
criteria, let us precisely define a control flow graph for a program.

Let the control flow graph (or simply flow graph) of a program P be G. A node in this
graph represents a block of statements that is always executed together, i.e., whenever
the first statement is executed, all other statements are also executed. An edge (i, j)
(from node i to node j) represents a possible transfer of control after executing the last
statement of the block represented by node i to the first statement of the block
represented by node j.

A node corresponding to a block whose first statement is the start statement of P is
called the start node of G, and a node corresponding to a block whose last statement is
an exit statement is called an exit node. A path is a finite sequence of nodes
(n1, n2, ..., nk), k > 1, such that there is an edge from node ni to node n(i+1) for all i.
A complete path is a path whose first node is the start node and whose last node is an
exit node.
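To make the definition concrete, here is a small Python sketch that represents a flow graph as an adjacency mapping and enumerates its complete paths; the nodes and edges are a made-up example, not taken from the source.

# Hypothetical flow graph: node -> list of successor nodes.
FLOW_GRAPH = {
    "start": ["n1"],
    "n1": ["n2", "n3"],  # a decision: two outgoing edges
    "n2": ["exit"],
    "n3": ["exit"],
    "exit": [],          # no successors: an exit node
}

def complete_paths(graph, node="start", path=()):
    # Yield every complete path: from the start node to an exit node.
    path = path + (node,)
    if not graph[node]:
        yield path
    for successor in graph[node]:
        yield from complete_paths(graph, successor, path)

for p in complete_paths(FLOW_GRAPH):
    print(" -> ".join(p))
# start -> n1 -> n2 -> exit
# start -> n1 -> n3 -> exit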

The simplest coverage criterion is statement coverage, which requires that each
statement of the program be executed at least once during testing. In other words, it
requires that the paths executed during testing include all the nodes in the graph. This is
also called the all-nodes criterion. This coverage criterion is not very strong and can
leave errors undetected.

For example, if there is an if statement in the program without an else clause, the
statement coverage criterion for this statement will be satisfied by a test case that
evaluates the condition to true. No test case is needed that ensures that the condition in
the if statement evaluates to false. This is a serious shortcoming because decisions in
programs are potential sources of errors. As an example, consider the following function
to compute the absolute value of a number.

int abs(int x)
{
    /* Deliberate bug: the condition should be (x < 0). A single test
       case such as x = 0 executes every statement and still returns the
       correct result, so statement coverage alone misses the defect. */
    if (x >= 0)
        x = 0 - x;
    return (x);
}

A little more general coverage criterion is branch coverage, which requires that each
edge in the control flow graph be traversed at least once during testing. In other words,
branch coverage requires that each decision in the program be evaluated to true and
false values at least once during testing. Testing based on branch coverage is often
called branch testing.

The trouble with branch coverage comes if a decision has many conditions in it. For
example, consider a function that checks the validity of a data item, where the data item
is valid if it lies between 0 and 100, but the code checks for x < 200 instead of 100
(perhaps a typing error made by the programmer). Branch coverage can then be
achieved, for instance with one clearly valid and one clearly invalid value, without the
faulty boundary ever being detected (a hypothetical sketch follows).
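The faulty function itself is not reproduced in the source, so here is a hypothetical Python reconstruction of the kind of fault being described:

def is_valid(x):
    # Intended rule: valid if 0 <= x <= 100.
    # Deliberate fault: the upper bound is mistyped as 200.
    return 0 <= x < 200

# Branch coverage is satisfied by one true and one false outcome...
print(is_valid(50))   # True: exercises the true outcome
print(is_valid(-5))   # False: exercises the false outcome
# ...yet the mistyped boundary is never detected: 150 should be invalid.
print(is_valid(150))  # True, exposing the fault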
Data Flow Based Testing
The basic idea behind data flow based testing is to make sure that during testing the
definitions of variables and their subsequent use are tested. Just like the all-nodes and
all-edges criteria try to generate confidence in testing by making sure that at least all
statements and all branches have been tested, data flow testing tries to ensure some
coverage of the definitions and uses of variables.

For data flow based criteria, a definition-use graph (def/use graph, for short) for the
program is first constructed from the control flow graph. A node in the graph
representing a block of code has variable occurrences in it. A variable occurrence can
be one of the following three types (RW 85):

Def represents the definition of a variable. The variable on the left-hand side of an
assignment is the one getting defined.

C-use represents computational use of a variable. Any statement (e.g., a read, a write,
or an assignment) that uses the value of variables for computation purposes is said to
be making c-use of the variables. In an assignment statement, all variables on the
right-hand side have a c-use occurrence. In read and write statements, all variable
occurrences are of this type.

P-use represents predicate use. These are all the occurrences of variables in a
predicate (i.e., variables whose values are used for computing the value of the
predicate), which is used for transfer of control.

In control flow based and data flow based testing, the focus was on which paths to
execute during testing. Mutation testing does not take a path-based approach. Instead,
it takes the program and creates many mutants of it by making simple changes to the
program. The goal of testing is to make sure that during the course of testing each
mutant produces an output different from the output of the original program.

In other words, the mutation testing criterion does not say that the set of test cases
must be such that certain paths are executed; instead, it requires the set of test cases
to be such that they can distinguish between the original program and its mutants
(a tiny sketch follows).
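As a tiny illustration of the mutation testing idea, here is a Python sketch; the function and its mutant (a '+' changed to '-') are made-up examples, not taken from the source.

def original(a, b):
    return a + b

def mutant(a, b):
    return a - b  # simple mutation: '+' changed to '-'

# A good test case "kills" the mutant by distinguishing the two programs:
print(original(2, 3) != mutant(2, 3))  # True: this test kills the mutant

# A weak test case fails to distinguish them, so the mutant survives:
print(original(7, 0) != mutant(7, 0))  # False: outputs match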


What is Black box Testing?


Black-box testing is a method of software testing that examines the
functionality of an application based on the specifications. It is also known
as specifications-based testing. An independent testing team usually performs
this type of testing during the software testing life cycle.
This method of testing can be applied to each and every level of software
testing, such as unit, integration, system and acceptance testing.

Behavioural Testing Techniques:

There are different techniques involved in black box testing:

Equivalence Class
Boundary Value Analysis
Domain Tests
Orthogonal Arrays
Decision Tables
State Models
Exploratory Testing
All-Pairs Testing

BLACK BOX TESTING Fundamentals


DEFINITION
Black Box Testing, also known as Behavioral Testing, is a software
testing method in which the internal structure/design/implementation of
the item being tested is not known to the tester. These tests can be
functional or non-functional, though usually functional.

This method is named so because the software program, in the eyes of
the tester, is like a black box, inside which one cannot see. This method
attempts to find errors in the following categories:

Incorrect or missing functions
Interface errors
Errors in data structures or external database access
Behavior or performance errors
Initialization and termination errors
Definition by ISTQB:
- Black box testing: testing, either functional or non-functional, without
reference to the internal structure of the component or system.
- Black box test design technique: procedure to derive and/or select test
cases based on an analysis of the specification, either functional or
non-functional, of a component or system without reference to its
internal structure.
EXAMPLE
A tester, without knowledge of the internal structures of a website, tests
the web pages by using a browser, providing inputs (clicks, keystrokes)
and verifying the outputs against the expected outcome.
LEVELS APPLICABLE TO
The Black Box Testing method is applicable to the following levels of
software testing:

Integration Testing
System Testing
Acceptance Testing

The higher the level, and hence the bigger and more complex the box,
the more the black box testing method comes into use.

BLACK BOX TESTING TECHNIQUES


The following are some techniques that can be used for designing black box
tests (a small worked sketch follows this list):

Equivalence Partitioning: a software test design technique that
involves dividing input values into valid and invalid partitions and
selecting representative values from each partition as test data.
Boundary Value Analysis: a software test design technique that
involves determining the boundaries for input values and selecting
values that are at the boundaries and just inside/outside of the
boundaries as test data.
Cause-Effect Graphing: a software test design technique that
involves identifying the causes (input conditions) and effects (output
conditions), producing a Cause-Effect Graph, and generating test
cases accordingly.
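As an illustration of boundary value analysis, here is a minimal Python sketch; the input rule (an age field accepting 18 through 60) and the function are hypothetical, chosen only for the example.

def accepts_age(age):
    # Hypothetical system under test: valid ages are 18 through 60.
    return 18 <= age <= 60

# Boundary value analysis: values at, just inside and just outside each boundary.
for age in [17, 18, 19, 59, 60, 61]:
    print(f"age={age}: accepted={accepts_age(age)}")
# Expected: 17 and 61 rejected; 18, 19, 59 and 60 accepted.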
BLACK BOX TESTING ADVANTAGES
- Tests are done from a user's point of view and will help in exposing
discrepancies in the specifications.
- The tester need not know programming languages or how the software
has been implemented.
- Tests can be conducted by a body independent from the developers,
allowing for an objective perspective and the avoidance of developer bias.
- Test cases can be designed as soon as the specifications are complete.
BLACK BOX TESTING DISADVANTAGES


Only a small number of possible inputs can be tested and many
program paths will be left untested.
Without clear specifications, which is the situation in many projects,
test cases will be difficult to design.
Tests can be redundant if the software designer/ developer has
already run a test case.
Ever wondered why a soothsayer closes their eyes when foretelling
events? It is almost the same in Black Box Testing: the tester cannot
see inside the software.
Black Box Testing is contrasted with White Box Testing.

What is Integration Testing and How Is It Performed?

Background
We have studied various software development life cycle models. All the
SDLC models have Integration testing as one of the layers. In my opinion,
Integration testing is actually a level of testing rather than a type
of testing.
Many a time we feel that Integration testing involves writing code
snippets to test the integrated modules, so it is basically a white box
testing technique. This is not fully wrong, but I feel the concept of
Integration testing can be applied in the Black box technique too.
Testing a large application using the black box technique involves the
combination of many modules which are tightly coupled with each other.
We can apply the Integration testing concepts for testing these types
of scenarios.

In the subsequent section, I will try to elaborate the concept of
Integration testing and its implementation in both the White box and
Black box techniques.

Meaning:
We normally do Integration testing after Unit testing. Once all the
individual units are created and tested, we start combining those unit
tested modules and begin integrated testing. So the meaning of Integration
testing is quite straightforward: integrate/combine the unit tested
modules one by one and test the behavior as a combined unit.
The main goal of Integration testing is to test the interfaces between
the units/modules.
The individual modules are first tested in isolation. Once the modules
are unit tested, they are integrated one by one, till all the modules
are integrated, to check the combinational behavior and validate whether
the requirements are implemented correctly or not.
Here we should understand that Integration testing does not happen at
the end of the cycle; rather, it is conducted simultaneously with
development. So most of the time all the modules are not actually
available to test, and here is where the challenge comes in: testing
something which does not exist!

Approaches
There are fundamentally two approaches for doing Integration testing:

1. Bottom up approach
2. Top down approach
Let's consider the following figure (a module hierarchy with top module A,
middle modules B1 and B2, and lowest modules B1C1, B1C2, B2C1 and B2C2)
to illustrate the approaches:

Bottom up approach:
Bottom up testing, as the name suggests, starts from the lowest or
innermost units of the application and gradually moves up. The Integration
testing starts from the lowest modules and gradually progresses towards
the upper modules of the application. This integration continues till all
the modules are integrated and the entire application is tested as a
single unit.
In this case, modules B1C1, B1C2, B2C1 and B2C2 are the lowest modules,
which are unit tested. Modules B1 and B2 are not yet developed. The
functionality of modules B1 and B2 is to call the modules B1C1, B1C2,
B2C1 and B2C2. Since B1 and B2 are not yet developed, we need some
program or simulator which will call the B1C1, B1C2, B2C1 and B2C2
modules. These simulator programs are called DRIVERS.
In simple words, DRIVERS are dummy programs which are used to call the
functions of the lowest modules when the calling function does not exist.
The bottom up technique requires a module driver to feed test case input
to the interface of the module being tested.
The advantage of this approach is that if a major fault exists at the
lowest unit of the program, it is easier to detect it, and corrective
measures can be taken.
The disadvantage is that the main program does not actually exist until
the last module is integrated and tested. As a result, higher level
design flaws will be detected only at the end.
Top down approach
This technique starts from the top-most module and gradually progresses
towards the lower modules. Only the top module is unit tested in
isolation. After this, the lower modules are integrated one by one. The
process is repeated until all the modules are integrated and tested.
In the context of our figure, testing starts from module A, and the
lower modules B1 and B2 are integrated one by one. Here the lower
modules B1 and B2 are not actually available for integration. So in
order to test the top-most module A, we develop STUBS.
Stubs can be referred to as code snippets which accept the inputs/requests
from the top module and return the results/response. This way, even
though the lower modules do not exist, we are able to test the top module.
In practical scenarios, the behavior of stubs is not as simple as it
seems. In this era of complex modules and architecture, the called module
most of the time involves complex business logic like connecting to a
database. As a result, creating stubs becomes as complex and
time-consuming as creating the real module. In some cases, the stub
module may turn out to be bigger than the module it simulates.
Both stubs and drivers are dummy pieces of code which are used for
testing the non-existing modules. They trigger the functions/methods and
return the response, which is compared against the expected behavior.
Let's note some differences between Stubs and Drivers:

Stubs                                       | Drivers
Used in the Top down approach               | Used in the Bottom up approach
Top-most module is tested first             | Lowest modules are tested first
Simulate the lower level components         | Simulate the higher level components
Dummy programs for lower level components   | Dummy programs for higher level components
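
As an illustration, the following minimal Python sketch shows both ideas
side by side, reusing the hypothetical module names from the figure (top
module A, middle module B1, lowest module B1C1); the function bodies are
invented for illustration.

# A STUB stands in for a lower module that is not built yet (top down);
# a DRIVER stands in for a higher module that is not built yet (bottom up).

def b1c1(data):
    # A real, unit tested lowest-level module.
    return data.upper()

def b1_stub(data):
    # STUB: simulates the not-yet-developed module B1 so that the
    # top module A can be tested in the top down approach.
    return "canned response for " + data

def module_a(data, b1=b1_stub):
    # Top module under test; it calls B1 (here, the stub).
    return "A received: " + b1(data)

def driver_for_b1c1():
    # DRIVER: dummy calling program that feeds test input to B1C1
    # because its real caller B1 does not exist yet (bottom up approach).
    result = b1c1("hello")
    assert result == "HELLO", "B1C1 failed its interface check"
    return result

print(module_a("ping"))    # exercises A against the stub
print(driver_for_b1c1())   # exercises B1C1 through the driver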

Change is the only constant in this world, so we have another approach,
called Sandwich testing, which combines the features of both the Top down
and Bottom up approaches. When we test huge programs like operating
systems, we have to have some more techniques which are efficient and
boost more confidence. Sandwich testing plays a very important role here;
both Top down and Bottom up testing are started simultaneously.
Integration starts from the middle layer and moves simultaneously up and
down. In the case of our figure, testing will start from B1 and B2, where
one arm will test the upper module A and the other arm will test the
lower modules B1C1, B1C2, B2C1 and B2C2.
Since both approaches start simultaneously, this technique is a bit
complex and requires more people, along with specific skill sets, and
thus adds to the cost.

Integration testing a GUI application

Now let's talk about how we can apply integration testing in the Black
box technique.
We all understand that a web application is a multitier application. We
have a front end which is visible to the user, a middle layer which has
the business logic, some more middle layers which do validations,
integrate third party APIs, etc., and then the back layer, which is the
database.
Let's check the example below:
I am the owner of an advertising company and I post ads on different
websites. At the end of the month I want to see how many people saw my
ads and how many people clicked on them. I need a report of the ads
displayed, and I charge my clients accordingly.
GenNext software developed this product for me, and below is the
architecture:

UI - the User Interface module, which is visible to the end user, where
all the inputs are given.
BL - the Business Logic module, which has all the calculations and
business specific methods.
VAL - the Validation module, which has all the validations of the
correctness of the input.
CNT - the Content module, which has all the static contents specific to
the inputs entered by the user. These contents are displayed in the
reports.
EN - the Engine module; this module reads all the data that comes from
the BL, VAL and CNT modules, extracts the SQL query and triggers it to
the database.
Scheduler - a module which schedules all the reports based on the user's
selection (monthly, quarterly, semiannually & annually).
DB - the Database.
Now, having seen the architecture of the entire web
application, as a single unit, Integration testing in this case
will focus on the flow of data between the modules.

The questions here are:

1. How will the BL, VAL and CNT modules read and interpret the data
entered in the UI module?
2. Are the BL, VAL and CNT modules receiving the correct data from the
UI?
3. In which format is the data from BL, VAL and CNT transferred to the
EN module?
4. How will the EN read the data and extract the query?
5. Is the extracted query correct?
6. Is the Scheduler getting the correct data for the reports?
7. Is the result set received by the EN from the database correct and
as expected?
8. Is the EN able to send the response back to the BL, VAL and CNT
modules?
9. Is the UI module able to read the data and display it appropriately
on the interface?
In the real world, the communication of data is done in an XML format.
So whatever data the user enters in the UI gets converted into an XML
format.
In our scenario, the data entered in the UI module gets converted into
an XML file which is interpreted by the three modules BL, VAL and CNT.
The EN module reads the resultant XML file generated by the three
modules, extracts the SQL from it and queries the database. The EN
module also receives the result set, converts it into an XML file and
returns it to the UI module, which converts the results into user
readable form and displays them. In the middle we have the Scheduler
module, which receives the result set from the EN module and creates and
schedules the reports.

So where does Integration testing come into the picture?

Well, testing whether the information/data flows correctly or not will
be your integration testing, which in this case means validating the XML
files. Are the XML files generated correctly? Do they have the correct
data? Is the data being transferred correctly from one module to
another? All these things will be tested as part of Integration testing.
Try to generate or get the XML files, update the tags and check the
behavior. This is something very different from the usual testing which
testers normally do, but it will add value to the tester's knowledge and
understanding of the application.
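
A minimal Python sketch of that idea follows; the XML payload and its tag
names are hypothetical, invented to mirror the ad-report example above.

# Validating the XML handed from one module to another.
# The payload and tag names below are hypothetical.
import xml.etree.ElementTree as ET

payload = """
<adReport>
  <clientId>C-1001</clientId>
  <period>monthly</period>
  <clicks>250</clicks>
</adReport>
"""

root = ET.fromstring(payload)

# Is the XML generated correctly, and does it carry the correct data?
assert root.tag == "adReport"
assert root.findtext("period") in ("monthly", "quarterly",
                                   "semiannually", "annually")
assert int(root.findtext("clicks")) >= 0

# Update a tag and check the behavior of the consuming module.
root.find("period").text = "quarterly"
print(ET.tostring(root, encoding="unicode"))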
A few other sample test conditions can be as follows:
Are the menu options generating the correct window?
Are the windows able to invoke the window under test?
For every window, identify the function calls for the window that the
application should allow.
Identify all calls from the window to other features that the
application should allow.
Identify reversible calls: closing a called window should return to the
calling window.
Identify irreversible calls: the calling window closes before the called
window appears.
Test the different ways of executing calls to another window, e.g.
menus, buttons, keywords.

Why Integration?

We feel that Integration testing is complex and requires some
development and logical skill. That's true! Then what is the purpose of
including Integration testing in our testing strategy?
Here are some reasons:
1. In the real world, when applications are developed, the work is
broken down into smaller modules and individual developers are
assigned one module each. The logic implemented by one developer can
be quite different from another developer's, so it becomes important
to check whether the logic implemented by a developer is as per the
expectations and renders the correct value in accordance with the
prescribed standards.
2. Many a time the form or structure of the data changes when it
travels from one module to another. Some values are appended or
removed, which causes issues in the later modules.
3. Modules also interact with third party tools or APIs, where it
needs to be tested that the data accepted by the API/tool is correct
and that the responses generated are also as expected.
4. A very common problem in testing: frequent requirement changes!
Many a time a developer deploys changes without unit testing them.
Integration testing becomes important at that time.

Finally, in conclusion, some steps to kick off Integration tests:
1. Understand the architecture of your application.
2. Identify the modules.
3. Understand what each module does.
4. Understand how the data is transferred from one module to another.
5. Understand how the data enters and leaves the system (the entry
point and exit point of the application).
6. Segregate the application to suit your testing needs.
7. Identify and create the test conditions.
8. Take one condition at a time and write down the test cases.
This is all about Integration testing and its implementation in both the
White box and Black box techniques. Hope we explained it clearly with
relevant examples.

Types of Integration Testing
1) Linear integration testing
If the modules are sequentially related, then linear integration is
used.
e.g.: amount transfer and amount balance are sequentially related.
2) Non-linear integration
If the modules in an application are randomly related, i.e. not
sequentially related, it is called non-linear integration.
e.g.: in a messenger application like Gmail or Yahoo, the links or
modules are related randomly.

Integration Testing Fundamentals


DEFINITION
Integration Testing is a level of software testing where individual units
are combined and tested as a group.

The purpose of this level of testing is to expose faults in the
interaction between integrated units. Test drivers and test stubs are
used to assist in Integration Testing.
Definition by ISTQB
integration testing: Testing performed to expose defects in the
interfaces and in the interactions between integrated components or
systems. See also component integration testing, system integration
testing.
component integration testing: Testing performed to expose defects in
the interfaces and interaction between integrated components.

system integration testing: Testing the integration of systems and
packages; testing interfaces to external organizations (e.g. Electronic
Data Interchange, Internet).
ANALOGY
During the process of manufacturing a ballpoint pen, the cap, the body,
the tail and clip, the ink cartridge and the ballpoint are produced
separately and unit tested separately. When two or more units are ready,
they are assembled and Integration Testing is performed. For example,
whether the cap fits into the body or not.
METHOD
Any of Black Box Testing, White Box Testing, and Gray Box
Testing methods can be used. Normally, the method depends on your
definition of unit.
TASKS
Integration Test Plan
o Prepare
o Review
o Rework
o Baseline
Integration Test Cases/Scripts
o Prepare

o Review
o Rework
o Baseline
Integration Test
o Perform
When is Integration Testing performed?
Integration Testing is performed after Unit Testing and before System
Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration
Testing.
APPROACHES
Big Bang is an approach to Integration Testing where all or most of
the units are combined together and tested at one go. This
approach is taken when the testing team receives the entire
software in a bundle. So what is the difference between Big Bang
Integration Testing and System Testing? Well, the former tests only
the interactions between the units while the latter tests the entire
system.
Top Down is an approach to Integration Testing where top level
units are tested first and lower level units are tested step by step
after that. This approach is taken when top down development
approach is followed. Test Stubs are needed to simulate lower level
units which may not be available during the initial phases.

Bottom Up is an approach to Integration Testing where bottom level
units are tested first and upper level units step by step after that.
This approach is taken when bottom up development approach is
followed. Test Drivers are needed to simulate higher level units
which may not be available during the initial phases.
Sandwich/Hybrid is an approach to Integration Testing which is a
combination of Top Down and Bottom Up approaches.
TIPS
Ensure that you have a proper Detail Design document where
interactions between each unit are clearly defined. In fact, you will
not be able to perform Integration Testing without this information.
Ensure that you have a robust Software Configuration Management
system in place. Or else, you will have a tough time tracking the
right version of each unit, especially if the number of units to be
integrated is huge.
Make sure that each unit is first unit tested before you start
Integration Testing.
As far as possible, automate your tests, especially when you use
the Top Down or Bottom Up approach, since regression testing is
important each time you integrate a unit, and manual regression
testing can be inefficient.
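
For example, an automated check of the interface between two units might
look like the following Python sketch, which can be re-run as a regression
test each time a new unit is integrated; the two units (parse_order,
price_order) and their interface are hypothetical.

# An automated integration test, re-run after each unit is integrated.
# The two units and their interface are hypothetical.
import unittest

def parse_order(text):
    # Unit 1: parses an "item,quantity" string into a dictionary.
    item, qty = text.split(",")
    return {"item": item.strip(), "qty": int(qty)}

def price_order(order, unit_price=10):
    # Unit 2: prices a parsed order.
    return order["qty"] * unit_price

class OrderIntegrationTest(unittest.TestCase):
    def test_parse_then_price(self):
        # Exercises the interface between the two units together.
        order = parse_order("book, 3")
        self.assertEqual(price_order(order), 30)

if __name__ == "__main__":
    unittest.main()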

3rd unit

What is Performance Testing?


Software performance testing is a type of testing performed to determine the
performance of a system: to measure, validate or verify quality attributes
of the system like responsiveness, speed, scalability and stability under a
variety of load conditions. The system is tested under a mixture of load
conditions, checking the time the system requires to respond under varying
workloads. Software performance testing involves testing the application
under test to ensure that the application works as expected under a variety
of load conditions. The goal of performance testing is not only to find bugs
in the system but also to eliminate the performance bottlenecks from the
system.

Why do performance testing?


Before going live in the market, the software system should be tested for
speed, stability and scalability under a variety of load conditions. If the
system goes live without performance testing, it may face issues like the
system running slowly when several users access it simultaneously, or poor
usability, which is likely to earn a bad reputation and directly affect the
expected sales goals. Performance testing encompasses a range of different
tests which enable analysis of various aspects of the system. Performance
testing tells us what needs to be fixed before going live (mainly the issues
faced under a variety of load conditions).

Types of Performance Testing:

1) Load Testing:

Load testing is a type of performance testing that checks the system by
constantly increasing the load on it until the load reaches its threshold
value. Here, increasing load means increasing the number of concurrent users
and transactions while checking the behavior of the application under test.
It is normally carried out in a controlled environment in order to
distinguish between two different systems. It is also called Endurance
testing or Volume testing. The main purpose of load testing is to monitor
the response time and staying power of the application when the system is
performing well under heavy load. Load testing comes under Non Functional
Testing and is designed to test the non-functional requirements of a
software application.

Load testing is performed to determine how much load the application under
test can withstand. Load testing is successful only if the specified test
cases are executed without any error in the allocated time.
Simple examples of load testing:

Testing a printer by sending it a large job.

Editing a very large document to test a word processor.

Continuously reading and writing data to a hard disk.

Running multiple applications simultaneously on a server.

Testing a mail server by accessing thousands of mailboxes.

Zero-volume testing, where the system is fed zero load.
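
As a rough illustration of increasing concurrent load, here is a minimal
Python sketch that fires batches of concurrent requests at a URL and reports
response times. The URL is a placeholder; a real load test would normally
use a dedicated tool such as JMeter or LoadRunner (see the tool list later
in this unit).

# Stepping up concurrent load against a server and timing responses.
# The URL is a placeholder for the system under test.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8000/"   # placeholder

def one_request(_):
    start = time.time()
    urllib.request.urlopen(URL, timeout=10).read()
    return time.time() - start

for users in (1, 5, 10, 25):     # step the load up towards the threshold
    with ThreadPoolExecutor(max_workers=users) as pool:
        timings = list(pool.map(one_request, range(users)))
    avg = sum(timings) / len(timings)
    print(f"{users:3d} concurrent users -> average response {avg:.3f}s")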


2) Stress Testing:

Stress Testing is a performance testing type that checks the stability of
software when hardware resources, like CPU, memory and disk space, are not
sufficient. Its goal is to determine or validate an application's behavior
when it is pushed beyond normal or peak load conditions.
Stress testing is negative testing, where we load the software with a large
number of concurrent users/processes which cannot be handled by the system's
hardware resources. This testing is also known as Fatigue testing; it should
capture the stability of the application by testing it beyond its bandwidth
capacity.
The main idea behind stress testing is to determine the failure point of the
system and to keep an eye on how gracefully the system recovers; this
quality is known as recoverability. Stress testing comes under Non
Functional Testing and is designed to test the non-functional requirements
of a software application. This testing is to be carried out in a controlled
environment before launch, so that we can accurately capture the system
behavior under the most erratic scenarios.
3) Spike testing:

Spike testing is a subset of Stress Testing. A spike test is carried out to
validate the performance characteristics when the system under test is
subjected to workload models and load volumes that repeatedly increase
beyond anticipated production operations for short periods of time.

4) Endurance testing:
Endurance testing is a non-functional type of testing. Endurance testing
involves testing a system with an expected amount of load over a long period
of time to find the behavior of the system. Let's take an example where a
system is designed to work for 3 hrs of time, but the same system is made to
endure for 6 hrs of time to check its staying power. Most commonly, test
cases are executed to check for behaviors like memory leaks, system failures
or random behavior. Sometimes endurance testing is also referred to as Soak
testing.

5) Scalability Testing:
Scalability Testing is a type of non-functional testing; it is the testing
of a software application to determine its capability to scale up in terms
of any of its non-functional capabilities, like the user load supported, the
number of transactions, the data volume, etc. The main aim of this testing
is to understand at what peak the system stops scaling further.

6) Volume testing:
Volume testing is non-functional testing which refers to testing a software
application with a large amount of data to be processed, to check the
efficiency of the application. The main goal of this testing is to monitor
the performance of the application under varying database volumes.

Top Performance Testing Tools:

WebLOAD
LoadRunner

Apache JMeter

NeoLoad

LoadUI

OpenSTA

LoadImpact

WAPT

Loadster
Httperf

Rational Performance Tester

QEngine (ManageEngine)

Testing Anywhere

CloudTest

Loadstorm

Performance Testing Process:


The following sections discuss the seven activities that most commonly occur
across successful performance-testing projects.
Below is a generic performance testing process

1) Identify your testing environment


Do a proper requirement study, analyzing test goals and objectives. Also
determine the testing scope along with the test initiation checklist.
Identify the logical and physical production architecture for performance
testing; identify the software, hardware and network configurations required
to kick off the performance testing. Compare the test and production
environments while identifying the testing environment. Resolve any
environment-related concerns, and analyze whether additional tools are
required for performance testing. This step also helps to identify the
probable challenges the tester may face during performance testing.

2) Identify the performance acceptance criteria


Identify the desired performance characteristics of the application like Response
time, Throughput and Resource utilization.

3) Plan & design performance tests

Planning and designing performance tests involves identifying key usage
scenarios, determining appropriate variability across users, identifying and
generating test data, and specifying the metrics to be collected.
Ultimately, these items will provide the foundation for workloads and
workload profiles. The output of this stage is that the prerequisites for
test execution are ready: all required resources, tools and test data are in
place.

4) Configuring the test environment


Prepare the conceptual strategy, available tools and designed tests, along
with the testing environment, before execution. The output of this stage is
a configured load-generation environment and resource-monitoring tools.

5) Implement test design


Create your performance tests according to the test plan and design.

6) Execute the tests

Collect and analyze the data.

Investigate problems like bottlenecks (memory, disk, processor, process,
cache, network, etc.) and resource usage (memory, CPU, network, etc.).

Generate the performance analysis reports containing all performance
attributes of the application.

Based on the analysis, prepare a recommendation report.

Repeat the above tests for each new build received from the client after
fixing the bugs and implementing the recommendations.

7) Analyze Results, Report, and Retest


Consolidate, analyze and share test results.
Based on the test report, re-prioritize the tests and re-execute them. If
every test result is within the specified metric limits and all results are
between the threshold limits, then testing of that scenario on that
particular configuration is complete.
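
For instance, checking collected response times against a threshold limit
can be as simple as the following Python sketch; the sample timings and the
2-second 95th-percentile limit are assumptions made for illustration.

# Checking measured response times against threshold limits.
# The timings and the 2-second limit are illustrative assumptions.
import math

response_times = [0.8, 1.1, 0.9, 1.7, 2.4, 1.0, 1.3, 0.7, 1.9, 1.2]

avg = sum(response_times) / len(response_times)
# Nearest-rank 95th percentile.
p95 = sorted(response_times)[math.ceil(0.95 * len(response_times)) - 1]

print(f"average = {avg:.2f}s, 95th percentile = {p95:.2f}s")
if p95 <= 2.0:
    print("scenario complete: results are within the threshold limits")
else:
    print("re-prioritize and re-execute: threshold exceeded")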

Common Performance Problems:


In the software testing of an application, speed is one of the most
important attributes. Users will not be happy working with a slow system.
Performance testing uncovers the performance bottlenecks and defects that
must be removed to maintain the interest and attention of users. Here is a
list of the performance problems most commonly observed in software systems:

Poor response time

Long Load time

Bottlenecking

Poor scalability

Software configuration issues (for the Web server, load balancers,


databases etc.)

Operating System limitations

Poor network configuration

Memory utilization

Disk usage

CPU utilization

Insufficient hardware resources

15 Top Factors That Impact Application Performance

What is it that drives the need for Application Performance Management
(APM)? What are the main factors that can negatively impact application
performance? What should you be looking out for? That is what this new
APMdigest list reveals.
Many of the APM industry's top experts, from analysts and consultants to
users and the top vendors, offer their perspective on the root causes of
application performance problems.
These factors are not listed in order of importance. Some of the categories
overlap. Some of the categories could actually be considered subsets of the
other categories. Some of the quotes could fit into multiple categories. But
the bottom line is that the list accomplishes the goal: to provide a broad
picture of the many factors out there impacting application performance.
On this list you will see a wide range of impacts from the applications
themselves, to the environment and the network, to the people behind those
applications. What this list really shows is that all of these factors must be
considered when managing application performance.
I think Julie Craig, Research Director, Application Management, Enterprise
Management Associates (EMA), said it best in her response: "When trying to
pin down the top factors impacting application performance, the right answer
is that there is no right answer ... the source of a performance problem
could be almost anywhere!"
"Almost anything that touches an application either improves or degrades
performance," she adds. "Determining whether that is infrastructure, code,
data, the network, the application architecture, the endpoint, or another
application is the name of the game and the big reason why APM solutions
are so valuable."
The following are the industry's top 15 factors that impact application
performance:

1. APPLICATION COMPLEXITY
Today's enterprise applications are increasingly a large collective of
distributed software components and cloud services that enable complex
business services. With so many moving parts, something is always bound to
have a chance of impacting performance, even with a resilient architecture.
Complexity, and the fact that all these components are monitored in
different silos, also makes it hard to manage a business service or
application as a whole, which also impacts performance. But it's the reality
of today's enterprise application and monitoring architectures.
Nicola Sanna
President and CEO, Netuitive
Application complexity is one of the biggest factors impacting application
performance. Today's applications and services, particularly those delivered
via the Web, are a mosaic of components sourced from multiple places: data
center, cloud, third-party, et al. While the customer or employee looking at a
browser window sees a single application, multiple moving parts must
execute in the expected manner to deliver a great end-user experience.
Maybe the Web server and app server are running fine, but if the database is
faltering, user experience will suffer. Being able to measure and keep tabs on
all those moving parts is the challenge and requires an APM tool that can
provide a view into the performance of all the parts, not just individual
components. As the saying goes, "The more moving parts, the more that can
go wrong."
Jason Meserve
Product Marketing Manager, CA Technologies

2. APPLICATION DESIGN
One of the biggest factors that impacts application performance is design.
Performance must be designed in. When applications are specified,
performance goals need to be delineated along with the details of the
environment the applications will run in. Often development is left out of this
and applications are monitored, analyzed and fixed after they are released
into production. This never works as well as when performance is one of the
key goals of the application design before a line of code is written.
Charley Rich
VP Product Management and Marketing, Nastel Technologies
One of the biggest impacts to application performance is caused by
companies outsourcing/subcontracting their application development outside

of their company and their quality control domain. Application quality and
performance need to be built into the application platform and cannot be an
afterthought or something that we'll fix later. The subpar app performance
that is accepted in the development phase is bound to manifest itself in the
production stage. Modern APM solutions capture this poor performance, but
can't provide the cure. The only way to prevent poor app performance is to
expose your app development to rigorous quality controls and processes
early on in the application lifecycle and actually fix problems early in the
cycle.
Petri Maanonen
Sr. Product Marketing Manager for HP Application Performance Management
From my perspective the biggest factor affecting application performance
today is poorly optimized code and infrastructure, such as suboptimal SQL
queries, poorly configured network infrastructure, or inefficient code
algorithms at the application layer. All of these problems can be difficult to
isolate, and the emphasis on DevOps processes can cause these issues to
multiply quickly by increasing the rate of change in the data center. Because
of this it is important to adequately tool the data center to monitor and
report on all aspects of a deployed application using code level
instrumentation, EURT and network performance tools, and traditional IT
infrastructure monitoring solutions.
Rick Houlihan
VP of Engineering, Boundary

3. APPLICATION TESTING
Today's applications are often developed in simulation labs without testing
performance on real-world networks. Before applications are deployed,
transport across today's highly distributed network architectures should be
monitored and optimized.
Doug Roberts
Managing Director of Product Strategy, Fluke Networks Visual
Insufficient testing of the application in the actual production environment
and under varying conditions impacts performance. Tied to that is the need
for developers and testers to have a clear understanding of the
non-functional performance criteria.

Michael Azof
Principal Analyst, Ovum
Agile release cycles: the reality is that less than 5% of developers
performance test their code before it is pushed to production. The "make it
work" over "make it perform" mantra is one of the biggest factors that
impacts application performance today. Most organizations don't have the
time, resources or budget to replicate production environments in test for
every agile release; this is why a growing number of customers have started
to test in production out of working hours. When you consider that the
codebase of an application changes several times per month, you can begin
to understand why performance anti-patterns and bottlenecks make their
way into production.
Stephen Burton
Tech Evangelist, AppDynamics

4. THE BUTTERFLY EFFECT


Its the "Butterfly Effect" in IT, which theoretically describes a hurricane's
formation being contingent on whether or not a distant butterfly had flapped
its wings weeks before. Sensitive dependence on environmental conditions
where a small change at one place (Dev env.) can result in large differences
to a later state (Production). Its possible that a small innocuous code change
could go undetected, being promoted through each Dev/QA environment,
and then have catastrophic effects on performance once it reaches
production. The environmental variants need to be minimized and closely
monitored to prevent the anomalous events. I'm suggesting that it is not
necessarily the number of features or technical stamina of each monitoring
tool to process large volumes of data that will make an APM implementation
successful it's the choices you make in how you put them together to
support the multiple environments within IT.
Larry Dragich
Director of Enterprise Application Services at the Auto Club Group

5. THE INFRASTRUCTURE AND COMPONENTS OF THE APPLICATION SERVICE
Application performance is impacted by the componentry used to deliver the
service to the user, the user's interfaces with the application, and the
connectivity between these components. The variance and complexity are what
make the problem hard to solve, and often cause approaches to fail on given
architectures.
Jonah Kowall
Research Vice President, IT Operations Management, Gartner
Applications are distributed by nature, and unless the underlying
infrastructure is responsive on all the different components of the application
service, the entire application service is impacted.
Vikas Aggarwal
CEO, Zyrion
Without a doubt, third-party web components are among the biggest factors
impacting web application performance today. To deliver the functions and
features online visitors expect, websites and web applications are actually a
composite of your own resources plus numerous third-party web
components. These include content delivery networks (CDNs), site search
functions, shopping cart and payment processing functions, ad networks,
multiple social network connections, ratings and reviews for gathering
feedback and web analytics. Today, the average website includes
components from eight or more different hosts, and a slowdown for any one
service can degrade performance for an entire website or web application. If
anything goes wrong (and inevitably it will), only one party will get the
blame: you, as the primary website owner. Organizations leveraging
third-party web components must adopt an end-user focused approach to APM,
in order to better identify and fix performance problems associated with
third-party services beyond one's own firewall.
Stephen Pierzchala
Technology Strategist, Compuware APM's Center of Excellence
One of the most critical factors that affect application performance, and
often the hardest to identify and track, is application dependencies on
supporting applications, as well as the underlying system and network
components that connect them all together. With the advent of virtualized
servers and networks, the complexity of the application delivery
infrastructure has increased significantly, and so the challenge is finding an
application performance monitoring solution that can automatically discover
and monitor the network and server topologies for the entire application
service.
Brian Jacobs

Senior Product Manager, WhatsUp Gold suite of network and application
management products by Ipswitch
Today's distributed applications, particularly for large organizations, can
have thousands of individual connections stretching across many tiers and
even reaching outside services. We've moved beyond simple 3-tiered web
applications into complex distributed applications (made up of load-balanced
web and application servers, multiple layers of middleware and databases,
storage arrays, mainframe transactions, and even outside services). In this
world, problems are no longer concentrated in application code; instead,
they are randomly distributed throughout the application infrastructure. Just
this week, we've seen LDAP, anti-virus, database, firewall, and DNS
misconfiguration all create application problems; and that's just the tip of the
iceberg.
Chris Neal
CEO and Co-Founder, BlueStripe
As applications tie together more and more disparate services, both internal
and external, they become exposed to new opportunities for failure. These
interactions are the single biggest source of performance and availability
problems for applications. Not only does a bad link in the chain (say, an
unresponsive external API) generally mean a key part of the app is
unavailable, but in fact, most systems are architected in such a way that
timeouts cascade to bring down the entire environment. A failed request
means a bad experience for a single user; a stalled request ties up resources
in services shared by many users, which in the worst case means total
system failure.
Dan Kuebrich
Director of APM Product Management, AppNeta

6. THE NETWORK
Network latency and bandwidth are king for any application that isn't local
(remote workforce, customer facing website, web applications, etc.).
Monitoring network bandwidth and web application performance from
multiple locations helps isolate the problem to the network tier.
Jennifer Kuvlesky
Product Marketing Manager, SolarWinds
The network on which the application is used impacts performance
tremendously, especially for mobile and cloud. Inconsistent bandwidth, high
jitter, increased latency and packet loss all work to degrade application

performance. While you might not be able to control mobile or most cloud
networks, you can build and test apps with these network conditions in mind.
This gives organizations the best chance to optimize app performance before
the network impacts are felt by users.
Dave Berg
Vice President of Product Strategy, Shunra
Bandwidth bottlenecks are a big problem, as they cause network queues to
develop and data to be lost, impacting the performance of applications. My
advice is to keep tabs on the number of devices, users and new applications
utilizing the network.
Jim Swepson
Pre-Sales Technologist, iTrinegy

7. THE DYNAMIC IT ENVIRONMENT: VIRTUALIZATION AND THE CLOUD
Applications today are an intricate mesh of multi-tier software running on
servers, networks, and storage. In addition, there is a good chance they are
running on virtualized hardware that is shared with other applications. It
is very challenging in this dynamic environment to understand what will
impact your application performance, as it requires intimate knowledge of
your ever-changing application structure at any given moment. Many IT
organizations are very advanced on the application side but unfortunately
still struggle to move beyond managing applications via a silo approach to
the different technology tiers (application, server, network, storage,
etc.). This is why many organizations will experience application
performance issues without any useful tools to help them resolve the
problems. Only an application performance management tool that uses a
unified, cross-domain view of the application and its supporting
infrastructure components, with an accurate run-time update of their
fast-changing inter-dependencies, can ensure highly available and optimally
performing systems.
Ariel Gordon
CTO and Co-Founder, Neebula
Virtualized environments, from the desktop layer to applications and the
underlying infrastructure, are becoming too complex to troubleshoot with
traditional silo tools. Too many isolated metrics and alerts that don't make
much sense confuse administrators. The next generation of APM requires
awareness of performance across all virtual and physical domains from the
desktop to the datacenter and cloud, presented in an intuitive dashboard.

Equally important are capabilities to proactively alert admins before users
call and complain about slow apps. And not only alert about general issues,
but with smart auto-diagnosis that points right to the root cause, so that
admins can quickly restore performance levels without spending days
troubleshooting.
Srinivas Ramanathan
CEO, eG Innovations
The modern application is complex, and a single transaction trace can sprawl
across many layers in a virtualized, cloud world: a perfect storm impacting
application performance. This growing complexity impacts application
performance from the end user experience all the way back through
transactions, the application layer, application infrastructure, and IT
infrastructure.
Paul Brady
Sr. VP and GM of Riverbed Performance Management
The rapid rise of communications and content via the Cloud among
increasingly dispersed employees seeking to better serve customers and
collaborate with their peers and partners will have the greatest impact on
application performance.
Jefrey Kaplan
Managing Director of THINKstrategies and Founder of the Cloud Computing
Showplace

8. MOBILITY
One of the biggest factors we see is the acceleration of mobility and IT
consumerization, which will propel the ongoing shift in application
architectures required to deliver the most dynamic, modern mobile end user
interfaces.
John Newsom
Executive Director, Application Performance Monitoring, Dell
Mobile usage numbers are soaring, which is certainly having an impact on
application performance. Users have multiplied as they engage more often
using a variety of devices.
Gregor Rechberger
Product Manager, Performance Testing, Micro Focus

9. THE WEB BROWSER

TRAC's 2013 APM Spectrum shows that the Web browser is the key blind spot
for gaining true end-to-end visibility into application performance. With new
approaches to application design and the increased usage of Web Services,
the ability to monitor the processing that takes place within the browser has
become one of the key requirements for full visibility into application
performance.
Bojan Simic
President and Principal Analyst, TRAC Research

10. CONFIGURATION CHANGES


Application performance has been impacted severely by what is now known
as the "chronic change and configuration management challenges" a big
data problem with lack of actionable insight into changes in the IT
environment. A typical application includes hundreds of thousands of
different configuration parameters and is subject to numerous changes. The
volume, velocity, and variety of configuration changes overwhelm IT
operations. This is especially true when considering that any minute
misconfiguration can cause a high-impact application performance and
availability incident.
Sasha Gilenson
CEO, Evolven

11. PEAK USAGE


One of the most important factors that affect application performance is poor
understanding of how the application will be used (i.e. how many people will
simultaneously use it and for what kind of transactions), and the
corresponding application architecture and its scaling assumptions that go
into its design and deployment. This lack of understanding of real user
transactions and performance manifests itself as bottlenecks in performance
during the most critical peak usage period.
Pratik Gupta
IBM Distinguished Engineer and Director of APM and Analytics
One of the biggest factors impacting application performance is that today's
applications are essentially like one-speed bicycles, without gears to shift
for different terrain. By enabling clients' applications with performance
gears, our industry can have a much more positive impact on application and
business performance. Just like today's bicycles can shift gears depending
on the challenge, applications should be able to rev up during peak user
spikes or to deliver a premium user experience, maybe even by target
customer or transaction type. They should be able to downshift to conserve
resources when demand is low. To make this functionality work, APM
technology should, at a minimum, provide a governor or feedback loop to the
app regarding end user experience and internal app/infra operations.
Ray Solnik
President US Operations, Appnomic Systems

12. PEOPLE: COMMUNICATION


My vote goes to a failure to communicate. Not to steal from Jack Nicholson
in A Few Good Men. There are many good technologies out there, but as APM
has evolved to become more than an introverted, single-domain discipline,
sharing information effectively will require an investment in dialog. This will
include next-step process awareness, so that key stakeholders are identified
and know who each other are, and clear avenues for optimizing their
collective insights. But it also requires, in many organizations, a cultural and
often a political shift to promote a willingness to step beyond traditional
boundaries and ways of working. As it matures, social media should also
help. But no single technology will count more to promote effective APM than
a revitalized and intelligent willingness to communicate across roles. Without
it, most technology investments are wasted, or at least poorly optimized.
Dennis Drogseth
VP of Research, Enterprise Management Associates (EMA)
Today's application environments have become so complex that there is no
single individual in the IT organization that understands all that is required to
effectively deliver that application at the performance level the business
expects. Crowd-sourcing and peer review of human knowledge combined
with existing systems-based knowledge is the only way to fill in the gaps and
ensure the organization can benefit from the knowledge everyone
collectively possesses to ensure the needed quality of service.
Matthew Selheimer
VP of Marketing, ITinvolve

13. PEOPLE: SKILLS


In my view, the single biggest factor that impacts application performance is
people. Once you correctly align your resource aptitude, skill sets, and
knowledge to your application portfolio and equip them with the tools and
knowledge to be successful, applications and environments flourish. In the
fitness world, they say diet is everything. Why not apply that same rigor to

your organization? Satisfy your company's appetite for high availability,
business continuity, rationalized portfolios, predictive environments and
analytics through the targeted and thoughtful process of feeding your best
and brightest with knowledge and support! In return, you'll gain all that you
require to mold yourself into a lean and mean organization, able to meet the
toughest challenges.
Clark Cunningham
Practice Executive, Unisys Application Managed Services

14. THE UNKNOWN UNKNOWNS


The Unknown Unknowns include unanticipated application behavior (e.g. "We
never expected 80% of our users to be using mobile devices!"),
unanticipated load (e.g. "Who would have guessed we'd get a traffic spike of
600% during the summer?") and unanticipated user behavior (e.g. "We didn't
expect users to keep hitting that button.") Application architects plan for the
known-knowns and QA checks for the known-unknowns. But APM is critical
for the unknown-unknowns that eventually impact application performance.
Russell Rothstein
Founder and CEO, IT Central Station

15. LACK OF PROACTIVE MONITORING


A lack of visibility impacts application performance. Given today's complex
and dynamic operational environments, traditional management tools are
unable to provide a complete picture of application health, availability and
risk. As more organizations move toward hybrid and converged compute
environments, it becomes challenging to provide complete visibility across
internal and external resources. Organizations need management tools that
provide a single pane of glass view across all IT assets and workloads,
regardless of where they reside, to ensure critical business applications are
always available and running at peak performance.
Julia Lim
VP Marketing, ScienceLogic

Performance Testing Tools


LoadRunner - The 800 pound gorilla of performance testing tools. Sold now by HP,
LoadRunner can test almost any type of application or database. It is commonly used by

larger companies with a big IT budget. There seem to be two major camps around this
product: 1) You are a professional performance engineer and love all the functionality, or 2)
You are frustrated by the complexity and don't have time for months of learning curve.
Visual Studio Team System - Microsoft has a suite of testing tools for web applications as a
part of Visual Studio. Typical MS software: if you are committed to the MS stack, then it can
be a good option. If you have to pay extra for it because your company didn't buy the Test
Edition (or whatever licensing mechanism is in play), then you might want to look at a
cheaper alternative.
Silk - Enterprise load testing tool from Borland. Generally accepted as the next-best option
for big companies that don't want or can't afford LoadRunner. Lots of features and capability.
Still a budget challenge for most web developers, but a great fit for many testing
professionals working for or with the Fortune 1,000.
IBM Rational Performance Tester - Performance testing tool in the enterprise class and price
range. Extra value from extensions for Siebel and SAP solutions. Pricey, but a good fit for
large organizations that normally buy from IBM.
LoadStorm - A cloud-based (Amazon EC2) load testing tool for web applications. They target
web developers on a tight budget with a simple tool that's easy to afford. It doesn't have all
the features of enterprise tools, but it does have what most developers need to simulate
http traffic for form submission and following links. Nice real-time graphs show metrics like
response times, errors, throughput, requests per second, and concurrent users. Can test up
to 50,000 concurrent users. Free account with 25 concurrent users never expires (handy!).
Xceptance LoadTest - Load testing tool for creating and running regression and load tests, in
particular for web applications. XLT combines the automation of regression tests with the
execution of load tests, as the test cases already created for the automated regression test
can subsequently be applied as load tests. In short: Every regression test can also be a load
test. Free for up to five virtual users.
Web Page Tester - Analyzes the performance of a single page in an IE browser. You provide
the URL of a page, and it is requested from one of five locations around the world. Summary
and detail reports are generated that show the times for overall page loading, DNS lookups,
rendering, and downloading for each resource such as HTML, CSS, images, and script files.
An optimization checklist is presented to make suggestions such as compression or caching
for improving the performance.
SiteBlaster - SiteBlaster is a web site load and stress testing tool built to load and stress test
USGovXML. USGovXML is an index to publicly available web services and XML data sources

provided by the US Government. When testing is complete, a report is available that can be
viewed or printed. SiteBlaster simulates Internet Explorer web browsing functionality and is
best used to test those sites that use URL query strings to pass data to its web pages.
Gomez - Gomez provides an on-demand platform that you use to optimize the performance,
availability and quality of your Web and mobile applications. It identifies business-impacting
issues by testing and measuring Web applications from the outside-in across your users,
browsers, mobile devices and geographies using a global network of 100,000+ locations.
BrowserMob - Hosted Selenium on-demand load testing tool to simulate Firefox browsers.
Captures screenshots of website load issues to help you debug. It uses Amazon EC2 cloud to
run many instances of Selenium. More accurately mimics the real user experience by
supporting AJAX and Flash processing. Free trial available for 25 real browser user load tests
or 100 simulated virtual users.
LoadImpact - Online load testing service from Gatorhole for stress testing websites. Although
not on a specific cloud infrastructure, it is a hosted solution with no download or installation,
and provides user traffic against your website through their network of load generator server
clusters with very fast connections to enable simulation of tens of thousands of users
accessing your website concurrently. Free low level load tests for 1-50 simulated users;
higher levels have monthly fees.
OpenSTA - 'Open System Testing Architecture' is a free, open source web load/stress testing
application, licensed under the Gnu GPL. Utilizes a distributed software architecture based
on CORBA. OpenSTA binaries available for Windows.
JMeter - Desktop open source performance testing tool written in Java by the Apache
Software Foundation. It was originally designed for testing Web Applications but has since
expanded to other test functions. It may be used to test performance both on static and
dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP
Servers and more). It can be used to simulate a heavy load on a server, network or object to
test its strength or to analyze overall performance under different load types.
Curl-Loader - (also known as "omes-nik" and "davilka") is an open-source tool written in the
C language, simulating application load and application behavior of thousands and tens of
thousands of HTTP/HTTPS and FTP/FTPS clients, each with its own source IP-address. In
contrast to other tools, curl-loader uses real C-written client protocol stacks, namely the
HTTP and FTP stacks of libcurl and TLS/SSL of openssl, and simulates user behavior with
support for login and authentication flavors. The goal of the project is to deliver a powerful
and flexible open-source testing solution as a real alternative to Spirent Avalanche and IXIA
IxLoad.

The Grinder - a Java load testing framework that makes it easy to run a distributed test using
many load injector machines. It is freely available under a BSD-style open-source license. It
has a generic approach to load testing anything that has a Java API. This includes common
cases such as HTTP web servers, SOAP and REST web services, and application servers
(CORBA, RMI, JMS, EJBs), as well as custom protocols. Test scripts are written in Jython. A
graphical console allows multiple load injectors to be monitored and controlled, and provides
centralised script editing and distribution.
ApacheBench - (sometimes simply called "ab") a Perl API for Apache benchmarking and
regression testing. Intended as the foundation for a complete benchmarking and regression
testing suite for transaction-based mod_perl sites. For stress-testing server while verifying
correct HTTP responses. Based on the Apache 1.3.12 ab code. Available via CPAN as .tar.gz
file.
http-load - Unix load test application to generate web server loads from ACME Software.
Handles HTTP and HTTPS. http_load runs multiple http fetches in parallel, to test the
throughput of a web server. However unlike most such test clients, it runs in a single
process, so it doesn't bog down the client machine. It can be configured to do https fetches
as well. You give it a file containing a list of URLs that may be fetched, a flag specifying how
to start connections (either by rate or by number of simulated users), and a flag specifying
when to quit (either after a given number of fetches or a given elapsed time). There are also
optional flags for checksums, throttling, random jitter, and progress reports.
Pylot - Corey Goldberg's open source load testing tool for Python enthusiasts. Uses XML to
define test cases.
Loadea - Win XP based load testing tool. Control and capture modules use C# for building
test scenario scripts, schedule stress test execution, and XML for test data. The analysis
module provides reporting capabilities. Free evaluation version for two virtual users.
CloudIntelligence - a software development company specializing in Cloud-Computing and
Open-Source technologies. By leveraging best-of-breed Open-Source technologies and
Cloud-Computing resources, together with its own proprietary development, it offers an
affordable Cloud-Testing solution.
StressTester - enterprise-class performance / web load testing tool used by organizations of
all sizes to ensure that their web, internet, extranet and web service applications can meet
their users' needs in terms of performance, scalability and stability. Free trial download
available.
AutoPilot M6 Suite - Enterprise performance management system for business transactions
and complex composite applications. AutoPilot M6 gives you a 360-degree view of your entire IT
environment to assure the successful and timely completion of your business-critical
transactions and peak performance of the applications that run your business.
LoadManager - Focused on performance criteria for banks, telecom giants and industrialists.
Runs on all platforms supported by Eclipse and Java. Consists of one controller and several
agent modules that can be spread over several machines.
NeoLoad - Load testing tool that runs on Windows Vista, XP, Win 2000, Win Server
2003/2008, Linux (RedHat, Mandriva), and Solaris 10. It has two main components: the
Controller and the Load Generator. The controller has a load generator pre-installed. This
means only one machine is needed to carry out medium-volume tests. Additional load
generators (free of charge) may be deployed on other machines to create much higher
loads. Free 30-day evaluation download.
TestComplete - An enterprise automated test manager with project level support for a full
range of internal and UI testing. Supports testing of Windows applications created in Visual
C++, Visual Basic, Delphi, C++Builder, PowerBuilder, Visual FoxPro, .NET, WPF, Java and
JavaFX applications, web applications and web services, Flash, Flex and Silverlight
applications. Also tests applications running on portable devices such as PDAs, Pocket PCs
and smartphones.
QTest - An enterprise performance & load testing tool specializing in complex Web
applications like Siebel, SAP, and Epiphany. It offers high capacity for load simulation. It
supports all Web, Web Service, J2EE, .Net, ASP, AJAX, CGI, and Mainframe Portal
environments and integrates with their APM solution. With the Winload module, it can test
Windows, client/server and ERP applications, in particular SAP Fat Clients, PeopleSoft, Oracle,
and Citrix.
httperf - Open source web server performance/benchmarking tool from HP Research Labs. It
provides a flexible facility for generating various HTTP workloads and for measuring server
performance. The focus of httperf is not on implementing one particular benchmark but on
providing a robust, high-performance tool that facilitates the construction of both micro- and
macro-level benchmarks. The three distinguishing characteristics of httperf are its
robustness, which includes the ability to generate and sustain server overload, support for
the HTTP/1.1 and SSL protocols, and its extensibility to new workload generators and
performance measurements.
TestMaker - A single platform for Functional Testing, Regression, Load and Performance
Testing, and Business Service Monitoring, all from the same single test script. Pass smoke
tests, surface performance bottlenecks, and enforce Service Level Agreements (SLAs) in one
product.
Siege - Unix-based http load tester and benchmarking utility. It was designed to let web
developers measure the performance of their code under duress, to see how it will stand up
to load on the internet. It lets the user hit a web server with a configurable number of
concurrent simulated users. Those users place the server "under siege." Written on
GNU/Linux and has been successfully ported to AIX, BSD, HP-UX and Solaris. It should
compile on most System V UNIX variants and on most newer BSD systems. Because Siege
relies on POSIX.1b features not supported by Microsoft, it will not run on Windows.
SiteTester1 - A load-testing utility designed to test web servers and web applications by
simulating virtual users following predefined procedures for HTTP1.0/1.1 compatible
requests, POST/GET methods, cookies, running in multi-threaded or single-threaded mode.
Generates various reports in HTML format, keeps and reads XML formatted files for test
definitions and test logs. Requires JDK1.2 or higher.
eValid LoadTest - A testing tool suite built into an IE browser for 100% client side quality
checking, dynamic testing, content validation, page performance tuning, and webserver
loading and capacity analysis. It performs functions needed for detailed static and dynamic
website testing, regression testing, QA/Validation, page timing and tuning, transaction
monitoring, and realistic & scalable server loading.
WebPerformance Load Tester - Load test tool for generating and analyzing automated load
tests on a server. Permanent fixed-seat, permanent floating, and temporary fixed-seat
licenses available. Automatically detects and configures the test cases for many situations.
Supports all browsers and web servers. Modem simulation allows each virtual user to be
bandwidth limited. For Windows and many UNIX variants.
5th unit

What is Test Management?

Test management is the process of managing the tests. Test management is also performed
using tools to manage both types of tests, automated and manual, that have been previously
specified by a test procedure.
Test management tools allow automatic generation of the requirements traceability matrix
(RTM), which is an indication of the functional coverage of the system under test (SUT).
Test management tools often have multifunctional capabilities such as testware management,
test scheduling, the logging of results, test tracking, incident management and test reporting.
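
As a rough illustration (a hypothetical sketch, not the output of any particular tool), an RTM
can be thought of as a mapping from requirements to the test cases that cover them, from which
functional coverage can be computed:

# Hypothetical sketch of a requirements traceability matrix (RTM):
# each requirement ID maps to the test cases that cover it.
# All IDs and values below are illustrative assumptions.
rtm = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no test case yet -> a functional coverage gap
}

covered = [req for req, tests in rtm.items() if tests]
coverage = len(covered) / len(rtm) * 100
print(f"Functional coverage: {coverage:.1f}%")  # 66.7%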

Test Management Responsibilities:

o Test management has a clear set of roles and responsibilities for improving the quality of
the product.
o Test management helps the development and maintenance of product metrics during the
course of the project.
o Test management enables developers to make sure that there are fewer design or coding
faults.

Software Testing Metrics: Complete Tutorial

What is a Software Testing Metric?
In software testing, a metric is a quantitative measure of the degree to which a system,
system component, or process possesses a given attribute.
In other words, metrics help estimate the progress, quality and health of a software testing
effort. A simple example for understanding metrics would be the weekly mileage of a car
compared to the ideal mileage recommended by the manufacturer.

"Software testing metrics - Improves the efficiency and


effectiveness of a software testing process."
Software testing metrics or software test measurement is the
quantitative indication of extent, capacity, dimension, amount or size
of some attribute of a process or product.
Example for software test measurement: Total number of defects
In this tutorial we will learn

o Steps to test metrics
o Why do Test Metrics?
o Types of Metrics
o Manual Test Metrics
o Test Metrics Life Cycle
o How to calculate test metrics
o Test Metrics Glossary

Steps to test metrics

Sr# | Steps to test metrics | Example
1 | Identify the key software testing processes to be measured | Testing progress tracking
2 | Use the data as a baseline to define the metrics | The number of test cases planned to be executed per day
3 | Determine the information to be tracked, the frequency of tracking and the person responsible | The actual test execution per day, captured by the test manager at the end of the day
4 | Effective calculation, management and interpretation of the defined metrics | The actual test cases executed per day
5 | Identify the areas of improvement depending on the interpretation of the defined metrics | If the test case execution falls below the goal set, investigate the reason and suggest improvement measures

Why do Test Metrics?

"We cannot improve what we cannot measure" - and test metrics help us do exactly that. Test
metrics are used to:

o Take decisions for the next phase of activities
o Provide evidence for a claim or prediction
o Understand the type of improvement required
o Take decisions on process or technology changes


Types of Metrics

o Process Metrics: can be used to improve the process efficiency of the SDLC (Software
Development Life Cycle)
o Product Metrics: deal with the quality of the software product
o Project Metrics: can be used to measure the efficiency of a project team or of any tools
being used by the team members

Identification of the correct testing metrics is very important. A few things need to be
considered before identifying the test metrics:

o Fix the target audience for the metric preparation
o Define the goal for the metrics
o Introduce all the relevant metrics based on project needs
o Analyze the cost-benefit aspect of each metric and the project lifecycle phase in which it
yields the maximum output

Manual Test Metrics

Manual test metrics are classified into two classes:

o Base Metrics
o Calculated Metrics

Base metrics are the raw data collected by the Test Analyst during test case development and
execution (e.g., number of test cases executed, number of test cases written). Calculated
metrics are derived from the data gathered in base metrics and are usually tracked by the
test manager for test reporting purposes (e.g., % Complete, % Test Coverage).
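
To make the distinction concrete, here is a minimal Python sketch (all names and figures are
illustrative assumptions, not taken from any specific tool) deriving two common calculated
metrics from base metrics:

# Minimal sketch: deriving calculated metrics from base metrics.
# All names and figures below are illustrative assumptions.

def percent_complete(executed: int, total: int) -> float:
    # % Complete = (test cases executed / total test cases) x 100
    return (executed / total) * 100 if total else 0.0

def percent_test_coverage(covered: int, total_requirements: int) -> float:
    # % Test Coverage = (requirements covered by tests / total requirements) x 100
    return (covered / total_requirements) * 100 if total_requirements else 0.0

# Base metrics collected by the test analyst (illustrative values)
executed, total = 120, 150
covered, total_requirements = 45, 50

print(f"% Complete: {percent_complete(executed, total):.1f}")                        # 80.0
print(f"% Test Coverage: {percent_test_coverage(covered, total_requirements):.1f}")  # 90.0
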
Depending on the project or business model, some of the important metrics are:

o Test case execution productivity metrics
o Test case preparation productivity metrics
o Defect metrics
o Defects by priority
o Defects by severity
o Defect slippage ratio

Test Metrics Life Cycle

The different stages of the metrics life cycle, with the steps during each stage:

Analysis
o Identification of the metrics
o Define the identified metrics

Communicate
o Explain the need for the metric to the stakeholders and the testing team
o Educate the testing team about the data points that need to be captured for processing
the metric

Evaluation
o Capture and verify the data
o Calculate the metrics value using the data captured

Report
o Develop the report with an effective conclusion
o Distribute the report to the stakeholders and respective representatives
o Take feedback from the stakeholders

How to calculate test metrics

To understand how to calculate software test metrics, we will look at an example: the
percentage of test cases executed.
To obtain the execution status of the test cases as a percentage, we use the formula:

Percentage of test cases executed = (Number of test cases executed / Total number of test
cases written) x 100

Likewise, you can calculate other parameters such as test cases not executed, test cases
passed, test cases failed, test cases blocked, etc.
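
For example, this formula translates directly into a few lines of Python (the figures used
here are illustrative assumptions):

# Percentage test cases executed = (executed / total written) x 100
def percentage_executed(executed: int, total_written: int) -> float:
    return (executed / total_written) * 100

# Illustrative data: 65 of 100 written test cases were executed
print(percentage_executed(65, 100))        # 65.0
print(100 - percentage_executed(65, 100))  # 35.0 -> percentage not executed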

Test Metrics Glossary

o Rework Effort Ratio = (Actual rework effort spent in that phase / Total actual effort spent
in that phase) x 100
o Requirement Creep = (Total number of requirements added / Number of initial
requirements) x 100
o Schedule Variance = ((Actual effort - Estimated effort) / Estimated effort) x 100
o Cost of finding a defect in testing = Total effort spent on testing / Defects found in testing
o Schedule Slippage = ((Actual end date - Estimated end date) / (Planned end date -
Planned start date)) x 100
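
These glossary formulas are equally straightforward to compute; the short Python sketch
below shows two of them (the effort figures and dates are hypothetical examples):

from datetime import date

# Schedule Variance = ((actual effort - estimated effort) / estimated effort) x 100
def schedule_variance(actual_effort: float, estimated_effort: float) -> float:
    return (actual_effort - estimated_effort) / estimated_effort * 100

# Schedule Slippage = ((actual end - estimated end) / (planned end - planned start)) x 100
def schedule_slippage(actual_end: date, estimated_end: date,
                      planned_start: date, planned_end: date) -> float:
    return (actual_end - estimated_end).days / (planned_end - planned_start).days * 100

print(schedule_variance(120, 100))  # 20.0 -> 20% more effort than estimated
print(schedule_slippage(date(2023, 3, 10), date(2023, 3, 1),
                        date(2023, 1, 1), date(2023, 3, 1)))  # about 15.3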
