1st unit
Principles of Testing
There are seven principles of testing. They are as follows:
1) Testing shows the presence of defects: Testing can show that defects are
present, but cannot prove that there are no defects. Even after testing the
application or product thoroughly, we cannot say that the product is 100%
defect free. Testing always reduces the number of undiscovered defects
remaining in the software, but even if no defects are found, it is not a proof of
correctness.
2) Exhaustive testing is impossible: Testing everything, including all
combinations of inputs and preconditions, is not possible. So, instead of doing
exhaustive testing, we can use risks and priorities to focus testing efforts.
For example: if one screen in an application has 15 input fields, each
having 5 possible values, then to test all the valid combinations you would
need 30,517,578,125 (5^15) tests. It is very unlikely that the project
timescales would allow for this number of tests. So, assessing and managing
risk is one of the most important activities, and a reason for testing, in any
project.
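The arithmetic behind this example can be checked with a few lines of Python (the one-test-per-second rate is an added assumption for illustration):

```python
# Sketch: the arithmetic behind the 5^15 figure. A screen with 15
# independent input fields, each with 5 possible values, needs this
# many tests for exhaustive coverage of valid combinations.
fields = 15
values_per_field = 5

total_combinations = values_per_field ** fields
print(total_combinations)  # 30517578125

# Even at one automated test per second, running nonstop:
years = total_combinations / (60 * 60 * 24 * 365)
print(round(years))  # roughly 968 years
```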
3) Early testing: In the software development life cycle testing activities
should start as early as possible and should be focused on defined objectives.
4) Defect clustering: A small number of modules contain most of the
defects discovered during pre-release testing, or show the most operational
failures.
5) Pesticide paradox: If the same kinds of tests are repeated again and
again, eventually the same set of test cases will no longer find any
new bugs. To overcome this pesticide paradox, it is important to
review the test cases regularly; new and different tests need to be written
to exercise different parts of the software or system to potentially find more
defects.
6) Testing is context dependent: Testing is basically context dependent.
Different kinds of software are tested differently. For example, safety-critical
software is tested differently from an e-commerce site.
7) Absence of errors fallacy: If the system built is unusable and does not
fulfil the users' needs and expectations, then finding and fixing defects does
not help.
Just because testing didn't find any defects in the software, it doesn't mean that the
software is ready to be shipped. Were the executed tests really designed to catch
the most defects? Or were they designed to see if the software matched the users'
requirements? There are many other factors to be considered before making a
decision to ship the software.
Other principles to note are:
o Testing must be done by an independent party.
Testing should not be performed by the person or team that developed the software
since they tend to defend the correctness of the program.
o Assign best personnel to the task.
Because testing requires high creativity and responsibility, only the best personnel
must be assigned to design, implement, and analyze test cases, test data and test
results.
Communication
This is the first step, where the user initiates the request for a desired
software product. The user contacts the service provider, tries to negotiate
the terms, and submits the request to the service providing organization in
writing.
Requirement Gathering
From this step onwards, the software development team works to carry on the
project. The team holds discussions with various stakeholders from the problem
domain and tries to bring out as much information as possible on their
requirements. The requirements are contemplated and segregated into user
requirements, system requirements and functional requirements.
Feasibility Study
After requirement gathering, the team comes up with a rough plan of the
software process. At this step the team analyzes whether software can be made
to fulfill all requirements of the user, and whether there is any possibility of the
software becoming no longer useful. It is found out whether the project is financially,
practically and technologically feasible for the organization to take up. There
are many algorithms available which help the developers to conclude the
feasibility of a software project.
System Analysis
At this step the developers decide on a roadmap of their plan and try to bring
up the best software model suitable for the project. System analysis
includes understanding software product limitations, learning system-related
problems or changes to be made in existing systems beforehand,
identifying and addressing the impact of the project on the organization and
personnel, etc. The project team analyzes the scope of the project and plans
the schedule and resources accordingly.
Software Design
The next step is to bring the whole knowledge of requirements and analysis
to the desk and design the software product. The inputs from users and the
information gathered in the requirement gathering phase are the inputs of this
step. The output of this step comes in the form of two designs: logical
design and physical design. Engineers produce meta-data and data
dictionaries, logical diagrams, data-flow diagrams and in some cases pseudo
code.
Coding
This step is also known as the programming phase. The implementation of the
software design starts in terms of writing program code in a suitable
programming language and developing error-free executable programs
efficiently.
Testing
An estimate says that testing should account for 50% of the whole software
development process. Errors may ruin the software, from a critical level up to its own
removal. Software testing is done while coding by the developers, and
thorough testing is conducted by testing experts at various levels of code
such as module testing, program testing, product testing, in-house testing
and testing the product at the user's end. Early discovery of errors and their
remedy is the key to reliable software.
Integration
Software may need to be integrated with libraries, databases and other
program(s). This stage of the SDLC involves the integration of software
with outer world entities.
Implementation
This means installing the software on user machines. At times, software
needs post-installation configuration at the user end. Software is tested for
portability and adaptability, and integration-related issues are solved during
implementation.
Operation and Maintenance
This phase confirms the software operation in terms of more efficiency and
fewer errors. If required, the users are trained on, or aided with, the
documentation on how to operate the software and how to keep the
software operational. The software is maintained in a timely manner by updating the
code according to the changes taking place in the user-end environment or
technology. This phase may face challenges from hidden bugs and real-world
unidentified problems.
Disposition
As time elapses, the software may decline on the performance front. It may
become completely obsolete or may need intense upgradation. Hence a pressing
need to eliminate a major portion of the system arises. This phase includes
archiving data and required software components, closing down the system,
planning disposition activity, and terminating the system at the appropriate
end-of-system time.
Below you will find a brief description of these phases. Later on, we're going to
publish a separate article on each phase.
Requirements Analysis and Definition. System Overview
This phase begins with analyzing what exactly you want to have done. The system
overview helps you see the big picture of the project and understand which steps
need to be carried out. You should determine and document the vision for the target
product or system; the user profile(s); the hardware and software environment; the
most important components, functions, or features the software must have; the
security requirements, etc. To aid in the needs analysis, it is sometimes necessary to
create prototypes, or to have them created by professionals.
All this can, and often should, be done in cooperation with your vendor.
The product of this stage is the general system requirements (and sometimes, draft
user manual). This document will be modified as the project is undertaken.
Estimation
This is a phase that is usually obscure to customers. Vendors tend to supply you
with the estimate itself, and that's it. Personally, I believe that customers may and
should take a more active part in the estimation process. For example, you have to be
able to select from different options when discussing the platforms, technologies, and tools
that will be used for the target system. Also, make sure your vendor researches
the existing libraries and tools that can be used in the project. Remember that an
estimate should explicitly list what is included in the price, as well as why and how
much any additional features will cost. Never let the vendor baffle you with technical
jargon and complex details. Finally, if you are in doubt about the provided estimate,
consult an expert; if the vendor appears to try to take advantage of you, don't
bargain with such a company; just say thank you and look for another OSP.
Outsourcing is risky by nature, so you can't afford to take chances with a vendor like
that.
The estimate isn't the only document that results from this phase. The project
contract (or project bid) and a rough project schedule usually come into existence at
this point, too.
Implementation. Programming and Testing
In this phase, using the selected programming language, your vendor creates and integrates the system components.
Performing code reviews and test cases worked out by the vendor's QA/QC division,
as well as unit, integration, and system tests, are other key activities of this phase.
Comprehensive testing and correcting any errors identified ensures that components
function together properly, and that the project implementation meets the system
specification.
When outsourcing a software development project, I advise you to have the project delivered
and paid for in parts. This is one of the best ways to minimize the risk for you and
your vendor. If you aren't satisfied with the way the project is being implemented, you
can take the specification and the previously delivered code to another vendor.
Release. Delivery and Installation
In the release phase, your vendor must transfer the target product or system to you.
The key activities usually include installation and configuration in the operational
environment, acceptance testing, and training of the users if necessary.
A crucial point here is formal acceptance testing, which comprises a series of end-to-end tests. It is performed to confirm that the product or system fulfills the acceptance
requirements determined by the functional specification.
After this phase is complete, the product or system is considered formally delivered
and accepted. If iterative development is used, the next iteration should be
commenced.
Operation and Maintenance
The Operation and Maintenance phase begins once you have formally accepted the
product or system delivered by the vendor. The task of this phase is to ensure the proper
functioning of the software. To improve a product or system, it should be
continuously maintained. Software maintenance involves detecting and correcting
errors, as well as extending and improving the software itself.
Most people get confused when it comes to pinning down the differences among
Quality Assurance, Quality Control, and Testing. Although they are
interrelated and, to some extent, can be considered the same activities,
there exist distinguishing points that set them apart. The following
points differentiate QA, QC, and Testing.
Quality Assurance:
o Process-oriented activities.
o Preventive activities.
Quality Control:
o Focuses on actual testing, by executing the software with the aim of identifying bugs/defects through implementation of procedures and processes.
o Product-oriented activities.
o It is a corrective process.
o QC can be considered a subset of Quality Assurance.
Testing:
o It includes activities that ensure the identification of bugs/errors/defects in the software.
o Product-oriented activities.
o It is a preventive process.
o It is a subset of the Software Test Life Cycle (STLC).
Verification vs. Validation:
o Verification is done by developers.
o Validation is done by testers.
2nd unit
Performance testing measures the quality of a system under various workloads. In a soak (endurance) test, the system is run under load for an extended period while resource usage is monitored to detect memory leaks or other degradation: an application may perform
well for an hour or two, and then start to experience issues. These tests are
especially useful when trying to track down memory leaks or corruption.
6. Component Test: Testing a discrete component of your application
requires a component test. Examples might include a search function, a
file upload, a chat feature, an email function, or a 3rd-party component like
a shopping cart.
7. Smoke Test: A smoke test is a test run under very low load that merely
shows that the application works as expected. The term originated in the
electronics industry and refers to the application of power to an electronic
component. If smoke is generated, the test fails and no further testing is
necessary until the simplest test passes successfully. For example, there
may be correlation issues with your scenario or script; if you can run a
single-user test successfully, the scenario is sound. It is a best practice to
initiate one of these verification runs before running larger tests to ensure
that the test is valid.
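A smoke test of a load scenario can be sketched roughly as follows; `run_scenario` here is a hypothetical stand-in for a real load-testing scenario, not an actual tool API:

```python
# Hypothetical smoke test: run the scenario once with a single user
# before committing to a large load test.

def run_scenario(users):
    """Stand-in for a load-test scenario; returns True when the run
    completes without errors (assumed behaviour for this sketch)."""
    return users >= 1

def smoke_test():
    ok = run_scenario(users=1)   # very low load: a single user
    if not ok:
        raise RuntimeError("smoke test failed; fix the script before scaling up")
    return ok

print(smoke_test())  # True: safe to proceed to the full load test
```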
What is performance testing?
Systems are tested to ensure that they run for a long period of time without
deviations. Performance testing also monitors resources such as:
o Network utilization
o Operating System limitations
o Disk usage
Proxy Sniffer - one of the leading tools used for load testing of
web and application servers. It is a cloud-based tool that's capable
of simulating thousands of users.
Summary
Path Coverage = (Number paths exercised / Total Number of paths in the program) x
100 %
#2 Branch Coverage
A branch in a programming language is like an IF
statement. An IF statement has two branches: true and
false.
So in Branch coverage (also called Decision coverage), we
validate that each branch is executed at least once.
In the case of an IF statement, there will be two test
conditions:
One to validate the true branch, and
The other to validate the false branch.
Hence, in theory, Branch Coverage is a testing method
which, when executed, ensures that each branch from each
decision point is executed.
#3 Path Coverage
Path coverage tests all the paths of the program. This is a
comprehensive technique which ensures that all the paths
of the program are traversed at least once. Path Coverage
is even more powerful than Branch coverage. This
technique is useful for testing complex programs.
Let's take a simple example to understand all these white
box testing techniques.
INPUT A & B
C = A + B
IF C>100
PRINT ITS DONE
For Statement Coverage we would need only one test
case to check all the lines of code.
That means:
If I consider TestCase_01 to be (A=40 and B=70), then all
the lines of code will be executed
Now the question arises:
Is that sufficient?
What if I consider my test case to be A=33 and B=45?
Because Statement coverage only covers the true side,
for this pseudo code, one test case would NOT be
sufficient to test it. As testers, we have to consider the
negative cases as well.
Hence for maximum coverage, we need to
consider Branch Coverage, which will evaluate the
FALSE conditions.
In the real world, you would add appropriate statements for when the
condition fails.
So now the pseudo code becomes:
INPUT A & B
C = A + B
IF C>100
PRINT ITS DONE
ELSE
PRINT ITS PENDING
Since Statement coverage is not sufficient to test the entire
pseudo code, we would require Branch coverage to ensure
maximum coverage.
So for Branch coverage, we would require two test cases to
complete testing of this pseudo code.
TestCase_01: A=40, B=70 (C=110, exercising the true branch)
TestCase_02: A=33, B=45 (C=78, exercising the false branch)
With this, we can see that each and every line of code is
executed at least once.
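As a sketch, the pseudo code with the ELSE branch translates directly to Python (the function name `check_total` is mine); one input that drives the decision true and one that drives it false give 100% branch coverage of the single decision:

```python
def check_total(a, b):
    """Python version of the pseudo code with the ELSE branch added."""
    c = a + b
    if c > 100:
        return "ITS DONE"
    else:
        return "ITS PENDING"

# One test case exercises the true branch, the other the false branch,
# so together they achieve full branch coverage of the decision.
print(check_total(40, 70))  # ITS DONE
print(check_total(33, 45))  # ITS PENDING
```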
Here are the conclusions so far:
o Branch Coverage ensures more coverage than Statement Coverage.
o Branch Coverage is more powerful than Statement Coverage.
o 100% Branch Coverage itself means 100% Statement Coverage.
Now consider the following pseudo code with two decision statements:
INPUT A & B
C = A + B
IF C>100
PRINT ITS DONE
END IF
IF A>50
PRINT ITS PENDING
END IF
Now to ensure maximum coverage, we would require 4 test
cases.
How?
Simply, there are 2 decision statements, so for each
decision statement we would need two branches to test: one
true and one false.
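A sketch of the two-decision pseudo code in Python (the function name `report` is mine) makes the true/false outcomes of each decision explicit:

```python
def report(a, b):
    """Sketch of the two-decision pseudo code; returns the printed lines."""
    lines = []
    c = a + b
    if c > 100:
        lines.append("ITS DONE")
    if a > 50:
        lines.append("ITS PENDING")
    return lines

# Each decision has a true and a false outcome; sample inputs that
# exercise the four outcomes:
print(report(60, 50))   # C=110 true,  A>50 true  -> both lines printed
print(report(60, 20))   # C=80 false,  A>50 true  -> ITS PENDING only
print(report(30, 80))   # C=110 true,  A>50 false -> ITS DONE only
print(report(30, 20))   # C=50 false,  A>50 false -> nothing printed
```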
Conclusion
Note that statement, branch or path coverage does not
identify any bug or defect that needs to be fixed. It only
identifies those lines of code which are either never
executed or remain untouched. Based on this, further
testing can be focused.
Relying only on black box testing is not sufficient for
maximum test coverage. We need a combination of
both black box and white box testing techniques to cover
the maximum number of defects.
If done properly, white box testing will certainly contribute
to software quality. It's also good for testers to
participate in this testing, as it can provide the most
unbiased opinion about the code.
Types of Reviews:
The types of reviews can be given by a simple diagram (not reproduced here).
The acceptance test cases are executed against the test data or using an
acceptance test script and then the results are compared with the expected
ones.
Acceptance Criteria
Acceptance criteria are defined on the basis of attributes such as functional
correctness and completeness, usability, and performance.
An acceptance test plan typically includes:
o Introduction
o Acceptance Test Category
o Operation Environment
o Test Case ID
o Test Title
o Test Objective
o Test Procedure
o Test Schedule
o Resources
The acceptance test activities are designed to reach one of the
conclusions recorded in an acceptance test report, which contains:
o Report Identifier
o Summary of Results
o Variations
o Recommendations
o Summary of To-Do List
o Approval Decision
Structural testing on the other hand is concerned with testing the implementation of the
program. The intent of structural testing is not to exercise all the different input or
output conditions but to exercise the different programming structures and data
structures used in the program.
To test the structure of a program, structural testing aims to achieve test cases that will
force the desired coverage of different structures. Various criteria have been proposed
for this. Unlike the criteria for functional testing, which are frequently imprecise, the
criteria for structural testing are generally quite precise, as they are based on the program
structure, which is formal and precise.
Control Flow Based Criteria
The most common structure-based criteria are based on the control flow of the program. In
these criteria, the control flow graph of a program is considered, and coverage of various
aspects of the graph is specified as the criterion. Hence, before we consider the criteria, let us
precisely define a control flow graph for a program.
Let the control flow graph (or simply flow graph) of a program P be G. A node in this
graph represents a block of statements that is always executed together, i.e., whenever
the first statement is executed, all other statements are also executed. An edge (i, j) (from
node i to node j) represents a possible transfer of control after executing the last
statement of the block represented by node i to the first statement of the block
represented by node j.
The simplest coverage criterion is statement coverage, which requires that each
statement of the program be executed at least once during testing. In other words it
requires that the paths executed during testing include all the nodes in the graph. This is
also called the all nodes criterion. This coverage criterion is not very strong and can
leave errors undetected.
For example, if there is an if statement in the program without an else clause, the
statement coverage criterion for this statement will be satisfied by a test case that
evaluates the condition to true. No test case is needed to ensure that the condition in
the if statement evaluates to false. This is a serious shortcoming because decisions in
programs are potential sources of errors. As an example, consider the following function
to compute the absolute value of a number.
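The function itself was lost in extraction; a plausible sketch of such an absolute-value function, with an if and no else, shows how one test satisfies statement coverage without ever evaluating the condition to false:

```python
def absolute(x):
    """Sketch of an absolute-value function with an if and no else."""
    if x < 0:
        x = -x
    return x

# A single negative input executes every statement, satisfying statement
# coverage, yet the condition is never evaluated to false -- a bug
# affecting non-negative inputs could go unnoticed.
print(absolute(-5))  # 5: statement coverage achieved with one test
print(absolute(3))   # 3: needed only if we also demand branch coverage
```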
A little more general coverage criterion is branch coverage, which requires that each
edge in the control flow graph be traversed at least once during testing. In other words,
branch coverage requires that each decision in the program be evaluated to true and
false values at least once during testing. Testing based on branch coverage is often called
branch testing.
The trouble with branch coverage comes when a decision has many conditions in it. For
example, consider the following function that checks the validity of a data item. The data
item is valid if it lies between 0 and 100, but the function checks for x < 200 instead of 100
(perhaps a typing error made by the programmer).
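The original listing is missing; a hypothetical version of the validity check, with the x < 200 typo described above, shows how branch coverage can exercise both branches yet miss the defect:

```python
def is_valid(x):
    """Hypothetical validity check with the typo described in the text:
    the item should be valid only for 0..100, but the programmer wrote
    x < 200 instead of x < 100."""
    if 0 <= x < 200:   # bug: should test against 100, not 200
        return True
    return False

# Branch coverage can be satisfied without exposing the bug:
print(is_valid(50))    # True  -- the decision evaluates to true
print(is_valid(-1))    # False -- the decision evaluates to false
# Both branches are covered, yet the faulty case slips through:
print(is_valid(150))   # True, although 150 is outside 0..100
```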
Data Flow -Based Testing
The basic idea behind data flow based testing is to make sure that during testing the
definitions of variables and their subsequent use is tested. Just like the all nodes and all
edges criteria try to generate confidence in testing by making sure that at least all
statements and all branches have been tested the data flow testing tries to ensure some
coverage of the definitions of variables.
For data flow based criteria, a definition use graph (def/use graph for short) for the
program is first constructed from the control flow graph. Each node, representing a block of code,
has variable occurrences in it. A variable occurrence can be one of the following three
types (RW 85):
Def represents the definition of a variable. The variable on the left-hand side of an
assignment statement is the one being defined.
C-use represents computational use of a variable. Any statement (e.g., a read, a write, or an
assignment) that uses the value of a variable for computation purposes is said to be
making a c-use of the variable. In an assignment statement, all variables on the right-hand
side have a c-use occurrence. In read and write statements, all variable
occurrences are of this type.
P-use represents predicate use. These are all the occurrences of variables in a
predicate (i.e., variables whose values are used for computing the value of the predicate),
which is used for transfer of control.
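A tiny annotated fragment (illustrative only) shows the three occurrence types:

```python
# A small fragment annotated with the three occurrence types from the text.
a = 10          # def of a
b = a + 5       # def of b; c-use of a (right-hand side of an assignment)
if b > 12:      # p-use of b (value used in a predicate / transfer of control)
    print(b)    # c-use of b (a write statement)
```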
In control flow based and data flow based testing, the focus was on which paths to
execute during testing. Mutation testing does not take a path-based approach. Instead, it
takes the program and creates many mutants of it by making simple changes to the
program. The goal of testing is to make sure that during the course of testing each
mutant produces an output different from the output of the original program.
In other words, the mutation testing criterion does not say that the set of test cases must
be such that certain paths are executed; instead, it requires the set of test cases to be such
that they can distinguish between the original program and its mutants.
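A minimal sketch of the idea: create a mutant by a simple change (here, + replaced with -) and look for a test input on which the two programs disagree:

```python
def original(a, b):
    return a + b

def mutant(a, b):
    # mutant produced by a simple change: '+' replaced with '-'
    return a - b

# A test case "kills" the mutant if the two programs disagree on it.
# Note that (a, 0) would NOT kill this mutant, since a+0 == a-0.
test_input = (2, 3)
killed = original(*test_input) != mutant(*test_input)
print(killed)  # True: this test case distinguishes original from mutant
```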
Black box test design techniques include:
Equivalence Class
Boundary Value Analysis
Domain Tests
Orthogonal Arrays
Decision Tables
State Models
Exploratory Testing
All-pairs testing
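As an illustration of one of these techniques, a boundary value analysis sketch for a field assumed (for illustration) to accept values 1..100 tests at and around each boundary:

```python
# Boundary Value Analysis sketch for a field that accepts 1..100
# (range chosen for illustration): test at, just inside, and just
# outside each boundary.

def accepts(value, low=1, high=100):
    return low <= value <= high

boundary_cases = [0, 1, 2, 99, 100, 101]
results = {v: accepts(v) for v in boundary_cases}
print(results)  # {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
```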
Background
We have studied various software development
lifecycle models. All the SDLC models have Integration
testing as one of their layers. In my opinion, Integration
testing is an important part of every one of these models.
Meaning:
We normally do Integration testing after Unit testing.
Once all the individual units are created and tested, we
start combining those unit-tested modules and start
doing integration testing. So the meaning of Integration
testing is quite straightforward: integrate/combine the
unit tested modules one by one and test the behavior as a
combined unit.
The main function or goal of Integration testing is to test
the interfaces between the units/modules.
The individual modules are first tested in isolation. Once
the modules are unit tested, they are integrated one by
one, till all the modules are integrated, to check the
combinational behavior, and validate whether the
requirements are implemented correctly or not.
Here we should understand that Integration testing does
not happen at the end of the cycle; rather, it is conducted
simultaneously with development. So most of the
time, not all modules are actually available to test, and
herein lies the challenge: to test something which
does not exist!
Approaches
There are fundamentally 2 approaches for doing
Integration testing:
1. Bottom up approach
2. Top down approach.
Let's consider the below figure (module A on top, calling modules B1 & B2, which in turn call modules B1C1, B1C2 & B2C1, B2C2) to understand the approaches:
Bottom up approach:
Bottom up testing, as the name suggests starts from the
lowest or the innermost unit of the application, and
gradually moves up. The Integration testing starts from the
lowest module and gradually progresses towards the upper
modules of the application. This integration continues till all
the modules are integrated and the entire application is
tested as a single unit.
In this case, modules B1C1, B1C2 & B2C1, B2C2 are the
lowest modules, which are unit tested. Modules B1 & B2 are not
yet developed. The functionality of modules B1 and B2 is
to call the modules B1C1, B1C2 & B2C1, B2C2. Since
B1 and B2 are not yet developed, we would need some
program or a simulator which will call the B1C1, B1C2 &
B2C1, B2C2 modules. These simulator programs are
called DRIVERS.
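A driver can be sketched as a throwaway function that calls the developed lowest-level modules in place of the missing B1; the module bodies here are invented for illustration:

```python
# Hypothetical sketch of a DRIVER for bottom-up integration: modules
# B1C1 and B1C2 are developed and unit tested, but their caller B1 is
# not, so a throwaway driver calls them in B1's place.

def b1c1(data):          # developed lowest-level module (assumed behavior)
    return data.upper()

def b1c2(data):          # developed lowest-level module (assumed behavior)
    return data[::-1]

def driver_for_b1(data):
    """Dummy program standing in for the not-yet-developed module B1."""
    return b1c1(data), b1c2(data)

print(driver_for_b1("abc"))  # ('ABC', 'cba')
```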
In simple words, DRIVERS are the dummy programs
which are used to call the functions of the lowest module
when the calling function does not exist.
In the sandwich (hybrid) approach, which combines top-down
and bottom-up, one arm will test the upper module A and
another arm will test the lower modules B1C1, B1C2 & B2C1, B2C2.
Since both approaches start simultaneously, this
technique is a bit complex and requires more people along
with specific skill sets, and thus adds to the cost.
Types of Integration Testing
1) Linear integration testing
If the modules are sequentially related, then linear
integration is used.
ex: Amount transfer and amount balance are sequentially
related.
2) Non-linear integration testing
If the modules in an application are randomly related, or not
sequentially related, non-linear integration is used.
ex: In a messenger application like Gmail or Yahoo,
the links or modules are related randomly.
When is Integration Testing performed?
Integration Testing is performed after Unit Testing and before System
Testing.
Who performs Integration Testing?
Either Developers themselves or independent Testers perform Integration
Testing.
APPROACHES
Big Bang is an approach to Integration Testing where all or most of
the units are combined together and tested at one go. This
approach is taken when the testing team receives the entire
software in a bundle. So what is the difference between Big Bang
Integration Testing and System Testing? Well, the former tests only
the interactions between the units while the latter tests the entire
system.
Top Down is an approach to Integration Testing where top level
units are tested first and lower level units are tested step by step
after that. This approach is taken when top down development
approach is followed. Test Stubs are needed to simulate lower level
units which may not be available during the initial phases.
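A test stub for top-down integration can be sketched like this; the names and canned return value are assumptions for illustration:

```python
# Hypothetical sketch of a Test Stub for top-down integration: the top
# module A is ready, but the lower-level module it calls is not, so a
# stub returns a canned answer in its place.

def lower_module_stub(query):
    """Stands in for the unfinished lower-level unit."""
    return "canned-result"

def module_a(query, lower=lower_module_stub):
    # top-level logic under test; the real lower module is plugged in later
    return f"A({lower(query)})"

print(module_a("q"))  # A(canned-result)
```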
3rd unit
1) Load Testing:
Load testing is performed to determine how much load the application under
test can withstand. Load testing is successful only if the specified test cases
are executed without any error in the allocated time.
Simple examples of load testing:
3) Spike testing:
Spike testing is a subset of Stress Testing. A spike test is carried out to validate
the performance characteristics when the system under test is subjected to
workload models and load volumes that repeatedly increase beyond anticipated
production operations for short periods of time.
4) Endurance testing:
Endurance testing is a non-functional type of testing. Endurance testing involves
testing a system with an expected amount of load over a long period of time to
find the behavior of the system. For example, if a system is designed to
work for 3 hours at a time, the same system is made to endure for 6 hours to check its
staying power. Most commonly, test cases are executed to check for
behavior such as memory leaks, system failures or random behavior.
Sometimes endurance testing is also referred to as Soak testing.
5) Scalability Testing:
Scalability testing is a type of non-functional testing. It tests a
software application to determine its capability to scale up in terms of any of its
non-functional capabilities, such as the user load supported, the number of
transactions, the data volume, etc. The main aim of this testing is to understand
at what peak the system prevents further scaling.
6) Volume testing:
Volume testing is non-functional testing which refers to testing a software
application with a large amount of data to be processed, to check the efficiency
of the application. The main goal of this testing is to monitor the performance of the
application under varying database volumes.
Performance testing tools include:
WebLOAD
LoadRunner
Apache JMeter
NeoLoad
LoadUI
OpenSTA
LoadImpact
WAPT
Loadster
Httperf
QEngine (ManageEngine)
Testing Anywhere
CloudTest
Loadstorm
Repeat the above test for the new build received from the client after fixing
the bugs and implementing the recommendations.
Bottlenecking
Poor scalability
Memory utilization
Disk usage
CPU utilization
1. APPLICATION COMPLEXITY
Today's enterprise applications are increasingly a large collective of
distributed software components and cloud services that enable complex
business services. With so many moving parts, something is always bound to
have the chance of impacting performance, even with a resilient
architecture. Complexity, and the fact that all these components are
monitored in different silos, also makes it hard to manage a business
service or application as a whole, which also impacts performance. But it's
the reality of today's enterprise application and monitoring architectures.
Nicola Sanna
President and CEO, Netuitive
Application complexity is one of the biggest factors impacting application
performance. Today's applications and services, particularly those delivered
via the Web, are a mosaic of components sourced from multiple places: data
center, cloud, third-party, et al. While the customer or employee looking at a
browser window sees a single application, multiple moving parts must
execute in the expected manner to deliver a great end-user experience.
Maybe the Web server and app server are running fine, but if the database is
faltering, user experience will suffer. Being able to measure and keep tabs on
all those moving parts is the challenge, and requires an APM tool that can
provide a view into the performance of all the parts, not just individual
components. As the saying goes, "The more moving parts, the more that can
go wrong."
Jason Meserve
Product Marketing Manager, CA Technologies
2. APPLICATION DESIGN
One of the biggest factors that impacts application performance is design.
Performance must be designed in. When applications are specified,
performance goals need to be delineated along with the details of the
environment the applications will run in. Often development is left out of this
and applications are monitored, analyzed and fixed after they are released
into production. This never works as well as when performance is one of the
key goals of the application design before a line of code is written.
Charley Rich
VP Product Management and Marketing, Nastel Technologies
One of the biggest impacts to application performance is caused by
companies outsourcing/subcontracting their application development outside
of their company and their quality control domain. Application quality and
performance need to be built into the application platform and cannot be an
afterthought or something that we'll fix later. The subpar app performance
that is accepted in the development phase is bound to manifest itself in the
production stage. Modern APM solutions capture this poor performance, but
can't provide the cure. The only way to prevent poor app performance is to
expose your app development to rigorous quality controls and processes
early in the application lifecycle and actually fix issues early in the
cycle.
Petri Maanonen
Sr. Product Marketing Manager for HP Application Performance Management
From my perspective the biggest factor affecting application performance
today is poorly optimized code and infrastructure, such as suboptimal SQL
queries, poorly configured network infrastructure, or inefficient code
algorithms at the application layer. All of these problems can be difficult to
isolate, and the emphasis on DevOps processes can cause these issues to
multiply quickly by increasing the rate of change in the data center. Because
of this it is important to adequately tool the data center to monitor and
report on all aspects of a deployed application using code level
instrumentation, EURT and network performance tools, and traditional IT
infrastructure monitoring solutions.
Rick Houlihan
VP of Engineering, Boundary
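The suboptimal SQL queries mentioned above often take the form of the classic N+1 pattern: one query per row instead of a single aggregated statement. A minimal sketch using Python's built-in sqlite3 with an illustrative customers/orders schema (the schema and names are made up for illustration, not taken from any application discussed here):

```python
import sqlite3

# In-memory database with a hypothetical customers/orders schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
""")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(i, f"customer-{i}") for i in range(100)])
conn.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                 [(i, i % 100, 10.0) for i in range(1000)])

def totals_n_plus_one(conn):
    """Anti-pattern: one query per customer (N+1 round trips)."""
    totals = {}
    for (cid, name) in conn.execute("SELECT id, name FROM customers"):
        row = conn.execute(
            "SELECT COALESCE(SUM(total), 0) FROM orders WHERE customer_id = ?",
            (cid,)).fetchone()
        totals[name] = row[0]
    return totals

def totals_single_query(conn):
    """One aggregated JOIN does the same work in a single statement."""
    return {name: total for (name, total) in conn.execute(
        "SELECT c.name, COALESCE(SUM(o.total), 0) "
        "FROM customers c LEFT JOIN orders o ON o.customer_id = c.id "
        "GROUP BY c.id")}

# Both return the same answer; the second issues 1 query instead of 101.
assert totals_n_plus_one(conn) == totals_single_query(conn)
```

On a real network-attached database the N+1 version also pays one round trip per customer, which is exactly the kind of bottleneck code-level instrumentation surfaces.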
3. APPLICATION TESTING
Today's applications are often developed in simulation labs without testing
performance on real-world networks. Before applications are deployed,
transport across today's highly distributed network architectures should be
monitored and optimized.
Doug Roberts
Managing Director of Product Strategy, Fluke Networks Visual
Insufficient testing of the application in the actual production environment
and under varying conditions impacts performance. Tied to that is the need
for developers and testers to have a clear understanding of the
non-functional performance criteria.
Michael Azoff
Principal Analyst, Ovum
With agile release cycles, the reality is that less than 5% of developers
performance-test their code before it is pushed to production. The "make it
work" over "make it perform" mantra is one of the biggest factors that
impacts application performance today. Most organizations don't have the
time, resources, or budget to replicate production environments in test for
every agile release, which is why a growing number of customers have
started to test in production outside of working hours. When you consider
that the codebase of an application changes several times per month, you
can begin to understand why performance anti-patterns and bottlenecks
make their way into production.
Stephen Burton
Tech Evangelist, AppDynamics
what makes the problem hard to solve, and often causes approaches to fail
on given architectures.
Jonah Kowall
Research Vice President, IT Operations Management, Gartner
Applications are distributed by nature, and unless the underlying
infrastructure is responsive across all the different components of the
application service, the entire service is impacted.
Vikas Aggarwal
CEO, Zyrion
Without a doubt, third-party web components are among the biggest factors
impacting web application performance today. To deliver the functions and
features online visitors expect, websites and web applications are actually a
composite of your own resources plus numerous third-party web
components. These include content delivery networks (CDNs), site search
functions, shopping cart and payment processing functions, ad networks,
multiple social network connections, ratings and reviews for gathering
feedback, and web analytics. Today, the average website includes
components from eight or more different hosts, and a slowdown for any one
service can degrade performance for an entire website or web application. If
anything goes wrong (and inevitably it will), only one party will get the
blame: you, as the primary website owner. Organizations leveraging
third-party web components must adopt an end-user-focused approach to
APM, in order to better identify and fix performance problems associated
with third-party services beyond one's own firewall.
Stephen Pierzchala
Technology Strategist, Compuware APM's Center of Excellence
One of the most critical factors affecting application performance, and
often the hardest to identify and track, is application dependencies on
supporting applications, as well as the underlying system and network
components that connect them all together. With the advent of virtualized
servers and networks, the complexity of the application delivery
infrastructure has increased significantly, and so the challenge is finding an
application performance monitoring solution that can automatically discover
and monitor the network and server topologies for the entire application
service.
Brian Jacobs
6. THE NETWORK
Network latency and bandwidth are king for any application that isn't local
(remote workforce, customer-facing website, web applications, etc.).
Monitoring network bandwidth and web application performance from
multiple locations helps isolate the problem to the network tier.
Jennifer Kuvlesky
Product Marketing Manager, SolarWinds
The network on which the application is used impacts performance
tremendously, especially for mobile and cloud. Inconsistent bandwidth, high
jitter, increased latency and packet loss all work to degrade application
performance. While you might not be able to control mobile or most cloud
networks, you can build and test apps with these network conditions in mind.
This gives organizations the best chance to optimize app performance before
the network impacts are felt by users.
Dave Berg
Vice President of Product Strategy, Shunra
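Building and testing apps with these network conditions in mind can start in ordinary unit tests. A minimal sketch, assuming a hypothetical `fetch_profile` backend call, that wraps a callable with emulated latency, jitter, and loss so their effect is visible to the caller:

```python
import random
import time

def with_network_conditions(func, latency_s=0.1, jitter_s=0.03, loss_rate=0.05):
    """Wrap a callable so every call pays an emulated network penalty:
    fixed latency plus uniform jitter, with a chance of a dropped call
    (surfaced as ConnectionError). All parameter defaults are illustrative."""
    def wrapped(*args, **kwargs):
        if random.random() < loss_rate:
            raise ConnectionError("emulated packet loss")
        time.sleep(latency_s + random.uniform(0, jitter_s))
        return func(*args, **kwargs)
    return wrapped

def fetch_profile(user_id):
    # Hypothetical stand-in for a real backend call.
    return {"id": user_id, "name": f"user-{user_id}"}

# Exercise the app code under a 50 ms latency, 20 ms jitter "network".
slow_fetch = with_network_conditions(
    fetch_profile, latency_s=0.05, jitter_s=0.02, loss_rate=0.0)
start = time.perf_counter()
result = slow_fetch(42)
elapsed = time.perf_counter() - start
assert result["name"] == "user-42"
assert elapsed >= 0.05   # the emulated latency is visible to the caller
```

For system-level tests, dedicated network emulators (or Linux netem) do the same job below the socket layer; this sketch only shows the idea of making degraded conditions part of the test matrix.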
Bandwidth bottlenecks are a big problem because they cause network
queues to develop and data to be lost, impacting the performance of
applications. My advice is to keep tabs on the number of devices, users,
and new applications utilizing the network.
Jim Swepson
Pre-Sales Technologist, iTrinegy
8. MOBILITY
One of the biggest factors we see is the acceleration of mobility and IT
consumerization, which will propel the ongoing shift in application
architectures required to deliver the most dynamic, modern mobile end user
interfaces.
John Newsom
Executive Director, Application Performance Monitoring, Dell
Mobile usage numbers are soaring, which is certainly having an impact on
application performance. Users have multiplied as they engage more often
using a variety of devices.
Gregor Rechberger
Product Manager, Performance Testing, Micro Focus
TRAC's 2013 APM Spectrum shows that the Web browser is the key blind spot
for gaining true end-to-end visibility into application performance. With new
approaches to application design and the increased usage of Web Services,
the ability to monitor the processing that takes place within the browser has
become one of the key requirements for full visibility into application
performance.
Bojan Simic
President and Principal Analyst, TRAC Research
larger companies with a big IT budget. There seem to be two major camps around this
product: 1) you are a professional performance engineer and love all the functionality, or 2)
you are frustrated by the complexity and don't have time for months of learning curve.
Visual Studio Team System - Microsoft has a suite of testing tools for web applications as a
part of Visual Studio. Typical MS software: if you are committed to the MS stack, then it can
be a good option. If you have to pay extra for it because your company didn't buy the Test
Edition (or whatever licensing mechanism is in play), then you might want to look at a
cheaper alternative.
Silk - Enterprise load testing tool from Borland. Generally accepted as the next-best option
for big companies that don't want or can't afford LoadRunner. Lots of features and capability.
Still a budget challenge for most web developers, but a great fit for many testing
professionals working for or with the Fortune 1,000.
IBM Rational Performance Tester - Performance testing tool in the enterprise class and price
range. Extra value from extensions to Siebel and SAP Solutions. Pricey, but a good fit for
large organizations that normally buy from IBM.
LoadStorm - A cloud-based (Amazon EC2) load testing tool for web applications. They target
web developers on a tight budget with a simple tool that's easy to afford. It doesn't have all
the features of enterprise tools, but it does have what most developers need to simulate
http traffic for form submission and following links. Nice real-time graphs show metrics like
response times, errors, throughput, requests per second, and concurrent users. Can test up
to 50,000 concurrent users. Free account with 25 concurrent users never expires (handy!).
Xceptance LoadTest - Load testing tool for creating and running regression and load tests, in
particular for web applications. XLT combines the automation of regression tests with the
execution of load tests, as the test cases already created for the automated regression test
can subsequently be applied as load tests. In short: Every regression test can also be a load
test. Free for up to five virtual users.
Web Page Tester - Analyzes the performance of a single page in an IE browser. You provide
the URL of a page, and it is requested from one of five locations around the world. Summary
and detail reports are generated that show the times for overall page loading, DNS lookups,
rendering, and downloading for each resource such as HTML, CSS, images, and script files.
An optimization checklist is presented to make suggestions such as compression or caching
for improving the performance.
SiteBlaster - SiteBlaster is a web site load and stress testing tool built to load and stress test
USGovXML. USGovXML is an index to publicly available web services and XML data sources
provided by the US Government. When testing is complete, a report is available that can be
viewed or printed. SiteBlaster simulates Internet Explorer web browsing functionality and is
best used to test those sites that use URL query strings to pass data to its web pages.
Gomez - Gomez provides an on-demand platform that you use to optimize the performance,
availability and quality of your Web and mobile applications. It identifies business-impacting
issues by testing and measuring Web applications from the outside-in across your users,
browsers, mobile devices and geographies using a global network of 100,000+ locations.
BrowserMob - Hosted Selenium on-demand load testing tool to simulate Firefox browsers.
Captures screenshots of website load issues to help you debug. It uses Amazon EC2 cloud to
run many instances of Selenium. More accurately mimics the real user experience by
supporting AJAX and Flash processing. Free trial available for 25 real browser user load tests
or 100 simulated virtual users.
LoadImpact - Online load testing service from Gatorhole for stress testing websites. Although
not on a specific cloud infrastructure, it is a hosted solution with no download or installation,
and provides user traffic against your website through their network of load generator server
clusters with very fast connections to enable simulation of tens of thousands of users
accessing your website concurrently. Free low level load tests for 1-50 simulated users;
higher levels have monthly fees.
OpenSTA - 'Open System Testing Architecture' is a free, open source web load/stress testing
application, licensed under the GNU GPL. Utilizes a distributed software architecture based
on CORBA. OpenSTA binaries available for Windows.
JMeter - Desktop open source performance testing tool written in Java by the Apache
Software Foundation. It was originally designed for testing Web Applications but has since
expanded to other test functions. It may be used to test performance both on static and
dynamic resources (files, Servlets, Perl scripts, Java Objects, Data Bases and Queries, FTP
Servers and more). It can be used to simulate a heavy load on a server, network or object to
test its strength or to analyze overall performance under different load types.
Curl-Loader - (also known as "omes-nik" and "davilka") is an open-source tool written in C,
simulating the application load and application behavior of thousands and tens of
thousands of HTTP/HTTPS and FTP/FTPS clients, each with its own source IP address. In
contrast to other tools, curl-loader uses real C-written client protocol stacks, namely the
HTTP and FTP stacks of libcurl and the TLS/SSL of openssl, and simulates user behavior with
support for login and authentication flavors. The goal of the project is to deliver a powerful
and flexible open-source testing solution as a real alternative to Spirent Avalanche and IXIA
IxLoad.
The Grinder - a Java load testing framework that makes it easy to run a distributed test using
many load injector machines. It is freely available under a BSD-style open-source license. It
has a generic approach to load testing anything that has a Java API. This includes common
cases such as HTTP web servers, SOAP and REST web services, and application servers
(CORBA, RMI, JMS, EJBs), as well as custom protocols. Test scripts are written in Jython. A
graphical console allows multiple load injectors to be monitored and controlled, and provides
centralised script editing and distribution.
ApacheBench - (sometimes simply called "ab") a Perl API for Apache benchmarking and
regression testing. Intended as the foundation for a complete benchmarking and regression
testing suite for transaction-based mod_perl sites. For stress-testing server while verifying
correct HTTP responses. Based on the Apache 1.3.12 ab code. Available via CPAN as .tar.gz
file.
http-load - Unix load test application to generate web server loads from ACME Software.
Handles HTTP and HTTPS. http_load runs multiple http fetches in parallel, to test the
throughput of a web server. However unlike most such test clients, it runs in a single
process, so it doesn't bog down the client machine. It can be configured to do https fetches
as well. You give it a file containing a list of URLs that may be fetched, a flag specifying how
to start connections (either by rate or by number of simulated users), and a flag specifying
when to quit (either after a given number of fetches or a given elapsed time). There are also
optional flags for checksums, throttling, random jitter, and progress reports.
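The single-process, parallel-fetch design that http_load uses can be sketched with Python's asyncio: many requests in flight, one process, a cap on concurrency. The tiny local server below exists only to keep the example self-contained; it is not part of http_load, and the counts are illustrative.

```python
import asyncio
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Tiny local server so the sketch needs no external URLs.
class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

async def fetch(host, port, path="/"):
    """One HTTP/1.0 fetch over a raw socket; returns the status line."""
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode())
    await writer.drain()
    status = await reader.readline()
    await reader.read()            # drain the rest of the response
    writer.close()
    await writer.wait_closed()
    return status.decode().strip()

async def run_load(total_fetches, concurrency):
    """Run total_fetches requests with at most `concurrency` in flight,
    all inside a single process -- the same shape as http_load."""
    sem = asyncio.Semaphore(concurrency)
    async def one():
        async with sem:
            return await fetch("127.0.0.1", port)
    return await asyncio.gather(*(one() for _ in range(total_fetches)))

statuses = asyncio.run(run_load(total_fetches=20, concurrency=5))
server.shutdown()
assert all("200" in s for s in statuses)
```

Because all the concurrency lives in one event loop, the load generator itself stays cheap, which is the property the http-load description highlights.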
Pylot - Corey Goldberg's open source load testing tool for Python enthusiasts. Uses XML to
define test cases.
Loadea - Win XP based load testing tool. Control and capture modules use C# for building
test scenario scripts, schedule stress test execution, and XML for test data. The analysis
module provides reporting capabilities. Free evaluation version for two virtual users.
CloudIntelligence - a software development company specializing in Cloud-Computing and
Open-Source technologies. By leveraging best-of-breed Open-Source technologies and
Cloud-Computing resources, together with our own proprietary development, we are able to
present the best and most affordable Cloud-Testing solution.
StressTester - enterprise class performance / web load testing tool used by organizations of
all sizes to ensure that their web, internet, extranet and web service applications can meet
their users' needs in terms of performance, scalability and stability. Free trial download
available.
TestMaker - A single platform for Functional Testing, Regression, Load and Performance
Testing, and Business Service Monitoring, all from the same single test script. Pass smoke
tests, surface performance bottlenecks, and enforce Service Level Agreements (SLAs) in one
product.
Siege - Unix-based http load tester and benchmarking utility. It was designed to let web
developers measure the performance of their code under duress, to see how it will stand up
to load on the internet. It lets the user hit a web server with a configurable number of
concurrent simulated users. Those users place the server "under siege." Written on
GNU/Linux and has been successfully ported to AIX, BSD, HP-UX and Solaris. It should
compile on most System V UNIX variants and on most newer BSD systems. Because Siege
relies on POSIX.1b features not supported by Microsoft, it will not run on Windows.
SiteTester1 - A load-testing utility designed to test web servers and web applications by
simulating virtual users following predefined procedures for HTTP1.0/1.1 compatible
requests, POST/GET methods, cookies, running in multi-threaded or single-threaded mode.
Generates various reports in HTML format, keeps and reads XML formatted files for test
definitions and test logs. Requires JDK1.2 or higher.
eValid LoadTest - A testing tool suite built into an IE browser for 100% client side quality
checking, dynamic testing, content validation, page performance tuning, and webserver
loading and capacity analysis. It performs functions needed for detailed static and dynamic
website testing, regression testing, QA/Validation, page timing and tuning, transaction
monitoring, and realistic & scalable server loading.
WebPerformance Load Tester - Load test tool for generating and analyzing automated load
tests on a server. Permanent fixed-seat, permanent floating, and temporary fixed-seat
licenses available. Automatically detects and configures the test cases for many situations.
Supports all browsers and web servers. Modem simulation allows each virtual user to be
bandwidth limited. For Windows and many UNIX variants.
Unit 5
Types of Metrics
Base Metrics
Calculated Metrics
Base metrics are the raw data collected by the Test Analyst during test
case development and execution (e.g., # of test cases executed, # of
test cases). Calculated metrics are derived from the data gathered in
base metrics and are usually tracked by the test manager for test
reporting purposes (e.g., % Complete, % Test Coverage).
Depending on the project or business model, the important metrics are
managed through four stages:
Analysis
Communicate
Evaluation
Report
Likewise, you can calculate percentages for other parameters such as test
cases not executed, test cases passed, test cases failed, test cases
blocked, etc.
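The calculated metrics above all follow the same percentage formula, e.g. % Complete = (test cases executed / total test cases) × 100. A minimal sketch with illustrative counts (the numbers are made up, not taken from the text):

```python
# Base metrics: raw counts gathered during test execution (illustrative).
total_cases = 120
executed = 90
passed = 70
failed = 15
blocked = 5

def pct(part, whole):
    """Percentage helper for calculated metrics, rounded to one decimal."""
    return round(100.0 * part / whole, 1)

# Calculated metrics derived from the base metrics.
percent_executed = pct(executed, total_cases)              # % Complete
percent_not_executed = pct(total_cases - executed, total_cases)
percent_passed = pct(passed, executed)
percent_failed = pct(failed, executed)
percent_blocked = pct(blocked, executed)

assert percent_executed == 75.0
assert percent_not_executed == 25.0
```

Note the choice of denominator: pass/fail/blocked rates are computed against executed cases, while % Complete is computed against the total written.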