
www.hssworld.com
© 2002 Hughes Software Systems Ltd. All rights reserved.
No part of this document may be reproduced or transmitted in any form or by any means,
electronic or otherwise, including photocopying, reprinting, or recording, for any purpose,
without the express written permission of Hughes Software Systems Ltd.
DISCLAIMER
Information in this document is subject to change without notice and should not be
construed as a commitment on the part of Hughes Software Systems. Hughes Software
Systems does not assume
any responsibility or make any warranty against errors that may appear in this document
and disclaims any implied warranty of merchantability or fitness for a particular purpose.
TRADEMARKS
All company, brand, product or service names mentioned herein are the trademarks or
registered trademarks of their respective owners.
Hughes Software Systems Ltd.
Plot 31, Electronic City
Sector 18, Gurgaon – 122015
Haryana (INDIA)
Tel: +91-124-6346666/6455555
Fax: +91-124-6342415/6342810
E-mail: info@hssworld.com
Visit us at: http://www.hssworld.com
TEST AUTOMATION
Abstract
Automated testing is essential for performing thorough and fast testing. This is more
important than ever before given the need to accelerate software development and
reduce the time to market in the fast changing business environment.
The key challenges in adopting automated testing are:
♦ Selection of the automation level to be targeted
♦ Selection of the appropriate test tool
♦ Customization of the tool to support the desired scripting interface
♦ Development of reusable and modular scripts
♦ Verification of scripts
♦ Implementation of a test management system, which controls and synchronizes the multiple test points, executes tests in batch mode, and consolidates the test results
This paper addresses these and other key issues to be considered by the Test Manager
before adopting automated testing. It also suggests a practical and verified approach for
implementation.
Introduction
Test automation saves a lot of effort needed for rigorous testing of the system. It
also ensures uniformity in the testing process each time the test is executed.
Using automation, tests can be run faster, in a consistent manner, and over and
over again with fewer overheads. Automation is used to replace or supplement
manual testing with a suite of test programs. Benefits to product developers
include increased software quality, improved time to market, repeatable test
procedures, and reduced testing costs.
Manual testing never goes away entirely, but that effort can then be focused on more
rigorous tests.
Automated testing has its own advantages and disadvantages and involves many
challenges. If not planned carefully, it may lead to poor-quality testing. Thus,
many key factors need to be considered before adopting automation, and even
during the automation process.
This paper first describes the different factors affecting automation and presents a scientific
approach to calculating the productivity of automation. This analysis is followed by a discussion
of the challenges faced in automating testing.
FACTORS AFFECTING AUTOMATION
1. Number of External Interfaces
A system can have one or more external interfaces, and the complexity of
automated testing is directly related to this: the more interfaces a system under
test (SUT) has, the more complex the automated test setup becomes. This follows
from the fact that whenever software is put under test, simulators are used at all
its external interfaces to test both its valid and invalid behavior, and automated
testing is achieved by writing scripts at these simulators. Therefore, if the
software has many external interfaces, complete automated testing would require a
great deal of effort in selecting and customizing tools at each interface, and in
scripting. This would not be cost-effective in terms of effort and resources. It is
the responsibility of the Test Manager to critically evaluate the feasibility of
automation and proceed accordingly.
2. Type of External Interface
Not every external interface of software can be simulated. This holds true for
software that interfaces with "firmware" or "device drivers." In such cases testing
needs to be done against the real interfaces, not simulators. Such a situation
leads to two types of problems:
♦ All invalid behavior becomes very difficult to test.
♦ Complete automation becomes impossible on these interfaces, because invalid
scenarios can leave the driver/firmware stuck in a state from which human
intervention (issuing a command or triggering an external action such as
resetting the physical interface) is required to recover.
If the software handles interrupts coming from a hardware interface, this too
needs to be tested manually.
3. Effort Needed for Automation
Every automation activity requires a certain effort. This effort needs to be
estimated and evaluated for its cost-benefit: how much effort is saved through
automation against the effort needed to achieve it. If automation saves 2
man-days of effort but requires 5 man-days to achieve, it is not a good
solution at all.
4. Number of Releases Expected for Testing
One of the main objectives of automation is to avoid repeating similar effort
again and again. Thus, automation is directly related to the number of releases
expected for the product. If only one or two releases are expected, then going
for complete automation is not practical unless it can be achieved with minimal
effort.
Taking the above example, if we expect 5 releases of the product, then the
total effort saving would be 10 man-days against 5 man-days of effort to achieve
it. In this case, going for automation is a worthwhile decision.
5. Maturity of the Product
A new product cannot be tested in a completely automated manner.
Automated testing assumes a certain stability in the product, which may not be
present in a new product or system. The first release of any product should
be tested with at most Level 2 automation (defined below).
Levels of Automation
Automation is a stepwise process with different levels; achieving one level
implies that all lower levels have also been achieved. For any project, one has
to decide what level of automation to target by considering the factors
described above.
Following are the different levels of automation:
♦ Level 1: All messages/triggers are available to the tester, who need not
build messages during test execution. This is the most basic level and should
be achieved for all types of testing. All such messages should be tested on a
sample pre-QA release before the start of testing.
♦ Level 2: All generic plus regression test scripts are available, and the
tester modifies them during execution for the other test cases. The scripts
give no verdict (pass/fail); the tester has to observe and give the verdict.
♦ Level 3: Level 2 is achieved and the test scripts also give a verdict
(pass/fail). Adding verdicts to the scripts requires a considerable amount of
effort, which may not be feasible every time.
♦ Level 4: This is the extreme level of automation, where verdict-based
(pass/fail) scripts are available for all the test cases. If Level 3 is
achieved before the start of testing, this level can be achieved after the
first round of testing.
The above levels of automation apply to each interface of the SUT. The level of
automation at different interfaces can differ, and the Test Manager has to
select the best combination to achieve the most efficient and cost-effective
test solution. For example, we may have Level 3 automation available at the
"User" interface and Level 1 at the "SME" interface. The levels of automation
to be achieved at the different interfaces should be decided before going ahead
with the testing.
The testing of a product/system is completely automated if all interfaces have
automation Level 4. In this case a "Test Controller" is required to invoke
and control the scripts at the different interfaces and synchronize them for
simultaneous execution.
Productivity of Automation
The level of automation should be decided upon to give maximum cost-effectiveness
to the client. Defining the productivity of automation as
P = (effort saved due to automation) / (effort required to automate the testing),
P = 1 when the effort required to automate the testing equals the effort saved
due to automation. The value of P should be as high as possible in order to get
the benefits of automation.
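Taking P as this ratio of effort saved to effort spent, the man-day arithmetic from the earlier example can be checked directly (the function name is ours, for illustration):

```python
def automation_productivity(saving_per_iteration, automation_effort, iterations):
    """P = (effort saved over all test iterations) / (effort spent automating)."""
    return (saving_per_iteration * iterations) / automation_effort

# Example from the text: 2 man-days saved per release, 5 man-days to automate.
print(automation_productivity(2, 5, 1))  # one release:  P = 0.4, not worthwhile
print(automation_productivity(2, 5, 5))  # five releases: P = 2.0, worthwhile
```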
Sample Case Analysis
Figure 1 shows a sample case study where the productivity of automation P
is calculated for different levels of automation using a scientific model and
plotted as a function of the number of test iterations.
Fig. 1 Variation of Automation Productivity with Test Iterations
♦ Series 1: Productivity with Automation Level 1
♦ Series 2: Productivity with Automation Level 2
♦ Series 3: Productivity with Automation Level 4 (complete automation)
It can be seen that the productivity of automation increases with test
iterations, and that automation of any level is of little use if fewer than
three test iterations are expected.
[Chart: Automation Productivity (0 to 5) plotted against Test Iterations (1 to 30) for Series 1, 2, and 3.]
Maximizing Productivity: Challenges in Automated Testing
Once the decision regarding automation is made, the next step is its
implementation. Following are the main challenges in implementing automation:
1. Test Tools Selection
Test tool selection is a critical factor in the success of test automation. This
requires the study of the scope of testing and test strategy and then selection of
the right test tool, to meet the requirements of automating test-suite for a
particular product and release. Needless to say, while selecting the test tool,
reusability, reliability, and cost are the prime criteria to leverage the maximum
benefit out of the tool being procured/developed/customized.
A test tool should essentially support:
♦ a scripting interface
♦ a facility to give invalid input
♦ a facility to control the IUT (Implementation Under Test) and its peer
♦ result comparison and verdict declaration
♦ logging
In fact, the targeted level of automation should be decided along with the test
tool selection.
2. Customization of the Tool
The Test Manager should always look for standard tools, if available in the
market, and customize the existing tools. New tools should be developed only
as a last resort. Customization should be generic, and there should always be
room for easy and quick enhancement.
3. Development and Verification of Scripts
Scripting effort includes the development and testing of scripts. This depends
on two factors: first, the skill of the people involved, and second, the
capability of the test tool to provide the flexibility to develop scripts for
all valid and invalid scenarios. It is recommended that scripting always be
done in a modular manner so that the same modules can be reused in different
scripts. A modular approach leads to less scripting time and also helps in
testing and reuse across different releases.
It is always advisable to test all scripts against a release prior to the
actual test execution, so that no problems are found in the scripts during the
real testing.
4. Implementation of a Test Management system
This is to facilitate complete automation in terms of managing the scripts at the
System Under Test (SUT) and tester simultaneously and for consolidating the
test results at one place. This is not applicable if automation of level 4 is not
targeted. The figure below depicts a sample configuration where there are
individual tools (Upper Tester – UT) above the SUT and at the Peer side (Lower
Tester – LT) and also a test manager, which controls the complete testing cycle
automatically.
The test manager provides a communication interface, which enables remote
control of the tests running on the Upper and the Lower Testers. It helps in the
following functions:
♦ Managing test cases
♦ Managing test activities with schedules and priorities
♦ Managing test results and log files on a per-release basis
♦ Interfacing with automation tools
♦ Generating test reports, statistics, and trends
This configuration helps in deciding the order of test execution and eliminating
the need to manually trigger the scripts at the Upper Tester and the Peer side
and in having the complete status of the test execution at one place.
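The controller role described above can be sketched as follows; the class and method names are our assumptions, not any particular product's API. For each case the controller triggers the Lower Tester script before the Upper Tester one, runs the whole suite in batch, and consolidates the verdicts in one place:

```python
class TestController:
    """Sketch of a test manager that batches UT/LT scripts and collects verdicts."""
    def __init__(self):
        self.cases = []    # (name, upper_tester_script, lower_tester_script)
        self.results = {}

    def register(self, name, upper_script, lower_script):
        self.cases.append((name, upper_script, lower_script))

    def run_batch(self):
        for name, upper, lower in self.cases:
            # Start the Lower Tester first so it is ready to respond
            # before the Upper Tester sends its stimuli.
            lower_ok = lower()
            upper_ok = upper()
            self.results[name] = "PASS" if (upper_ok and lower_ok) else "FAIL"
        return self.results


controller = TestController()
controller.register("call_setup", lambda: True, lambda: True)
controller.register("invalid_message", lambda: False, lambda: True)
print(controller.run_batch())  # {'call_setup': 'PASS', 'invalid_message': 'FAIL'}
```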
Automated Testing: A Complete Procedure
Automation should be started right at the beginning of any testing activity. So
far we have discussed the different factors affecting the automation level and
the key challenges behind automation. In this section we discuss how the
different automation activities should be aligned with the testing activities.
[Figure: Test setup with the System Under Test and its Peer; a client application acts as the Upper Tester (UT) above the SUT and another as the Lower Tester (LT) at the Peer side, each accessed through a scripting interface over the service provider layer.]
The flow chart below gives the time alignment of the automation and testing
activities:
TEST ACTIVITY → AUTOMATION ACTIVITY
♦ Test strategy finalization → Finalization of the targeted test automation level and preparation of the test automation strategy
♦ Test plan preparation → Customization of the tool; preparation of basic messages
♦ Final test plan available → Start preparing test scripts
♦ Pre-QA release available → Test scripting completed; testing of scripts started
♦ Final release available for evaluation → Start testing with automated scripts
Conclusion
As presented in this paper, automation is critical for the efficient testing of
any product or project, and all factors affecting it need to be taken into
consideration before targeting a particular automation level. The scientific
model used to evaluate automation productivity helps in deciding the automation
level to be targeted. The challenges in successful automation lie in the
selection of an appropriate test tool, efficient and error-free scripting, and
proper time synchronization of the testing and automation activities.
